InAs/InP(100) quantum dot waveguide photodetectors for swept-source optical coherence tomography around 1.7 µm

In this paper a study of waveguide photodetectors based on InAs/InP(100) quantum dot (QD) active material is presented for the first time. These detectors are fabricated using the layer stack of semiconductor optical amplifiers (SOAs) and are compatible with the active-passive integration technology. We investigated the dark current, responsivity, spectral response and bandwidth of the detectors. It is demonstrated that the devices meet the requirements for swept-source optical coherence tomography (SS-OCT) around 1.7 µm. A rate equation model for QD-SOAs was modified and applied to the results to understand the dynamics of the devices. The model showed a good match to the measurements in the 1.6 to 1.8 µm wavelength range by fitting only one of the carrier escape rates. An equivalent circuit model was used to determine the capacitances which dominate the electrical bandwidth.

Introduction

Since its first proposal and demonstration [1], optical coherence tomography (OCT) has proven to be an excellent solution for in vivo medical imaging without physical contact. It shows large potential in a wide range of applications such as ophthalmology, intravascular imaging, dermatology, and developmental biology. Swept-source OCT (SS-OCT) is one of the more successful techniques among the various OCT schemes. In SS-OCT, the image along the depth of the sample under investigation (e.g., tissue) is reconstructed by measuring a spectrally resolved interferometer signal [2] using a swept narrow-band laser source. The interferometer is of the Michelson type, where the reference arm length is fixed and the second arm leads to the sample. The light reflected from the two arms is combined and interferes. As the laser source sweeps its frequency, the intensity at the output of the interferometer is detected by a photodetector, whose signal is recorded in real time. The reflections along the depth of the sample are then extracted by performing a Fourier transform of the recorded interference signal. SS-OCT offers several advantages over other types of OCT systems, e.g., higher sensitivity [3], lower sensitivity roll-off with imaging depth [4] and a simpler optical design. The dominant limitation on the imaging depth of SS-OCT in biological samples is the reduction in intensity of the reflected light as the depth increases. In the wavelength regions that are currently used, this signal fading with depth is dominated by scattering of light in the sample and less so by absorption [5]. One way to reduce scattering is to use longer wavelengths than the more commonly used 0.8 µm or 1.3 µm wavelength regions. The wavelength range from 1.6 to 1.8 µm lies in between two strong water absorption peaks, and there the scattering should be reduced even further. An improvement in imaging depth of up to 80% in the sample is predicted [6] with respect to the 1.2 to 1.3 µm range. An improvement in imaging depth of up to 40% was demonstrated in [7] for OCT in the 1.5 to 1.7 µm range, despite the fact that the water absorption at 1.5 µm is still significant. Two main components are required to open up OCT imaging in an SS-OCT system in this long wavelength range. The first is a swept laser source. In previous work [8], we have developed an experimental monolithically integrated tunable laser around the 1.7 µm wavelength region for the SS-OCT application.
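Before turning to the detector requirements, the depth reconstruction described above can be made concrete with a minimal numerical sketch (ours, not from the paper; the sweep parameters and reflector depths are hypothetical): a reflector at depth z produces fringes cos(2kz) in the interferogram sampled over wavenumber k, so a Fourier transform over k recovers the reflectivity profile along depth.

```python
import numpy as np

# Minimal SS-OCT depth reconstruction sketch (illustrative values only).
# Reflectors at depth z contribute fringes cos(2*k*z) to the interferogram,
# so an FFT over a uniform wavenumber grid yields the depth profile.
n_samples = 4000                               # wavelength samples per sweep
lam = np.linspace(1.6e-6, 1.8e-6, n_samples)   # swept 1.6-1.8 um band
k = 2 * np.pi / lam                            # wavenumber bounds of the sweep
k_uniform = np.linspace(k.min(), k.max(), n_samples)

depths = [0.2e-3, 0.5e-3]                      # hypothetical reflector depths (m)
reflectivities = [1.0, 0.3]
interferogram = sum(r * np.cos(2 * k_uniform * z)
                    for r, z in zip(reflectivities, depths))

profile = np.abs(np.fft.rfft(interferogram * np.hanning(n_samples)))
dz = np.pi / (k.max() - k.min())               # depth bin size from sweep span
print("peak depths (mm):",
      np.sort(np.argsort(profile)[-2:]) * dz * 1e3)
```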
The laser is integrated on a single InP chip and uses InAs/InP(100) quantum dots (QDs) for its optical amplifiers. It achieves a tuning range of 60 nm and a scanning speed of 0.5 kHz. The output power of the laser was however limited (0.1 mW) and measurements could only be done in a free-space Michelson interferometer setup. Even the measurements on this set-up were seriously hampered by the commercial photodetector: either by the high noise level of a spectrally optimized detector, or by the limited spectral range of a standard low-noise detector optimized for 1.55 µm. In both cases the signal-to-noise ratio (SNR) deteriorates to such an extent as to wipe out any advantage expected from the use of long wavelengths through the reduced scattering in the sample. A suitable photodetector is thus the second main component that is essential to realize the potential of OCT imaging in the 1.6 to 1.8 µm range. Such detectors are not readily available commercially. The limited spectral range of the detector used in [7] was the reason for limiting those studies to the less suitable 1.5 to 1.7 µm range. In this paper we demonstrate that InAs/InP(100) QD waveguide photodetectors meet all the requirements for application in an OCT system operating in the 1.6 to 1.8 µm wavelength range, which can achieve an improved imaging depth. These photodetectors thus open up the long-wavelength region for OCT imaging. The technology of the integrated tunable QD laser discussed above allows for the monolithic integration of the components of the interferometer with the laser. In particular, the detector can be integrated provided that the QD active structure used for the detectors is compatible with the laser; the detector could then be fabricated in the same process as the laser and be fully integrated with it. This is the main advantage of the QD active material and the motivation for the investigation presented in this paper. In order to maximize the performance of the OCT system, the photodetectors should have a high responsivity, preferably higher than 0.5 A/W over the whole wavelength range around 1.7 µm, where the quantum limit is 1.37 A/W (R_max = qλ/hc ≈ λ[µm]/1.24 A/W at unity quantum efficiency). A high responsivity is the first condition for obtaining a high extinction ratio (ER) of the beating signal, and thus for improving the SNR of the images. A flat spectral response is also preferable, to ensure the efficient use of the entire wavelength range of the swept laser and to obtain the highest spatial resolution. To match the state-of-the-art 10 µm resolution in tissue, 1.7 µm OCT systems require a bandwidth of at least 100 nm [9]. The dark current of the photodetectors should be low at room temperature, as its level limits the minimum detectable optical power and thus the maximum imaging depth. The noise level of typical commercial photodetectors used for the OCT application [10] is associated with a dark current of at most around 30 nA; we therefore choose 30 nA as the maximum tolerable dark current. The signal bandwidth of the photodetectors must be sufficient for 3D SS-OCT imaging, where the requirement on the repetition rate is highest. For a swept laser source with a scanning speed of 20 kHz and 4000 wavelength samples in each scan, the minimum bandwidth of the photodetectors should be approximately 200 MHz (a 3D image of 200 by 200 pixels could then be recorded in approximately 2 s). We believe a laser with a 20 kHz sweep rate is feasible in the technology of the integrated laser system which we have demonstrated.
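One way to arrive at that 200 MHz figure (our own estimate; the paper does not spell out the conversion): 20 kHz × 4000 samples per sweep = 8·10⁷ samples/s, i.e., one wavelength sample every 12.5 ns. A detector bandwidth of 200 MHz corresponds to a 10-90% rise time of roughly 0.35/f_3dB ≈ 1.8 ns, so the detector settles comfortably within each sample interval.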
Therefore we consider a bandwidth of 200 MHz to be sufficient. Since most of the proposed OCT realisations are based on fiber systems, the detector should be fiber coupled; a waveguide photodetector is therefore a promising choice among the various detector types. Several semiconductor material systems can in principle be used for photodetectors working in the 1.6 to 1.8 µm wavelength range. Photodetectors based on InGaAs, short-wave infrared (SWIR) InGaAs, Ge, PbS or HgCdTe have been studied intensively [11-15] and are commercially available [16-19]. However, they have performance limitations such as low responsivity, a strong wavelength dependency of the spectral response, high dark current or complex cooling requirements. Work on QD photodetectors reported in the literature focuses on single-photon detection [20]; such devices are typically operated at cryogenic temperatures. QD photodetectors have also been developed for the mid-infrared wavelength region [21,22], but their responsivities are low (~0.1 A/W at the peak wavelength) and thus not suitable for the OCT application. To our knowledge this is the first publication on QD photodetectors in the 1.6 to 1.8 µm wavelength range. In this paper, we present InAs/InP(100) QD waveguide photodetectors sensitive in the 1.6 to 1.8 µm wavelength range. These devices combine a sufficiently low dark current, high responsivity, a flat spectral response and sufficient bandwidth. We will discuss the performance of our photodetectors in relation to the long-wavelength SS-OCT application; the results are, however, also relevant to other applications. The devices have a structure identical to that of the QD optical amplifiers in this wavelength region, have been fabricated using the same technology as the QD tunable laser demonstrated earlier, and can therefore be easily integrated with it. This makes the QD photodetectors particularly promising for monolithically integrated OCT systems. In Section 2, the structure and the layout of the devices are presented. In Section 3, the measurement methods and the results on the dark current, the device-length-dependent and spectrally resolved responsivities, the absorption spectra of the dots and the signal bandwidth of the QD waveguide photodetectors are presented. To interpret and analyze the measurements, a theoretical model based on rate equations and an equivalent electrical model of the devices are presented in Sections 4 and 5, respectively; comparisons between measurement and theory are also discussed. From this work, conclusions on the usability of the InAs/InP(100) QD material for photodetection are drawn in Section 6, where advantages and disadvantages with respect to other material systems are also discussed.

Device structure and layout

The QD waveguide photodetectors are realized by applying a reverse-bias voltage to a shallowly etched ridge-waveguide QD semiconductor optical amplifier (QD-SOA). The QD-SOA structure (shown in Fig. 1(a)) is fabricated using a technology that is fully compatible with the active-passive optical integration scheme of the Inter-University Research School on Communication Technologies Basic Research and Applications (COBRA) at our university [23]. The QD active material is grown on an n-doped InP(100) substrate by metal-organic vapor-phase epitaxy (MOVPE) [24]. Five InAs QD layers are stacked, with each layer (3 monolayers (MLs)) grown on top of an ultrathin GaAs interlayer (1 ML); the GaAs interlayers are used to control the size of the QDs.
Between each pair of QD layers, a 40 nm InGaAsP separation layer is used. This stack of active materials is then placed in the center of the InGaAsP (Q1.25) waveguiding layer with a total thickness of 500 nm. During the MOVPE growth, the average size of the InAs QDs is tuned to have the emission (absorption) spectrum around 1.7 µm [25]. The waveguiding layer is sandwiched between a bottom cladding of a 500 nm n-type InP buffer and a top cladding of a 1.5 µm p-type InP layer with a compositionally graded 300 nm p-type InGaAs contact layer. The single-mode shallowly etched waveguide with a width of 2 µm is formed by etching 100 nm into the InGaAsP waveguiding layer using a reactive ion etching (RIE) process. The RIE process is also used to etch isolation sections on the ridge waveguide, in order to create sections of varying length and to provide electrical isolation between adjacent sections. The isolation sections are formed by etching away the top cladding layer to within 200 nm of the waveguiding layer (see Fig. 1(a)). The structure is then planarized using polyimide before creating the top and backside metal contacts. The structure is cleaved perpendicularly to the waveguide and no coating is applied. The whole layer stack of the QD photodetector is the same as that of a QD-SOA [23], and is compatible with a butt-joint active-passive integration process for further integration [26]. This opens a route to the monolithic integration of the swept laser, the photodetector and the interferometer structure on a single chip. A photograph of the fabricated chip is shown in Fig. 1(b). A single chip contains an array of 26 QD-SOAs, each of which consists of two sections. The strips of metallization that lie on top of the waveguides are clearly visible. The shorter sections are reverse biased to serve as the QD waveguide photodetectors. The longer sections are used to absorb the residual optical power passing through the photodetectors and to prevent reflections from the back facets. The ratio of the lengths of the shorter and longer sections is varied such that a series of devices with different lengths is realized on a single chip. In this paper, we present measurement results from two chips with 52 devices in total. One chip has a total length of 4 mm with photodetector lengths ranging from 200 µm to 1160 µm; the other has a total length of 6 mm with device lengths ranging from 300 µm to 1740 µm. Note that the metallization pattern is optimized for optical amplifier operation.

Measurement methods

The dark current, the length-dependent responsivity, the absorption spectra, the spectral response and the response to a modulated optical signal have been measured for the QD waveguide photodetectors. A diagram of the measurement set-up is shown in Fig. 2. The measurements of the dark current, photocurrent and responsivity of the photodetectors are done with a constant optical input; the measurement of the response to a modulated optical input is done with a sine-wave modulation on the input signal. A polarization-maintaining (PM) lensed fiber is used to couple linearly polarized light from a tunable laser into the waveguide photodetectors with a coupling loss of 4 ± 0.2 dB. The orientation of this PM lensed fiber can be rotated such that the polarization state of the excitation light launched into the waveguide can be controlled. A 1 GHz high-speed probe is attached to the anode (top contact) of the photodetector and connected to the core of an SMA connector.
The case of the SMA connector is connected to the cathode (substrate contact) of the photodetector. The DC measurements are performed by applying a reverse-bias voltage to the SMA connector (i.e., to the photodetector) from a source meter, which also reads out the current generated in the detector. The dynamic measurement is done by recording and analyzing the temporal response of the photodetector to a modulated optical input. A commercial photodiode transimpedance amplifier module (Ultrafastsensors, CIT735SP) [27] converts the current into a voltage signal and amplifies it such that the output can be recorded on an oscilloscope (1 GHz bandwidth; 4 GS/s sampling rate). The bandwidth of the amplifier module is 210 MHz, which is higher than required for an SS-OCT system. The reverse bias of the photodetector under dynamic measurement is supplied directly by the amplifier module.

Dark current

The dark current consists of the current generated randomly in the diode in the absence of photon input, plus any leakage current that may run along the sides of the ridges under reverse bias. The dark currents of the QD waveguide photodetectors were measured for 52 devices. The results for four devices of different lengths and for a range of reverse-bias voltages are shown in Fig. 3. Twenty devices turned out to have an excessively high dark current (of the order of mA at a few volts of bias), which we attribute to a failure of the surface passivation of those devices. For a fixed device length, the dark current of the properly functioning devices increases exponentially with the reverse-bias voltage. The mechanism behind this behavior is not certain; it might be attributed to an increase of the side-wall leakage current or to the Zener effect at higher voltages. Clearly the dark current increases proportionally with increasing device length, i.e., with the surface area of the diode. For devices shorter than 1000 µm, the dark current stays below 10 nA when the reverse-bias voltage is lower than 2 V, and increases up to 30 nA at a reverse bias of 3 V. Thus, from a practical point of view, the device length should be shorter than 1000 µm and the reverse-bias voltage lower than 3 V in order to maintain a sufficiently low dark current (< 30 nA). When the voltage is increased further, the dark current increases rapidly and the operating point becomes impractical.

Responsivity

As discussed in the introduction, the responsivity is a key characteristic of a photodetector for the OCT application. The relationship between the responsivity and various operational parameters (e.g., reverse-bias voltage and device length) is important for choosing the most suitable device. To explore this, a series of measurements was done for 32 of the 52 devices, covering different lengths. A laser at 1640 nm wavelength and 0 dBm optical power is used as the light source. The photocurrents are recorded for each of the 32 devices at four different reverse-bias voltages and for both polarizations. The responsivities are then calculated by calibrating the photocurrents against the input optical power (with a coupling loss of 4 dB). Figure 4(a) shows the responsivities of all measured devices under all conditions. As the device length increases, the responsivity also increases, since more photons are absorbed in the detector.
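This saturating trend can be captured by a simple single-pass Beer-Lambert picture (our own simplification, not the paper's rate-equation model):

R(L) ≈ R_max·(1 − e^(−Γα L)),

where Γ is the optical confinement factor and α the material absorption coefficient. Setting 1 − e^(−ΓαL) = 0.95 gives a 95% absorption length L_95 = ln(20)/(Γα) ≈ 3/(Γα), so any bias-induced increase of the effective absorption directly shortens the absorption length, consistent with the behavior described next.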
When the device length increases beyond a certain value, the increase in responsivity becomes smaller and the trend flattens. This indicates that, after a certain length, almost all photons are absorbed, so little improvement is gained from longer devices. We define the absorption length as the length at which the responsivity reaches 95% of its maximum. As can be seen in Fig. 4(b), the absorption length is inversely proportional to the reverse-bias voltage, indicating that the photon absorption of the diode becomes stronger at higher reverse-bias voltages. This phenomenon is explored further in Section 3.4. It is also apparent that the absorption length for TM polarization is longer than that for TE polarization, indicating a relatively lower absorption for TM polarization; this is confirmed by fitting the simulation to the measured data in Section 4.2. The response of the photodetectors to a scan of the input optical power has also been investigated. The wavelength of the input laser is fixed at 1640 nm with an output power of 0 dBm. The laser output is directed to a tunable attenuator which provides attenuations from −60 dB to 0 dB. The total loss before entering the lensed fiber (including the insertion loss of the attenuator) is 6.04 dB, and a coupling loss of 4 dB between the lensed fiber and the waveguide is assumed. As the attenuation is scanned from −60 dB to 0 dB, the photodetectors generate corresponding photocurrents, which are then fitted linearly. Within the attenuation range from −20 dB to 0 dB, the relative deviation between measured and fitted data is less than 5% for both short (280 µm) and long (1120 µm) devices and for both TE and TM polarizations. At higher attenuations the relative deviation becomes larger because the influence of the dark current can no longer be neglected. Saturation is not observed for either the short (280 µm) or the long (1120 µm) devices.

Absorption spectra

Since the photon absorption was observed in the previous section to become stronger at higher reverse-bias voltages, the absorption spectra of the photodetectors have been measured in order to investigate the absorption behavior of the QDs. The measurement is done by reverse biasing the short SOA section (the photodetector) and injecting a current into the long section under forward bias (see Fig. 1(b)). The injected current density is set to 3000 A/cm² such that the long SOA section provides an amplified spontaneous emission (ASE) output whose power is sufficient for the residual optical power transmitted through the photodetector to be measurable. The ASE spectrum P_0(λ) from the output facet of the long section is collected by the lensed fiber and recorded with an optical spectrum analyzer (OSA) at a resolution of 0.1 nm. Thereafter the residual optical power P_r(λ) which passes through the photodetector is collected from the output facet of the short section for a number of reverse-bias voltages. Figure 5(a) shows the measured spectra P_0(λ) and P_r(λ) for a device length of 600 µm. The relation between P_0(λ) and P_r(λ) is expressed by Eq. (3), where α(λ) is the absorption coefficient, L_PD the length of the photodetector, L_SOA the length of the long SOA section, P_ASE(λ) the ASE spectrum generated in the long SOA section and η the coupling efficiency between the lensed fiber and the waveguide.
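The published Eqs. (1)-(3) are not reproduced here. In a simplified single-pass form of our own, which neglects the facet reflection discussed next and assumes the same coupling efficiency η at both facets (so that it cancels in the ratio), the relation reduces to

P_r(λ) ≈ P_0(λ)·e^(−α(λ)·L_PD), i.e., α(λ) ≈ (1/L_PD)·ln[P_0(λ)/P_r(λ)].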
The facet reflection R_f is taken into account in this calculation, since part of the light at the output facet of the long SOA is reflected back into the SOA. The reflected light is amplified by the small-signal gain of the SOA and contributes to the input power of the photodetector. Since only two sections are available in each device, we have to make assumptions in the analysis: we assume that the fiber coupling losses of the short and long sections are identical. It should also be noted that the measured absorption spectra include the pure propagation loss of the optical mode in the short detector section. A higher accuracy can be achieved when a method with four SOA sections per device is used [28]. By applying Eq. (3) to the measured data shown in Fig. 5(a), the absorption spectra for different reverse-bias voltages can be derived (see Fig. 5(b)). The absorption increases with increasing reverse-bias voltage. This is a result of the increased carrier extraction rate in the diode: as the reverse-bias voltage increases, the carriers flow out of the QDs faster, and the dots can then absorb another photon. The probability of a QD being occupied by a carrier pair decreases, which results in an increase of the absorption.

Spectral response

The QD waveguide photodetectors have a flat response over a full 300 nm wavelength range, as shown in Fig. 6(b), which satisfies the requirement for the OCT application well. The responsivities of the QD waveguide photodetectors have been measured as a function of wavelength by scanning the wavelength of the optical input. A device with a length of 960 µm is used; this length was chosen because the device has a good dark-current performance and is long enough to absorb most (95%) of the light (see Fig. 4(b)). The photocurrents are recorded at every 5 nm wavelength step and calibrated against the optical power at each wavelength. The light sources used for the measurement are a commercial tunable laser covering the wavelength range from 1.44 µm to 1.64 µm, and our QD tunable laser presented earlier [8], which covers the range from 1.685 µm to 1.745 µm. The measurements are done at four reverse-bias voltages at which the dark currents are sufficiently low, and for both TE and TM polarizations. It should be mentioned that there is an increased uncertainty in the data in the wavelength range from 1.685 µm to 1.745 µm, since the power meter used in the measurement is not calibrated for wavelengths longer than 1.65 µm. The spectral dependency of the coupling loss between the fiber and the waveguide facet has also been investigated, by calculating the overlap integral between the fundamental mode of the waveguide and the Gaussian mode profile of the lensed fiber as a function of wavelength. The wavelength-dependent waveguide modes are calculated with a finite-difference method (FDM) mode solver; the wavelength-dependent mode profiles and focal lengths of the lensed fiber can be found in [29]. The calculation shows very little wavelength dependency of the coupling loss (see Fig. 6(a)): in the 1.4 to 1.8 µm wavelength range the difference between minimum and maximum loss is less than 0.5 dB, and the polarization dependency is also very low (< 0.2 dB). Thus the coupling loss has little influence on the spectral responses.
It is clear in Fig. 6(b) that the responsivity of the photodetector increases strongly with the reverse-bias voltage. The largest increase occurs when the reverse-bias voltage is raised from 0 V to 1 V; further increases in the voltage give a smaller but still significant increase in responsivity. This behavior can also be observed in Fig. 4(a). It can be explained by the increase of the carrier extraction rate from the QDs, as will be shown and discussed in detail using a QD rate equation model in Section 4: a higher reverse-bias voltage enhances the electric field in the depletion region of the diode, and as a result the carrier extraction rate from the QDs increases. It is also apparent from Fig. 6(b) that over the whole wavelength range the slope of the spectral response curves stays almost the same as the reverse-bias voltage varies, indicating that the shape of the absorption spectrum of the photodetector does not change with the reverse-bias voltage (see Fig. 5(b)). The device also shows a polarization dependency, which is mainly due to the physical nature of the strained QDs [25,30]: the absorption coefficient for TM polarization is lower than that for TE polarization, causing the difference in responsivities. At a reverse-bias voltage of 3 V, the photodetector provides a very flat response over a wide wavelength range (300 nm) with an average responsivity as high as 0.7 A/W. The high responsivity and flat spectral response are very promising for the application in SS-OCT systems.

Bandwidth

To determine the signal bandwidth of the photodetector, sine-wave amplitude-modulated laser light at a wavelength of 1.55 µm and a power of 3 dBm was launched into the detector with a varying modulation frequency. The modulation depth, defined as (V_max − V_min)/(V_max + V_min), where V_max and V_min represent the average upper and lower envelopes of the signal, is 20% in our measurement. Figure 7 shows the frequency response of a 280 µm-long photodetector under different reverse-bias voltages. The 3 dB bandwidth clearly increases as the reverse-bias voltage increases, possibly due to the widening of the depletion region and the resulting decrease of the junction capacitance. As the voltage is increased up to 3 V, the 3 dB bandwidth reaches 75 MHz. The 3 dB bandwidth is also related to the length of the device: the longer the device, the larger the junction and metal contact capacitances, so a shorter device provides a higher bandwidth (shown in Fig. 8). For a device as long as 1 mm at 2 V, the 3 dB bandwidth is only 30 MHz, whereas when the device length shrinks to 200 µm the bandwidth increases significantly, to 83 MHz. It should be noted that, since the lower limit of the modulation frequency in our setup is 10 MHz, we had to align the curves at 10 MHz; the actual bandwidth will therefore be slightly lower than the measured one. The measured bandwidth is relatively low compared to other waveguide photodetectors [31,32]. The main reason is the large capacitance of the structure, which results from the use of an n-doped substrate in combination with the large, 220 µm wide metal contacts, as well as from the relatively long devices that are needed due to the low absorption of the material. However, the measured bandwidth is already close to being sufficient for the OCT application.
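As a rough consistency check (our own estimate, anticipating the capacitance values extracted in Section 5): a load of R_L = 50 Ω and a total capacitance near the fitted pad capacitance of ~46 pF give f_3dB ≈ 1/(2π·R_L·C) ≈ 69 MHz, which is of the same order as the 30-83 MHz measured here.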
In Section 5, the main sources limiting the bandwidth are analyzed by applying an equivalent circuit model to the photodetectors.

Rate equation model

In this section, a modified rate equation model for the absorption behavior of the QDs is presented and analyzed. The rate equations, together with the modifications and parameters, are presented first. The responsivities of various devices are then calculated and compared to the measurement results shown in Section 3.3. The spectral response as well as the absorption behavior of the QDs is also simulated and discussed.

Modified model and parameters

Various rate equation models have been proposed for understanding the gain properties of QD-SOAs and lasers [23,33,34]; these are used for gain analysis under current injection. Here we present a rate equation model, based on a QD-SOA model [23,34], that has been modified to simulate photodetection with current extraction. In Ref. [23], a rate equation model was applied to the analysis of the gain of QD-SOAs in the 1.6 to 1.8 µm wavelength range. A good match was achieved between the model and the measured small-signal gain spectra, and the model also explained the shape of the gain as a function of the injected current density. Since we use the same QD active material and layer stack for the photodetector, the QD rate equation model and several of the parameters used in [23] were modified to simulate the photo-absorption behavior of the QDs. A schematic of the energy band diagram is depicted in Fig. 9, where all the carrier dynamics are indicated. The photo-generated carriers in the ground state (GS) are transferred to the excited state (ES) with an escape rate of 1/τ_eGS. Together with the carriers generated in the ES, they then escape from the ES to the wetting layer (WL) with a rate of 1/τ_eES. The escape process from the WL to the separate confinement heterostructure (SCH) layer (1/τ_qe) is assumed to be strongly dependent on the reverse-bias voltage. Finally, the carriers are extracted by a very fast process (1/τ_esc) due to the high electric field. The carriers can also flow back from the SCH to the WL, from the WL to the ES and from the ES to the GS, with capture rates of 1/τ_s, 1/τ_c and 1/τ_d, respectively. The carriers furthermore undergo radiative or non-radiative processes in the SCH, the WL and the two energy states of the QDs, with rates of 1/τ_sr, 1/τ_qr and 1/τ_r, respectively. The resulting rate equations (Eqs. (4)-(8)) consist of one equation representing the SCH (Eq. (4)), one representing the WL (Eq. (5)), N equations for the ES (Eq. (6)) and N equations for the GS (Eq. (7)). The rate equations are coupled with one equation for the photons (Eq. (8)), in which spontaneous emission (βN_spon/τ_r) and pure photon loss (S/τ_p) are also included. The major modification with respect to [23] is the change from current injection to current extraction: we set the current injection in the model to zero and add an additional carrier escape rate 1/τ_esc from the SCH layer out of the QDs. This parameter represents the fast extraction of the photo-generated carriers by the high electric field and is assumed to be much faster than the other capture and escape rates in the model. Another modification is that the multiple photon groups used to represent the different wavelength components of the gain spectra are reduced to a single photon group representing a single wavelength, as in the measurements.
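The flavor of such a model can be conveyed by a drastically simplified sketch (ours; a single bound state and one photon group, with all parameter values hypothetical placeholders rather than the paper's fitted ones). It reproduces the qualitative mechanism discussed above: a faster bias-dependent escape keeps the dots empty, so absorption stays unsaturated and the photocurrent rises.

```python
import numpy as np

# Drastically simplified QD-detector rate equation (illustrative only):
# dN/dt = absorbed - N/tau_e - N/tau_r, with Pauli blocking of absorption
# as the dots fill. tau_e models the bias-dependent carrier extraction.
q = 1.602e-19          # electron charge (C)
N_tot = 1e7            # total number of ground states in the device
tau_r = 1e-9           # recombination time (s), placeholder
alpha0 = 1.5e3         # unsaturated modal absorption coefficient (1/m)
L = 600e-6             # detector length (m)
phi = 1e16             # incident photon rate (photons/s), placeholder

def steady_state_current(tau_e, dt=1e-13, t_end=20e-9):
    """Integrate the carrier population to steady state; return photocurrent."""
    N = 0.0
    for _ in range(int(t_end / dt)):
        filling = 1.0 - N / N_tot                   # Pauli blocking factor
        absorbed = phi * (1.0 - np.exp(-alpha0 * filling * L))
        N += dt * (absorbed - N / tau_e - N / tau_r)
    return q * N / tau_e                            # extracted carriers -> A

# Faster escape (modelling higher reverse bias) increases the photocurrent:
for tau_e in (1e-9, 1e-10, 1e-11):
    print(f"tau_e = {tau_e:.0e} s -> I = {steady_state_current(tau_e):.3e} A")
```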
In the model, the carrier escape times of the ES and GS are related to the carrier capture times τ_c0 and τ_d0 in the way given in [34]; in such models the escape time equals the corresponding capture time scaled by a Boltzmann factor of the energy separation between the levels. During the calculation, the absorption coefficients of the ES and GS of the QDs are calculated with Eqs. (12)-(14), where Γ is the confinement factor of the QD active layer. All values of the parameters fixed in the rate equation model are summarized in Table 1. During the simulation, τ_qe is used to represent the different carrier extraction rates at different reverse-bias voltages; this is the only parameter that is adjusted to match the calculated responsivities to the measured ones. For TM polarization, the transition matrix element |P_σ^(ES,GS)|² is also adjusted to fit the simulations to the measurements, because only TE polarization was considered in the QD-SOA simulations of [23]. The rate equations are solved in the time domain and integrated until a steady state is reached [23]. During the simulation, an instantaneous carrier extraction from the SCH out of the QDs is assumed (τ_esc = 0.1 ps) due to the high electric field; all carriers escaping from the WL to the SCH are extracted instantly by this fast process. As long as τ_esc is shorter than 10 ps, its exact value does not significantly influence the simulation results. The escape rate from the WL to the SCH is made dependent on the reverse-bias voltage to represent the voltage dependency of the carrier extraction rate mentioned in Section 3.3. The optical confinement factors of the active region for TE and TM polarization are calculated over the whole wavelength range (1.4 µm to 1.8 µm) using an FDM mode solver (see Fig. 10(a)). The wavelength dependency of the internal modal loss α_i is also taken into account, as shown in Fig. 10(a); the loss at longer wavelengths is clearly much higher than at shorter wavelengths, due to the larger overlap of the optical mode with the highly doped p-type cladding layers.

Comparison with experimental results

The simulation is first performed for an optical input with fixed wavelength and fixed optical power in TE polarization, with the device length scanned from 200 µm to 2000 µm. The carrier escape rate from the WL to the SCH (1/τ_qe) is the only parameter that is adjusted to match the simulation to the measured data for each particular value of the reverse-bias voltage. The relation between 1/τ_qe and the reverse-bias voltage is shown in Fig. 10(b); the rate increases with increasing reverse-bias voltage. The simulated results for all devices are shown as dashed curves in Fig. 4(a) and match the measured data very well. For TM polarization, we use the same set of τ_qe values, since the carrier dynamics are not expected to change with the polarization state of the incident light; the optical confinement factors for TM polarization are used instead. The transition matrix elements are also adjusted for TM polarization, to represent the polarization dependency of the QDs. After fitting the simulated results to the measured data, the transition matrix elements decrease from 2.70·m_0·E_EG,GS for TE polarization (as given in Table 1) to 2.30·m_0·E_EG,GS. The lower transition matrix elements for TM polarization imply lower absorption coefficients (α_ES,TM and α_GS,TM) according to Eqs. (12) and (13). Simulations of the spectral behavior of the photodetectors have also been performed.
The spectral simulations for a 960 µm-long device under TE polarization are shown in Fig. 11(a); the situation is similar for TM polarization. For wavelengths longer than 1.6 µm the simulation matches the measured spectrum well, but for wavelengths below 1.6 µm there is a clear deviation between simulation and measurement. The reason for this deviation is that the absorption coefficient of the QDs in the shorter-wavelength region is underestimated. Fig. 11(b) shows the photon absorption α (Eq. (14)) of the device calculated for TE polarization: the calculated photon absorption in the shorter-wavelength region is lower than that in the longer-wavelength region, whereas the measured photon absorption in the shorter-wavelength region is higher (see Fig. 5(b)). It is possible that the photon absorption is underestimated because contributions from higher energy states in the QDs are excluded; such states would not show up in the ASE spectra under forward bias, but can play a role in absorption. Absorption in the WL was also not included in our model. The energy level of the WL corresponds to a wavelength of 1.47 µm [23] and is indicated as a blue dashed line in Fig. 11(a). This wavelength lies in the region where the clear deviation occurs; the exclusion of WL absorption is therefore the most likely reason for the deviation. According to our simulation, the photodetectors still provide high responsivities for wavelengths beyond 1.8 µm (e.g., 0.6 A/W at 2 µm and 3 V).

Equivalent circuit model

In this section, an equivalent electrical circuit model is applied to the QD waveguide photodetectors to analyze the factors that limit the bandwidth of the devices. The simulated bandwidth is matched to the experimental results in order to determine the capacitances in the circuit.

Equivalent circuit modeling

The QD waveguide photodetectors can be modeled by an equivalent circuit [35] as shown in Fig. 12. I_o(ω) is the photo-generated current source, in parallel with the junction capacitance C_pd and junction resistance R_d. A time-independent reverse-bias voltage V_R is applied to the photodetector. The device is in series with a series resistance R_s and then in parallel with the probe pad capacitance C_p. The photodetector is connected to the commercial photodiode amplifier module [27], which has an input capacitance (C_i = 7 pF) and an input load (R_L = 50 Ω). The series resistance R_s is estimated to be about 7.6 Ω using the method presented in [31]. The junction resistance R_d can be neglected when the dark current is very low (≪ 1 µA). C_pd and C_p are adjusted during the simulation to match the simulated bandwidth to the measured one. The frequency response of this circuit is the product of the frequency response H_1(ω) of the RC circuit, which can be written following [31] (the expression is not reproduced here), and the frequency response H_2(ω) of the amplifier module, which is measured directly with a network analyzer. The total 3 dB bandwidth of the photodetector then follows from the overall response H_1(ω)·H_2(ω).

Comparison with experimental results

The junction capacitance C_pd and the probe pad capacitance C_p are first determined for a device with a fixed length over a range of reverse-bias voltages.
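A minimal sketch of how such a fit can be evaluated (our own first-order form for H_1(ω): a single-pole RC response with all capacitances lumped at the load, which the full circuit of Fig. 12 refines; R_s, R_L and C_i are the values quoted above, while C_pd and C_p are placeholders of the kind extracted by the fit):

```python
import numpy as np

# First-order sketch of the equivalent-circuit bandwidth fit. All
# capacitances are lumped into a single pole (a simplification of Fig. 12).
# C_pd and C_p are placeholder fit values, not the paper's results.
R_s, R_L, C_i = 7.6, 50.0, 7e-12       # series R, load R, amplifier input C
C_pd, C_p = 5e-12, 46e-12              # junction and pad capacitance

def H1(f):
    """Single-pole RC response magnitude for the lumped capacitance."""
    C = C_pd + C_p + C_i
    return 1.0 / np.abs(1.0 + 2j * np.pi * f * (R_s + R_L) * C)

f = np.logspace(6, 9, 1000)            # 1 MHz .. 1 GHz
resp_db = 20 * np.log10(H1(f))
f_3db = f[np.argmin(np.abs(resp_db + 3.0))]
print(f"estimated 3 dB bandwidth: {f_3db / 1e6:.0f} MHz")
```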
C_pd should be inversely proportional to the applied voltage, since as the voltage increases the junction capacitance decreases due to the expansion of the depletion region; C_p, on the other hand, does not change with the voltage. Thus, by matching the simulated 3 dB bandwidth with the measured one, the value of C_p and the voltage dependency of C_pd can be determined. The frequency responses of the amplifier module, the equivalent circuit and the overall model for a 280 µm-long device under 2 V reverse bias are shown in Fig. 13(a). The determined values of C_pd and C_p at different reverse-bias voltages are shown in Fig. 13(b). As C_p does not change with voltage, it keeps a relatively high value (as high as 46 pF). This is mainly due to the large area of the metal contact used in our devices (220 µm wide and the same length as the device). It can also be seen that C_pd improves significantly as the reverse-bias voltage increases. The error bars on C_pd in Fig. 13(b) indicate the variability of C_pd when a ±5% relative deviation of the bandwidth values used in the calculation is considered. After determining the capacitances for a device of one length, the capacitances for devices of other lengths can be estimated easily, because both C_pd and C_p are linearly proportional to the device length. The relation between the 3 dB bandwidth and the device length can then be simulated based on the results obtained at 280 µm length and compared with the measured results. As can be seen in Fig. 14, the simulated bandwidths match the measured ones well. The figure shows that the length of the device strongly affects the 3 dB bandwidth. According to our model, the 3 dB bandwidth of the device is mainly limited by C_p; the bandwidth can thus be improved straightforwardly by optimizing the metallization of the photodetectors.

Conclusion

In this paper we have presented QD waveguide photodetectors and have shown that they meet the requirements for application in OCT in the 1.6 to 1.8 µm wavelength range. By choosing a relatively short device (280 µm) and applying a reverse-bias voltage of 3 V, these requirements can be met: a low dark current (~15 nA) and a flat spectral response (> 0.5 A/W over a 300 nm wavelength span) are achieved. The dark current is of the same magnitude as that of InGaAs detectors and much smaller than that of SWIR InGaAs detectors. The responsivity is also much higher than that of InGaAs-type photodetectors, and the flatness of the spectral response is advantageous compared to all other candidates (e.g., the InGaAs, SWIR InGaAs and Ge types). The rate equation model was applied to understand the carrier dynamics in the QD material. The model explains the absorption behavior of the QDs in the 1.6 to 1.8 µm wavelength range well and shows a good match to the experimental results for the length-dependent responsivities. High responsivities for wavelengths beyond 1.8 µm can still be expected according to this model. An equivalent circuit model was also applied; by matching the simulated bandwidths to the measured ones, the capacitances which dominate the bandwidth were estimated and analyzed. The device provides a 3 dB bandwidth of about 70 MHz, which approaches what is needed for the OCT application; according to the simulation, bandwidths well over 200 MHz should be achievable with optimized metallization. The QD waveguide photodetectors also show potential in other applications, such as near-infrared spectroscopy and gas sensing.
Several improvements can still be made. For instance, the photon absorption of the active material can be increased by using a new QD material with a higher dot density [36]. This will significantly shorten the required device and metal contact length and will thus help to improve the bandwidth. The layer stack of the photodetector can also be adjusted for less overlap between the optical mode and the highly doped InP contact layer; the resulting improvement in propagation loss will increase the maximum achievable responsivity, although it will reduce the performance of the layer stack when used as an optical amplifier. A spot-size converter might also be included to improve the coupling efficiency between the waveguide and the optical fiber.
Ocean color measurements with the Operational Land Imager on Landsat-8: implementation and evaluation in SeaDAS

Abstract. The Operational Land Imager (OLI) is a multispectral radiometer hosted on the recently launched Landsat-8 satellite. OLI includes a suite of relatively narrow spectral bands at 30 m spatial resolution in the visible to shortwave infrared, which makes it a potential tool for ocean color radiometry: measurement of the reflected spectral radiance upwelling from beneath the ocean surface that carries information on the biogeochemical constituents of the upper ocean euphotic zone. To evaluate the potential of OLI to measure ocean color, processing support was implemented in the Sea-viewing Wide Field-of-View Sensor (SeaWiFS) Data Analysis System (SeaDAS), an open-source software package distributed by NASA for the processing, analysis, and display of ocean remote sensing measurements from a variety of spaceborne multispectral radiometers. Here we describe the implementation of OLI processing capabilities within SeaDAS, including support for various methods of atmospheric correction to remove the effects of atmospheric scattering and absorption and retrieve the spectral remote sensing reflectance (Rrs; sr⁻¹). The quality of the retrieved Rrs imagery will be assessed, as will the derived water column constituents, such as the concentration of the phytoplankton pigment chlorophyll a.

Introduction

We define ocean color as the spectral distribution of reflected visible solar radiation upwelling from beneath the ocean surface. Variations in this water-leaving remote sensing reflectance distribution, Rrs(λ), the ratio of the radiance emerging from beneath the ocean surface to the solar irradiance reaching the ocean surface, are governed by the optically active biological and chemical constituents of the upper ocean through their absorption and scattering properties. A primary driver of variations in ocean color is the concentration of the phytoplankton pigment chlorophyll a (Ca; mg m⁻³), and bio-optical algorithms have been developed that relate measurements of Rrs(λ) to Ca and provide a proxy for phytoplankton biomass [1]. As marine phytoplankton account for roughly half the net primary productivity on Earth [2], ocean color measurements are critical to our understanding of planetary health and the global carbon cycle. Other bio-optical and biogeochemical properties that can be inferred from Rrs(λ) include spectral absorption by colored dissolved organic matter (CDOM), concentrations of total suspended sediments, measures of water clarity such as the marine diffuse attenuation coefficient and euphotic depth, and the presence of harmful algal blooms. Ocean color thus also provides a valuable tool for monitoring water quality and changes in the marine environment that can directly impact human health and commerce, especially in coastal areas and near lakes and inland waterways, where much of the human population resides. Landsat-8 was launched into a sun-synchronous polar orbit on February 11, 2013, carrying with it the Operational Land Imager [3] (OLI, Table 1). Prelaunch simulations based on radiometric performance specifications demonstrated that the sensor, while primarily designed for land applications, has the potential to provide useful measurements of aquatic environments, including the separation and quantification of Ca, CDOM, and suspended sediments in the water column [4,5].
A significant advantage of OLI over existing global ocean color capable missions is its 30 m spatial resolution, which is more than an order of magnitude finer than that of NASA's Moderate Resolution Imaging Spectroradiometer [6] (MODIS) currently operating on the Aqua spacecraft (MODISA) and of the Sea-viewing Wide Field-of-View Sensor [7] (SeaWiFS) that operated from 1997 to 2010. Increased spatial resolution is of particular benefit in studying heterogeneous coastal and inland waters, where the typical 1 km resolution of existing global sensors cannot resolve the fine spatial structure of the water constituents or separate water from land near coasts and in narrow rivers and bays. Thus, OLI on Landsat-8 has the potential to make a valuable contribution to ocean color science and to environmental monitoring capabilities for aquatic ecosystems, especially in coastal environments and inland waters. The measurement of ocean color from spaceborne instruments is challenging because the water-leaving signal is only a small fraction of the total signal reflected by the Earth into the sensor field of view: approximately 90% of the visible radiation observed by Earth-viewing satellite sensors is sunlight reflected by air molecules and aerosols in the atmosphere. The removal of this atmospheric signal to retrieve Rrs(λ) is referred to as atmospheric correction. NASA's Ocean Biology Processing Group distributes a software package called the SeaWiFS Data Analysis System (SeaDAS) [8] that provides the research community with a standardized tool for the production, display, and analysis of ocean color products from a host of Earth-viewing multispectral radiometers. SeaDAS contains within it the multi-sensor level 1 to level 2 generator (l2gen), which can read level 1 observed top-of-atmosphere (TOA) radiances from a variety of sensors, perform the atmospheric correction process, and retrieve Rrs(λ) and various derived geophysical properties. The l2gen code can be adapted to work with any sensor that has a sufficient set of spectral bands covering the blue to green region of the visible spectrum (i.e., 400 to 600 nm), with at least two bands in the near-infrared (NIR) to shortwave IR (SWIR) to support the atmospheric correction. OLI has a sufficient set of spectral bands for ocean color retrievals (Table 2). Precise atmospheric correction also requires that the radiometric performance (signal relative to noise) and the digital resolution (number of bits available to encode the observed radiance) are sufficiently high to detect the relatively small water-leaving radiance signal above the sensor noise. In the sections that follow, we assess the radiometric performance of the OLI instrument for ocean color applications and detail the adaptation of l2gen in SeaDAS to support OLI atmospheric correction. We also present the results of an initial system-level vicarious calibration, where match-ups to in situ radiometry are used to refine the Rrs(λ) retrieval performance of the combined OLI instrument and atmospheric correction process. Finally, we show some results of ocean color retrieval over the coastal and inland waters of Chesapeake Bay and compare them with coincident MODISA retrievals and in situ measurements.

Data and Sensor Characteristics

OLI data are freely available for direct download or bulk ordering from the website of the U.S. Geological Survey (USGS), which operates the Landsat-8 mission.
The observed TOA radiances (Level-1T) are provided in GeoTIFF format, with each spectral band in a separate file that has been mapped to a common Universal Transverse Mercator (UTM) projection. The full suite of spectral band files is packaged into a compressed tape archive (tar) file that also includes a Landsat Metadata (MTL) file in text form containing scene-specific time and location information. OLI is a push-broom design with 14 separate detector assemblies aligned across the orbit track to create a swath of ∼185 km width, or ∼7000 pixels. The 14 detector assemblies alternate between slightly forward-pointing and slightly aft-pointing, and the spectral bands are aligned along track such that the amount of forward and aft pointing varies by band [9]. The effect is that each spectral band views the same point on the Earth at a slightly different time and with a slightly different atmospheric path (characterized by sensor zenith and azimuth angles), and the path angles alternate fore and aft across the swath. Cross-track variations in geolocation and spectral band registration are effectively removed by the mapping of the native observations to a common UTM projection in the Level-1T product, but the TOA radiances retain characteristics of their observational geometry that must be considered in the ocean color retrieval. Accurate atmospheric correction over the comparatively dark ocean requires precise knowledge of the solar and viewing path geometry. For the bands of interest to ocean color, the spectral variation in view zenith and azimuth angle is on the order of 0.2 deg and can be safely ignored, but the mean variation in solar and view zenith and azimuth across the swath must be known, and the fore and aft variation between detector assemblies must be accounted for, or significant along-track banding artifacts due to atmospheric correction error will be evident in the Rrs(λ) retrievals. The Level-1T data product does not include this additional geometry information, but USGS has developed software to estimate it for each scene pixel and each sensor band from information contained in the scene MTL file. This USGS software has been incorporated into SeaDAS/l2gen to automatically produce band-averaged solar and viewing geometry sufficient for ocean color retrieval. Accurate atmospheric correction and Rrs(λ) retrieval also require a sensor with a sufficient signal-to-noise ratio (SNR) over ocean waters to detect and differentiate the water-leaving signal. For relatively clear, low-productivity waters typical of the open oceans, an inadequate SNR will contribute a large relative error to Rrs(λ) in the green and red spectral range, where pure water absorption and minimal particle scattering contributions produce a comparatively small water-leaving reflectance. In contrast, turbid coastal and inland waters with high sediment loads present less of a challenge in the green and red due to the high particle scattering contributions, but for these waters a low SNR will often lead to a high relative error in Rrs(443) retrievals, as high absorption by Ca and CDOM depresses the water-leaving signal in the blue. Finally, depending on the atmospheric correction approach, low SNR in the NIR or SWIR channels can contribute noise and systematic bias across the visible spectral range, due to error in estimating the aerosol contribution to the observed signal [10].
OLI is the most advanced radiometer ever flown on a Landsat platform, with SNRs roughly an order of magnitude higher than those of the predecessor Enhanced Thematic Mapper Plus (ETM+) instrument on Landsat-7, and 12-bit rather than 8-bit digital resolution [11]. The SNRs reported for OLI and ETM+, however, are based on radiances typical of land observations. To assess potential OLI performance over oceans, the sensor noise model [11] was applied to typical radiances (Ltyp) interpolated from the values reported in Ref. [10], which were themselves based on average radiances observed by MODISA over ocean targets at ∼45 deg solar zenith angle. Using these Ltyps, the derived OLI SNRs (Table 2) can be directly compared to those of SeaWiFS and MODISA, as also reported in Ref. [10]. In general, the OLI SNRs are lower than those of SeaWiFS or MODISA, but the visible-band SNRs are within 50% of SeaWiFS (specified or observed), and the OLI SWIR-band SNRs are similarly close to those of the comparable MODISA SWIR bands. The biggest discrepancy is at 865 nm, where the OLI SNR of 67 is substantially lower than the SeaWiFS specification (287), but is still within a factor of 3 of the observed SeaWiFS SNR [10]. It should also be recognized that OLI observations are at a much higher spatial resolution than those of SeaWiFS or MODIS, and spatial averaging over a few pixels could significantly increase the SNRs, as demonstrated in Ref. [5]. Similar SNR results have been previously reported by Pahlevan et al. [12], based on statistical analysis of uniform OLI scenes over oceans. Given the SNR equivalency with successful heritage ocean color sensors, the OLI radiometric performance appears sufficient for many ocean color applications. For context, the SNRs of ETM+ are also reported in Table 2 for equivalent spectral bands, using the same Ltyps and the high-gain noise model developed for that instrument [13]. The results confirm the significant advancement of OLI radiometric performance over ETM+. The numbers also suggest that the NIR and SWIR bands of ETM+ are effectively unusable for the atmospheric correction approach proposed here for OLI, as the noise in the NIR channel is equivalent to 10% of the signal and the noise in the longest SWIR spectral band actually exceeds the typical radiance over oceans.
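As an illustration of the spatial-averaging point above (our own arithmetic, not a result from the paper): averaging N statistically independent pixels improves the SNR by roughly √N, so aggregating 4 × 4 native 30 m pixels, which still yields 120 m resolution, far finer than the ∼1 km of heritage ocean color sensors, would raise the 865 nm SNR from 67 to about 67 · 4 = 268, close to the SeaWiFS specification of 287.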
Processing Approach

Atmospheric Correction Algorithm

While l2gen supports a variety of atmospheric correction methods and variations [14-18], many of which can be applied to OLI, the default approach described here follows the NASA standard processing in use for all global ocean color missions. Namely, the TOA radiance over water, Lt(λ), is modeled as the sum of atmospheric, surface, and subsurface contributions:

Lt(λ) = Lr(λ) + La(λ) + t(λ)Lf(λ) + T(λ)Lg(λ) + t(λ)Lw(λ),   (1)

where λ is a sensor spectral band wavelength, Lr(λ) is the multiple scattering by air molecules in the absence of aerosols (Rayleigh scattering), La(λ) includes the multiple scattering from aerosols in the absence of Rayleigh scattering as well as Rayleigh-aerosol interactions, t(λ) and T(λ) are the diffuse and direct atmospheric transmittances from surface to sensor, Lf(λ) is the contribution from whitecaps and foam on the surface that is diffusely transmitted to the TOA, Lg(λ) is the specular reflection (glint) from the surface that is directly transmitted into the sensor field of view, and Lw(λ) is the water-leaving radiance that is diffusely transmitted to the TOA. All terms depend on the viewing and solar path geometries. Gaseous transmittance terms are not shown for clarity, but atmospheric transmittance losses due to ozone and NO2 are also considered. The TOA radiances collected by OLI are measured over the full spectral band-pass of each sensor band; thus all terms on the right-hand side of Eq. (1) must be modeled or derived for the sensor-specific spectral response functions (SRFs). The OLI spectral response functions were obtained from Ref. [19]. The Rayleigh scattering term, which is the dominant contribution over the visible spectral regime, is determined from precomputed look-up tables (LUT) of Rayleigh reflectance that were derived through vector radiative transfer simulations spanning a wide range of realistic solar and viewing geometries [14]. The OLI SRFs were used to derive band-pass-integrated solar irradiances (F0), Rayleigh optical thicknesses (τr) and depolarization factors (Dp, Table 3), where the solar irradiance is taken from Ref. [20] and the hyperspectral Rayleigh optical thickness was computed using the model of Bodhaine et al. [21], assuming a standard pressure of 1013.25 mb, a temperature of 288.15 K, and a CO2 concentration of 360 ppm. The band-integrated optical thicknesses and depolarization factors were then used in the radiative transfer simulations to derive the OLI sensor-specific Rayleigh reflectance tables for a wind-roughened ocean surface, including the effects of multiple scattering and polarization. In application, the Rayleigh reflectances retrieved from the LUT are selected based on geometry and wind speed and then adjusted to account for changes in surface pressure [22], including the effect of terrain height as needed to support retrievals over inland lakes and rivers. The glint [23] and whitecap [24,25] contributions are modeled from knowledge of the environmental conditions (pressure, wind speed) and the sensor SRFs. The primary unknowns in Eq. (1) are the water-leaving radiances that we wish to retrieve and the aerosol radiance, which is highly variable and must be inferred from the observations. The estimation of the aerosol radiance follows the method of Gordon and Wang [15], with updated aerosol models and the selection approach described in Ref. [14].
This approach uses a pair of bands in the NIR or SWIR, where water is highly absorbing, thus water-leaving radiance is negligible or can be accurately estimated, 16 allowing aerosol radiance to be directly retrieved. The spectral slope in measured aerosol radiance between the two NIR-SWIR bands is used to select the aerosol type from a set of precomputed aerosol models, 14 where the aerosol models were derived from vector radiative transfer simulations specific to the OLI spectral band centers (Table 2), and include effects of multiple scattering by aerosols as well as Rayleigh-aerosol interactions. The retrieved aerosol model is then used to extrapolate the measured aerosol radiance into the visible spectral regime.

In practice, any pair of bands can be used in the aerosol model selection process, with the only requirement being that the water-leaving radiance signal can be considered negligible or known. Using our initial SeaDAS implementation, Vanhellemont et al. 26 explored several combinations of OLI bands 5, 6, and 7 in the NIR and SWIR with comparable results. For this analysis, we chose to use the combination of OLI bands 5 and 7 (865 and 2201 nm, respectively), with any non-negligible water-leaving radiance derived using the iterative bio-optical modeling approach of Bailey et al. 16 This choice takes advantage of the longest SWIR wavelength, where water absorption is strongest, to help separate the radiometric contribution of in-water sediments from aerosol contributions, while using the higher SNR and spectral separation of the NIR channel to determine aerosol type.

With La(λ) known at all spectral bands, the water-leaving radiance can be computed as in Eq. (2) and then normalized to derive the water-leaving reflectance as in Eqs. (3) and (4):

t(λ)Lw(λ) = Lt(λ) − [Lr(λ) + La(λ) + t(λ)Lf(λ) + T(λ)Lg(λ)], (2)

Rrs(λ) = B(λ)Lw(λ)/Ed(λ), (3)

Ed(λ) = F0(λ)f0(λ)cos(θ0)t0(λ), (4)

where Ed(λ) is the down-welling solar irradiance just above the sea surface, t0(λ) is the atmospheric diffuse transmittance from Sun to surface, F0(λ) is the mean extraterrestrial solar irradiance averaged over the OLI SRF, f0(λ) is the Earth-Sun distance correction for the time of the observation, and θ0 is the solar zenith angle. Finally, B(λ) is a bidirectional reflectance correction to account for effects of inhomogeneity of the subsurface light field and reflection and refraction through the air-sea and sea-air interface. 27

To remove the effect of sensor-specific spectral response from Rrs(λ), the full-band-pass water-leaving radiances are adjusted to those for square 11-nm band-passes 28 located at the nominal band centers (Table 2) using the model of Werdell et al. 29 These nominal-band water-leaving radiances are then converted to Rrs(λ) using nominal-center-band mean solar irradiances. 30
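A minimal sketch of this retrieval step, assuming every modeled term of Eq. (1) is already available per band (the function and argument names are illustrative and do not reflect the l2gen API, and Eqs. (3) and (4) are used in the reconstructed form given above):

```python
import numpy as np

def water_leaving_reflectance(Lt, Lr, La, Lf, Lg, t, T, t0, F0, f0, cos_theta0, B):
    """Sketch of Eqs. (2)-(4): invert the TOA radiance budget for the
    water-leaving radiance Lw, then normalize to remote-sensing reflectance.
    All arguments are per-band NumPy arrays (or scalars); the names follow
    the symbols in the text."""
    Lw = (Lt - (Lr + La + t * Lf + T * Lg)) / t   # Eq. (2)
    Ed = F0 * f0 * cos_theta0 * t0                # Eq. (4): downwelling irradiance above the surface
    return B * Lw / Ed                            # Eq. (3): bidirectionally corrected Rrs
```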
Bio-Optical Algorithm
The retrieved Rrs(λ) at each visible sensor wavelength provide the basis for many derived geophysical product algorithms. The standard NASA algorithm for Ca is a three-band empirical Rrs(λ) band-ratio algorithm (OC3) 1 that transitions to an empirical band-difference algorithm (OCI) 31 in clear waters. For OLI, the empirical coefficients were tuned using the NASA Bio-Optical Marine Algorithm Dataset (NOMAD) 32 to adjust for the difference in center wavelengths relative to past sensors. The Ca algorithm uses the 443-, 561-, and 655-nm bands for the band difference and the 443-, 482-, and 561-nm bands for the band ratio. It should be noted that NOMAD is the same dataset used to tune the MODIS and SeaWiFS Ca algorithms, and that no OLI data or coincident in situ measurements were used in the algorithm development.

Vicarious Calibration
Given the stringent accuracy requirements of satellite ocean color retrievals for both the instrument calibration and the atmospheric correction algorithm, an additional vicarious calibration was derived. This temporally independent but wavelength-specific calibration minimizes residual bias and enhances spectral consistency of the sensor + algorithm system under idealized conditions. 33 The primary vicarious calibration source for all NASA ocean color missions is the marine optical buoy (MOBY) 34 near Lanai, Hawaii, which has been continuously operated by NOAA since 1996. A time-series of all OLI scenes covering the Lanai region was collected and filtered to find cases of relatively clear, cloud-free atmospheric conditions and negligible Sun glint. The full screening and averaging process is detailed in Ref. 33. Two scenes were found to pass all screening criteria (Fig. 1), and vicarious calibration gains were derived for each (Table 4). For this initial evaluation, the calibration of the atmospheric correction bands at 865 and 2201 nm was not altered.

The change in color between the two images in Fig. 1, which were collected about one month apart, is due to the difference in solar geometry and a change in the aerosol conditions. Notably, the vicarious calibration was highly consistent between the two scenes, suggesting that the atmospheric modeling compensated well for the changes observed between the two dates. The average vicarious gain in each band (Table 4) was implemented for all subsequent processing.

Results and Discussion
The atmospheric correction approach discussed above and the vicarious calibration from Table 4 were applied to a series of OLI Level-1T scenes collected over the Chesapeake Bay region. Rrs(443) and Rrs(561) retrievals from a partial OLI scene on September 5, 2013, focusing on the mouth of the Bay from Cape Charles to Virginia Beach and the inlets of the James, York, and Rappahannock Rivers, show good agreement with coincident Rrs(443) and Rrs(547) retrievals from MODISA (Fig. 2). The MODISA data were collected on the same day and processed with the same atmospheric correction approach, but using the sensor-specific spectral response functions and a sensor-specific vicarious calibration. Also evident in this comparison is the enhanced information content that 30 m spatial resolution provides relative to the >1 km resolution of MODIS, allowing observations closer to the coasts and further into rivers and bays, and better resolving the spatial variability of optically active constituents within the water bodies.
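As context for the Ca comparisons that follow, here is a minimal sketch of the OC3-style band-ratio form described in the bio-optical algorithm above; the coefficients are placeholders, since the operational values are tuned on NOMAD for the OLI band centers:

```python
import numpy as np

# Placeholder coefficients for illustration only; the operational values are
# tuned on NOMAD for the OLI band centers and are not reproduced here.
A = [0.25, -2.4, 1.5, -0.4, -0.5]

def oc3_chlorophyll(rrs443, rrs482, rrs561):
    """Sketch of a three-band OC3-style maximum-band-ratio retrieval of Ca.
    The clear-water OCI band-difference branch is omitted for brevity."""
    ratio = np.log10(np.maximum(rrs443, rrs482) / rrs561)
    log_ca = sum(a * ratio**i for i, a in enumerate(A))
    return 10.0 ** log_ca   # chlorophyll a concentration, mg m^-3
```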
The Rrs(λ) retrievals from OLI were applied to the empirical Ca algorithm and compared with the equivalent product from MODISA (Fig. 3). In general, OLI Ca retrievals for this day over the main stem of the lower Chesapeake Bay region are lower than those retrieved from MODISA. Assuming MODISA is correct, this would suggest that the Rrs(443) or Rrs(482) retrievals are too high relative to Rrs(561), i.e., the spectral dependence is biased toward the blue, which may be due to uncertainty in the vicarious calibration or error in the aerosol retrieval. Unfortunately, there is also considerable uncertainty in the MODISA instrument calibration in the latter period of the mission, 35 so interpretation of this result as error in the OLI retrieval should be made with caution.

The spatial detail of the OLI ocean color retrievals is well illustrated in Fig. 4, where the red, green, and blue Rrs(λ) products at 655, 561, and 443 nm, respectively, have been combined into a quasi true-color image. This image, collected on February 28, 2014, shows striking detail of the presence of suspended sediments and other optically active biogeochemical constituents around coastal landforms and where rivers enter the Bay. Sediment plumes, for example, are clearly evident offshore of the Potomac and Rappahannock Rivers, despite February 2014 being an average year with regard to streamflow 36 and free of any notable winter storms. The barrier islands between Hills Bay and Winter Harbor (between the Rappahannock and York Rivers) show significant suspended sediment loads, likely either from advective oceanward transport from the Rappahannock River or from wind-driven resuspension. Likewise, some of the shallowest areas of Chesapeake Bay, e.g., east of Smith and Tangier Islands, show substantial (re)suspended sediment loads. The high spatial resolution and relatively high SNR of OLI make it possible to resolve the spatial structure of these estuarine features.

To further demonstrate the advantage that OLI spatial resolution provides over MODIS, Fig. 5 shows the same Rrs(λ) composite zoomed in to the mouth of the Potomac River, with MODISA scan-pixel boundaries for the same day overlaid. The OLI images show fine detail in ocean color that cannot be resolved by the larger MODIS pixels. OLI, thus, provides an unprecedented opportunity to directly observe this MODIS subpixel variability in suspended sediments and organic material, which can provide valuable insight into uncertainties in MODIS ocean color retrievals 37 and improved understanding of differences observed in validation matchups to localized in situ measurements.

Rrs(λ) composite images and Ca retrievals were generated for five scenes obtained over Chesapeake Bay between September 2013 and April 2014 (Fig. 6).

Fig. 2 Images of water-leaving reflectances, Rrs, for OLI bands at (a) 443 nm and (b) 561 nm, retrieved over Chesapeake Bay on 5 September 2013, with MODIS Aqua retrievals at (c) 443 nm and (d) 547 nm shown for comparison. The MODIS data were collected on the same day, about 3 h later, and Rrs(λ) was retrieved using standard NASA ocean color processing in SeaDAS.

Fig. 3 Images of chlorophyll a concentration retrieved from OLI and MODIS Aqua over Chesapeake Bay on September 5, 2013. The MODIS data were collected on the same day, about 3 h later. The chlorophyll a concentration was retrieved using standard NASA ocean color processing in SeaDAS.
Fig. 4 Three-band water-leaving reflectance, Rrs(λ), composite image over the mouth of Chesapeake Bay showing detailed distribution patterns of sediments and colored organic matter that can be retrieved from OLI using standard NASA ocean color processing in SeaDAS. The composite was generated using the red, green, and blue reflectances at 655, 561, and 443 nm, respectively.

The general similarity of the ocean color images suggests good temporal stability of the OLI calibration and good performance of the atmospheric correction algorithm over a wide range of solar geometries. These five scenes represent all relatively cloud-free, glint-free scenes of Chesapeake Bay currently available from OLI; thus, Fig. 6 provides an indication of the frequency at which a mid-latitude location may be monitored with OLI, considering cloudy days and the 16-day repeat cycle of Landsat-8. Due to the solar and viewing path geometry, OLI observations between late spring and early fall over mid-latitude oceans of the northern hemisphere are heavily contaminated by specular reflection of the Sun by the sea surface. Our algorithm attempts to remove this Sun glint contribution, 23 but residual error in high glint conditions (glint-favorable geometries) can dominate the subsurface signal and substantially degrade Rrs(λ) retrieval quality, or cause the retrieval process to fail completely due to contamination of the aerosol selection bands.

For a more quantitative assessment of OLI ocean color retrieval performance, the distributions of Ca and Rrs(λ) over Chesapeake Bay were compared to same-day retrievals from MODISA. Following Werdell et al., 38 data from the five scenes of Fig. 6 were geographically stratified into lower and middle Bay regions to produce the regional frequency distributions of Fig. 7. Also shown is the mean distribution from SeaWiFS over the mission lifespan (1997 to 2010), to provide additional context on the expected range of values. Results show relatively good agreement between OLI and MODISA Rrs(λ) distributions, especially in the green [i.e., Rrs(561) of OLI compared with Rrs(547) of MODISA], and both sensors are in good agreement with the SeaWiFS mission mean. Agreement is not quite as good for the bluest band, with OLI Rrs(443) being elevated relative to MODISA Rrs(443). This gives rise to a larger blue/green ratio from OLI, and thus lower Ca retrievals relative to MODISA, as previously presented in Fig. 3. These differences may simply be the result of uncertainty in the OLI vicarious calibration derived from just two MOBY measurements, or the as yet uncorrected atmospheric correction bands, but the discrepancy may also arise from error in MODISA retrievals due to degradation in temporal calibration, 35 or possibly contamination by stray light, where the high contrast in the NIR between dark water and adjacent bright land can lead to overestimation of aerosol contributions and, thus, underestimation of Rrs(443) for MODISA. 39 Instrumental stray light contamination has been found to be minimal in OLI. 11 The Ca values from OLI and MODISA in the middle Bay do straddle the range of values measured over previous years by SeaWiFS, and the OLI retrievals of Ca in the lower Bay are in very good agreement with expectation based on historical SeaWiFS retrievals.
Also shown in Fig. 7 is the distribution of in situ Ca measurements collected at regular spatial and temporal sampling intervals over a 26-year period from 1984 to 2010, 40 showing that these OLI Ca retrievals fall within the range of values expected from historical field observations.

Fig. 6 OLI true-color images, Rrs(λ) composite images, and chlorophyll a retrievals from all available clear scenes over Chesapeake Bay.

Conclusions
While there is a long history of efforts to utilize earlier Landsat missions and associated sensors, such as ETM+ on Landsat-7, for water quality assessment of coastal and inland waters, 41,42 the comparatively poor radiometric performance (demonstrated here by the much lower SNR of ETM+ relative to OLI) has largely restricted these efforts to retrieval of suspended sediments only, where high backscatter provides a sufficiently robust signal to compensate for sensor noise and digitization error. These past efforts also generally relied on minimal atmospheric correction or simplifying assumptions, such as attributing the Rayleigh-subtracted reflectance in the SWIR to glint + aerosol reflectance and removing this from the visible bands by assuming a flat spectral dependence. 42 Based on an analysis of the sensor signal to noise for typical ocean radiances, and comparison with successful heritage ocean color sensors, we conclude that OLI has the requisite spectral bands and sufficient radiometric performance to support the standard atmospheric correction approach used for NASA's global ocean color missions, including determination and removal of aerosol contributions based on realistic aerosol models and OLI observations in the NIR and SWIR, and to enable the quantitative retrieval of water column constituents from the derived spectral water-leaving reflectance distributions.

NASA's standard atmospheric correction and ocean color retrieval algorithms, as originally developed for SeaWiFS and MODIS, were modified to support the OLI data format and sensor spectral characteristics, and an initial vicarious calibration was performed. Evaluation of OLI ocean color retrievals over a time-series of Chesapeake Bay scenes demonstrated relatively good agreement with other ocean color sensors and with historical field measurements in the region. The observed agreement may further improve as the instrument temporal calibration is refined and additional MOBY measurements are incorporated to reduce uncertainty in the OLI vicarious calibration, but these initial results demonstrate that OLI can be a valuable tool for ocean color science and environmental monitoring applications.
We showed that a primary advantage of OLI over heritage global ocean color imagers, such as MODIS, is the much higher spatial resolution that allows resolving the fine-scale distribution of suspended sediments and bio-optical water constituents in coastal and estuarine environments. A limitation of OLI for routine observation and monitoring of these dynamic regions is the narrow swath and relatively infrequent 16-day repeat cycle, coupled with observational losses due to cloud cover and the confounding effects of Sun glint. Use of high spatial resolution OLI observations in combination with the more frequent (one-to-two day) repeat cycle of existing wide-swath, moderate-resolution global imagers can, thus, provide complementary observations to better understand and monitor spatial and temporal ecosystem dynamics in nearshore environments, as well as for understanding the inherent uncertainty in moderate-resolution sensors due to unresolved subpixel variability.

The higher spatial resolution of OLI does lead to additional challenges relative to moderate-resolution sensors. For example, our algorithm for Sun glint correction is based on a statistical model developed by Cox and Munk, 43 which provides the probability distribution function for surface facets being oriented in the specular direction, parameterized as a function of wind speed (a sketch of the slope statistics is given at the end of this section). The validity of this statistical relationship degrades as resolution increases and individual wave orientations are resolved, as is the case for OLI. Alternative methods based on observed radiometry have been investigated for higher spatial resolution sensors (see Ref. 44 for a review), and we suggest that development of an improved glint correction approach for OLI should be a focus of future work.

Another challenge is the atmospheric adjacency effect. 46,47 The adjacency effect impacts both moderate- and higher-spatial-resolution ocean color observations, as the influence of land reflectance on water observations can extend over 20 km from the coast, 46 but the impact increases with proximity to the bright reflecting source; it is thus a still greater concern for the higher-resolution OLI observations that extend closer to the land/water boundary, and particularly for observations in narrow rivers and small lakes that are surrounded by land. More work is needed to assess the impact of atmospheric adjacency and develop a viable correction strategy to mitigate this effect.

Support for atmospheric correction and ocean color product retrieval from OLI has now been incorporated into l2gen, a component of NASA's open-source SeaDAS software package that is made freely available to the research and applications community for the processing, visualization, and analysis of satellite radiometry from a host of ocean color capable sensors. In addition to the processing algorithms described here, SeaDAS provides many alternative methods and variations that are applicable to OLI, as well as a wide range of derived product algorithms beyond Ca. For atmospheric correction, for example, use of alternate band pairs for aerosol selection, fixed aerosol type based on in situ knowledge, and alternate methods for resolving the water-leaving radiance contribution in the NIR to SWIR can now be evaluated. 17,18 Additional products that can be derived from the retrieved Rrs(λ) within SeaDAS include inherent optical properties (e.g., the absorption coefficients of phytoplankton and CDOM and the particle backscattering coefficient) using various inversion models [48][49][50] and measures of water clarity, such as marine diffuse attenuation and euphotic depth. 51 With OLI support now in SeaDAS (version 7.2), these and other capabilities can now be applied to further explore the potential of the sensor for ocean color science and aquatic ecosystem monitoring applications.
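For reference, a minimal sketch of the Cox-Munk surface-slope statistics underlying the glint correction discussed above (isotropic form of the classic clean-surface fit; the full model also includes upwind/crosswind anisotropy):

```python
import math

def cox_munk_slope_variance(wind_speed_ms: float) -> float:
    """Isotropic mean-square sea-surface slope as a linear function of wind
    speed (classic Cox-Munk clean-surface fit)."""
    return 0.003 + 5.12e-3 * wind_speed_ms

def facet_slope_pdf(zx: float, zy: float, wind_speed_ms: float) -> float:
    """Probability density of a surface facet having slope components
    (zx, zy), assuming an isotropic Gaussian slope distribution; glint
    radiance is proportional to this density evaluated at the slope that
    reflects the Sun into the sensor."""
    s2 = cox_munk_slope_variance(wind_speed_ms)
    return math.exp(-(zx**2 + zy**2) / s2) / (math.pi * s2)
```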
Fig. 1 Operational Land Imager (OLI) images over Lanai, Hawaii, on (a) January 9, 2014, and (b) February 10, 2014, showing the location of the NOAA marine optical buoy (MOBY). Colocated data from MOBY and OLI on these dates were used in the OLI vicarious calibration.

Fig. 5 Three-band water-leaving reflectance composite image from OLI at the location where the Potomac River enters Chesapeake Bay. MODIS Aqua scan pixel boundaries for the same date are overlaid to demonstrate the subpixel variability revealed by the higher spatial resolution of OLI.

Fig. 7 Comparison of OLI, MODIS Aqua, and Sea-viewing Wide Field-of-View Sensor (SeaWiFS) chlorophyll a and Rrs(λ) retrieval distributions in the middle and lower Chesapeake Bay, following Ref. 38. OLI (red) and MODIS (blue) data were collected over the same five dates shown in Fig. 6. SeaWiFS data (gray shaded) show the average over the mission lifetime (1997 to 2010). In situ chlorophyll a measurements shown in black were collected within the same region at regular spatial and temporal sampling intervals over the period from 1984 to 2010 (Chesapeake Bay Program 40).
Enriching Word Embeddings with Temporal and Spatial Information
The meaning of a word is closely linked to sociocultural factors that can change over time and location, resulting in corresponding meaning changes. Taking a global view of words and their meanings in a widely used language, such as English, may require us to capture more refined semantics for use in time-specific or location-aware situations, such as the study of cultural trends or language use. However, popular vector representations for words do not adequately include temporal or spatial information. In this work, we present a model for learning word representations conditioned on time and location. In addition to capturing meaning changes over time and location, we require that the resulting word embeddings retain salient semantic and geometric properties. We train our model on time- and location-stamped corpora, and show using both quantitative and qualitative evaluations that it can capture semantics across time and locations. We note that our model compares favorably with the state-of-the-art for time-specific embeddings, and serves as a new benchmark for location-specific embeddings.

Introduction
The use of word embeddings as a form of lexical representation has transformed the use of natural language processing for many applications such as machine translation (Qi et al., 2018) and language understanding (Peters et al., 2018). The changing of word meaning over the course of time and space, termed semantic drift, has been the subject of long-standing research in diachronic linguistics (Ullmann, 1979; Blank, 1999). Additionally, the emergence of distinct geographically qualified English varieties (e.g., South African English) has given rise to salient lexical variation, giving several English words different meanings depending on the geographic location of their use, as documented in studies on World Englishes (Kachru et al., 2006; Mesthrie and Bhatt, 2008). Considering the multiplicity of meanings that a word can take over the span of time and space, owing to inevitable linguistic and sociocultural factors among others, a static representation of a word as a single word embedding seems rather limited. Take the word apple as an example. Its early to near-recent mentions in written documents referred only to a fruit, but in recent times it is also the name of a large technology company. Another example is the title for the head of government, which is "president" in the USA and "prime minister" in Canada. Naturally, we expect that one word should have different representations conditioned on the time or location. In this paper, we study how word embeddings can be enriched to encode semantic drift in time and space. Extending a recent line of research on time-specific embeddings, including the works by Bamler and Mandt and Yao et al., we propose a model to capture varying lexical semantics across different conditions of time and location. A key technical challenge of learning conditioned embeddings is to put the embeddings (derived from different time periods or geographical locations) in the same vector space and preserve their geometry within and across different instances of the conditions. Traditional approaches involve a two-step mechanism of first learning the sets of embeddings separately under the different conditions, and then aligning them via appropriate transformations (Kulkarni et al., 2015; Hamilton et al., 2016; Zhang et al., 2016).
A primary limitation of these methods is their inadequate representation of word semantics, as we show in our comparative evaluation. Another approach to conditioned embeddings uses a loss function with regularizers over word embeddings across conditions to enforce their smooth trajectory in the vector space (Yao et al., 2018). However, its scope is limited to modeling semantic drift over time only. We propose a model for general conditioned embeddings, with the novelty that it explicitly preserves embedding geometry under different conditions and captures different degrees of word semantic change. We summarize our contributions below.
1. We propose an unsupervised model to learn condition-specific embeddings, including time-specific and location-specific embeddings;
2. Using benchmark datasets, we demonstrate the state-of-the-art performance of the proposed model in accurately capturing word semantics across time periods and geographical regions;
3. We provide the first dataset 1 to evaluate word embeddings across locations, to foster research in this direction.

Related Work
Time-specific embeddings. The evolution of word meaning with time has been a widely studied problem in sociolinguistics (Ullmann, 1979; Tang, 2018). Early computational approaches to uncovering these trends relied on frequency-based models, which used frequency changes to trace semantic shift over time (Lijffijt et al., 2012; Choi and Varian, 2012; Michel et al., 2011). More recent works have sought to study these phenomena using distributional models (Kutuzov et al., 2018; Huang and Paul, 2019; Schlechtweg et al., 2020). Recent approaches to time-specific embeddings can be divided into three broad categories: aligning independently trained embeddings across time, joint training of time-dependent embeddings, and using contextualized vectors from pre-trained models. Approaches of the first kind include the works by Kulkarni et al., Hamilton et al., and Zhang et al. They rely on pre-training multiple sets of embeddings for different times independently, and then aligning one set of embeddings with another so that the two sets are comparable. The second approach, joint training, aims to guarantee the alignment of embeddings in the same vector space so that they are directly comparable. Compared with the previous category of approaches, the joint learning of time-stamped embeddings has shown improved abilities to capture semantic changes across time. Bamler and Mandt used a probabilistic model to learn time-specific embeddings (Bamler and Mandt, 2017). They make a parametric assumption (Gaussian) on the evolution of embeddings to guarantee the embedding alignment. Yao et al. learned embeddings by the factorization of a positive pointwise mutual information (PPMI) matrix. They imposed L2 constraints on embeddings from neighboring time periods for embedding alignment (Yao et al., 2018). Rosenfeld and Erk proposed a neural model to first encode time and word information respectively and then learn time-specific embeddings (Rosenfeld and Erk, 2018). Dubossarsky et al. aligned word embeddings by sharing their context embeddings at different times (Dubossarsky et al., 2019). Some recent works fall in the third category, retrieving contextualized representations from pre-trained models such as BERT (Devlin et al., 2018) as time-specific sense embeddings of words (Hu et al., 2019; Giulianelli et al., 2020).
These pretrained embeddings are limited to the scope of local contexts, while we learn the global representation of words in a given time or location. The underlying mathematical models of these previous works on temporal embeddings are discussed in the supplementary material. Our model belongs to the second category of joint embedding training. Different from previous works, our embedding is based on a model that explicitly takes into account the important semantic properties of time-specific embeddings.

Embedding with spatial information. Lexical semantics is also sensitive to spatial factors. For example, the word denoting the head of government of a nation may range from president to prime minister or king depending on the region. Language variation across regional contexts has been analyzed in sociolinguistics and dialectology studies (e.g., Silva-Corvalán, 2006; Kulkarni et al., 2016). It is also understood that a deeper understanding of semantics enhanced with location information is critical to location-sensitive applications such as content localization for global search engines (Brandon Jr, 2001). Approaches in this direction have included a latent variable model proposed for geographical linguistic variation (Eisenstein et al., 2010) and a skip-gram model for geographically situated language (Bamman et al., 2014). The current study is most similar to Bamman et al. (2014), with the overlap in our intent to learn location-specific embeddings for measuring semantic drift. Most studies on location-dependent language resort to a qualitative evaluation, whereas Bamman et al. (2014) provide a quantitative analysis of entity similarity. However, it is limited to a given region without exploring semantic equivalence of words across different geographic regions. To the best of our knowledge, this is the first study to present a quantitative evaluation of word representations across geographical regions, with the use of a dataset constructed for the purpose.

Model
We now introduce the model on which the condition-specific embedding training is based. We assume access to a corpus divided into sub-corpora based on their conditions (time or location), and texts in the same condition (e.g., the same time period) are gathered in each sub-corpus. For each condition, the co-occurrence counts of word pairs gathered from its sub-corpus are the corpus statistics we use for the embedding training. We note that because these sub-corpora vary in size, we scale the word co-occurrences of every condition so that all sub-corpora have the same total number of word pairs. We denote the scaled co-occurrence count of words w_i and w_j in condition c as X_{i,j,c}. A static model (without regard to the temporal or spatial conditions) proposed by Arora et al. provides the unifying theme for the seemingly different embedding approaches of word2vec and GloVe. In particular, it reveals that corpus statistics such as word co-occurrences can be estimated from embeddings. Inspired by this, we propose a model for conditioned embeddings, and characterize such a model by its ability to capture lexical semantic properties across different conditions.

Properties of Conditioned Embeddings
Before exploring the details of our model for condition-specific embeddings, we discuss some desired semantic properties of these embeddings. We expect the embeddings to capture time- and location-sensitive lexical semantics.
We denote by c the condition we use to refine word embeddings, which can be a specific time period or a location. We then have temporal embeddings if the condition is a time period, and spatial embeddings if the condition is a location. For a word w, the condition-specific word embedding for condition c is denoted as v_{w,c}. The key semantic properties of condition-specific word embeddings, which we consider in our model, are:

(1) Preservation of geometry. One geometric property of static embeddings is that the difference vector encodes word relations, i.e., v_{bigger} − v_{big} ≈ v_{greater} − v_{great} (Mikolov et al., 2013). Analogously, for the condition-specific embeddings of semantically stable words across conditions, given word pairs (w_1, w_2) and (w_3, w_4) with the same underlying lexical relation, we expect the following equation to hold in any condition c:

v_{w_1,c} − v_{w_2,c} ≈ v_{w_3,c} − v_{w_4,c}. (1)

This property is implicitly preserved in approaches aligning independently trained embeddings with linear transformations (Kulkarni et al., 2015).

(2) Consistency over conditions. Most word meanings change slowly over a given condition, i.e., their condition-specific word embeddings should be highly correlated (Hamilton et al., 2016). When the condition is a time period, for example, if c_1 is the year 2000 and c_2 is the year 2001, we expect that for a given word, v_{w,c_1} and v_{w,c_2} have high similarity given their temporal proximity. The consistency property is preserved in models which jointly train embeddings across conditions (e.g., Yao et al., 2018).

(3) Different degrees of word change. Although word meanings change over time, not all words undergo this change to the same degree; some words change dramatically while others stay relatively stable across conditions (Blank, 1999). In our formulation, we require the representation to capture the different degrees of word meaning change. This property is unexplored in prior studies.

We incorporate these semantic properties as explicit constraints into our model for condition-specific embeddings, which we formulate as an optimization problem.

Model
We propose a model that generates embeddings satisfying the semantic properties discussed above. We write the embedding v_{w,c} of word w in condition c as a function of its condition-independent representation v_w, condition representation vector q_c, and deviation embedding d_{w,c}:

v_{w,c} = (v_w + d_{w,c}) ⊙ q_c, (2)

where ⊙ is the Hadamard product (i.e., elementwise multiplication). We decompose the conditioned representation into three component embeddings. This novel representation is motivated by the intuition that a word w usually carries its basic meaning v_w and its meaning is influenced by different conditions, represented by q_c. Moreover, words have different degrees of meaning variation, which is captured by the deviation embedding d_{w,c}.

We begin with a model proposed by Arora et al. for static word embeddings regardless of the temporal or spatial conditions (Arora et al., 2016). Let v_w be the static representation of word w. For a pair of words w_1 and w_2, the static model assumes that

log P(w_1, w_2) ≈ ||v_{w_1} + v_{w_2}||² / (2m) − 2 log Z, (3)

where P(w_1, w_2) is the co-occurrence probability of these two words in the training corpus, m is the embedding dimension, and Z is a normalization constant. Let P_c(w_1, w_2) be the co-occurrence probability of word pair (w_1, w_2) in condition c.
Based on the static model in Eq. (3), for a condition c we have

log P_c(w_1, w_2) ≈ ||v_{w_1,c} + v_{w_2,c}||² / (2m) − 2 log Z_c. (4)

Here, borrowing ideas from previous embedding algorithms including word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014), we use two sets of word embeddings {v_{w,c}} and {u_{w,c}} for a word w_1 and its context word w_2, respectively, in condition c. Accordingly, we have two sets of condition-independent embeddings {v_w} and {u_w}, and two sets of deviation vectors {d_{w,c}} and {d′_{w,c}}. The condition-specific embeddings in Eq. (2) can be written as:

v_{w,c} = (v_w + d_{w,c}) ⊙ q_c,  u_{w,c} = (u_w + d′_{w,c}) ⊙ q_c. (5)

By combining Eqs. (4) and (5), we derive the model for condition-specific embeddings:

log P_c(w_1, w_2) ≈ ||(v_{w_1} + d_{w_1,c}) ⊙ q_c + (u_{w_2} + d′_{w_2,c}) ⊙ q_c||² / (2m) − 2 log Z_c. (6)

This model can be simplified as

v_{w_1,c}ᵀ u_{w_2,c} + b_{w_1,c} + b′_{w_2,c} ≈ log X_{w_1,w_2,c}, (7)

where b_{w_1,c} and b′_{w_2,c} are bias terms introduced to replace the terms ||v_{w_1,c}||² and ||u_{w_2,c}||², respectively. We document the derivation details of Eq. (7) in the supplementary material.

Optimization problem. This model enables us to use the conditioned embeddings to estimate the word co-occurrence probabilities in a specific condition. Conversely, we can formulate an optimization problem to train the conditioned embeddings from the word co-occurrences based on our model. We count the co-occurrences of all word pairs (w_1, w_2) in different conditions based on the respective sub-corpora. For example, we count word co-occurrences over different time periods to incorporate temporal information into word embeddings, and we count word pairs in different locations to learn spatially sensitive word representations. Recall that X_{i,j,c} is the scaled co-occurrence count of w_i and w_j in condition c. Denote by W the total vocabulary and by C the number of conditions, where C is the number of time bins for the temporal condition or the number of locations for the spatial condition. Suppose that V is an (m × |W|) condition-independent word embedding matrix, where each column corresponds to an m-dimensional word vector v_w. Matrix U is an (m × |W|) basic context embedding matrix with each column a context word vector u_w. Matrix Q is an (m × C) matrix, where each column is a condition vector q_c. As for deviation matrices, D and D′, both of size (m × |W| × C), consist of the m-dimensional deviation vectors d_{w,c} and d′_{w,c}, respectively, for word w in condition c. Our goal is to learn embeddings V, U, Q, D, and D′ so as to approximate the word co-occurrence counts based on the model in Eq. (7). Here, we design a loss function to be the approximation error of the embeddings, which is the mean square error between the condition-specific co-occurrences counted from the respective sub-corpora and their estimates from the embeddings. To satisfy property 2 of condition-specific embeddings, we impose L2 constraints ||q_a − q_b||² on the embeddings of conditions a and b to guarantee consistency over conditions. For time-specific embeddings, the constraints are for adjacent time bins. As for location-sensitive embeddings, the constraints are for all pairs of location embeddings. Furthermore, to account for the slow change in meaning of most words across conditions (as in time periods or locations), listed as property 3 of conditioned embeddings, we also include L2 constraints ||D||² and ||D′||² on the deviation terms to penalize big changes. Putting together the approximation error, the constraints on condition embeddings, and the deviations, we have the following loss function:

L = Σ_{i,j,c: X_{i,j,c}>0} (v_{i,c}ᵀ u_{j,c} + b_{i,c} + b′_{j,c} − log X_{i,j,c})² + α Σ_{(a,b)} ||q_a − q_b||² + β (||D||² + ||D′||²), (8)

where the sum over (a, b) runs over the constrained condition pairs. In addition to ensuring a smooth trajectory of the embeddings, the penalization of the deviations D and D′ is necessary to avoid the degenerate case that Q_c = 0, ∀c.
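A minimal NumPy sketch of this objective, under the reconstruction of Eq. (8) above (dense loops for clarity; actual training samples nonzero entries with SGD, and all array shapes are illustrative):

```python
import numpy as np

def conditioned_loss(V, U, Q, D, Dp, b, bp, X, alpha, beta, pairs):
    """Loss of Eq. (8). V, U: (m, W) base word/context embeddings;
    Q: (m, C) condition vectors; D, Dp: (m, W, C) deviation tensors;
    b, bp: (W, C) bias terms; X: (W, W, C) scaled co-occurrence counts;
    pairs: list of constrained condition index pairs (adjacent time bins,
    or all location pairs)."""
    loss = 0.0
    for i, j, c in zip(*np.nonzero(X)):        # fit only observed co-occurrences
        v = (V[:, i] + D[:, i, c]) * Q[:, c]   # Eq. (5): conditioned word vector
        u = (U[:, j] + Dp[:, j, c]) * Q[:, c]  # Eq. (5): conditioned context vector
        loss += (v @ u + b[i, c] + bp[j, c] - np.log(X[i, j, c])) ** 2
    loss += alpha * sum(np.sum((Q[:, a] - Q[:, b2]) ** 2) for a, b2 in pairs)
    loss += beta * (np.sum(D**2) + np.sum(Dp**2))
    return loss
```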
We note that, for the constraint on condition embeddings in the loss function L, for time-specific embeddings we use Σ_{c=1}^{C−1} ||Q_{c+1} − Q_c||², whereas for location-specific embeddings the constraint is taken over all pairs of locations, Σ_{a<b} ||Q_a − Q_b||².

Model Properties. We have presented our approach to learning conditioned embeddings. Now we show that the proposed model satisfies the key properties listed in Section 3.1. We start with the property of geometry preservation. For a set of semantically stable words S = {w_1, w_2, w_3, w_4}, it is known that d_{w,c} ≈ 0 for w ∈ S. Suppose that the relation between w_1 and w_2 is the same as the relation between w_3 and w_4, i.e., v_{w_1} − v_{w_2} = v_{w_3} − v_{w_4}. Given Eq. (2), for any condition c it holds that

v_{w_1,c} − v_{w_2,c} = (v_{w_1} − v_{w_2}) ⊙ q_c = (v_{w_3} − v_{w_4}) ⊙ q_c = v_{w_3,c} − v_{w_4,c}.

As for the second property of consistency over conditions, we again consider a stable word w. Its conditioned embedding v_{w,c} in condition c can be written as v_{w,c} = v_w ⊙ q_c. As shown in Eq. (8), the L2 constraint ||q_a − q_b||² is put on different condition embeddings. The difference between the word embeddings of w under two conditions a and b is:

v_{w,a} − v_{w,b} = v_w ⊙ (q_a − q_b).

According to the Cauchy-Schwarz inequality, the L2 constraint on the condition vectors q_a − q_b also acts as a constraint on the word embeddings. With a large coefficient α, it prevents the embedding from differing too much across conditions, and guarantees the smooth trajectory of words. Lastly, we show that our model captures the degree of word change. The deviation vector d_{w,c} we introduce in the model captures such changes. The L2 constraint on ||d_{w,c}|| shown in Eq. (8) forces small deviations on most words, which change smoothly across conditions. We assign a small coefficient β to this constraint to allow sudden meaning changes in some words. The hyperparameter setting is discussed below.

Embedding training. We have hyperparameters α and β as weights on the word consistency and deviation constraints. We set α = 1.5 and β = 0.2 for time-specific embeddings, and α = 1.0 and β = 0.2 for location-specific embeddings. At each training step, we randomly select a nonzero element x_{i,j,c} from the co-occurrence tensor X. Stochastic gradient descent with an adaptive learning rate is applied to update the entries of V, U, Q, D, and D′ that are relevant to x_{i,j,c}, to minimize the loss L. The complexity of each step is O(m), where m is the embedding dimension. In each epoch, we traverse all nonzero elements of X. Thus we have nnz(X) steps, where nnz(·) is the number of nonzero elements. Although X contains O(|W|²) elements per condition, X is very sparse since many words do not co-occur, so nnz(X) ≪ |W|². The time complexity of our model is O(E · m · nnz(X)) for E epochs of training. We set E = 40 in training both temporal and spatial word embeddings.

Postprocessing. We note that embeddings under the same condition are not centered, i.e., the word vectors are distributed around some non-zero point. We center these vectors by removing the mean vector of all embeddings in the same condition. The centered embedding ṽ_{w,c} of word w under condition c is:

ṽ_{w,c} = v_{w,c} − (1/|W|) Σ_{w′∈W} v_{w′,c}.

The similarity between words across conditions is measured by the cosine similarity of their centered embeddings {ṽ_{w,c}}.
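A small sketch of this postprocessing and of cross-condition comparison (the (C, W, m) array layout is an assumption for illustration):

```python
import numpy as np

def center_per_condition(emb):
    """emb: array of shape (C, W, m) holding condition-specific embeddings.
    Subtract the mean vector within each condition, as in the postprocessing step."""
    return emb - emb.mean(axis=1, keepdims=True)

def cross_condition_similarity(emb, w1, c1, w2, c2):
    """Cosine similarity between word w1 in condition c1 and word w2 in c2,
    computed on the centered embeddings."""
    e = center_per_condition(emb)
    a, b = e[c1, w1], e[c2, w2]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```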
Experiments
In this section, we compare our condition-specific word embedding models with corresponding state-of-the-art models combined with temporal or spatial information. The dimension of all vectors is set to 50. We have the following baselines: (1) Basic word2vec (BW2V). The word2vec CBOW model, trained on the entire corpus without considering any temporal or spatial partition (Mikolov et al., 2013); (2) Transformed word2vec (TW2V). Multiple sets of embeddings are trained separately for each condition; two sets of embeddings are then aligned via a linear transformation (Kulkarni et al., 2015); (3) Aligned word2vec (AW2V). Embeddings are trained separately for each condition and then aligned with an orthogonal transformation (Hamilton et al., 2016); (4) Dynamic word embedding (DW2V). A joint training of word embeddings at different times with alignment constraints on temporally adjacent sets of embeddings (Yao et al., 2018). We modify this baseline for location-based embeddings by putting its alignment constraints on every two sets of embeddings.

Table 1 Most stable words identified by the trained embeddings.
Across time: the, in, to, a, of, it, by, with, at, was, are, and, on, who, for, not, they, but, he, is, from, have, as, has, their, about, her, been, there, or, will, this, said, would
Across regions: in, from, at, could, its, which, out, but, on, all, has, so, is, are, had, he, been, by, an, it, as, for, was, this, his, be, they, we, her, that, and, with, a, of, the

Training Data
We used two corpora as training data: the time-stamped news corpus of the New York Times collected by Yao et al. (2018) to train time-specific embeddings, and a collection of location-specific texts in English provided by the International Corpus of English project (ICE, 2019) for location-specific embeddings.

New York Times corpus. The news dataset from the New York Times consists of 99,872 articles from 1990 to 2016. We use time bins of one year, and divide the corpus into 27 time bins.

International Corpus of English (ICE). The ICE project collected written and spoken material in English (one million words each) from different regions of the world after 1989. We used the written portions collected from Canada, East Africa, Hong Kong, India, Ireland, Jamaica, the Philippines, Singapore, and the United States of America.

Deviating from previous works, which remove both stop words and infrequent words from the vocabulary (Yao et al., 2018), we only remove words with an observed frequency count less than a threshold. We keep the stop words to show that the trained embedding is able to identify them as being semantically stable. The frequency threshold is set to 200 (the same as Yao et al., 2018) for the New York Times corpus, and to 5 for the ICE corpus, given that the smaller size of the ICE corpus results in lower word frequencies than the news corpus.

We evaluate the enriched word embeddings for the following aspects:
1. Degree of semantic change. As mentioned in the list of desired properties of conditioned embeddings, words undergo semantic change to different degrees. We check whether our embeddings can identify words whose meanings are relatively stable across conditions. These stable words will be discussed as part of the qualitative evaluation.
2. Discovery of semantic change. Besides stable words, we also study words whose meaning changes drastically over conditions. Since a word's neighbors in the embedding space can reflect its meaning, we find the neighbors in different conditions to demonstrate how the word meaning changes. The discovery of semantic changes will be discussed as part of our qualitative evaluation.
3. Semantic equivalence across conditions.
All condition-specific embeddings are expected to be in the same vector space, i.e., the cosine similarity between a pair of embeddings reflects their lexical similarity even when they are from different condition values. Finding semantic equivalents with the derived embeddings will be discussed in the quantitative evaluation.

Qualitative Evaluation
We first identify words that are semantically stable across time and locations, respectively. Cosine similarity of embeddings reflects the semantic similarity of words. The embeddings of stable words should have high similarity across conditions, since their semantics do not change much with conditions. Therefore, we average the cosine similarity of a word's embeddings between different time periods or locations as the measure of word stability, and rank the words in terms of their stability. The most stable words are listed in Table 1. We notice that a vast majority of these stable words are frequent words such as function words. This may be interpreted based on the fact that these are words that encode structure (Gong et al., 2017, 2018), and that the structure of well-edited English text has not changed much across time or locations (Poirier, 2014). It is also in line with our general linguistic knowledge; function words are those with high frequency in corpora, and are semantically relatively stable (Hamilton et al., 2016).

Next we focus on the words whose meaning varies with time or location. We first evaluate the semantic changes of embeddings trained on the time-stamped news corpus, and choose the word apple as an example (more examples are included in the supplementary material). We plot the trajectory of the embeddings of apple and its semantic neighbors over time in Fig. 1(a). These word vectors are projected to a two-dimensional space using the locally linear embedding approach (Roweis and Saul, 2000). We notice that the word apple usually referred to a fruit in 1990, given that its neighbors are food items such as pie and pudding. In recent years, the word has taken on the sense of the technology company Apple, which can be seen from the fact that apple is close to words denoting technology companies such as google and microsoft after 1998.

We also evaluate the location-specific word embeddings trained on the ICE corpus on the task of semantic change discovery. Take the word president as an example. We list its neighbors in different locations in Fig. 1(b). It is close to the names of regional leaders. The neighbors are president names such as bush and clinton in the USA, and prime minister names such as harper in Canada and gandhi in India. This qualitatively demonstrates that the embeddings capture semantic changes across different conditions.

Quantitative Evaluation
We also perform a quantitative evaluation of the condition-specific embeddings on the task of semantic equivalence across condition values. Joint embedding training brings the time- or location-specific embeddings into the same vector space so that they are comparable. Therefore, one key aspect of the embeddings that we can evaluate is their semantic equivalence over time and locations. Two datasets with temporally and spatially equivalent word pairs were used for this part.

Dataset
Temporal dataset. Yao et al. created two temporal testsets to examine the ability of the derived word embeddings to identify lexical equivalents over time (Yao et al., 2018).
For example, the word Clinton-1998 is semantically equivalent to the word Obama-2012, since Clinton was the US president in 1998 and Obama was the US president in 2012. The first temporal testset was built on the basis of public knowledge about famous roles at different times, such as the U.S. presidents in history. It consists of 11,028 word pairs which are semantically equivalent across time. For a given word at a specific time, we find the closest neighbors of the time-dependent embedding in a target year. The neighbors are taken as its equivalents at the target time. The second testset is about technologies and historical events. Annotators generated 445 conceptually equivalent word-time pairs such as twitter-

Spatial dataset. To evaluate the quality of location-specific embeddings, we created a dataset of 714 semantically equivalent word pairs in different locations based on public knowledge. For example, the capitals of different countries have a semantic correspondence: the word Ottawa-Canada (the word Ottawa as used for Canada) is equivalent to the word Dublin-Ireland (the word Dublin as used for Ireland). Two annotators chose a set of categories, such as capitals and governors, and independently came up with equivalent word pairs in different regions. Later they went through the word pairs together and decided which ones to include. We will release this dataset upon acceptance.

Evaluation metric
In line with prior work (Yao et al., 2018), we use two evaluation metrics, mean reciprocal rank (MRR) and mean precision@K (MP@K), to evaluate semantic equivalence on both the temporal and spatial datasets.

MRR. For each query word, we rank all neighboring words in terms of their cosine similarity to the query word in a given condition, and identify the rank of the correct equivalent word. We define r_i as the rank of the correct word for the i-th query, and MRR for N queries is defined as

MRR = (1/N) Σ_{i=1}^{N} 1/r_i.

Note that we only consider the top 10 words, and the inverse rank 1/r_i of the correct word is set to 0 if it does not appear among the top 10 neighbors.

MP@K. For each query, we consider the top-K words closest to the query word in terms of cosine similarity in a given condition. If the correct word is included, we define the precision of the i-th query P@K_i as 1; otherwise, P@K_i = 0. MP@K for N queries is defined as

MP@K = (1/N) Σ_{i=1}^{N} P@K_i.

Results
Temporal testset. We report the ranking results on the two temporal testsets in Table 2, and results on the spatial testset in Table 3. Our condition-specific word embedding is denoted as CW2V in the tables. On temporal testset 1, our model is consistently better than the three baselines BW2V, TW2V, and AW2V, and is comparable to DW2V in all metrics. On temporal testset 2, CW2V outperforms BW2V, TW2V, and AW2V in all metrics and is comparable to DW2V with respect to precision in the top 1 and top 3 words, but falls behind DW2V in MP@5 and MP@10. This lower score may understate the actual performance, since the word pairs in testset 2 were generated from human knowledge and are potentially more subjective than those in testset 1. As an illustration, consider the case of website-2014 in testset 2. Our embeddings show abc, nbc, cbs, and magazine as semantically similar words in 1990. These are reasonable results, since a website acts as a news platform just like TV broadcasting companies and magazines, yet the ground-truth neighbor of website-2014 is the word address. Another example is bitcoin-2015.
The semantic neighbors given by our embeddings are currency, monetary, and stocks in 1992. These words are semantically similar to bitcoin in the sense that bitcoin is a cryptocurrency and a form of electronic cash. However, the ground truth in the testset is investment.

Spatial testset. Considering the evaluation on the spatial testset in Table 3, our condition-specific embedding achieves the best performance in finding semantic equivalents across regions. We note that the approaches which align independently trained embeddings, such as TW2V and AW2V, have poor performance. Due to the disparity in word distributions across regions in the ICE corpus, words with high frequency in one region may seldom be seen in another region. These infrequent words tend to have low-quality embeddings. This hurts accurate alignment between locations and further degrades the performance of location-specific embeddings. DW2V, the jointly trained embedding, does not perform well on the spatial testset either. It puts alignment constraints on word embeddings between two regions to prevent major changes of word embeddings across regions. This may lead to interference between regional embeddings, especially in cases where there is a frequency disparity of the same word in different regional corpora. In such cases, the embedding of the frequent word in one region will be affected by the weak embedding of the same word occurring infrequently in another region. Our model decomposes a word embedding into three components: a condition-independent component, a condition vector, and a deviation vector. The condition vector for each region takes care of the regional disparity, while the condition-independent vectors are not affected. Therefore, our model is more robust to such disparity in learning conditioned embeddings.

Conclusion
We studied a model to enrich word embeddings with temporal and spatial information and showed how it explicitly encodes lexical semantic properties into the geometry of the embedding. We then empirically demonstrated how the model captures language evolution across time and location. We leave it to future work to explore concrete downstream applications where these time- and location-sensitive embeddings can be fruitfully used.
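As a concrete reference for the MRR and MP@K metrics used in the evaluation above, a minimal Python transcription (the rank bookkeeping is illustrative):

```python
def mrr(ranks, cutoff=10):
    """Mean reciprocal rank. ranks[i] is the rank of the correct equivalent
    for query i, or None if it is absent; ranks beyond the cutoff contribute 0."""
    return sum(1.0 / r for r in ranks if r is not None and r <= cutoff) / len(ranks)

def mp_at_k(ranks, k):
    """Mean precision@K: the fraction of queries whose correct word
    appears among the top K neighbors."""
    return sum(1 for r in ranks if r is not None and r <= k) / len(ranks)

# Example: ranks of the correct equivalents for five queries.
print(mrr([1, 3, None, 12, 2]), mp_at_k([1, 3, None, 12, 2], 5))
```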
Decrease in the Sensitivity of Myocardium to M3 Muscarinic Receptor Stimulation during Postnatal Ontogenesis.
Type 3 muscarinic receptors (M3 receptors) participate in the mediation of cholinergic effects in mammalian myocardium, along with M2 receptors. However, the myocardium of adult mammals demonstrates only modest electrophysiological effects in response to selective stimulation of M3 receptors, hardly comparable to the effects produced by M2 stimulation. In the present study, the effects of selective M3 stimulation induced by application of the muscarinic agonist pilocarpine (10 μM) in the presence of the selective M2 blocker methoctramine (100 nM) on the action potential (AP) waveform were investigated in isolated atrial and ventricular preparations from newborn and 3-week-old rats and compared to those in preparations from adult rats. In the atrial myocardium, stimulation of M3 receptors produced a comparable reduction of AP duration in newborn and adult rats, while in 3-week-old rats the effect was negligible. In ventricular myocardial preparations from newborn rats, the effect of M3 stimulation was more than 3 times stronger compared to that from adult rats, while preparations from 3-week-old rats demonstrated no definite effect, similarly to atrial preparations. In all studied types of cardiac preparations, the effects of M3 stimulation were eliminated by the selective M3 antagonist 4-DAMP (10 nM). The results of RT-PCR show that the amount of product of the M3 receptor gene decreases with the maturation of animals in both atrial and ventricular myocardium. We conclude that the contribution of M3 receptors to the mediation of cardiac cholinergic responses decreases during postnatal ontogenesis. These age-related changes may be associated with downregulation of M3 receptor gene expression.

INTRODUCTION
Parasympathetic regulation of the heart is extremely important for its proper functioning. The neurotransmitter acetylcholine (ACh), secreted by intramural postganglionic parasympathetic nerve endings, is a major effector of the parasympathetic nervous system. ACh affects pacemaker and working cardiomyocytes through type 2 muscarinic receptors (M2 receptors), causing negative chronotropic and inotropic effects, respectively [1]. However, there is plenty of recent evidence of the existence of functionally active type 3 muscarinic receptors (M3 receptors) in the mammalian myocardium [2][3][4]. While M2 receptors are coupled with Gi proteins and the main effects of their stimulation are associated with a decrease in the intracellular level of cAMP, M3 receptors are coupled with Gq proteins, and, therefore, their stimulation results in the activation of the intracellular phosphoinositide signaling cascade [1,2]. In this process, the α-subunit of the Gq protein activates phospholipase C, which ultimately leads to an increased intracellular level of Ca2+ and activation of protein kinase C, capable of affecting the functioning of various ion channels by phosphorylation. On the other hand, the channels carrying the potassium current (IKM3) are apparently activated by direct interaction with Gq protein subunits [3,5]. Stimulation of M3 receptors leads to a decrease in AP duration, which is mainly observed in the atrial myocardium of adult rats [6], mice [4], and guinea pigs [7].
Furthermore, M3 receptors mediate a number of ACh effects that are not related to electrical activity, in particular an antiapoptotic effect on cardiomyocytes [8,9]. Most research dealing with myocardial M3 receptors is limited to the study of their functions in adult animals, despite the fact that at the early stages of postnatal ontogeny the role of parasympathetic cardiac regulation is generally higher than in adults due to underdevelopment or lack of sympathetic innervation of myocardium [10]. The results of in vivo experiments on infant rats [11], as well as preliminary results obtained by our group [12] for myocardium of newborn rats, suggest a higher sensitivity of myocardium to M3 receptor stimulation at the early stages of ontogeny. In this regard, the present work included a comparative study of the electrophysiological effects of selective stimulation of M3 receptors in the atrial and ventricular myocardium of newborn rats (NRs) on the first day of life, three-week-old rats (TWRs), and adult rats aged 4 months (ARs). Electrophysiological data were compared to the expression of the M2 and M3 receptor genes measured by real-time PCR (RT-PCR). AP was recorded using a standard method of intracellular recording of bioelectric activity with 25-50 MOhm glass microelectrodes connected to a Neuroprobe-1600 amplifier (AM-Systems, USA). The signal was digitized using an E14-140 analog-to-digital converter (L-Card, Russia) and recorded on a computer using the Powergraph 3.3 software (DiSoft, Russia). Data processing was carried out using the MiniAnalysis v. 3.0.1 software (Synaptosoft, USA). When analyzing the records, we determined AP duration at 50 and 90% repolarization (APD50 and APD90, respectively), as well as the AP amplitude and resting potential value. In electrophysiological experiments, four compounds were used: selective blockers of the M1, M2, and M3 receptors (pirenzepine, methoctramine, and 4-DAMP, respectively) and the muscarinic agonist pilocarpine, which is more selective for the M1 and M3 receptors than for the M2 and M4 receptors. All the substances were ordered from Sigma (USA). The concentrations of substances were selected based on data from previous studies [4,7]. Each preparation was used no more than twice, to record the pilocarpine effect under normal conditions and in the presence of a blocker. Gene expression levels were compared by RT-PCR. Preparations of right atrial appendage and right ventricular walls from NRs, TWRs, and ARs obtained as described above were used for this purpose. The preparations were placed in an RNA-stabilizing solution (IntactRNA, Evrogen, Russia) for 24 hours at 4 °C and then stored at −20 °C until RNA isolation. RNA was extracted using the guanidinium thiocyanate-phenol-chloroform method (ExtractRNA, Evrogen, Russia). RNA was purified from genomic DNA using DNase I (2000 act. units/ml, NEB, USA) for 60 min at 37 °C. The RNA concentration was measured using a spectrophotometer (Nanodrop 2000, ThermoScientific, USA). For cDNA synthesis, the resulting RNA purified from genomic DNA was subjected to a reverse transcription reaction using an MMLV RT kit (Evrogen, Russia). All manipulations were carried out in accordance with the standard procedures using the protocols recommended by the manufacturer. cDNA was stored at −80 °C until RT-PCR.
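The AP-duration measures used throughout (APD50, APD90) can be estimated directly from a digitized trace. The following is a minimal illustrative sketch, not the authors' analysis code; the toy trace, the sampling step, and the simple threshold-crossing logic are our own assumptions.

```python
import numpy as np

def ap_duration_ms(trace, dt_ms, level):
    """Duration from the AP peak to repolarization by `level` (0.5 for APD50,
    0.9 for APD90) of the peak-to-rest amplitude; returns milliseconds."""
    rest = trace[0]                         # assumes the trace starts at rest
    peak_idx = int(np.argmax(trace))
    threshold = trace[peak_idx] - level * (trace[peak_idx] - rest)
    below = np.nonzero(trace[peak_idx:] <= threshold)[0]
    return below[0] * dt_ms if below.size else float("nan")

# Toy action potential: 20 ms at rest, then an instantaneous upstroke
# followed by exponential repolarization (time constant 60 ms).
t = np.arange(0.0, 300.0, 1.0)
trace = np.concatenate([np.full(20, -80.0), -80.0 + 110.0 * np.exp(-t / 60.0)])
print(ap_duration_ms(trace, 1.0, 0.5), ap_duration_ms(trace, 1.0, 0.9))
```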
RT-PCR was performed on a BioRad instrument equipped with a CFX96 detection system using a Synthol reagent kit (Russia) and EvaGreen dye (BIOTIUM, USA). We used primers synthesized at Evrogen. The amplification program consisted of initial denaturation at 95 °C for 5 min; followed by 50 cycles of PCR (1 min at 95 °C, 30 sec at 60 °C, and 30 sec at 72 °C); and then a final step at 72 °C for 10 min. Data were analyzed by the threshold method using the software supplied with the thermocycler. The results were normalized to the amount of RNA taken for the reverse transcription reaction. The results were statistically processed using the Statistica 6.0 software. The Wilcoxon test was used to assess the statistical significance of the differences for paired samples; the Mann-Whitney test was used for unpaired samples. We used nonparametric tests due to the small sample sizes, which could not guarantee a normal distribution. RESULTS The muscarinic receptor agonist pilocarpine (10 μM) was used for selective stimulation of M3 receptors in electrophysiological experiments. It was applied to the experimental chamber in the presence of the highly selective M2 receptor blocker methoctramine (100 nM). Special preliminary experiments, where pilocarpine was applied in the presence of the selective antagonist pirenzepine (100 nM), were used to eliminate a possible effect of M1 receptor activation. Since there were no differences in the intensity of pilocarpine effects in the presence of methoctramine alone and in the presence of the two blockers, which is consistent with previous data showing the absence of M1 receptors in cardiomyocytes, pilocarpine was further applied in the presence of methoctramine alone for selective stimulation of M3 receptors. In addition to registration of the M3 receptor stimulation effects, we conducted control experiments where pilocarpine was applied in the absence of blocking agents to assess the total effect of M2 and M3 receptor activation in myocardial preparations. It was found that in the absence of blockers, pilocarpine significantly reduces AP duration both at the 50% and 90% repolarization levels in the ventricular (Fig. 1, 2A, B) and atrial (Fig. 2C, D) rat myocardium in all three age groups. The maximum effect of pilocarpine developed within 250-300 s after the beginning of the application of the substance. Hereinafter, we discuss only the maximum values of pilocarpine effects. The effect of selective stimulation of M3 receptors in all series of experiments was qualitatively similar to the effect of pilocarpine in the absence of blockers, but it was significantly less pronounced. However, in the ARs and NRs, APD50 and APD90 were significantly reduced both in the ventricular (Fig. 1A, C, 2A, B) and atrial myocardium (Fig. 2C, D). On the contrary, there was no significant effect of selective stimulation of M3 receptors in the TWR group (Fig. 1B, 2). Almost no effects of the selective stimulation of M3 acetylcholine receptors were observed in the presence of 4-DAMP (10 nM), a selective M3 receptor blocker; i.e., these effects were actually mediated by the activation of M3 receptors (Fig. 2). It should be noted that the effect of M3 receptor stimulation in the ventricular myocardium of NRs was threefold stronger compared to that in ARs (Fig. 2A, B), while no significant differences in the intensity of this effect were observed in the atrial myocardium.
Thus, the most pronounced effect of M3 stimulation in the ventricular myocardium was observed for NRs, and the least pronounced effect was observed in TWRs. In the atrial myocardium, the main difference between the three age groups was observed in response to pilocarpine applied without blockers. The intensity of the effect increases with animal age, and it is more than twofold higher in ARs compared to NRs. According to the results of RT-PCR, mRNA of both the M2 and M3 receptors is synthesized in the myocardium of animals of all age groups. However, the expression of the M3 receptor gene is much weaker (Fig. 3). Furthermore, the expression level of the M3 receptor gene decreases both in the atrial and ventricular myocardium with maturation of the animal (Fig. 3B). Thus, it is higher in the myocardium of TWRs compared to that in ARs. However, the expression level of the M2 receptor gene was highest in the TWR group. Therefore, the ratio of M3 to M2 expression in the ventricular myocardium was higher in ARs compared to that in TWRs: 0.59 vs. 0.19%. In the atrial myocardium, the ratio was nearly the same: 0.16 and 0.18%, respectively. DISCUSSION We were the first to obtain information on the change in the relative contribution of the M3 receptor to the regulation of the electrical activity of the ventricular and atrial myocardium during the postnatal ontogenesis of rats. In electrophysiological experiments, selective stimulation of M3 receptors was achieved using a common method [4,7]; more specifically, the application of 10 μM pilocarpine under conditions of total blockade of M2 receptors with 100 nM methoctramine. Please note that in our previous work, an increase in the methoctramine concentration did not alter the pilocarpine effects; therefore, the effect observed in the presence of 100 nM methoctramine was unrelated to the activation of residual M2 receptors. This is also confirmed by an almost complete elimination of the pilocarpine effect by both types of M receptor blockers: methoctramine and 4-DAMP. Electrophysiological data suggest that the effect of M3 receptor stimulation on the electrical activity of the ventricular myocardium is maximal in NRs. In the atrial myocardium, sensitivity to pilocarpine in the absence of M receptor blockers increases with age, while sensitivity to pilocarpine under conditions of blockade of M2 receptors is identical in NRs and ARs. We can assume that the contribution of M2 receptors to electrical activity regulation increases with age both in atrial and ventricular myocardium, and that in the ventricular myocardium of NRs M3 receptors play a key role. The results of RT-PCR generally confirm these assumptions, since they show that expression of the M3 receptor gene decreases with age. It is still unclear why no effect of M3 receptor stimulation is observed in TWRs. On the one hand, this can be explained by the lowest ratio of M3 receptor mRNA to M2 receptor mRNA in this age group. On the other hand, the relative translational levels of the M2 and M3 receptor proteins may differ from the expression levels of the mRNA of the corresponding genes. CONCLUSION In general, our results suggest an important functional role for the M3 receptor in the ventricles of newborn rats, which is leveled out in ARs. Furthermore, M3 receptor functions are not limited to their action on the electrical activity investigated in our studies.
For example, M3 receptors can participate in the realization of the cardioprotective effects of ACh [8,9] under the oxidative stress conditions experienced by a newborn's body. It is unlikely that the change in the role of the M3 receptor is related to the onset of sympathetic regulation of the myocardium, since the effect of M3 receptor stimulation is already absent at the age of three weeks, before sympathetic regulation is switched on.
3,045
2016-04-01T00:00:00.000
[ "Biology", "Medicine" ]
Platform-Specific Fc N-Glycan Profiles of an Antisperm Antibody IgG Fc N-glycosylation is necessary for effector functions and is an important component of quality control. The choice of antibody manufacturing platform has the potential to significantly influence the Fc glycans of an antibody and consequently alter their activity and clinical profile. The Human Contraception Antibody (HCA) is an IgG1 antisperm monoclonal antibody (mAb) currently in clinical development as a novel, non-hormonal contraceptive. Part of its development is selecting a suitable expression platform to manufacture HCA for use in the female reproductive tract. Here, we compared the Fc glycosylation of HCA produced in two novel mAb manufacturing platforms, namely transgenic tobacco plants (Nicotiana benthamiana; HCA-N) and mRNA-mediated expression in human vaginal cells (HCA-mRNA). The Fc N-glycan profiles of the two HCA products were determined using mass spectrometry. Major differences in site occupancy, glycan types, and glycoform distributions were revealed. To address how these differences affect Fc function, antibody-dependent cellular phagocytosis (ADCP) assays were performed. The level of sperm phagocytosis was significantly lower in the presence of HCA-N than HCA-mRNA. This study provides evidence that the two HCA manufacturing platforms produce functionally distinct HCAs; this information could be useful for the selection of an optimal platform for HCA clinical development and for mAbs in general. Introduction Glycosylation is the most common post-translational modification (PTM) of monoclonal antibodies (mAbs) and has a significant role in their biological activity, stability, and antigenicity [1][2][3]. Glycosylation can alter the pharmacokinetics and pharmacodynamics of an antibody through interactions with the neonatal Fc receptor, the endocytic mannose receptor, the asialoglycoprotein receptor, and other receptors [4]. Glycans on mAbs expressed by platforms such as non-human mammalian cell lines have proven to cause unwanted immunogenic responses [5]. Thus, glycan analysis is crucial to the quality control process of therapeutic antibodies, though there is little clarity over the extent to which glycosylation in this context should be regulated [6].
The majority of monoclonal antibodies available on the market are of the isotype subclass IgG1 [7]. Human IgGs contain a conserved N-linked glycosylation site at asparagine residue 297 (Asn297) in the heavy chain constant domain 2 (CH2) of the Fc region as a crucial element in IgG structure and function [8] (Figure 1A). N-linked glycosylation features a distribution of oligosaccharides consisting of a varying number of sugar moieties attached to the amide nitrogen of an asparagine residue via a trimannosyl chitobiose core. In humans, these sugar moieties are limited to fucose, mannose, N-acetylglucosamine (GlcNAc), galactose, and sialic acid residues. The lipid carrier of the N-glycan precursor, dolichol phosphate, is synthesized in the endoplasmic reticulum (ER) [9]. Subsequent transferase reactions complete the assembly of the N-glycan precursor Glc3Man9GlcNAc2 on the dolichol phosphate. The precursor glycan is transferred to nascent proteins at an asparagine residue within the sequence Asn-X-Ser/Thr, where X is any amino acid except proline. The glucose residues are removed by the sequential actions of ER glucosidases to leave a Man9 structure (Figure 1B). Proteins are subject to proper folding and may undergo some trimming of mannose residues within the ER before transport to the Golgi for N-glycan maturation. Here, the stepwise actions of highly selective glycosidases and glycosyltransferases can replace mannose branches with GlcNAc, and additional glycan moieties that may include fucose, galactose, and sialic acid can be added to generate glycan structures such as that shown in Figure 1C. The mature N-glycan falls into one of the following three main classes: oligomannose, complex, or hybrid [10]. Several antibody effector functions are facilitated by the Fc region, including opsonization, antibody-dependent cellular phagocytosis (ADCP), and complement-dependent cytotoxicity (CDC). Depending on the therapeutic goal of an antibody, these effector functions may be desirable or undesirable. While effector functions are beneficial for the efficient clearing of antigens, an exceedingly strong and persistent immune response, especially in sensitive mucosal areas like the vagina, can be damaging. N-glycosylation is a key factor that modulates IgG Fc-mediated functions. The IgG-Fc glycan at residue 297 is necessary for effector function through Fcγ receptor (FcγR) binding [8]. Since the Fc region is the site of contact for Fc receptors, changes in the structure of the Fc domain conferred by the conserved Fc N-glycan at Asn297 can have a major impact on receptor binding [11]. Moreover, certain glycan moieties
can affect binding to FcγRs differently and thus influence an antibody's potential for Fc-mediated functions [12][13][14]. Overall, Fc N-glycans can play a significant role in fine-tuning the pharmaceutical properties of an antibody. Efforts in glycoengineering are of increasing interest as a means to customize therapeutic antibody effector functions [15]. The main contraceptive mechanism of the Human Contraception Antibody (HCA), an IgG1 antisperm mAb in development as a novel non-hormonal contraceptive, is sperm agglutination. HCA binds the male reproductive tract-specific glycoprotein CD52g. However, one Fc-mediated function of HCA that may have important implications is ADCP. We previously demonstrated that HCA is capable of mediating sperm phagocytosis via Fc-FcγR interactions [16]. Though ADCP could potentially serve as a secondary contraceptive mechanism to clear sperm, subsequent sperm antigen presentation, which may be possible due to the presence of antigen-presenting cells within the vaginal epithelium [17,18], is undesirable. Various Fcγ receptors are responsible for eliciting ADCP, including FcγRI, FcγRIIa, and FcγRIII [19]. Interactions between platform-specific HCA Fc N-glycans and these Fcγ receptors may modulate HCA-mediated sperm phagocytosis and antigen presentation, potentially affecting the clinical profile of the antibody. Advances in monoclonal antibody engineering have led to the exploration of novel production methods to increase yield and lower costs. Our group has explored several expression systems for HCA. One such platform is Nicotiana benthamiana, a close relative of the tobacco plant, for the production of "plantibodies" [20,21]. This platform allows for the rapid, low-cost, large-scale production of monoclonal antibodies. It is a viral-based, transient expression system that leads to the accumulation of antibodies within days (Figure 2A). Whole mature plants (genetically modified to knock out xylosyl- and α1,3-fucosyl-transferases) are infiltrated with a highly diluted Agrobacterium suspension carrying t-DNA encoding viral replicons. The result is a high copy number of RNA molecules encoding the antibody. Because the Nicotiana plants used in this system are transgenic strains with altered glycosylation pathways, the expressed antibodies contain mammalian glycoforms. This production method is currently being used to manufacture HCA-formulated topical vaginal films for clinical trials [22,23].
Another platform for HCA is synthetic mRNA encoding HCA in vivo in the female reproductive tract (FRT) (Figure 2B). mRNA platforms provide several advantages, including cost-effectiveness, the scalability of mRNA production, high efficiency, reversibility, safety, and durability [24]. Synthetic mRNAs encoding HCA are generated via in vitro transcription and modified with a 5′ cap and N1-methylpseudouridine substitution to increase stability and evade innate immune sensors [25]. The HCA mRNA strands are then used to transfect vaginal epithelium; once the mRNA is taken up by the cells, host ribosomal machinery translates and secretes HCA. This method was previously established for the in vivo expression of anti-RSV antibodies in the lung [26] and anti-HIV antibodies in vaginal mucosa [27]. mRNA HCA expression within the vaginal tract bypasses the need to eliminate non-human glycans, as is required in Nicotiana expression. mRNA uptake in human vaginal cells leads to antibody production with glycans native to the host. Glycosylation is one of the most critical quality attributes that impact the efficacy, safety, and stability of monoclonal antibody therapeutics. It is cell-type-dependent, inherently heterogeneous, and relies on a number of factors that contribute to the final structure, such as enzyme levels within a cell, the availability of monosaccharide nucleotides, and Golgi architecture [28]. Thus, the choice of production platform and the consequent types of PTMs can dramatically impact the biophysical properties of antibodies in solution. Antibody production in different systems can result in a variety of heterogeneous glycoforms at site Asn297. For example, the most widely used platform for biopharmaceutical production, the Chinese hamster ovary (CHO) cell, yields heterogeneous glycans, which can lead to inconsistent function [29]. With novel expression systems emerging, the extent to which their respective glycosylation patterns affect mAb function is important to understand. We applied mass spectrometry-based glycoproteomic techniques to characterize and compare the Fc N-glycan compositions of HCA produced by two current expression systems, namely Nicotiana benthamiana and synthetic mRNA in human vaginal cells. We demonstrated that differences in Fc N-glycan site occupancy and glycoform distributions influenced the bioactivity of HCA, specifically antibody-dependent cellular phagocytosis, an Fc-mediated function.
mRNA HCA Production (HCA-mRNA_VK2) VK2 E6/E7 (human vaginal epithelial) cells [30] were obtained from ATCC and cultured in a complete Keratinocyte-Serum Free medium supplemented with human recombinant epidermal growth factor (rEGF), bovine pituitary extract (BPE), and Pen-Strep (GIBCO-BRL 17005-042). Synthetic mRNA was generated via in vitro transcription and used to transfect cells as previously described [27]. Briefly, mRNA strands encoding the heavy and light chains of HCA were generated by ordering sequences as DNA gBlocks with 5′ and 3′ UTRs, which were cloned into a vector. Vectors were purified and in vitro transcribed (IVT) with an N1-methyl-pseudouridine modification. Resultant mRNAs were purified and capped prior to use. The Lipofectamine MessengerMAX reagent (Thermo Fisher Scientific, Inc., Waltham, MA, USA) was used to transfect VK2 cells with the indicated amount of mRNA in Opti-MEM (Gibco) per T75 flask. Following a two-day incubation at 37 °C, IgG molecules (HCA-mRNA_VK2, shorthand HCA-mRNA) were purified from the cell supernatant using tangential flow filtration (TFF) and a 100K molecular weight cutoff cartridge for functional analysis. Nicotiana HCA Production (HCA-N) HCA was produced in Nicotiana benthamiana (HCA-N) by KBio, Inc. (Owensboro, KY, USA) as previously described [31,32]. Transgenic strains of Nicotiana plants subjected to fucosyl- and xylosyl-transferase knockout (∆XF) were used [33]. Xylosyl-transferase (XylT) knockout prevents the addition of xylose, a non-mammalian glycan residue. The knockout of α1,3-fucosyltransferase (FucT) prevents core α1,3-fucose, which is a non-mammalian linkage of fucose. Briefly, whole mature plants were vacuum-infiltrated with an Agrobacterium suspension carrying t-DNAs encoding viral replicons, resulting in a high copy number of RNA molecules encoding HCA. The plants were then harvested to extract and purify HCA-N.
MS analyses were performed in the positive mode with the radio frequency (RF) lens set to 30%, and scans were acquired with the following settings: a 120,000 resolution @ m/z 200, a scan range of m/z 370-2000, 1 µscan/MS, a normalized AGC target of 250%, and a maximum injection time of 50 ms. For high-energy collisional dissociation (HCD) analyses, initial MS2 scans (normalized collision energy (NCE) 30%) were acquired with the following settings: a 15,000 resolution @ m/z 200, a scan range of m/z 100-2000, 1 µscan/MS, an AGC target of 1 × 10^6, and a maximum injection time of 100 ms. For the analysis of samples that had not been treated with PNGase F, oxonium ions were used to sense the presence of a glycopeptide and then trigger the generation of an HCD MS/MS spectrum. If two of seven common oxonium ions (m/z 204.0867 (HexNAc ion), m/z 138.0545 (HexNAc-CH6O3 fragment ion), m/z 366.1396 (HexNAc-Hex ion), m/z 168.0653 (HexNAc-2H2O fragment ion), m/z 186.0760 (HexNAc-H2O fragment ion), m/z 292.1031 (NeuAc ion), m/z 274.0927 (NeuAc-H2O fragment ion)) were detected in the HCD spectrum within a 15 ppm mass tolerance, MS2 spectra were acquired. For HCD-triggered HCD, the triggered HCD scan was set to a 30,000 resolution @ m/z 200, a scan range of m/z 100-2000, 1 µscan/MS, an AGC target of 1 × 10^6, and a maximum injection time of 150 ms. MS/MS data were searched against 20,352 entries in a UniProtKB database restricted to Homo sapiens (downloaded in May 2021) by PMI-Byonic (version v3.8-11, Protein Metrics Inc., Cupertino, CA, USA). Carbamidomethylation (Cys) was set as a fixed modification, whereas Met oxidation and protein N-terminal acetylation were defined as variable modifications. Mass tolerance was set to 10 and 20 ppm at the MS and MS/MS levels, respectively. Enzyme specificity was set to the C-terminal of glutamic acid and the C-terminal of arginine and lysine, with a maximum of two missed cleavages. The Protein Metrics 132 human N-glycan library was used for the assignment of N-linked glycosylation. All assignments were verified by manual inspection. For samples that had been treated with PNGase F, HCD MS/MS data were obtained for the 20 most abundant peaks in each MS1 spectrum. Ratios of unlabeled and 18O-labeled peptides were based on peak heights in the MS1 spectra (Figure S1).
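The oxonium-ion trigger described above reduces to a ppm-tolerance membership test. Below is a minimal sketch under stated assumptions: the peak list is invented, and this is our own illustration, not the instrument control software.

```python
# Sketch of the oxonium-ion trigger logic: acquire MS2 only when at least two
# of the listed oxonium ions appear in the HCD spectrum within 15 ppm.
OXONIUM_MZ = [204.0867, 138.0545, 366.1396, 168.0653, 186.0760, 292.1031, 274.0927]
PPM_TOL = 15.0

def within_ppm(observed, reference, tol_ppm=PPM_TOL):
    return abs(observed - reference) / reference * 1e6 <= tol_ppm

def triggers_ms2(hcd_peaks, min_hits=2):
    """True if at least `min_hits` oxonium ions match peaks within tolerance."""
    hits = sum(any(within_ppm(mz, ref) for mz in hcd_peaks) for ref in OXONIUM_MZ)
    return hits >= min_hits

spectrum = [110.071, 138.0546, 204.0869, 503.21]  # hypothetical peak list
print(triggers_ms2(spectrum))  # True: two oxonium ions matched
```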
Antibody-Dependent Sperm Phagocytosis U937 pro-monocytes were obtained from ATCC and cultured in an RPMI 1640 complete medium supplemented with L-glutamine, Fetal Bovine Serum (FBS), and Pen-Strep (GIBCO-11875093). Cells were seeded at a density of 0.5 × 10^6/mL in 6-well plates containing sterile glass coverslips. Cells were treated with 100 ng/mL of phorbol-12-myristate-13-acetate (PMA) to stimulate macrophage differentiation for 48-72 h at 37 °C, after which activated U937 cells adhered to coverslips at a density of ~1 × 10^6 cells per coverslip. Sperm cells were isolated from human semen samples of healthy men aged 18-45 years. All semen donors provided informed consent prior to collection (Human Subjects Protocol H36843). The ADCP assay was performed as described by Oren-Benaroya et al. with modifications [34]. Briefly, 1 × 10^6 sperm cells were suspended in sperm multipurpose handling media (MHM; FUJIFILM Irvine Scientific, Santa Ana, CA, USA) supplemented with either HCA-N, HCA-mRNA, Campath (anti-CD52 IgG, positive control, Thermo Fisher Scientific Cat# MA5-16999, Waltham, MA, USA, RRID: AB_2538471), VRC01 (anti-HIV IgG, isotype control, produced in Nicotiana by ZabBio, Inc., San Diego, CA, USA), or medium only (negative control). Sperm-antibody suspensions were added to wells and incubated at 37 °C for 30 min. Following a wash step, cells were incubated in PBS at 37 °C for 30 min and treated with trypsin for 5 min to remove non-specifically bound sperm. The coverslips were retrieved and treated with a Differential Quik III dye kit (Polysciences, Warrington, PA, USA) to visualize phagocytosis under a light microscope. The number of associated antibody-opsonized sperm per macrophage was counted for each condition. Associated sperm were considered those either opsonized/engaged with the Fc receptor on the activated U937 cells (attached) or partially or completely engulfed. Statistical Analysis Data analysis and graph creation were performed using GraphPad Prism (Version 9.4.1; GraphPad Software Inc.; San Diego, CA, USA). Statistical significance between the HCA-N and HCA-mRNA ADCP assay results was determined by a two-tailed paired t-test. Data were log-transformed prior to analysis. The definition of statistical significance was p < 0.05. Sequence Coverage Was Sufficient for Both HCA-N and HCA-mRNA We applied reversed-phase nanoUPLC-MS/MS to investigate the glycosylation of both HCA-N and HCA-mRNA. Data obtained for HCA-N provided high sequence coverage of the HCA heavy and light chains with little to no contamination by other proteins (Table 1), owing to the fact that Nicotiana HCA production was performed in a commercial, GMP-grade facility (KBio, Inc., Owensboro, KY, USA). HCA-mRNA, produced from cell culture in an academic lab setting, contained a high percentage of additional proteins. The high abundance of serotransferrin, which made up 66% of the sample, could be attributed to its endogenous expression by VK2 cells and subsequent transfer into the culture supernatant (Table S1). Serotransferrin is also among the 153 proteins produced in the FRT that are included in the "Normal Pap Test Core Proteome" [35]. Even so, the sequence coverage of the HCA heavy and light chains in HCA-mRNA was sufficient to confirm their production, and the abundances of peaks corresponding to glycopeptides containing Asn297 were sufficient to generate a glycoform profile.
Compositional Differences in N-Glycan Profiles Were Observed between HCA-N and HCA-mRNA Most human IgGs contain N-glycans only at the highly conserved N-glycosylation site on the Fc region of the heavy chain, Asn297. There are two potential N-glycosylation sites (NXS/T, X ≠ P) on the heavy chain of HCA (N71 and N297), but there are no predicted N-glycosylation sites on the light chain. We determined that the first heavy chain site is largely unoccupied (Figure S2). The second, highly conserved glycan site is Asn297 (Figure 3). The presence of b- and y-type fragment ions provided the full peptide sequence information; the existence of the nearly complete series of b- and y-type product ions in this MS/MS spectrum allowed the assignment of the peptide sequence as 293EEQYNSTYR301. Moreover, the b5/Y1, y5/Y1, and y5/Y2 fragment ions (the b5 + HexNAc, y5 + HexNAc, and y5 + HexNAc2 ions, respectively) precisely defined the glycosylation site as N297. Mass spectrometry-based glycoproteomic analysis revealed thirteen unique glycoforms of all three glycan types (i.e., oligomannose, hybrid, complex) at the Asn297 site on HCA-N (Figure 4). The quantitative analysis of N-glycosylation site occupancy revealed that in only 49.1% of the 293EEQYNSTYR301 peptides was the N297 glycosylation site occupied by glycans (Table 2 and Figure S1). Among the peptides with an occupied site, all three glycan classes were well represented. Interestingly, despite the knockout of α1,3-FucT and the absence of other fucosyltransferases within the transgenic Nicotiana platform, around 1% of the sample was fucosylated, suggesting incomplete knockout. The most abundant glycan within this sample was G0, which is consistent with previous literature on Nicotiana manufacturing [33,37]. For the quantification of site occupancy, digested samples were dissolved in water (18O, 97%) containing 50 mM of NH4HCO3 and deglycosylated with PNGase F prior to MS analysis.
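Under this labeling scheme, occupancy follows from the MS1 peak heights of the labeled and unlabeled peptide forms: PNGase F deglycosylation in H2(18O) converts formerly glycosylated Asn to 18O-labeled Asp. A minimal sketch, with hypothetical peak heights:

```python
# Site occupancy from 18O labeling: the labeled form corresponds to peptides
# whose glycan was removed, the unlabeled form was never glycosylated.
def site_occupancy(height_unlabeled, height_labeled_18o):
    """Fraction of peptides whose N-glycosylation site was occupied."""
    return height_labeled_18o / (height_unlabeled + height_labeled_18o)

print(round(site_occupancy(5.09e6, 4.91e6), 3))  # 0.491, cf. 49.1% for HCA-N
```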
The Fc N-glycan profile of HCA-N was compared to that of HCA-mRNA, which contained 11 glycoforms at the Asn297 site (Figure 4). Interestingly, the occupancy of this N-glycosylation site in HCA-mRNA was 96.0% (Table 2). All three glycan types were present; however, there was a bias toward complex types containing a fucose moiety. Specifically, the glycans of highest abundance were G0F, G1F, and G2F, which are all common among human IgGs. As mentioned earlier, core fucose can potentially reduce binding affinity to FcγRs [38]. Of note, HCA-mRNA contained a sialylated glycan (G2S1F), which is also known to reduce Fc receptor binding but promote a longer half-life. Thus, there were considerable differences between these platform-specific HCAs, not only in site occupancy but also in glycan types and glycoform distribution. Additionally, we observed that monoHexNAc modified the Asn297 site of the Asn-Ser-Thr sequon of both HCA-N and HCA-mRNA. This modification is denoted as "other" in Figure 4. Unlike a typical N-glycosylation of the Asn on NXS/T with a HexNAc2Hex3 core, the monoHexNAc as an N-glycan modifying the Asn297 site of this sequon is uncommon and probably results from lysosomal degradation. HCA-N and HCA-mRNA Exhibit Different Levels of Sperm Phagocytosis Activation Both HCA-N and HCA-mRNA were capable of mediating sperm opsonization and phagocytosis at a low concentration of 3.33 µg/mL, though neither showed as robust activity as Campath, an anti-CD52 positive control for ADCP (Figure 5A,B). HCA-mRNA induced significantly more sperm phagocytosis than HCA-N (0.42 and 0.09 sperm per macrophage, respectively, p = 0.0108). We previously showed that at higher concentrations (e.g., 25-50 µg/mL), HCA-N can induce a greater degree of ADCP than seen here [16], but given that a large percentage of the Fc N-glycan sites on HCA-N are unoccupied, as seen in Table 2, it is possible that, at a low concentration, there are not enough N-glycans present to interact with FcγRs and induce a stronger response, whereas HCA-mRNA is highly glycosylated and active. Overall, we demonstrated a biological difference, specifically a difference in Fc function, between the platform-specific HCAs.
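For the statistical comparison above (p = 0.0108), the Methods specify a two-tailed paired t-test on log-transformed counts. Here is a minimal illustrative sketch; the per-experiment numbers are invented, not the study's data:

```python
# Two-tailed paired t-test on log-transformed ADCP counts (SciPy).
import numpy as np
from scipy.stats import ttest_rel

hca_n    = np.array([0.08, 0.10, 0.09])  # sperm/macrophage, three experiments
hca_mrna = np.array([0.40, 0.45, 0.41])

t_stat, p_value = ttest_rel(np.log(hca_n), np.log(hca_mrna))
print(p_value < 0.05)  # significance at the study's alpha of 0.05
```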
Discussion As the IgG-Fc glycan at residue Asn297 is needed for effector function through FcγR binding, it is important to characterize occupancy and glycoform distribution and to determine how they might differ among expression platforms. Here, we showed that HCAs expressed using two current production methods, transgenic Nicotiana benthamiana and mRNA-mediated expression in vaginal cells, have distinct Fc N-glycan profiles. The choice of mRNA expression in vaginal cells provides the most physiologically relevant in vitro model for studying the development of synthetic mRNA for contraception. Topical delivery of mRNA to the FRT would entail vaginal cellular uptake and subsequent mRNA translation. Mass spectrometry analysis confirmed that the Asn71 site on the heavy chain of HCA IgG is rarely occupied by glycan, while the Asn297 site carries a variety of glycoforms. In HCA-N, the most abundant glycoform at site N297 was G0. In HCA-mRNA, most glycoforms were classified as the fucosylated complex type. Additionally, we observed that a glycoform bearing monoHexNAc was present at the Asn297 site of both HCA-N and HCA-mRNA. The uncommon monoHexNAc modification is likely a product of lysosomal degradation; whether it has a biological function is unknown. Each expression platform of HCA has its distinct advantages. On the one hand, mRNA-mediated expression of HCA in vaginal cells eliminates the need to knock out the transferases responsible for unwanted non-human glycans and confers native glycans to the antibody. The glycans on HCA-mRNA are produced by the host and, therefore, should be compatible with the host. On the other hand, we demonstrated that HCA-N had a simpler glycan profile than that of HCA-mRNA; despite HCA-N containing more N297 glycoforms than HCA-mRNA (13 and 11, respectively), the distribution of these glycoforms was much tighter, favoring one glycoform in particular, compared to the fairly evenly distributed glycoforms in HCA-mRNA. A narrow glycoform distribution is a preferable characteristic because it facilitates functional consistency in antibodies. We also found that HCA-N and HCA-mRNA facilitated varying levels of ADCP. This was not surprising, since the N-glycan composition and biodistribution were distinct for each platform. Moreover, site occupancy data revealed that the majority of HCA-mRNA at site N297 was occupied by glycan (96%), compared to that of HCA-N (49.1%). This may have contributed to the higher levels of ADCP observed using HCA-mRNA at a low concentration; however, it is unclear whether improved sperm phagocytosis is due to differences in glycoform distribution or glycan occupancy. Future studies are needed to investigate the functions of different glycoforms.
A recently completed phase 1 clinical trial of ZB-06, a vaginal film containing HCA, suggested that certain HCA Fc functions are important for contraception [23]. Notably, HCA mediates CDC [16], and the high numbers of immotile sperm in the cervical mucus of women using the ZB-06 film suggest that CDC may be an important contraceptive effect. HCA also mediates ADCP, which may be an undesirable Fc mechanism of HCA because antigen presentation of sperm proteins could lead to host production of antisperm antibodies and contraceptive irreversibility. For this reason, the Nicotiana platform may be preferable, since it exhibits lower levels of ADCP. The potential for ADCP to occur in vivo is still unclear and will be further evaluated in future HCA clinical trials, though we believe the probability is low due to the reduced numbers of macrophages in the lower FRT compared to other tissues [39]. We provide evidence that different expression platforms, even for the same biologic, can impact both the bioactivity and safety profile. Therefore, when selecting an expression system for antibodies, platform-specific glycosylation patterns may be an important consideration among other parameters of characterization. Glycan profiles of the Fc region are critical antibody quality attributes. For this reason, the FDA and other regulatory agencies require pharmaceutical companies to characterize and maintain protein drug glycosylation within defined acceptance criteria that limit their deviation from the pattern present in the drug candidate used in clinical trials [40]. Current efforts in glycoengineering and tailoring glycosylation patterns may facilitate the development of mAbs with more homogeneous IgG glycans that elicit the desired set of effector functions for their respective therapeutic uses [41]. Several methods are in development to tailor glycosylation. One method is the enzymatic remodeling of IgG glycosylation, either by residue alteration via knockout or the recombinant expression of glycosyltransferases to achieve the addition or removal of the core fucose and/or sialic acids, or through the use of transglycosylation via endoglycosidases and glycosynthases [42]. Another method is treatment with the endoglycosidases EndoS or EndoS2 for the hydrolysis and removal of the Fc glycan to block certain effector functions [43]. Though certain challenges and limitations still persist in this growing field, glycoengineering may be an important tool and future direction that could allow the optimization and homogenization of the Fc N-glycans on HCA produced within a given expression platform.
Figure 1. The conserved IgG Fc N-glycan. (A) Structure of an IgG antibody. (B) N-glycan precursor. (C) Example of a mature biantennary N-glycan with a bisecting GlcNAc. The rectangle indicates the trimannosyl chitobiose glycan core. According to the internationally agreed convention [10], the symbols used in this figure and in Figure 4 to represent glycan moieties are shown to the right: hexose residues are circles, N-acetylhexosamine residues are squares, deoxyhexose residues are triangles, and the purple diamond is N-acetylneuraminic acid. Created with Biorender.com.
Figure 3. Representative HCD tandem mass spectrum of the [M + 2H]2+ precursor ion observed at m/z 1244.4976, corresponding to the N-glycopeptiform obtained from IgG HCA, consisting of the peptide 293EEQYNSTYR301 with its backbone modified at N297 by HexNAc4Hex3. The blue letters are designations of the fragment ion types, as introduced by Domon & Costello [36] and now universally used for this purpose.
Figure 4. Composition of Fc N-glycans on platform-specific HCAs. Fc N-glycan profiles of HCA-N (black) and HCA-mRNA (pink). Glycans are located on the second potential N-glycosylation site on the HCA heavy chain (N297) with the amino acid sequence 293EEQYNSTYR301. The cartoons above selected bars each illustrate only one of the possible glycoforms corresponding to the compositions on the x-axis. Colored symbols representing monosaccharide units are defined on the right side of Figure 1.
Figure 5. Antibody-dependent sperm phagocytosis of platform-specific HCAs. (A) Number of associated sperm per macrophage-like cell (i.e., engaged with the Fcγ receptor/opsonized or partially internalized) for each antibody treatment (3.33 µg/mL). Assay controls are as follows: media-only negative control (n = 2), Campath positive control (n = 2), and VRC01 isotype control (n = 1). HCA-N and HCA-mRNA data are expressed as the mean ± SEM of three independently performed experiments. HCA-N and HCA-mRNA log-transformed data were analyzed using a two-tailed paired t-test (* = p < 0.05, p = 0.0108). (B) Images of sperm cells associated with macrophage-like cells during the phagocytosis process. Images were taken at 200× magnification. Arrows point to sperm that are in the process of phagocytosis.
Table 1. Sequence coverage and protein composition of HCA-N and HCA-mRNA.
Table 2. Quantification of the Asn297 N-glycosylation site occupancy.
7,177.8
2024-03-01T00:00:00.000
[ "Medicine", "Biology" ]
CQE: A Comprehensive Quantity Extractor Quantities are essential in documents to describe factual information. They are ubiquitous in application domains such as finance, business, medicine, and science in general. Compared to other information extraction approaches, interestingly only a few works exist that describe methods for a proper extraction and representation of quantities in text. In this paper, we present such a comprehensive quantity extraction framework for text data. It efficiently detects combinations of values and units, the behavior of a quantity (e.g., rising or falling), and the concept a quantity is associated with. Our framework makes use of dependency parsing and a dictionary of units, and it provides for a proper normalization and standardization of detected quantities. Using a novel dataset for evaluation, we show that our open source framework outperforms other systems and – to the best of our knowledge – is the first to detect concepts associated with identified quantities. The code and data underlying our framework are available at https://github.com/vivkaz/CQE. Introduction Quantities are the main tool for conveying factual and accurate information. News articles are filled with social and financial trends, and technical documents use measurable values to report their findings. Despite their significance, a comprehensive system for quantity extraction and an evaluation framework to compare the performance of such systems are not yet at hand. In the literature, a few works directly study quantity extraction, but their focus is limited to physical and science domains (Foppiano et al., 2019). Quantity extraction is often part of a larger system, where identification of quantities is required to improve numerical understanding in retrieval or textual entailment tasks (Roy et al., 2015; Li et al., 2021; Sarawagi and Chakrabarti, 2014; Banerjee et al., 2009; Maiya et al., 2015). Consequently, their performance is measured on the downstream task, and the quality of the extractor, despite its contribution to the final result, is not separately evaluated. Therefore, when in need of a quantity extractor, one has to resort to a number of open source packages, without a benchmark or a performance guarantee. Since quantity extraction is rarely the main objective, the capabilities of the available systems and their definition of quantity vary based on the downstream task. As a result, the context information about a quantity is reduced to the essentials of each system. Most systems consider a quantity to be a number with a measurable and metric unit (Foppiano et al., 2019). However, outside of scientific domains any noun phrase describing a value is a potential unit, e.g., "5 bananas". Moreover, a more meaningful representation of quantities should include their behaviour and associated concepts. For example, in the sentence "DAX fell 2% and S&P gained more than 2%", the value/unit pair (2, percentage) indicates two different quantities in association with different concepts, DAX and S&P, with opposite behaviours, decreasing and increasing. These subtleties are not captured by simplified models. In this paper, we present a comprehensive quantity extraction (CQE) framework. Our system is capable of extracting standardized values, physical and non-physical units, changes or trends in values, and concepts associated with detected values. Furthermore, we introduce NewsQuant, a new benchmark dataset for quantity extraction, carefully selected from a diverse set of news articles in the categories
of economics, sports, technology, cars, science, and companies. Our system outperforms other libraries and extends their capabilities by extracting concepts associated with values. Our software and data are publicly available. By introducing a strong baseline and a novel dataset, we aim to motivate further research and development in this field. Related Work In the literature, quantity extraction is mainly a component of a larger system for textual entailment or search. The only work that solely focuses on quantity extraction is Grobid-quantities (Foppiano et al., 2019), which uses three Conditional Random Field models in a cascade to find value/unit pairs and to determine their relation, where the units are limited to the scientific domain, a.k.a. SI units. The definition of a quantity by Roy et al. (2015) is closer to ours and is based on Forbus' theory (Forbus, 1984). A quantity is a (value, unit, change) triplet, and noun-based units are also considered. Extraction is performed as a step in their pipeline for quantity reasoning in terms of textual entailment. Although they only evaluate on textual entailment, the extractor is released as part of the CogComp natural language processing libraries, under the name Illinois Quantifier. Two prominent open source libraries for quantity extraction are (a) Recognizers-Text (Huang et al., 2017; Chen et al., 2023) from Microsoft and (b) Quantulum3. Recognizers-Text uses regular expressions for the resolution of numerical and temporal entities in ten languages. The system has separate models for the extraction of value/unit pairs for percentages, age, currencies, dimensions, and temperatures and is limited to only these quantity types. Moreover, it cannot proactively distinguish the type of quantity for extraction, and the user has to manually select the correct model. Quantulum3 uses regular expressions to extract quantities and a dictionary of units for normalization. For units with similar surface forms, a classifier based on GloVe embeddings (Pennington et al., 2014) is used for disambiguation, e.g., "pound" as weight or currency. Recognizers-Text is used in the work of Li et al. (2021) to demonstrate quantity search, where the results are visualized in the form of tables or charts. They define quantity facts as triplets of (related, value & unit, time). Related is the quantity-related information, close to our definition of concept. However, it is not part of their quantity model but rather extracted separately using rules. They utilize the quantity facts for the visualization of results but do not evaluate their system or the quantity extraction module. QFinder (Almasian et al., 2022) uses Quantulum3 in a similar way to demonstrate quantity search on news articles, but does not comment on the extractor's performance. Another system that indirectly considers concepts is Xart (Berrahou et al., 2017), where instances of n-ary relations containing numerical values and units are extracted and concepts are an argument in these relations. However, the concepts are limited to a domain ontology with specific concepts of a given application domain.
A number of other works utilize quantity extraction as part of their system. MQSearch (Maiya et al., 2015) extracts quantities with a set of regular expressions for a search engine on numerical information. Qsearch (Ho et al., 2019) is another quantity search system, based on quantity facts extracted with the Illinois Quantifier. The works by (Banerjee et al., 2009; Sarawagi and Chakrabarti, 2014) focus on scoring quantity intervals in census data and tables. Extraction of Quantities In the following, we describe our quantity representation model and detail our extraction technique. Quantity Representation In general, anything that has a count or is measurable is considered a quantity. We extend the definition by (Roy et al., 2015) to include concepts and represent a quantity by a tuple (v, u, ch, cn) with the following components: 1. Value (v): A real number or a range of values, describing a magnitude, multitude, or duration. E.g., "the car accelerates from 0 to 72 km/h" has a range v = (0, 72), and "the car accelerated to 72 km/h" has a single value v = 72. Values come in different magnitudes, often denoted by prefixes, and sometimes contain fractions, e.g., "He earns 10k euros" → v = 10000, or "1/5th of his earnings" → v = 0.2. 2. Unit (u): A noun phrase defining the atomic unit of measure. Units are either part of a predefined set of known scientific and monetary types or, in the more general case, noun phrases that refer to the multitude of an object, e.g., "2 apples" → u = apple (Rijgersberg et al., 2013). The predefined set corresponds either to (a) scientific units for the measurement of physical attributes (e.g., "2km" has the scientific unit u = kilometre) or (b) currencies, as the unit of money (e.g., "10k euros" refers to a currency). Predefined units can have many textual or symbolic surface forms, e.g., "euro", "EUR", or "€", and their normalization is a daunting task. Sometimes the surface forms coincide with those of other units, resulting in ambiguity that can only be resolved by knowing the context, e.g., "She weighs 50 pounds" is a measure of weight (u = pound-mass) and not a currency. 3. Change (ch): The modifier of the quantity value, describing how the value is changing, e.g., "roughly 35$" describes an approximation. (Roy et al., 2015) introduce four categories of change: = (equal), ∼ (approximate), > (more than), and < (less than). These categories mainly describe the bounds of a quantity. We extend this definition by accounting for trends and add two more categories, up and down, for increasing and decreasing trends: e.g., "DAX fell 2%" indicates a downward trend (ch = down), while "He weighs more than 50kg" indicates a bound (ch = >). 4. Concept (cn): Concepts are subjects describing or relating to a value. A quantity mentioned in a text either measures a property of a phenomenon, e.g., "height of the Eiffel Tower", in which case the phenomenon and the property are the concepts, or an action has been made involving a quantity, e.g., "Google hired 100 people", in which case the actor is what the quantity refers to. In the phrase "DAX fell 2%" the quantity measures the worth of cn = DAX, and in "The BMW Group is investing a total of $200 million" the investment is made by cn = BMW Group. Sometimes a concept is distributed over different parts of a sentence, e.g., "The iPhone 11 has 64GB of storage." → cn = (iPhone 11, storage). A concept may or may not be present, e.g., "200 people were at the concert" has no concept.
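To make the representation concrete, here is a minimal sketch of the (v, u, ch, cn) tuple as a data structure. This is our own illustrative encoding based on the description above, not code from the CQE repository; the field types and category strings are assumptions.

```python
# Illustrative encoding of the quantity tuple (v, u, ch, cn).
from dataclasses import dataclass, field
from typing import List, Tuple, Union

@dataclass
class Quantity:
    value: Union[float, Tuple[float, float]]  # single value or a (lo, hi) range
    unit: str                                 # normalized unit, e.g. "percentage"
    change: str                               # one of "=", "~", ">", "<", "up", "down"
    concept: List[str] = field(default_factory=list)  # may be empty

# "DAX fell 2%" from the running example:
q = Quantity(value=2.0, unit="percentage", change="down", concept=["DAX"])
print(q)
```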
Quantity Extraction Similar to previous work, we observed that quantities often follow a recurring pattern. But instead of relying on regular expressions, we take advantage of linguistic properties and dependency parsing. The input of our system is a sentence, and the output is a list of detected quantities. For the running example (Example 1), "German DAX fell 0.4 pc, while the CAC40 in France gained 0.1.", the output consists of two quantities: • v = 0.4, u = percentage, ch = down, cn = (German, DAX) • v = 0.1, u = percentage, ch = up, cn = (CAC40, France). Pre-processing The pre-processing stage includes the removal of unnecessary punctuation, e.g., "m.p.h" → "mph", the addition of helper tokens, and other text cleaning steps. An example of a helper token is placing a minus in front of negative values for easy detection in later steps. These steps are done prior to dependency parsing and POS tagging to improve their performance. Numerals that do not fit the definition of a quantity, such as phone numbers and dates, are detected with regular expressions and disregarded in further steps. Tokenization We perform a custom task-specific word tokenization. Our tokenizer is aware of separator patterns in values and units and avoids between-word splitting. For example, in the sentence "A beetle goes from 0 to 80 km/h in 8 seconds.", a normal tokenizer would split km/h → (km, /, h), but we keep the unit token intact. Another example is a numerical token containing punctuation, e.g., 2.33E-3, where naive tokenization changes the value. Value, Unit, and Change Detection The tokenized text is matched against a set of rules based on a dependency parsing tree and POS tags. A set of 61 rules was created based on patterns observed in financial data and scientific documents and by studying previous work (Maiya et al., 2015; Huang et al., 2017). A comprehensive list of all rules can be found in the repository of our project. The rules are designed to find tokens associated with value, unit, and change. Value/unit pairs are often sets of numbers and nouns, numbers and symbols, or numbers and adjectives in various sentence structures. For ranges, the rules become more complex, as lower and upper bounds need to be identified using relational keywords such as "from... to" or "between". Changes are often adjectives or verbs that have a direct relation to a number and modify its value. Sometimes symbols before a number are also an indication of a change, e.g., "∼ 10" describes an approximation. In general, there are six change categories: ∼ for approximate equality, = for exact equality, > for greater-than bounds, < for less-than bounds, up denoting an increasing or upward trend, and down for a decreasing or downward trend. As an example of the extraction, we look at value, unit, and change detection for the two quantities in Example 1. Note that at this stage the surface forms are detected, not normalized values, e.g., "pc" versus "percentage". The NOUN_NUM rule detects the surface form for the first value/unit pair, (0.4, pc). Here, the value has NUM as a POS tag and is the immediate syntactic dependent of the unit token, which is a noun or proper noun. The LONELY_NUM rule detects the value/unit pair for the second quantity, namely (0.1, None). If all other rules fail to find a value/unit pair, this rule detects the number with the POS tag NUM. QUANTMOD_DIRECT_NUM detects the change by looking at the verb or adjective directly before NUM tokens. Here, "fell" is a trigger word for a downward trend. For Example 1, we thus have two extracted triplets with value, unit, and change. More examples are given in Appendix A.1.
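As an illustration of this rule style, the following sketch applies a NOUN_NUM-like check with spaCy's dependency parse. It is a simplified stand-in for the paper's 61-rule set, not the actual CQE implementation; the model name is an assumption, and exact parses may vary by model version.

```python
# Simplified NOUN_NUM-style rule: a NUM token that is an immediate dependent
# of a noun head yields a value/unit candidate; otherwise fall back to a
# LONELY_NUM-style detection with no unit.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed
doc = nlp("German DAX fell 0.4 pc while the CAC40 in France gained 0.1.")

for token in doc:
    if token.pos_ == "NUM" and token.head.pos_ in ("NOUN", "PROPN"):
        print(f"value={token.text}, unit={token.head.text}")
    elif token.pos_ == "NUM":
        print(f"value={token.text}, unit=None (LONELY_NUM fallback)")
```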
If no unit is detected for a quantity, its context is checked for the possibility of shared units. For the quantity v = 0.1, u = None, ch = gained in Example 1, "percentage" is the derived unit, although not mentioned in the text. Shared units often occur in similarly structured sub-clauses or after connector words such as "and", "while", or "whereas". The similarity between two sub-clauses is computed using the Levenshtein ratio between the structures of the clauses. The structure is represented by POS tags, e.g., "German DAX fell 0.4 pc" → "JJ NNP VBD CD NN" and "the CAC40 in France gained 0.1" → "DT NNP IN NNP VBD CD". This ratio is between 0 and 100, where larger values indicate higher similarity. If connector words are present and the ratio is larger than 60, the unitless quantity is assigned the unit of the other sub-clause, e.g., None becomes pc. Finally, the candidate values are filtered by logical rules to avoid false detection of non-quantities, e.g., in "S&P 500", 500 is not a quantity.

Concept Detection

Concepts are detected in one of the following five ways, ordered by priority (rule 2 is sketched in code after this list):

1. Keywords such as for, of, at, or by before or after a value point to a potential concept. For example, "with carbon levels at 1200 parts per million" results in cn = (carbon, levels). The nouns and pronouns before and after such keywords are potential concepts.

2. The entire subtree of dependencies with a number (value) as one of the leaf nodes is inspected to find the closest verb related to the number. If no verb is found, then the verb connected to the ROOT is selected. The nominal subject of the verb is considered as the concept. In Example 1, both "German DAX" and "CAC40 in France" are the nominal subjects of the verbs closest to the values in the text.

3. Sometimes values occur in a relative clause that modifies the nominal, e.g., "maximum investment per person, which is 50000" → cn = (maximum, investment, per, person). In such a case, the noun phrase before the relative clause is the concept, since the relative clause is describing it.

4. If the numerical value in a sentence is not associated with the nominal of the sentence, then it is most likely related to the object. Therefore, the direct object of the verb is also a candidate, e.g., "She gave me a raise of $1k", where "raise" is the direct object of the verb.

5. Finally, if the concept is not found in the previous steps and there is a single noun in the sentence, that noun is tagged as the concept, e.g., "a beetle that can go from 0 to 80 km/h in about 8 seconds" → cn = (beetle).

From the list of candidate tokens for concepts, tokens previously associated with units and values are filtered out and stopwords are removed, e.g., "CAC40 in France" results in cn = (CAC40, France). Generally, a concept is represented as a list of tokens.
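A minimal sketch of rule 2 above (assuming spaCy; the function and variable names are illustrative, not CQE's actual implementation):

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def concept_by_subject(doc, value_token):
    """Climb from the value token to the closest verb ancestor and
    return the tokens of its nominal-subject subtree, minus stopwords."""
    verb = next((t for t in value_token.ancestors if t.pos_ == "VERB"), None)
    if verb is None:                       # fall back to the sentence root
        verb = next(s.root for s in doc.sents)
    for child in verb.children:
        if child.dep_ in ("nsubj", "nsubjpass"):
            return [t.text for t in child.subtree if not t.is_stop]
    return []

doc = nlp("German DAX fell 0.4 pc.")
value = next(t for t in doc if t.pos_ == "NUM")
print(concept_by_subject(doc, value))  # e.g. ['German', 'DAX']
```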
Normalization and Standardization

The final stage is the normalization of units and changes using dictionaries, and the standardization of values. The units dictionary is a set of 531 units, their surface forms, and symbols, gathered from the Quantulum3 library, a dictionary provided by the Unified Code for Units of Measure (UCUM) (Lefrançois and Zimmermann, 2018), and a list of units from Wikipedia. An example of an entry in this dictionary for "euro" is: {"euro": {"surfaces": ["Euro", "Euros", "euro", "euros"], "symbols": ["EUR", "eur", "€"]}}. The detected token span of a unit is normalized by matching against the different surface forms and symbols in the dictionary. The normalized form is the key of the dictionary and is added to the output, e.g., "euro" in the example above, or "cm" giving "centimetre". The normalization makes the comparison of different units easier. Note that conversions between metric units are not supported. For example, "centimetre" is kept as the final representation and not converted to "metre".

If the detected surface form is shared across multiple units, the unit is ambiguous and requires further normalization based on the context. Since language models are great at capturing contextual information, for this purpose we train a BERT-based classifier (Devlin et al., 2019). There are 18 ambiguous surface forms in our unit dictionary, and for each a separate classifier is trained that distinguishes among units based on the context. If an ambiguous surface form is detected by the system, the relevant classifier is used to find the correct normalized unit.

Compound units are also detected and normalized independently. For example, "kV/cm" results in "kilovolt per centimetre", where "kV" and "cm" are normalized based on separate dictionary entries. If no valid match in the dictionary exists, the surface form is tagged as a noun unit and lemmatized, e.g., "10 students" gives u = student. In some cases, the adjective before a noun is also part of the unit, e.g., "two residential suites" results in u = residential suite.

Various trigger words and symbols for bounds and trends are managed in the changes dictionary, where detected tokens for change are mapped to one of the allowed categories ∼, =, >, <, up, down. For example, the entry for equality is "=": ["exactly", "just", "equals", "totalling", "="].
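A minimal sketch of this dictionary-driven normalization; the entries and helper behaviour below are illustrative stand-ins, not CQE's actual data or code:

```python
UNIT_DICT = {
    "euro":           {"surfaces": ["Euro", "Euros", "euro", "euros"],
                       "symbols": ["EUR", "eur", "€"]},
    "centimetre":     {"surfaces": ["centimetre", "centimeter"], "symbols": ["cm"]},
    "pound-mass":     {"surfaces": ["pound", "pounds"], "symbols": ["lb", "lbs"]},
    "pound-sterling": {"surfaces": ["pound", "pounds"], "symbols": ["£", "GBP"]},
}

CHANGES = {"=": ["exactly", "just", "equals", "totalling", "="],
           "down": ["fell", "dropped", "declined"],
           "up": ["gained", "rose", "climbed"]}

def normalize_unit(surface: str) -> str:
    hits = [k for k, e in UNIT_DICT.items()
            if surface in e["surfaces"] or surface in e["symbols"]]
    if len(hits) == 1:
        return hits[0]                 # unambiguous dictionary match
    if len(hits) > 1:                  # ambiguous surface form, e.g. "pounds"
        raise NotImplementedError("defer to the context-based BERT classifier")
    return surface.rstrip("s")         # crude stand-in for noun-unit lemmatization

def normalize_change(trigger: str) -> str:
    return next((cat for cat, words in CHANGES.items() if trigger in words), "=")

print(normalize_unit("cm"), normalize_change("fell"))  # centimetre down
```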
Evaluation

CQE is compared against the Illinois Quantifier (IllQ), Quantulum3 (Q3), Recognizers-Text (R-Txt), Grobid-quantities (Grbd), and GPT-3 with few-shot learning (Brown et al., 2020). From here on, these abbreviations are used to refer to the respective systems. We first compare the functionality of the models, then describe our benchmark dataset and compare the models on precision, recall, and F1-score for quantity extraction. Finally, the unit disambiguation module is evaluated on a custom-made dataset against Q3. Our evaluation code and datasets are available at https://github.com/satya77/CQE_Evaluation.

Table 1 compares the functionality of the models in terms of the different types of values, units, and changes, as well as normalization techniques. IllQ is the only baseline that is able to detect changes in values, but in a limited setting that does not consider upward or downward trends. IllQ performs normalization for currencies; scientific units, however, are not normalized. Furthermore, it fails to detect fractional values and ranges. After our approach (CQE), Q3 has the most functionality and is the only model that correctly detects ranges and shared units and performs unit disambiguation. On the other hand, Q3 disregards noun-based units, and although it is capable of detecting a wide range of value types, it makes incorrect detections of non-quantitative values. R-Txt has dedicated models for certain quantity types but fails to detect other types in the text, ignoring ranges, scientific notation, and noun-based units. Its unit normalization is limited to those quantity types and lacks disambiguation. The Grbd model's major shortcoming is the lack of value standardization, where fractions such as "1/3" and scaled values like "2 billion" are not standardized correctly. The system is limited to scientific units, and its unit normalization works differently than in the other systems, in that scientific units are converted to the base unit and values are scaled accordingly. For example, "1mm" is converted to (0.001, metre). GPT-3 exhibits a lot of variability in its output and does not provide concrete and stable functionality like the models discussed in this section. Therefore, it is not further considered in this comparison.

NewsQuant Dataset

For a quantitative comparison, we introduce a new evaluation resource called NewsQuant, consisting of 590 sentences from news articles in the domains of economics, sports, technology, cars, science, and companies. To the best of our knowledge, this is the first comprehensive evaluation set introduced for quantity extraction. Each sentence is tagged with one or more quantities containing value, unit, change, and concept and is annotated by the two first authors of the paper. Inter-annotator agreements are computed separately for value, unit, change, and concept between the two first authors on a subset of 20 samples. For the first three, Cohen's kappa coefficients (Cohen, 1960) of 1.0, 0.92, and 0.85 are reported. Value detection is a simpler task for humans, and the annotators have perfect agreement. A concept is a span of tokens in the text and does not have a standardized representation; therefore, Cohen's kappa coefficient cannot be used. Instead, we report Krippendorff's alpha (Krippendorff, 2004), with a value of 0.79. In total, the annotators completely agreed on all elements for 62% of the annotations.

We additionally evaluate four datasets available in the repository of R-Txt for age, dimension, temperature, and currencies. These datasets contain only unit/value pairs. The original datasets only contained tags for a certain quantity type and would ignore other types, giving the R-Txt model an advantage. For example, in R-Txt-currencies, only the currencies were annotated, and other quantities were ignored. We added extra annotations for all other types of quantities for a fair comparison. For example, in the sentence "I want to earn $10000 in 3 years" from the currency dataset, where only "$10000" was annotated, we add "3 years". Statistics of the number of sentences and quantities for each dataset are shown in Table 2. The NewsQuant dataset is the largest dataset for this task, containing over 900 quantities of various types. NewsQuant is designed to test for the functionalities mentioned in Table 1 and includes negative examples with non-quantity numerals.

Disambiguation Dataset

To train our unit disambiguation system, a dataset of 18 ambiguous surface forms is created using ChatGPT. For each ambiguous surface form, at least 100 examples are generated, and the final training dataset consists of 1,835 sentences with various context information. For more challenging surface forms, more samples are generated. For the list of ambiguous surface forms and the number of samples for each class, refer to Appendix A.3. A test dataset is generated in the same manner using ChatGPT, consisting of 180 samples, 10 samples per surface form. For more information on the dataset creation, please see Appendix A.4.

Implementation

CQE is implemented in Python 3.10. For dependency parsing, part-of-speech tagging, and the matching of rules, SpaCy 3.0.9 is used. The unit disambiguation module, with its BERT-based classifiers, is trained using spacy-transformers for a smooth integration with the other SpaCy modules.
Parsers were created to align the output formats of the different baselines so that differences in output representation do not affect the evaluation. For instance, for IllQ we normalize the scientific units, and we account for differences in the representation of ranges in Q3. If a value is detected by a baseline but not standardized, or a unit is not normalized to the form present in the dataset, post-processing is applied for a unified output. These steps do not hurt the performance of the baseline models but rather align their output to the format of the benchmark dataset. For more details, refer to Appendix A.

For the GPT-3 baseline, we use the davinci-003 model from the GPT-3 API with a sequence length of 512, a temperature of 0.5, and no frequency or presence penalty. For more details, refer to Appendix A.2. We are aware that with extensive fine-tuning and more training examples the GPT-3 values are likely to improve. However, the purpose of this paper is neither prompt engineering nor designing training data for GPT-3, and few-shot learning should suffice for a baseline.

Analysis of Results

All the models are compared on precision, recall, and F1-score for the detection of value, unit, change, and concept. The disambiguation systems are also compared regarding precision, recall, and F1-score of unit classification. Permutation resampling is used to test for significant improvements in F1-scores (Riezler and Maxwell, 2005), which is statistically more coherent than the commonly used paired bootstrap sampling (Koehn, 2004). Results denoted with † mark highly significant improvements over the best-performing baseline, with a p-value < 0.01.
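A minimal sketch of such a paired permutation (sign-flip) test on per-sentence score differences; this is a generic illustration of the idea, not necessarily the exact protocol of Riezler and Maxwell (2005):

```python
import numpy as np

def permutation_test(scores_a, scores_b, n_perm=10_000, seed=0):
    """Two-sided p-value for the mean difference between paired scores."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_a) - np.asarray(scores_b)
    observed = abs(diffs.mean())
    hits = 0
    for _ in range(n_perm):
        signs = rng.choice((-1.0, 1.0), size=diffs.size)  # randomly swap each pair
        hits += abs((signs * diffs).mean()) >= observed
    return hits / n_perm

# e.g. per-sentence F1 scores of two systems on the same benchmark
p = permutation_test([0.9, 0.8, 1.0, 0.7], [0.7, 0.6, 0.9, 0.7])
```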
NewsQuant: Table 3 shows the results on the NewsQuant dataset. Since Q3, Grbd, and R-Txt do not detect changes, the respective entries are left empty. CQE beats all baselines in each category by a significant margin, where most of its errors are due to incorrect extraction of the dependency parsing tree and part-of-speech tags. The second-best model, Q3, scores highly for value detection, but misses all the noun-based units and tends to overgeneralize tokens to units where none exist, e.g., in "0.1 percent at 5884", Q3 detects "at" as percent per ampere-turn. Q3 also makes mistakes on different currencies and their normalization, which we attribute to its incomplete unit dictionary. R-Txt works well for the quantity types with dedicated models, but all other quantities are ignored or misclassified. One has to manually select a quantity type for R-Txt; we therefore ran all the available model types on each sentence, where any detected quantity is forced into the available model types, resulting in misclassifications.

IllQ has trouble with compound units, e.g., "$2.1 per gallon", and tends to tag the word after a value as a unit, e.g., in "women aged 25 to 54 grew by 1%", "grew by" is the detected unit. The Grbd model detects the correct surface form for values in most cases; however, due to unstable standardization, many standardized values are incorrect. Its unit normalization is limited to a small subset of units, where percentages and compound units are mainly ignored. GPT-3 achieves a score close to Q3 for the detection of units and values and close to IllQ for changes. Nevertheless, due to extreme hallucination, extensive post-processing of the output is required for evaluation, e.g., many of the extracted values were not actual numbers, and units were not normalized. Moreover, GPT-3 often confuses value suffixes, e.g., "billion" or "million", with units and, despite the normalization prompt, fails to normalize units, requiring manual normalization for most detections.

R-Txt Dataset: Evaluation results on the four quantity types of the R-Txt dataset are shown in Table 4, where our model once again outperforms all baselines on value+unit detection for all categories except temperature. Nevertheless, for temperature, the R-Txt improvement over CQE is not statistically significant. The small size of the age and temperature datasets results in inconsistent significance testing. The closeness of value detection between models is due to the structure of the dataset. Most values have the surface form of a decimal, and the diversity of types like ranges, fractions, and non-quantities is negligible. For more details on the error analysis and the common mistakes of each model on NewsQuant and R-Txt, see Appendix A.6.

Concept Detection: Finally, concept detection is evaluated on the NewsQuant dataset. Results are shown in Table 5. Following the approach of UzZaman et al. (2013) for evaluation, strict and relaxed matches are compared. A strict match is an exact token match between the source and target, whereas a relaxed match is counted when there is an overlap between the system and ground-truth token spans. Based on the scores, we observe that concept detection is harder than value+unit detection. Even GPT-3 struggles with accurate predictions. Our algorithm for concept detection is limited to common cases and does not take into account the full complexity of human language, leaving room for improvement in future work. Moreover, in many cases the concept is implicit and hard to distinguish even for human annotators. In general, our approach is more recall-oriented, as we keep any potential candidate from the concept detection step in the final result set, trying to capture as many concepts as possible. Hence, there is a big gap between partial and complete matches. However, since the method is rule-based, the rules can be adjusted to be restrictive and precision-focused.
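The distinction between the two matching modes reduces to a span comparison; a minimal sketch, with token spans given as (start, end) index pairs and names that are illustrative only:

```python
def strict_match(pred_span, gold_span):
    # Exact agreement between predicted and gold token spans.
    return pred_span == gold_span

def relaxed_match(pred_span, gold_span):
    # Any overlap between the half-open spans counts as a match.
    (ps, pe), (gs, ge) = pred_span, gold_span
    return ps < ge and gs < pe

print(relaxed_match((3, 6), (5, 9)))  # True: the spans overlap
print(strict_match((3, 6), (5, 9)))   # False
```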
Unit Disambiguation: CQE is compared against Q3 (the only other system with disambiguation capabilities) in Table 6. Since the normalization of units is not consistent in the GPT-3 model and requires manual normalization, GPT-3 is left out of this study. All 18 classifiers are evaluated within a single system. The results are averaged by weighting the score of each class label by the number of true instances. CQE significantly outperforms Q3 on all metrics, and it is easily extendable to new surface forms and units by adding a new classifier. Since the training data is generated using ChatGPT, a new classifier can be trained using our paradigm and data generation steps, as shown in Appendix A.4. For a detailed evaluation of each class, see Appendix A.5.

Conclusion and Ongoing Work

In this paper, we introduced CQE, a comprehensive quantity extractor for unstructured text. Our system not only significantly outperforms related methods as well as a GPT-3 neural model for the detection of values, units, and changes, but also introduces the novel task of concept detection. Furthermore, we present the first benchmark dataset for the comprehensive evaluation of quantity extraction and make our code and data available to the community. We are currently extending the extractor by improving the quality of edge cases and looking at the compatibility of our rule set with other application domains, e.g., medical text.

Limitations

Despite an extensive effort to account for the most common cases, CQE is still mainly a rule-based approach, requiring manual feature engineering and rule-writing for unseen cases. This issue is more prominent in the case of concept extraction, where the order in which we apply the rules has a direct impact on correct extractions. If a rule with higher priority finds a candidate, the rules further down the list are ignored. Although for humans identifying the correct rule to use is easy by considering context and sentence formulation, such delicate differences in language are not easily captured by rule-based systems. Moreover, CQE relies heavily on correct dependency parsing and POS tagging, and any error in the initial extraction propagates through the entire system. Consequently, even changes in the version of the SpaCy model used for dependency parsing and POS tagging can produce slightly varying results.

A Appendix

A.1 Value, Unit, and Change Detection Rules

In this section, we provide two additional examples for value, unit, and change detection and describe the logic behind a few other rules.

Example 2: "The Meged field has produced in the past about 1 million barrels of oil, but its last well was capped due to technical problems that have not been resolved."

• NUM_NUM detects the compound number 1 million, where 1, a number, is the child of million, a noun, in the dependency tree.
• QUANTMOD_DIRECT_NUM detects the relation of the adjective "about" to the value 1, which is later identified as the change.
• NOUN_NUM_ADP_RIGHT_NOUN finds a noun or proper noun that has a number as a child in the dependency tree. If there are prepositions among the children of the noun, they are also considered part of the unit. In this case, [million, barrels, of, oil] are detected using this rule.

The naming of the rules is preserved in the repository. From the combination of all rules, the candidate tokens [1, million] for value, [barrels, of, oil] for unit, and [about] for change are extracted.

Example 3: "They have a $3500 a month mortgage and two kids in private school."
• NUM_SYMBOL matches a symbol followed by a number. In this case, $3500 is detected.
• NOUN_NUM_QUANT finds a number with a noun or an adverb as its head in the dependency tree.

A.2 GPT-3 and Few-shot Learning

To tag sentences using GPT-3, we use the few-shot learning paradigm by prompting the model to tag quantities and units in the text, given 10 distinct examples. GPT-3 is mainly advertised as a task-agnostic, few-shot learner, and we have not performed extensive fine-tuning. With the 10 examples, we aim to account for a variety of outputs, e.g., compound units, cases where no quantity is present, noun-based units, and prefixes scaling the magnitude of a value. Our full prompt opens with the instruction "Tag quantities and units in the texts:", followed by the 10 examples; the quantities are output in a numbered list, in the order change, value, unit surface form, unit, concept. The unit surface form is used in post-processing if GPT-3 is not able to normalize the unit. {sentence} is replaced with the query sentence to be tagged. Nevertheless, the output of GPT-3 is not consistent and requires extreme post-processing. The post-processing includes cleaning the predicted values to only include numbers and normalizing the units even if the unit is misspelled, e.g., "celsiu" instead of "celsius", "ppb" to "parts-per-billion", or "€" to "euro".

A.3 Ambiguous Surface Forms

In our unit dictionary, we encountered 18 ambiguous surface forms with different normalized units and collected at least 100 samples for each. This list is not comprehensive, and in different scientific domains more ambiguous cases might occur. The number of samples per surface form and the associated units for each surface form are shown in Table 7.

A.4 Disambiguation Prompts

To generate the dataset for disambiguation, we experimented with multiple prompts, using ChatGPT.
The aim was to create training/test data in JSON format, where the sentences are not duplicates and not too simple. For this purpose, two sentences were formulated (one for each unit of a given surface form) and used as input examples of different contexts. The prompt explicitly asks for JSON-format output and 20 samples, due to the sequence length limitation of ChatGPT. The final prompt is as follows, where UNIT1 and UNIT2 are replaced with different units sharing the same surface form, and SURFACE_FORM denotes the ambiguous surface form:

Create a training set of 20 samples, for "UNIT1" and "UNIT2", where in the text the surface form of the unit is always "SURFACE_FORM", but the unit is different. Output in JSON format as follows: [{"text": "Sentence 1", "unit": "UNIT1"}, {"text": "Sentence 2", "unit": "UNIT2"}]

The test dataset is created in the same manner. For certain units, multiple generations were required to obtain sufficiently complex sentences.

Table 1: Comparison of functionality for various extractors.
Table 2: Statistics of the number of sentences, quantities, and sentences with and without quantities in the NewsQuant and R-Txt datasets.
Table 3: Precision, recall, and F1-score for detection of value, unit, and change on NewsQuant.
Table 4: Precision, recall, and F1-score for detection of value and unit on the R-Txt datasets.
Table 5: Relaxed and strict matching precision, recall, and F1-score for concept detection on NewsQuant.
Table 6: Weighted micro-average precision, recall, and F1-score on the unit disambiguation dataset.
Table 7: Ambiguous surface forms, the units associated with them, and the number of samples in the training set for each surface form and unit pair.
Table 8: Error analysis of different extraction systems.
Table 9: Precision, recall, and F1-score for unit disambiguation per class.
Global Uncertainty-Sensitivity Analysis on Mechanistic Kinetic Models: From Model Assessment to Theory-Driven Design of Nanoparticles

The optimal design of nanoparticle synthesis protocols is often approached via one-at-a-time experimental designs. Aside from covering a limited space of the possible input conditions, these methods neglect possible interactions between different combinations of input factors. This is where mechanistic models, which can embrace such combinations, become important. By performing global uncertainty/sensitivity analysis (UA/SA), one can map out the various outcomes of the process vs. different combinations of operating conditions. Moreover, UA/SA allows for the assessment of the model behavior, an indispensable step in the theoretical understanding of a process. Recently, we developed a coupled thermodynamic-kinetic framework in the form of population balance modelling in order to describe the precipitation of calcium-silicate-hydrate. Besides its relevance in the construction industry, this inorganic nanomaterial offers potential applications in biomedicine, environmental remediation, and catalysis, most notably due to the ample specific surface area that can be achieved by carefully tuning the synthesis conditions. Here, we apply a global UA/SA to an improved version of our computational model in order to understand the effect of variations in the model parameters and experimental conditions (induced by either uncertainty or tunability) on the properties of the product. With the specific surface area of the particles as an example, we show that UA/SA identifies the factors whose control would allow fine-tuning of the desired properties. This way, we can rationalize a proper synthesis protocol before any further attempt to optimize the experimental procedure. The approach is general and can be transferred to other nanoparticle synthesis schemes as well.

Introduction

Calcium-silicate-hydrate (CaO-SiO2-H2O, or C-S-H for short) is the most important phase formed during the hydration of cementitious materials [1]. Aside from its key role in the construction industry [1], C-S-H has recently found diverse applications in environmental clean-up [2][3][4], biomedicine [5][6][7], and even catalysis [8,9]. In the biomedical field, for instance, it offers good bioactivity, biocompatibility, and biodegradability [6,7]. Besides these characteristics, the inherently nanostructured construct of C-S-H provides high surface areas, and its relatively low-cost preparation warrants further research for applications where interfaces play a major role [4,7]. Recently, we developed a formalism to model the nucleation and growth of C-S-H using a population balance equation (PBE) framework [10]. The theoretical framework was fitted to the experimental data collected on the precipitation of a synthetic C-S-H with Ca:Si = 2, prepared under controlled conditions resembling the process of cement hydration (in terms of the temporal supersaturation ratio) [10,11]. We estimated the optimal values for the unknown model parameters and explained procedures for the extraction of various output information from the simulation. Additionally, we assessed the merit of our computations by comparing the optimal physical parameters and various outputs against the literature data, wherever available [10]. Here, we build on our previous work and implement two pivotal refinements to improve the simulation speed, robustness, and generality.
Specifically, we replace the ad hoc equilibrium solver of our previous work with PHREEQC, a popular, freely available tool widely used for thermodynamic speciation calculations [12]. This allows for a more straightforward adaptation to new precipitation scenarios and opens up the possibility of utilizing the large thermodynamic databases already included within the software [12]. Additionally, we employ the direct quadrature method of moments (DQMOM) for the solution of the PBE, which has several advantages over our previously used QMOM approach [13][14][15]. We give a detailed derivation of DQMOM and the relevant subtleties critical to the robust and reliable performance of the method. Having this improved simulation framework, we assess the behavior of the C-S-H precipitation model by applying global uncertainty/sensitivity analysis (UA/SA) with different model parameters as the sources of uncertainty. The propagation of uncertainty into different model outputs such as crystallite dimensions, particle edge length, specific surface areas, and precipitation yield is examined thoroughly using three different methods, namely, PAWN (named after its developers, Pianosi and Wagener) [16,17], the Elementary Effect Test [18,19], and variance-based sensitivity analysis (VBSA) [19,20]. The application of these complementary methods enables an unambiguous appraisal of the model performance, which in turn facilitates complexity reduction, i.e., by fixing uninfluential parameters to reasonable values. This also allows for a more robust calibration during the regression to experimental data [19,21]. Additional implications of such global UA/SA concern the robustness of model predictions in response to different sources of uncertainty/variability in the model parameters [19,21]. Besides these outcomes, our work provides the first example of UA/SA on a kinetic model of precipitation and thus can serve as a benchmark for future studies in this direction.

Once we obtained a comprehensive understanding of the model structure, we implemented another UA/SA on a model of reduced complexity and incorporated uncertainty from different experimental conditions. The goal is to aid the design of nanoparticulate products by gaining insight from computer experiments. Often, optimal operating conditions for a synthesis protocol are found using one-at-a-time (OAT) experimental designs [7,19]. Besides covering a limited space of the possible input conditions, this practice also overlooks probable interaction effects between various combinations of the inputs. The latter could produce drastically different behaviour compared to when only one input parameter is changed at a time [19]. A global UA/SA circumvents these limitations and offers an inexpensive alternative to examine a wide range of operating conditions varied in an all-at-a-time fashion [19,21]. With this approach, we propose practical recommendations in order to improve the properties of the final product. As an example, we demonstrate the key influence of the reagent addition rate, in a well-mixed semi-batch reactor, on the accessible specific surface area of the final product. This can be further compounded with adjustments in the solution chemistry to obtain a product with a distinctly higher specific surface area.

Computational Details

The overall computational workflow for the PBE modelling of precipitation processes is explained in detail in our recent articles [10,22,23] and other literature [14,[24][25][26].
Therefore, in this section we will focus on the developments brought forward by the current work. First, we will review the essential characteristics of the precipitation system to be studied and enumerate the limitations of our previous work. Then, we will briefly explain the application of DQMOM to solve the PBE model for a well-mixed system with crystallite size as the internal coordinate (detailed derivations can be found in the Supporting Information (SI), Section 1). Next, we will describe the coupling to PHREEQC. After that, we summarize the overall simulation workflow, including the improvements introduced in this work. Finally, we will present the underlying ideas behind the different sensitivity measures employed to assess the input-output relationships in the overall coupled thermodynamic-kinetic framework. Additional implementation details are presented in the SI, Sections 1 and 2.

Description of the Precipitation System and Limitations of Our Previous Work

The precipitation system studied here is the formation of synthetic C-S-H with a Ca:Si ratio of 2 [10,11]. The precipitate is composed of nanofoils that are a few nm thick and on the order of 100 nm wide (Figure 1(a) and (b)). These two-dimensional nanoparticles are made up of highly defective crystallites, of a thickness typically below 10 nm, which are arranged with liquid crystalline-type orientational order (Figure 1(b) and (c)). Recently, we proposed a pathway (Figure 1(d)) for the formation of this nanoparticulate material and tested it by regressing the experimental kinetic data using a computational model based on population balance equation (PBE) modelling. The framework included primary nucleation, true catalytic secondary nucleation, and molecular growth, and accounted for the time evolution of the precipitation driving force by applying thermodynamic equilibrium to the reactions among the aqueous species (local equilibrium assumption [10,14]). From a mathematical perspective, the framework consisted of a set of ordinary differential equations (ODEs) written for the dynamic evolution of the moments of the crystallite size distribution (the PBE part) and the elemental amounts in the system (the mass balances). Our computational model showed very good plausibility in terms of the goodness of fit, the consistency of the regressed model parameters with respect to knowledge from the literature, and reasonable mechanistic and kinetic predictions. This includes, for instance, the predicted sizes of crystallites and particles, which were compatible with previous experimental and theoretical observations, and the invariably undersaturated state of the solution with respect to portlandite, similar to experimental observations [10]. In the computational framework mentioned above, the PBE set was solved using the quadrature method of moments (QMOM) [10,27]. Although QMOM is a reliable and popular method for this task [15,24,28], a widely used variation called the direct quadrature method of moments (DQMOM) offers several advantages. For instance, QMOM requires a moment inversion algorithm at every time step to find the discrete approximation to the size distribution. This is often an ill-conditioned problem and reduces the computational efficiency of the method drastically [14,28,29]. On the contrary, DQMOM directly follows the discrete abscissas and weights approximating the size distribution and employs commonly used numerical methods such as matrix inversion instead of moment inversion algorithms.
This makes the method more convergent and extremely fast [13,14,28], to an extent that Haderlein et al. [14] have employed it for the development of "flow-sheeting" software tools. Other benefits of DQMOM over QMOM are its more efficient coupling to fluid dynamics and its straightforward extension to multivariate distributions (namely, those with more than one internal coordinate) [15,29]. The latter is particularly important in the case of C-S-H, as oftentimes the precipitation leads to variable-composition solid solutions [30], necessitating the application of at least two internal coordinates, namely, size and composition [10]. Another limitation of our previous work is related to the manner of applying the local equilibrium assumption. There, we calculated the supersaturation ratio at fixed time steps selected depending on how fast the precipitation consumes the precursors [10]. Therefore, the supersaturation ratio was an externally calculated quantity provided to the ODE function (the function calculating the derivatives as a function of time and the dependent variables [31]). This is very similar to the approach adopted by Myerson et al., who only recalculated the supersaturation ratio when there was a significant (more than 0.1%) change in the amount of the precipitate [26]. This approach was adopted to minimize the computational burden of the speciation calculations. Additionally, similar to the work by Haderlein et al. [14] and Galbraith and Schneider [32], a bespoke speciation solver was developed to expedite the overall simulation [10]. Even though our equilibrium solver and that by Haderlein et al. are developed in a general fashion, this practice limits the applicability of the developed tools, as it requires setting up a thermodynamic database for every single new scenario. Instead, there are a number of powerful, freely available thermodynamic solvers with huge databases already implemented, such as PHREEQC and GEMS [12,33]. Therefore, in this work we will present a general-purpose protocol for the coupling of PHREEQC to PBE simulations of precipitation processes, with the former embedded within the ODE function.

Figure 1. Summary of the synthetic C-S-H precipitation system studied here. (a) Transmission electron micrograph of C-S-H particles with foil-like morphology; (b) schematic representation of C-S-H nanofoils composed of defective crystallites nematically ordered in two dimensions; (c) internal structure of C-S-H crystallites from atomistic simulations [11,34]; (d) the proposed precipitation pathway for synthetic C-S-H of Ca:Si = 2 [10]. Adapted with permission from Ref. [10]. Copyright 2018 The Royal Society of Chemistry.

Population Balance Equation and Its Solution Using DQMOM

For a well-mixed system with crystallite size $L$ as the internal coordinate, the PBE for the number density function (NDF) $n(L,t)$ can be written as

$\partial n(L,t)/\partial t = h(L,t)$,   (1)

where $h(L,t)$ is a source term embracing all the solid formation/transformation processes such as nucleation, growth, and aggregation, inflows and outflows of crystallites, and possible changes in the volume of the reaction liquor. In the kinetic modelling of precipitation processes, the PBE is solved along with differential equations written for the mass balances (viz., the conservation of elements inside the reactor).
In the direct quadrature method of moments, the NDF is approximated by a discretized distribution as

$n(L,t) \approx \sum_{i=1}^{N} w_i(t)\,\delta(L - L_i(t))$,   (2)

where $i = 1, \ldots, N$ indexes the discretization nodes with weights $w_i$ at sizes (or abscissas, in nm) $L_i$, $N$ is the overall number of nodes, and $\delta$ is the Dirac delta function [15,29]. Substituting this approximation into the PBE yields a linear system for the temporal derivatives of the quadrature weights and weighted abscissas (Eqs. (3) and (4); the full derivation is given in SI Section 1). In these equations, diag(·) denotes a diagonal matrix, $J_{prim}$ and $J_{sec}$ are the rates of the primary homogeneous and true secondary nucleation processes (crystallites·m−3·s−1), $L^*_{prim}$ and $L^*_{sec}$ are the respective critical nuclei sizes (in m), and $V$ is the volume of the reaction suspension.

In the current study, three measures have been exercised to make the DQMOM method more robust. Firstly, the coefficient matrix is written with abscissas in nm to reduce the condition number and facilitate the solution of the linear system (that is, Eq. (4), which gives the temporal derivatives of the quadrature weights and weighted abscissas). Secondly, an additional reduction in the condition number is attained by left preconditioning (SI Eqs. (26)-(28)) [13,35]. Thirdly, the temporal derivatives of the log10 of the weights and weighted abscissas are integrated rather than those of the untransformed variables, to bring them to an order of unity and improve the convergence and robustness when using MATLAB's ODE solver [31].

Thermodynamic Speciation via Coupling to PHREEQC

PHREEQC is a freely available geochemical reaction solver capable of simulating a variety of processes, including solid-liquid-gas equilibria, surface complexation, ion exchange, and much more [12]. Aside from its carefully developed internal database, there are many comprehensive third-party databases, including Cemdata18, specifically developed for cementitious systems [30]. With such a broad range of capabilities and extensive thermodynamic infrastructure, coupling PBE simulations with PHREEQC opens up new avenues in the practical and facile application of this powerful method to the understanding and design of particulate processes. Such coupling is facilitated by an already developed module called IPhreeqc, which enables interfacing with different scripting languages such as MATLAB and Python via Microsoft COM (component object model) [36,37]. Very recently, a number of articles have been published reporting the coupling of PHREEQC with PBE simulations [38][39][40]. Nonetheless, to the best of our knowledge, none of these publications provided the corresponding computer code and procedures for the implementation. Here, we developed a function (the file "eqbrmSolver.m" in the SI) that provides a general interface for PHREEQC speciation calculations in MATLAB. Briefly, information about the solution chemistry (e.g., the different compounds and their concentrations) and the experimental conditions (such as temperature, or gases at constant partial pressure in equilibrium with the solution) is provided to the interface, which in turn passes the data to the PHREEQC solver via the COM object. These inputs are provided using keywords in a fashion similar to the PHREEQC syntax (Figure S1). This allows the simulation of precipitation in a practically unlimited number of systems and scenarios without the necessity of rewriting the speciation code and/or its database. The information returned as the output of a speciation calculation comprises the mass of water solvent, the solution density, elemental concentrations, species concentrations and activities, pH, ionic strength, and the saturation indices (SI = log(IAP/Ksp), with IAP being the ionic activity product [12]) with respect to the different solid phases. In the current study, we employed the Cemdata18 database [30] for all the aqueous reactions, while the density and solubility product of the precipitate (C-S-H with Ca:Si = 2) were taken from our previous paper [10].
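The interface in this work is MATLAB via COM; purely as an illustration, an analogous minimal speciation call can be sketched in Python with the phreeqpy wrapper around IPhreeqc (assuming phreeqpy is installed and that a Cemdata18-format database file is locally available under the name used below):

```python
from phreeqpy.iphreeqc.phreeqc_dll import IPhreeqc

phreeqc = IPhreeqc()
phreeqc.load_database("cemdata18.dat")  # database path/name is an assumption

# A toy alkaline Ca/Si solution; the concentrations are illustrative only.
phreeqc.run_string("""
SOLUTION 1
    units mmol/kgw
    temp  25
    pH    12.5
    Ca    2.0
    Si    1.0
SELECTED_OUTPUT
    -pH                  true
    -ionic_strength      true
    -saturation_indices  Portlandite
END
""")

header, values = phreeqc.get_selected_output_array()[:2]
print(dict(zip(header, values)))  # pH, ionic strength, SI(Portlandite)
```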
Overall Simulation Workflow

An example simulation is presented in the "demo.m" script provided in the SI. All the core PBE simulations are handled by the "pbe.m" function, while the output provided by this function suffices for the calculation of any other output of interest. For instance, knowing the moments allows one to calculate different crystallite and particle characteristics such as size and specific surface area. Similarly, knowing the amount of water solvent (an input to "pbe.m") and the temporal mole amounts of all the elements, one can back-calculate the full speciation using the function "eqbrmSolver.m".

For a typical scenario simulating 24 hours of precipitation with known model parameters (from our previous work [10]) on an ordinary HP laptop, with a dual-core Intel® Core™ i5-4310M CPU @ 2.70 GHz and 8.00 GB of RAM, the run time was 80 seconds. This is roughly half the time it took using our previous PBE code (which used QMOM and the bespoke speciation solver). It is worth noting that in the current version, in contrast to our previous code, the speciation calculations are embedded within the ODE function. In other words, the calculation is performed at every time step selected by the ODE solver, which makes the total number of such calculations much greater than in our previous code. Therefore, the real speed-up due to the application of DQMOM (in the modified form introduced here) in place of QMOM is much larger, consistent with previous reports [14,28].

Figure 2. The overall algorithm for the PBE simulation of a precipitation process. Black boxes comprise the main workflow backbone (e.g., the "demo.m" script in the SI), blue represents the content of the ODE function, and red refers to the integration of the ODE set using an appropriate solver (MATLAB's ode15s in this case [31]). All the symbols are defined in the main text.

Problem Setting for Uncertainty Assessment

To quantitatively apportion the variability in the output of the PBE simulation to the various sources of uncertainty in the input (factors), we applied model-independent sensitivity analysis using three popular methods: PAWN [16,17], the Elementary Effect Test or method of Morris (EET) [18,19], and variance-based sensitivity analysis (VBSA) using quasi-Monte Carlo samples generated by the method of Sobol' [19,20]. The analysis was mostly implemented using the SAFE package, an open-source MATLAB toolbox that includes various functions for the generation of input samples, the estimation and evaluation of sensitivity indices, and extensive visualization tools [41]. The target of the uncertainty/sensitivity analysis, that is, the PBE model for the precipitation of synthetic C-S-H, has five unknown parameters: the interfacial tension (γ), the cohesion energy (β), the growth rate coefficient (k_g), the kinetic order of growth (g), and the crystallite aspect ratio (r) [10]. Below we discuss the feasible uncertainty domain for each of these parameters. The nominal value of the interfacial tension from our previous regression to experimental data was estimated to be in the range of 0.05-0.06 J·m−2 [10]. Therefore, a lower bound of 0.04 J·m−2 was assumed for γ.
Numerical experiments showed that smaller values of γ would result in unrealistically early nucleation (at very low supersaturation ratios) or even spinodal decomposition [43]. Theoretical considerations dictate that the relative cohesion energy β/γ between an already precipitated solid substrate and the secondary nuclei is in the range 0-2 [10,[44][45][46]. Preliminary tests, however, indicated that in our system of interest values larger than unity would give extremely small effective interfacial tensions for secondary nucleation, particularly when the interfacial tension is close to the lower bound set earlier. Additionally, a value of β/γ = 2, which corresponds to coherent interfaces or epitaxial growth, hardly ever occurs in precipitation from liquid solutions because of the ions and solvent molecules adsorbed onto the surface of the substrate [46]. This is compounded by the extremely defective nature of the C-S-H crystallites, hampering the formation of interfaces with matching lattices [10,11]. All things considered, a pragmatic upper bound of 1 was selected for β/γ. In our previous work, we fitted values on the order of 10−9 for the parameter k_g and 2 for the parameter g [10]. For the sake of the UA/SA, values of k_g within one order of magnitude around 10−9 were considered. As suggested by Marino et al. [47], to sample the variability space more uniformly, considering the variation of k_g over two orders of magnitude, log10 k_g was preferred for the sampling. As for the kinetic order of growth, g was sampled in the range 1-3, which is the typical variation range covering rough growth, dislocation-controlled mechanisms, and surface nucleation regimes [48,49]. Finally, the ratio of crystallite edge length to thickness, r, regressed in our previous work was 0.5 [10]. Therefore, an input range of 1/3 to 1 was considered for the possible variability. For all the input factors, uniform probability distribution functions were used for the generation of the sample [50].

Sensitivity Measures

In this section, we briefly explain the three sensitivity methods employed in the current study to facilitate the comprehension of the results. The interested reader is referred to the relevant literature for a more in-depth discussion [19,21]. As Saltelli et al. [19] argued, UA/SA consists in the examination of uncertainty in the parameters (input factors) propagating through a mathematical model all the way to the model outputs [19]. One way to do this is through Monte Carlo analysis, wherein a set of row vectors is generated by sampling the input variability space of the different model parameters. The accumulation of these sets gives an input matrix X, with each row corresponding to a set of model parameters whose introduction into the model allows a single simulation run. Therefore, any UA/SA requires an input sample matrix. In this study, low-discrepancy samples were constructed by first generating an input sample X of 2N points, where N is the base sample size and k denotes the number of uncertain parameters (e.g., k = 5 for the five model parameters subject to SA). The sample was generated using the Latin hypercube sampling (LHS) strategy [19,21,41]. This sample was then resampled to build three matrices A, B, and A_B, where A and B are simply the first and last N rows of X, respectively, while A_B is a block matrix of recombinations of A and B, in which A_B^(i) is an N × k matrix whose columns are all taken from A except for the i-th column, which is taken from B [19].
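A minimal sketch of this sample construction (in Python with SciPy, purely for illustration; the paper itself uses the SAFE MATLAB toolbox):

```python
import numpy as np
from scipy.stats import qmc

k, N = 5, 2000                                      # inputs, base sample size
X = qmc.LatinHypercube(d=k, seed=42).random(2 * N)  # 2N points in [0, 1]^k
A, B = X[:N], X[N:]                                 # first and last N rows

# A_B^(i): all columns taken from A except the i-th, which comes from B.
AB = np.array([np.column_stack([B[:, j] if j == i else A[:, j]
                                for j in range(k)]) for i in range(k)])

# Total model evaluations: N (A) + N (B) + k*N (AB) = N*(k + 2),
# i.e. 2000 * 7 = 14000 runs for the five-parameter problem.
```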
Once we have A, B, and A_B, the PBE model is run with all the rows of the sample matrices as input model parameters, giving a total of N(k + 2) sample outputs. Here, the sizes of crystallites and particles, their specific surface areas (SSA), the C-S-H precipitation yield (conversion with respect to the equilibrium composition), the solution pH, and the saturation index with respect to portlandite, all evaluated after 12 hours of precipitation, are qualitatively examined as model outputs (uncertainty analysis). Subsequently, a quantitative assessment in the form of global sensitivity analysis was performed on three selected outputs: the crystallite thickness, the particle edge length, and the specific surface area of the particles (SSAParticle). These are of special practical relevance, and they can be compared to experimentally measured values [2,5,7,10,51]. We started with a base sample size of N = 2000 (corresponding to 14000 input sample points) and extended the matrix using the SAFE toolbox function "AAT_sampling_extend" to assess the convergence behavior of the different sensitivity measures. Throughout this work, bootstrapping over 1000 resamples has been used to estimate the 95% confidence bounds on all the SA indices [41,50].

The first SA method applied here is PAWN, a density-based (or moment-independent) method recently developed by Pianosi and Wagener [16,17]. The central idea behind this method is to compute sensitivity through variations in the cumulative distribution function (CDF) of the output induced by fixing one input factor. In practice, this is achieved by estimating the divergence between the unconditional output CDF, namely that generated by varying all the input factors, and the conditional CDF generated by fixing an individual factor to a prescribed value. Several values within the input variability space can be assigned to the prescribed value, a practice referred to as multiple conditioning, to generate a number of divergence values that can be aggregated in some kind of statistic [16,17,21]. In PAWN, the divergence is expressed in terms of the Kolmogorov-Smirnov (KS) statistic, which is the maximum vertical distance between the conditional and unconditional CDFs [16,17]. PAWN, and moment-independent methods in general, are particularly useful in the case of highly skewed or multimodal output distributions. In such cases, variance is not an adequate proxy of uncertainty, and variance-based methods (see below) can no longer be applied [17]. Another advantage of these methods is that they can be estimated from generic samples, that is, without requiring tailored sampling strategies [17,21]. Therefore, in this work we use the samples generated as described earlier to estimate the average and maximum of the KS statistics calculated over 10 conditioning intervals [50]. To distinguish influential and uninfluential input factors, following Khorashadi Zadeh et al. [52], we artificially introduced a dummy input factor that does not appear in the model and thus has no impact on the output. The sensitivity index (maximum of the KS statistic) corresponding to this factor therefore defines the threshold for parameter screening [50,52].
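A minimal sketch of this estimator (generic-sample variant with equiprobable conditioning intervals; a simplified illustration rather than the SAFE toolbox implementation):

```python
import numpy as np
from scipy.stats import ks_2samp

def pawn_max_ks(X, Y, n_cond=10):
    """Maximum KS distance between conditional and unconditional output
    CDFs, conditioning each input on n_cond equiprobable intervals."""
    indices = []
    for i in range(X.shape[1]):
        edges = np.quantile(X[:, i], np.linspace(0, 1, n_cond + 1))
        ks = [ks_2samp(Y[(X[:, i] >= lo) & (X[:, i] <= hi)], Y).statistic
              for lo, hi in zip(edges[:-1], edges[1:])]
        indices.append(max(ks))
    return np.array(indices)

# Toy check: the third column behaves like the dummy factor described above.
rng = np.random.default_rng(0)
X = rng.random((10_000, 3))
Y = X[:, 0] ** 2 + 0.1 * X[:, 1]
print(pawn_max_ks(X, Y))  # last index stays near the screening threshold
```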
The second SA method used here is the Elementary Effect Test [18,19]. In this case, the idea is to correlate model sensitivity with the effect of perturbing the input factors, one at a time, on the model output. An example of this approach is to estimate (e.g., by finite differences) the partial derivatives with respect to the different model parameters at their nominal values. In this form, the method is computationally very cheap but only provides local sensitivity information [21]. A global extension of this technique is to compute perturbations from multiple points within the input variability space, followed by aggregating them in some type of statistic. The most popular method in this group uses the average μ of the finite differences (also known as Elementary Effects, or EEs) as the sensitivity measure [19,21]. Here, a refined measure μ*, taking the absolute values of the EEs, is used to avoid cancellation due to sign differences [53]. Besides the average of the EEs, their standard deviation σ provides information about the degree of interaction between the parameters and/or their level of nonlinearity [19,21]. To apply EET using the Sobol'-style sample discussed earlier, the input and output were converted to the format required by the EET functions. This was done using the function "fromVBSAtoEET" in the SAFE toolbox, which rearranges the matrices A, B, and A_B so that they can be used to calculate the EET indices from a radial design [19,41]. With this approach, the number of EET sampling points is equal to the base sample size N.

The last method employed in the current study is variance-based SA (VBSA) [19,20]. This method assumes that the output variance is an indicator of its uncertainty and that the contribution of each input factor to this variance is a measure of sensitivity. The technique handles nonlinear and non-monotonic functions/computational models, as well as those exhibiting interactions between their factors. Besides, it is able to capture the influence of each factor's full range of variation [20,54,55]. Perhaps the biggest drawback of this method is the large number of simulation runs it requires for convergence [21,55]. Aside from computational aspects, another limitation of VBSA is that variance is not a meaningful gauge for highly skewed or multimodal output distributions, and hence for such situations the VBSA indices are no longer appropriate measures of sensitivity [17,21]. In VBSA, typically two types of indices are defined, first-order and total-order. The first-order indices (also known as main effects, S_i) measure the direct contribution of the individual input factors to the variance of the output distribution. Equivalently, this can be thought of as the reduction in the output variance achievable by fixing the inputs one at a time [56]. In a model where the output variability is only a result of main effects (lack of interactions between inputs), Σ_i S_i = 1 and the model is said to be additive. Nevertheless, in complex computational models this is rarely the case, and the main effects do not sufficiently describe the output variability. Considering the high computational expense, particularly for larger k, that has to be incurred to estimate all the interaction effects, one may instead calculate total-order sensitivity indices, S_Ti, which embrace the main effect as well as all the interactions (of any order) involving the i-th input factor [56,57]. Considering the nature of the total-effect indices, they are particularly suited for parameter screening, as having zero total effect is a necessary and sufficient condition for an input parameter to be uninfluential [21].
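A minimal sketch of common estimators for these two indices (a Saltelli-style estimator for S_i and Jansen's estimator for S_Ti), operating directly on the model outputs over A, B, and the A_B^(i) matrices from the sampling sketch above; illustrative only:

```python
import numpy as np

def sobol_indices(YA, YB, YAB):
    """YA, YB: outputs on A and B (length N); YAB: shape (k, N), row i
    holding the outputs on A_B^(i). Returns (S1, ST) per input factor."""
    var = np.var(np.concatenate([YA, YB]), ddof=1)
    S1 = np.array([np.mean(YB * (yab - YA)) for yab in YAB]) / var
    ST = np.array([0.5 * np.mean((YA - yab) ** 2) for yab in YAB]) / var
    return S1, ST

# Toy check with an additive model: expected S1 = ST = [0.2, 0.8, 0, 0, 0].
rng = np.random.default_rng(1)
N, k = 50_000, 5
A, B = rng.random((N, k)), rng.random((N, k))
AB = np.array([np.column_stack([B[:, j] if j == i else A[:, j]
                                for j in range(k)]) for i in range(k)])
f = lambda Z: Z[:, 0] + 2 * Z[:, 1]
print(sobol_indices(f(A), f(B), np.array([f(ab) for ab in AB])))
```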
Uncertainty Analysis with Model Parameters as Input Factors

In this section, we employ visual tools (scatter plots and histograms) to appraise the propagation of uncertainty from the model parameters to the different model outputs. Figure 3(a) shows the histograms, with probability normalization, of the average crystallite thickness and edge length, while Figure 3(b) portrays the corresponding results for the particle edge length. From Figure 3(a) we can see that the crystallite thickness and edge length are typically a few nm, consistent with previous reports for different C-S-H products [7,10,11,51,58,59]. The particle edge length, instead, is typically 1-2 orders of magnitude larger, and its distribution assumes a very long tail spanning up to a few µm (Figure 3(b)) [60]. Concerning the specific surface area (viz., the surface area per unit mass of precipitate), the hierarchical structure of the solid C-S-H gives rise to two types of surfaces. One is the overall area of the external surfaces of the crystallites (i.e., neglecting the internal crystallite structure; SSACrystallite), and the other only considers the external surface of the particles, neglecting the surfaces embedded within the bulk of the particles (SSAParticle). Therefore, by definition, SSAParticle ≤ SSACrystallite, with the equality holding in the absence of secondary nucleation and aggregation (in other words, when every crystallite is a particle by itself). SSACrystallite is calculated from the zeroth moment of the crystallite size distribution (which is directly available from the PBE simulations), while SSAParticle can be estimated from the particle dimensions (see the Electronic Supplementary Information in Ref. [10] for the estimation using geometrical considerations). Another simulation output is the saturation index (SI) with respect to portlandite, a solid phase that competes with C-S-H for the precursor ions during the precipitation [10,11,51]. In Figure 3(f) we have plotted this quantity vs. the solution pH at the end of the precipitation (after 12 h). From this plot, an unambiguous correlation is visible, where the SI increases monotonically with pH. In our previous work, we showed that under the examined operating conditions the system was always undersaturated with respect to portlandite, consistent with experimental observations [10,11]. Nevertheless, according to Figure 3(f), at higher pH values portlandite may precipitate along with C-S-H. Indeed, our simulations with the nominal model parameters [10] but at a higher inflow NaOH concentration (e.g., 4 times the value reported in [10], giving a final pH of 13.2; Figure S14(a)) showed that the system does become supersaturated with respect to portlandite, in line with certain experiments (refer to SI Section 3 for further discussion) [51,61].

Now let us examine the mapping of the input uncertainty onto the three selected outputs: crystallite thickness, particle edge length, and SSAParticle. From Figure 4, we see that variability in the parameter g has almost no effect on any of the outputs (as implied by the uniformity of the scattered points and the lack of pattern [21]). Concerning the other input factors, however, the relative degree of uncertainty propagation depends on the output. From Figure 4(a-c, e), the parameters γ, β/γ, log10 k_g, and r are all influential with respect to the crystallite thickness as the output, with β/γ having less impact than the others. Consulting Figure 4(f-h, j), uncertainty in β/γ clearly has the highest effect on the variability of the particle edge length (strong pattern formed by the scattered data points), while γ, log10 k_g, and r have much less of an impact. A similar argument applies to SSAParticle (although to a lesser extent), with this output being much more sensitive to β/γ (Figure 4(k-m, o)). In the next section, we present the quantitative assessment of the sensitivity with respect to the different model parameters.
Figure 4. Scatter plots of crystallite thickness (a-e), particle edge length (f-j), and particle surface area (k-o) vs. the different input model parameters (base sample size 20,000).

Sensitivity Analysis with Model Parameters as Input Factors

The PAWN indices can be reported as either the mean or the maximum of the KS statistic. The latter can also be used to identify the influential and uninfluential input factors by comparing the sensitivity indices to that of a dummy variable (which has no effect on the model outcome) [50,52]. From both indices, we can clearly verify the minimal effect of the uninfluential parameter on the studied model outputs, consistent with the conclusions drawn from the scatter plots discussed earlier (see the previous section and Figure 4). Indeed, taking the maximum KS statistic as the sensitivity measure, its value for this parameter is barely higher than that estimated for the dummy input (Figure 5(d-f)). With the crystallite thickness as the model output, the rest of the parameters are all influential, with the cohesion energy ratio lying only slightly above the dummy variable and the remaining parameters exhibiting quite similar, higher influences (Figure 5(a,d)). With the particle edge length, and to a lesser extent the particle surface area, variability in the cohesion energy ratio has the highest impact on the output uncertainty. Except for the uninfluential factor, the rest of the model parameters have a similar impact on these two outputs (Figure 5(b,c,e,f)).

Figure 5. PAWN sensitivity indices in the form of the mean (a-c) and maximum (d-f) of the KS statistic, with 95% confidence intervals obtained from bootstrapping, for crystallite thickness (a,d), particle edge length (b,e), and particle surface area (c,f) as the outputs (sample size of 140,000).

The EET results corroborate these findings (the mean absolute EE for the uninfluential parameter is invariably much smaller than that of the most influential factor). With the crystallite thickness as the model output, the cohesion energy ratio is identified as the second least influential parameter, while close values are predicted for the other inputs (similar to PAWN; Figure 6(a) and Figure 5(a,d)). Along the same lines, with the particle edge length, and to a smaller degree the particle surface area, the cohesion energy ratio has the highest impact on the output, with sensitivity indices 48 and 12 times that of the second most influential input, respectively (Figure 6(b,c)).

Figure 6. Sensitivity indices obtained using EET with crystallite thickness (a), particle edge length (b), and particle surface area (c) as the outputs (sample size of 120,000). The results are presented as the mean of the absolute Elementary Effects plotted against their coefficients of variation (all 95% confidence intervals are estimated by bootstrapping).

Another observation from our EET analysis relates to the level of nonlinearity of the model response to the parameters and/or the degree of interaction between them. Following a method proposed by Garcia Sanchez et al., the position of each parameter in the plane of the mean absolute EE vs. its coefficient of variation (Figure 6) can be used to judge whether it acts linearly, nonlinearly, or mainly through interactions. Turning to the variance-based indices, even at such a high sample size (140,000) the confidence intervals are wide, and the total effects of several influential parameters, including the cohesion energy ratio, overlap pairwise (Figure 7(a)). Therefore, we attempted a second SA fixing the uninfluential parameter (which we already know is practically uninfluential) to its nominal value of 2 and going up to a sample size of 288,000 (base sample = 48,000). Doing so, the confidence intervals of the cohesion energy ratio fall well below those of the other three influential parameters (Figure S 7(a)).

Figure 7. Variance-based sensitivity indices for the different model outputs, where S_i and S_Ti are the main and total effects, respectively, and the 95% confidence intervals are estimated by bootstrapping.

From Figure 7(b), we note the same problem with the particle edge length as with the crystallite thickness. This time, however, even at an input sample size of 288,000 the confidence bounds overlap significantly (Figure S 7(b)). This observation can readily be explained by looking at the probability histogram of the output, which extends over three orders of magnitude (Figure 3(b)). In other words, the complication arises from the highly skewed distribution of the particle edge length, with a skewness of 30 (compare with 10.7 for the crystallite thickness; Figure 3(a)) [17,21].
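To make the PAWN statistic used above concrete, the following MATLAB sketch estimates the index for a single input factor as the mean and maximum two-sample KS distance between the unconditional output distribution and distributions conditional on the factor lying in one of n equiprobable slices (the generic, conditioning-interval formulation of PAWN). The names are illustrative; kstest2 and quantile are from the Statistics and Machine Learning Toolbox.

    function [ks_mean, ks_max] = pawn_index(x, y, n)
    % x: sampled values of one input factor; y: model output;
    % n: number of conditioning intervals.
    edges = quantile(x, linspace(0, 1, n + 1));
    ks = zeros(n, 1);
    for k = 1:n
        in = x >= edges(k) & x <= edges(k + 1);
        [~, ~, ks(k)] = kstest2(y(in), y);   % conditional vs. unconditional CDF
    end
    ks_mean = mean(ks);
    ks_max  = max(ks);
    end

Running the same function on a dummy input drawn independently of the model gives the screening threshold used in Figure 5.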
One remedy to this problem is the application of a rank or log10 transformation to the output before estimating the variance-based indices (Figure 7(d-i) and Figure S 7(d-i)). Such transformations, however, change the quantity whose variance is being apportioned, and the resulting indices must be interpreted with care. Especially, with the parameters that affect the particle edge length mainly via interactions (see their near-zero main effects in Figure 7(b) and Figure S 7(b)), upon transformation they apparently become much less influential (that is, they adopt smaller total effects; Figure 7(e,h) and Figure S 7(e,h)). The same complication can be traced in Figure 7(d,g) (and Figure S 7(d,g)), because there, too, the parameters act mainly by way of interactions. Another interesting feature can be seen in the variance-based indices with the log10-transformed surface area as the output (Figure 7(i) and Figure S 7(i)). Here, in contrast to the case with the untransformed output variable (Figure 7(c) and Figure S 7(c)), the indices have very broad confidence intervals that significantly overlap and make any conclusive deduction impossible [21]. A closer examination of the probability distributions of the surface area and its log10 transformation reveals that while the former is almost unimodal (Figure 3(d)), the latter is highly multimodal (Figure S 8). Quantitatively, Hartigan's dip test of unimodality [66,67] gives p-values of 0.11 (insignificant multimodality) and 0 (significant multimodality) for the untransformed and log10-transformed variable, respectively. Therefore, aside from the complication of converting SA results from transformed outputs back to the original ones [57,64,65], the log10 transformation may render the output distribution multimodal, limiting the applicability of VBSA in such scenarios [16,21]. It is worth noting that the latter problem should not occur with the rank transformation, as the converted distribution is always uniform and thus unimodal [68].

UA/SA with Selected Model Parameters and Experimental Conditions as Input Factors

Now that we have examined the model behavior in detail, we can turn our attention to the holy grail of the current study, that is, the theory-driven design of nanoparticle synthesis processes. Ideally, a precipitation model should be able to explain the process as a function of the experimental conditions alone. In other words, all the parameters in the theoretical framework would have to be defined as functions of operating conditions such as temperature, concentrations of reagents, ionic strength, etc. This is not an easy task, because the development of such models requires extensive, and sometimes independent, sets of experimental data to identify the mechanistic steps involved and to calibrate the corresponding theoretical constructs. For instance, Schroeder et al. attempted to calibrate such a framework for the formation and polymorphic transformation of calcium carbonate [24]. Although they accounted for different physicochemical aspects and correlated different parameters with the environmental conditions inside the reactor, limited success was achieved in reproducing the experimental data, given the extremely complicated nature of the precipitation process. In the specific case of C-S-H precipitation, additional complications arise because the precipitate usually forms a solid solution whose composition depends on the environmental conditions and may evolve as a function of time [10,69,70]. Therefore, with experimental kinetic data being scarce for synthetic C-S-H [10,11], it is only possible to semi-quantitatively design the product properties, as we will present in this section.
In its novel environmental [2-4], biomedical [5-7], and catalysis applications [8,9], the accessible specific surface area of the C-S-H product is one of the most important properties of interest. Therefore, in this section we mainly focus on this characteristic, while information about crystallite and particle sizes is reported for benchmarking against the literature data. From the discussion in the previous sections, we found that one of the model parameters is significantly less influential than the rest, with an impact barely above that of the dummy variable. Additionally, our previous studies showed that the aspect ratio of C-S-H crystallites is 0.5 irrespective of the mixing flow rate [10]; interestingly, the same aspect ratio was found for lower Ca:Si solids based on atomistic simulations [71]. Therefore, in this section we fix these parameters to their nominal values. The crystallite and particle dimensions we observe here are consistent with literature reports [73-75]; TEM images also give values within the range of a few tens to a few hundreds of nm for the width of C-S-H particles [10,11,72].

Turning to the specific surface area, the scatter plots show that this output is most sensitive to the cohesion energy ratio (Figure 9(b)). Among the experimental conditions, the addition flow rate of the silicate solution seems to dominate the output variability, albeit with a lower impact when compared to the cohesion energy ratio (Figure 9(g)). Figure 9(h) shows the colored scatter plot for these two factors, with marker colors proportional to the output value. The emergence of color patterns in such a plot is a simple and intuitive tool to assess the degree of interaction between pairs of input factors [21,41] (see the short plotting sketch below). From Figure 9(h), a weak pattern can be discerned (upper-left region) where the simultaneous occurrence of a high flow rate and a low cohesion energy ratio gives rise to exceptionally higher surface areas (see also Figure S 11(c), presenting the corresponding EET results, where the C.V. values for the three most influential parameters are all below unity, indicating weak interactions among the parameters).

For a quantitative assessment of the variability propagation to the model outputs, PAWN sensitivity indices were estimated with the selected model parameters and experimental conditions as input factors. Figure 10 summarizes the results with the crystallite thickness, particle edge length, and particle surface area as the outputs (consult Figure S 10 for the convergence of the PAWN indices; similar conclusions can also be obtained using EET, as depicted in Figure S 11). For the crystallite thickness, the model parameters remain the most influential factors (Figure 10(a,d)), and all of the experimental variables have low influences, barely above the dummy index (Figure 10(d)), with the silicate addition flow rate standing out among them. The larger impact of the flow rate can be understood from the fact that at higher addition rates the supersaturation build-up is larger, which in turn induces a larger contribution from nucleation events to the overall precipitate. Put differently, higher nucleation rates give rise to a larger number of crystallites among which the remaining precursor is divided, yielding smaller crystallites (the same trend was also detected in our previous work; see Table 1 in Ref. [10]). Looking at Figure 10(b,e), the particle edge length is most sensitive to the cohesion energy ratio, with the rest of the input factors having minimal effects, only marginally above the dummy index. Physically, this means that the relative rates of primary and secondary nucleation events determine the final particle size. With the particle surface area as the SA target, the cohesion energy ratio is again the most influential parameter (Figure 10(c,f)), compatible with our scatter plots (Figure 9(b)). Besides, among the experimental conditions, we can distinguish a comparable dependence on the silicate addition flow rate (Figure 10(c,f); similar inference as in the scatter plot of Figure 9(g)).
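The colored scatter plot of Figure 9(h) can be reproduced with a few lines of MATLAB; here Q, ratio, and ssa stand for the sampled flow rates, cohesion energy ratios, and resulting surface areas (illustrative names, not those of the original scripts):

    scatter(Q, ratio, 15, ssa, 'filled');    % marker color encodes the output
    xlabel('silicate addition flow rate');
    ylabel('cohesion energy ratio');
    colorbar;                                % color patterns hint at pairwise interactions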
Conversely, the specific surface area of the product is much less sensitive (in a global sense) to the other experimental variables. This is a favorable outcome, as it allows for optimizing this key property by tuning the synthesis conditions: higher surface areas can generally be obtained by increasing the silicate addition flow rate irrespective of the values the other uncertain input factors assume. From a physical point of view, this can again be explained in light of the supersaturation build-up brought about by a higher flow rate (providing the limiting reactants, Na2SiO3 and NaOH, faster), which favors primary nucleation over secondary nucleation (and nucleation, in general, over growth) [76]. Consequently, higher particle number concentrations are obtained, making the overall surface area larger.

Figure 10. Results of SA with selected model parameters and experimental conditions as the input factors. PAWN sensitivity indices in the form of the mean (a-c) and maximum (d-f) of the KS statistic, with 95% confidence intervals obtained from bootstrapping, for crystallite thickness (a,d), particle edge length (b,e), and particle surface area (c,f) as the outputs (sample size of 54,000).

Previously, Wu et al. synthesized C-S-H (of lower Ca:Si ratios) with specific surface areas ranging between 100 and 500 m²/g, obtained by varying the synthesis conditions [7]. Our results show that there is room for further improvement by increasing the addition flow rate of the silicate solution, although one has to optimize the design of the synthesis reactor for maximal mixing [77]. This can be reinforced by the synergistic effect of lowering the cohesion energy (that is, lowering the cohesion energy ratio; Figure 9(b)), which can be induced either by increasing the relative concentration of monovalent ions (e.g., adding a sodium salt to the mixture) or by working at lower pH values [42,78]. Of course, this conclusion only applies to the set of synthesis conditions investigated here, and carefully calibrated computational models are needed to cover more diverse scenarios. These include, for instance, different reactant ratios (which typically induce variations in the Ca:Si ratio of the precipitate [9]), alternative addition orders, different pH levels, and the inclusion of other reagents/surfactants (besides those used in the original experiments [10]).

Conclusions

In summary, we presented a faster, more user-friendly, and more robust version of our previous PBE modelling framework (put forward in Ref. [10]) describing the process of precipitation from liquid solutions. This was achieved by replacing our speciation function with an interface to PHREEQC, providing access to the large databases already implemented in this popular software. Thanks to this modification, the adaptation to new precipitation scenarios is made much more straightforward and can be performed using keywords similar to the conventions used in PHREEQC, eliminating the need to prepare a database file for every new system. Another modification was the application of DQMOM, which offers several advantages in terms of speed, robustness, and adaptability over our previously implemented method (QMOM). Subtle technicalities in the implementation of DQMOM to obtain a reliable and quickly converging solution method were explained to allow replication/extension of the current work by other researchers. We also provide fully commented MATLAB codes implementing the PBE simulation workflow in the accompanying Supporting Information.
Upon developing the improved computational framework, three different global uncertainty/sensitivity analysis (UA/SA) methods were applied to understand the behavior of the model in response to uncertainty in the various model parameters. For several simulation outputs, we either demonstrated the consistency of the results from the different SA measures or explained the reason behind the inadequacy of the applied method. In the latter case, for instance, we presented the particle edge length as an output whose highly skewed distribution hampers the convergence of variance-based indices.

Now, defining the moment source term vector $\bar{\mathbf{S}} = (\bar{S}_0, \bar{S}_1, \ldots, \bar{S}_{2N-1})^{T}$ (Eq. (12)), Eq. (11) can be recast into matrix form as (boldface symbols are vectors and matrices)

$$\mathbf{A}\,\boldsymbol{\alpha} = \bar{\mathbf{S}},$$

where $\boldsymbol{\alpha}$ collects the source terms of the N weights and N weighted abscissas. In our experience, the scaling procedure explained earlier is an inevitable step in the application of DQMOM to the process of nanoparticle formation. Yet another reduction in the condition number of $\mathbf{A}$ can readily be achieved by preconditioning, which makes the convergence of the iterative solution more robust and faster [5]. This is particularly important when dealing with particulate processes that give rise to sharp changes in the particle phase space (e.g., nucleation or aggregation) [4]. Here, we applied a left preconditioning using a diagonal matrix built from the rows of $\mathbf{A}$; being diagonal, its inverse can trivially be obtained by inverting the main diagonal elements [3]. After these considerations, solving Eq. (28) at each time step of integrating the set of ordinary differential equations (the ODE set composed of the PBE and the mass balances) yields the source terms of the weights and weighted abscissas.

For the ODE solvers to work efficiently, the dependent variables have to be properly scaled [6]. Here, with the weights being in the range of 10^20 crystallites·m^-3 (or higher), both the weights and the weighted abscissas can become extremely large, especially during the burst of nucleation (nucleation rates are in excess of 10^16 crystallites·m^-3·s^-1 for typical model parameters [7]). To bring these values to the order of unity, we solved the ODE set for the log10 of the weights and weighted abscissas.

Now, let us describe the constituents of the source term $\bar{S}_k$. For molecular growth (which could be size-dependent [7]) in a homogeneous system we have [1]

$$\bar{S}_k^{\mathrm{growth}} = k \int_0^{\infty} L^{k-1}\, G(L)\, n(L)\, \mathrm{d}L,$$

which is obtained using integration by parts [3]. Now, using the quadrature representation of the number density function (Eq. (1)),

$$\bar{S}_k^{\mathrm{growth}} \approx k \sum_{i=1}^{N} w_i\, L_i^{k-1}\, G(L_i).$$

In fact, this is the N-point quadrature approximation of the growth source term for moment order k [1,7]. The corresponding matrix expressions (written for abscissas in nanometers), as well as the nucleation source term, which involves the volume of the reaction suspension, follow analogously and are implemented in the accompanying MATLAB codes.

Another complication arises from the fact that the PBE only tracks the evolution of crystallites. Therefore, an additional differential equation is required to account for the time variation of the particle number concentration $N_p$ (particles·m^-3) [7,8]. Again, with $N_p$ adopting very large values, we solve the ODE for its log10, using

$$\frac{\mathrm{d}\log_{10} N_p}{\mathrm{d}t} = \frac{1}{N_p \ln 10}\,\frac{\mathrm{d}N_p}{\mathrm{d}t}.$$

Besides the PBE set and the ODE for $N_p$, we have to solve the mass balances over the elemental abundances inside the solution. Since in the current precipitation system the molar amounts are in the mmol range, the corresponding derivatives are defined in terms of mmol·s^-1 to bring them closer to the order of unity. The rate of precipitate formation then provides the coupling between the kinetics and the thermodynamic speciation calculation at each time step of integrating the ODE set [7,9,10].

Additional Details on the Implementation of the PBE Simulation Framework

Throughout our PBE simulations, we generally used 10^-12 and 10^-5, respectively, for the relative and absolute tolerances input to MATLAB's ODE solver (ode15s).
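A hedged MATLAB sketch of the numerical core described above follows: assembly and diagonally preconditioned solution of the DQMOM linear system for a two-node example, together with the log10 scaling and the stated solver tolerances. The preconditioner choice (row-maximum scaling), the constant growth rate, and all names are assumptions for illustration, not the original implementation.

    N = 2;
    w = [1e20; 5e19];                  % example weights (crystallites/m^3)
    L = [4e-9; 8e-9];                  % example abscissas (m)
    G = @(l) 1e-12 * ones(size(l));    % assumed constant growth rate (m/s)

    % DQMOM system: row k has entries (1-k)*L_i^k and k*L_i^(k-1)
    A = zeros(2*N); Sbar = zeros(2*N, 1);
    for k = 0:2*N-1
        A(k+1, 1:N)     = (1-k) * L'.^k;
        A(k+1, N+1:end) = k * L'.^(k-1);
        Sbar(k+1) = k * sum(w .* L.^(k-1) .* G(L));   % growth source term
    end

    % Diagonal left preconditioning (assumed row-max scaling), then solve:
    D = diag(1 ./ max(abs(A), [], 2));
    alpha = (D*A) \ (D*Sbar);
    a = alpha(1:N);                    % d(w_i)/dt
    b = alpha(N+1:end);                % d(w_i*L_i)/dt

    % log10 scaling of the tracked weights (chain rule):
    dlogw = a ./ (w * log(10));        % d(log10 w_i)/dt

    % Tolerances quoted above, as passed to ode15s:
    opts = odeset('RelTol', 1e-12, 'AbsTol', 1e-5);

Note that for k = 0 the growth source vanishes, reflecting that growth does not change the total crystallite number.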
Moreover, to avoid convergence problems with PHREEQC, we used a maximum step size of 3 (modified using the keyword data block KNOBS, keyword -step_size; the default value is 100). This fine-tunes the variation in the activities of the master species within a single iteration and makes the solution more robust [11]. Despite these considerations, particularly in the case of very sharp fronts (as with, for instance, extremely fast nucleation at extreme values of the operating variables accompanied by a high cohesion energy ratio), the ODE solver may converge to less accurate results or may not converge at all. This can readily be checked by comparing the precipitation yield calculated from the third moment against that estimated from the Ca elemental balance. The latter value is invariably more reliable, as the calculation of moments in moment-based methods is always associated with some error, which generally increases with the moment order and depends on the specific problem [2,12]. In our experience, with the default tolerances the discrepancy between the two yields is generally low and within a few percent. Nevertheless, an improvement can be achieved by reducing the absolute tolerance to 10^-6. This, however, comes at the expense of additional computational time. Therefore, in all our simulations we first used the larger tolerance (10^-5) and redid the simulation with the smaller tolerance only for cases where the two yields differed by more than 1%.

Another important detail about the implementation of moment-based methods concerns the initial conditions for the solution of the population balance equations. When there are no particles in the reactor at t = 0, the initial conditions for the weights and weighted abscissas would be zero [1]. Thus, in the first time step the abscissas are not defined, the coefficient matrix is not invertible, and hence the source terms of the weights and weighted abscissas cannot be calculated. A viable way to work around this issue is to introduce a negligible amount of tiny fictitious seeds into the reactor to start the integration and maintain stability during the computations [1,7,10]. In the current work, N×10^4 m^-3 crystallites/particles with sizes (1, 2, …, N)×10^-10 m were used to seed the PBE simulations.

One last implementation detail is related to the simulations performed at temperatures other than room temperature. In the absence of thermodynamic data on the temperature dependence of the C-S-H dissolution reaction (in particular for Ca:Si = 2), we adopted an approximation protocol recommended for so-called isocoulombic reactions [13,14]. Such reactions have the same number of ions on each side of the reaction, or at least the same total charge on both sides. This simplifies the temperature dependence of the reaction, because the variations in the standard heat capacities of the different species cancel out [13,14]. Additionally, Gu et al. [14] noticed that not only are the ΔrCp° and ΔrS° terms small for isocoulombic reactions, but they are also usually of opposite signs, canceling each other out. This leads to the so-called one-term extrapolation approach, which states that for well-balanced aqueous reactions ΔrG° does not depend on temperature. This way, the Gibbs energy change of the reaction at one temperature (usually room temperature) suffices to estimate the value at other temperatures [14]. The method also applies to isocoulombic reactions involving condensed phases, with non-reference-state solids (mostly minerals) giving less accurate estimates compared to when the solid is in its reference state [14].
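In equation form, the one-term extrapolation described above can be summarized as follows (a standard statement of the approach; T_ref denotes the reference temperature and R the gas constant):

$$\Delta_r G^{\circ}(T) \approx \Delta_r G^{\circ}(T_{\mathrm{ref}}) \quad\Longrightarrow\quad \log_{10} K(T) \approx -\,\frac{\Delta_r G^{\circ}(T_{\mathrm{ref}})}{R\,T\,\ln 10},$$

so the room-temperature Gibbs energy of the (combined, well-balanced) reaction alone fixes the equilibrium constant at other temperatures.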
For reactions that are not inherently isocoulombic, combining them with other reactions (whose equilibrium constants are known as a function of temperature) is a necessary step before using the one-term extrapolation method [13]. For this purpose, the ionization of water is often used; the dissolution reaction of our C-S-H solid was accordingly combined with the water ionization reaction to obtain an approximately isocoulombic form before applying the extrapolation.

Concerning the convergence of the PAWN indices, convergence is achieved already at sample sizes above 50,000, and the 95% confidence intervals are narrow, allowing for a straightforward assessment of the relative importance exhibited by the various inputs. Overall, the relatively fast convergence of the different indices, the narrow confidence intervals, and the consistency of the results with the scatter plots testify to the suitability of PAWN for the current UA/SA problem.

Supplementary UA/SA Results with Model Parameters as Input Factors

Figure S 4 summarizes the convergence behavior for the mean of the absolute EEs (Figure S 4(a-c)) and their C.V. values (Figure S 4(d-f)). For the mean values, convergence is obtained at sample sizes above 60,000 in all cases, and the confidence intervals are quite narrow (Figure S 4(a-c)). Achieving convergence for the C.V. values, on the contrary, poses a significant challenge, and the confidence intervals are relatively wide even at a very large sample size of 120,000 (Figure S 4(d-f)). Fixing the uninfluential parameter to its nominal value of 2 and estimating the EET indices over a sample of size 195,000 resolves this issue (Figure S 5). With this larger sample, the obtained measures and their C.V. do not change appreciably, but the confidence intervals generally become smaller (compare Figure 6 to Figure S 5). In particular, with the crystallite thickness as the output, the confidence interval of the C.V. of one parameter separates from those of the other parameters, assuming the lowest value among them (Figure S 5(a)); a similar separation is observed with the particle edge length as the output (Figure S 5(b,c)).

Turning to the variance-based indices, convergence of the main effects is fast, even though several parameters have similar main effects (Figure S 6(a)). On the contrary, convergence of the total effects poses a significant challenge in the case of the crystallite thickness and the particle edge length (Figure S 6(d,e)), given their highly skewed distributions discussed earlier (Figure 3(a,b)). This mainly manifests in wide and significantly overlapping confidence intervals, while the average values from bootstrapping stabilize already at the larger sample sizes (Figure S 6(d,e)). This is more evident when the uninfluential parameter is fixed to its nominal value of 2 and the analysis is extended to even larger sample sizes (Figure S 7).

Portlandite formation, in particular, is an endothermic process and is promoted at higher temperatures and activities [15]. A less strong influence can also be discerned when decreasing the Ca(NO3)2 concentration (Figure S 13(e)), although this parameter has to be kept high enough to achieve acceptable precipitation yields (Figure S 12(e)). PBE simulations at a NaOH concentration of 0.4 mol/kg water (4 times the value in our experiments in Ref. [7]) at room temperature and 50 °C show these effects, giving rise to periods with a supersaturation ratio in excess of unity for portlandite (Figure S 14). Fortunately, none of these parameters directly influences the C-S-H particle SSA; therefore, it should be possible to produce a high-surface-area C-S-H product without coprecipitating portlandite.
12,510.4
2019-10-28T00:00:00.000
[ "Engineering" ]
Material Discrimination Algorithm Based on Hyperspectral Image

Introduction

Visible/near-infrared band images are obtained by sensors through detecting the electromagnetic radiation reflected by objects. They can precisely characterize ground objects so that each object has a spectral fingerprint, which is of great significance for the identification of object materials [1,2]. However, a hyperspectral image has high spectral dimension and spatial resolution, so it is difficult to process directly because of the large amount of data [3,4]. Thus, more in-depth studies have been carried out: in 2010, Yang et al. [5] used a supervised way to select band signals; Di et al. [6] applied band selection to human face recognition and achieved good results. In 2010, Li and Qian [7] constructed a sparse matrix to analyze different bands; Samadzadegan and Mahmoudi [8] used swarm intelligence to optimize the band selection strategy. In 2012, Du et al. [9] established a collaborative sparse model to select hyperspectral bands; Hedjam and Cheriet [10] realized band selection based on graph clustering. In 2013, Feng et al. [11] realized band selection based on trivariate mutual information and clonal selection; Nakamura et al. [12] proposed a nature-inspired framework for band selection. In 2014, Su et al. [13] used particle swarm optimization to optimize the band selection process; Xiurui Geng et al. [14] realized band selection through gradient analysis of different band images. In 2015, Jia et al. [15] proposed a band selection scheme based on the idea of sorting; Patra et al. [16] introduced the idea of rough sets to select bands. In 2016, Feng et al. [17] utilized multiple kernel learning based on discriminative kernel clustering for hyperspectral band selection; Liu et al. [18] proposed a band selection algorithm based on the distribution of adjacent pixels. In 2017, Cao et al. [19] improved a classification map algorithm for fast hyperspectral selection; Shah et al. [20] proposed a dynamic frequency-domain algorithm to realize band selection. In 2018, Wang et al. [21] proposed an optimal clustering framework to achieve hyperspectral band selection; Xie et al. [22] carried out modeling and analysis according to the representativeness of the bands. In 2019, Sun et al. [23] used weighted kernel regularization to realize band selection; Sun et al. [24] calculated the variance between spectral bands and built a model for band selection. In 2020, Torres et al. [25] applied band selection to the field of signal enhancement; Sun et al. [26] used the idea of low rank to cluster hyperspectral bands; Patra and Barman [27] focused on image boundary intensity to realize band selection based on fuzzy sets.

To sum up, the main problems of hyperspectral band selection are as follows. (1) It is difficult to establish a unified band selection model due to the high dimensionality of hyperspectral data. (2) The quality of a band selection cannot be demonstrated directly by its effect. Therefore, to address the above problems, (1) a hyperspectral band selection algorithm is constructed based on vision, and (2) a subspace clustering framework based on deep adversarial learning is proposed for realizing the preliminary clustering of spectral information. A color-based perception model is proposed to visualize the difference between the target and the background and to show the perception effect.
A Visual Perception Algorithm

More than 80% of the information humans acquire is obtained through vision. An object can be recognized and distinguished from the background mainly by its color. At present, captured natural images can be regarded as the superposition of three RGB channels. On this basis, a large number of studies on target extraction, image retrieval, and analysis have been carried out, and a series of achievements have been made. Natural images can be regarded as hyperspectral data with a low number of channels. Therefore, we migrate the related algorithms for RGB images to the hyperspectral field, and the process is shown in Figure 1. (1) A deep adversarial subspace clustering network is constructed to realize the initial band selection. (2) According to the color-difference principle of vision, a model is constructed to determine the band combination with the strongest response intensity for a specific material, and this combination is then selected as the final band set.

Subspace Clustering Based on Self-Attention Adversarial Learning. When high-dimensional data are encoded to output a low-dimensional feature representation, a large amount of information is lost. However, the attention model, which is based on the encoder-decoder framework, loses less information; the attention weights are computed with a similarity function S(·), where Q is the output information. The self-attention adversarial model we built is structured as in Figure 2.

For true sample acquisition, k groups A = {A_1, A_2, …, A_k} are obtained through the similarity matrix learned by the word expression layer. The projection residual from A_i to the corresponding subspace S_i is calculated as the distance to the subspace spanned by the projection matrix,

$$L_R = \| Z - V_i V_i^{T} Z \|^2,$$

where Z is the characteristic (feature) matrix, V_i is the projection matrix, T represents the matrix transpose, and L_R represents the projection residual. The m data points with the smallest residuals are selected as positive samples, and a corresponding generator adversarial loss function is constructed. For false samples, the sampling layer randomly samples from the estimated subspace S_i to generate m false samples Ẑ_j = θ_j Z_j. In order to make the generated data closer to the learned subspaces of the discriminator, an adversarial loss is introduced to revise the existing loss function, where λ is the balancing (sparsity) parameter.

A discriminator is constructed from the projection residuals to distinguish true and false samples, and the probability loss function of a sample belonging to a subspace is established, where ε is a parameter. In the loss function of the discriminator corresponding to the k subspaces, a second term is introduced to increase the separation between different groups of subspaces after introducing the regularization terms, and a third term limits the magnitude of V_i, where μ_1 and μ_2 are two constants greater than 0.

In order to make better use of the local manifold structure information of the image, a Laplacian regularization term is introduced into the loss function of the generator to construct the image connection relationship. The graph weights are defined on the K-nearest-neighbor graph, where N_k is the K-neighborhood among the n vertices. For the nonlinear manifold structure, an energy function is defined and, according to the definition of the Laplacian matrix, can be rewritten in terms of the Laplacian, which yields the final generator loss function.
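A minimal MATLAB sketch of the positive-sample selection step described above, i.e., computing projection residuals onto an assumed orthonormal subspace basis and keeping the m closest samples; all names and dimensions are illustrative:

    rng(1);
    Z = randn(10, 200);              % 10-dim features, 200 samples (example)
    [V, ~] = qr(randn(10, 3), 0);    % orthonormal basis of an assumed 3-dim subspace
    m = 20;

    R   = Z - V * (V' * Z);          % residuals after projection onto span(V)
    res = vecnorm(R, 2, 1);          % per-sample residual norms
    [~, idx]  = sort(res, 'ascend');
    positives = Z(:, idx(1:m));      % m samples best explained by the subspace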
Perception Model Based on Color. According to Gestalt psycho-cognitive analysis, objects are recognized mainly by the eyes and brain. When the eyes observe an image, its contents cluster themselves according to certain rules so as to become a comprehensible structural entity. Among these rules, the color feature is an effective one.

On the basis of the analysis in the last section, in order to express the spectral information between bands more comprehensively, a model is established on the basis of relative entropy, where S(X, Y) is the spectral correlation between band X and band Y, D_KL(P|Q) is the relative entropy of the probability distributions corresponding to band X and band Y, and M_t(X, Y) is the average mutual information between band X and band Y. λ is a weight coefficient, which is determined by the relative amplitudes of D_KL(P|Q) and M_t(X, Y).

In order to better integrate the entropy information, we modify S(X, Y). Because the entropy divergence and the average mutual information are basically equally important to the band selection process, a normalization function is constructed to make the contributions of the corresponding matrices M_KL and M_t to S(X, Y) consistent.

Different pixel values may correspond to the same color name. For this purpose, we construct a mapping relation: given the data D = {d_1, …, d_N}, the corresponding words are W = {w_1, …, w_M}; these words are considered to arise from the potential topics Z = {z_1, …, z_K}. Therefore, a probability model is constructed as p(w|d) = Σ_z p(w|z) p(z|d), where p(w|z) and p(z|d) are prior probabilities. The EM algorithm is used to estimate the maximum likelihood, where n(d, w) is the frequency of occurrence. Through training, we obtain the fitted model, where α is a parameter, and the corresponding maximum similarity can then be written down. Based on this, the color mapping is realized, and the colors are sorted according to Figure 3 to construct the differentiation degree.

Experiment Result and Analysis

The experiment is composed of visible-infrared hyperspectral data and software simulation data [28], as shown in Figure 4, including grassland, sand, vehicles, buildings, and other typical targets. It can be seen from the figure that the pixel values displayed by different ground objects in different bands are different, and there are also differences in the pixel values of ground objects in the same band, which is the basis of band selection. We normalized the hyperspectral images to 512 × 512 × 300.

Display of Spectral Curves of Typical Ground Objects. In order to show the spectral curves of typical ground objects, the spectral curves of leaves and sky are selected for display, as shown in Figure 5. The horizontal axis represents the band number and the vertical axis represents the pixel value. It can be seen that features of the same kind have strong similarity, while different types of features show differences. Although individual targets exhibit fluctuations, the overall fluctuation is small. In the areas of leaves and sky, the most significant region is concentrated in bands 0-300. In the sky area, the pixel values of bands 0-300 reach a saturation state. Based on the above analysis, the sky and leaves can be effectively distinguished.
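A hedged MATLAB sketch of the band-correlation measure described above: the relative entropy and average mutual information are estimated from (joint) intensity histograms of two band images X and Y and combined with the weight λ. The bin count, weight value, and variable names are illustrative assumptions, and bins with zero probability are simply skipped.

    X = rand(256); Y = 0.5*X + 0.5*rand(256);    % two correlated "band images"
    nb  = 64;                                    % number of histogram bins
    pxy = histcounts2(X(:), Y(:), [nb nb], 'Normalization', 'probability');
    px  = sum(pxy, 2);                           % marginal of band X
    py  = sum(pxy, 1)';                          % marginal of band Y

    mask = px > 0 & py > 0;                      % guard against empty bins
    Dkl  = sum(px(mask) .* log(px(mask) ./ py(mask)));   % relative entropy D_KL(P|Q)

    Mt = 0;                                      % average mutual information M_t(X,Y)
    for i = 1:nb
        for j = 1:nb
            if pxy(i,j) > 0 && px(i) > 0 && py(j) > 0
                Mt = Mt + pxy(i,j) * log(pxy(i,j) / (px(i)*py(j)));
            end
        end
    end

    lambda = 0.5;                                % weight set from relative magnitudes
    S = lambda*Dkl + (1 - lambda)*Mt;            % combined inter-band measure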
Band Clustering Effect. In order to verify the accuracy of the clustering and target extraction, we introduce the overall accuracy (OA) [29] and the kappa coefficient. OA represents the proportion of correctly classified samples among all samples, and kappa is an index used for consistency testing [30].

The band selection effect on the real-scene hyperspectral image is shown in Figure 6, and the band selection effect on the simulated-scene hyperspectral image is shown in Figure 7. It can be seen from the figures that the effect on the real image is slightly lower than that on the simulated image, which is due to the stable noise and spectral curves contained in the simulated image; the information contained in the real image is more complex and has a certain volatility. The sparse nonnegative matrix factorization (SNMF) algorithm [6] transforms the problem of band selection into a problem of sparse decomposition and is effective in extracting significant spectral bands. The fast volume gradient (FVG) algorithm [13] establishes a gradient-based model to realize band selection, and the effect is better for regions with obvious boundaries. The variable precision neighborhood (VPN) algorithm [17] constructs fuzzy sets according to the relationship between adjacent pixels to realize band selection. The fast and late low rank (FLLR) algorithm [20] introduces the idea of low rank to calculate the redundancy between bands and realize band selection. In this paper, a subspace clustering algorithm based on deep adversarial learning is proposed, which fully considers the correlation of bands and optimizes the loss function to achieve band selection. On the real data set, OA reaches 90% and kappa reaches 0.67; on the simulated data set, OA and kappa are 96% and 0.92, respectively, which are better results.

Target Detection Effect. On the basis of the optimal band selection, different target material clustering algorithms are used for comparison. The detection results on real data and simulated data are shown in Figure 8. Hu et al. [31] applied the SVM algorithm to extract image features for clustering to achieve enhancement. Han et al. [32] constructed a CNN to extract target features. Shi and Pun [33] built a multiscale ResNet to realize target detection. Li et al. [34] detected targets based on boundary features. The above algorithms analyze the target from the perspective of morphology to achieve target detection. In this paper, based on clustering, we construct a visual perception model to detect the target and use the difference in the visual mapping to measure the detection rate of the target, which has a good effect, and its ROC curve value is the highest. The proposed algorithm constructs the mapping model of visual perception by fusing images from three bands. The mapping results for the real data using different band combinations are shown in Figure 9(a). The {0, 38, 187} band combination maps the leaf region to red but cannot distinguish the building and sky areas effectively. The {1, 161, 35} band combination can distinguish the building area from other areas, which verifies the effectiveness of the proposed algorithm. The mapping results for the simulated data using different band combinations are shown in Figure 9(b). The {0.4, 8.0, 12} combination can distinguish grassland from land but cannot distinguish grassland from vehicles. The {0.6, 0.6, 10.4} bands can extract vehicles effectively and suppress the grassland and land areas. The effectiveness of the proposed algorithm is verified.
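For reference, the two agreement measures used above can be computed from a confusion matrix C, where C(i,j) counts samples of true class i assigned to class j; this is the standard definition, written in MATLAB with an illustrative example matrix:

    C = [50 2 1; 4 45 3; 2 5 48];            % example 3-class confusion matrix
    n     = sum(C(:));
    OA    = trace(C) / n;                    % overall accuracy
    pe    = (sum(C, 1) * sum(C, 2)) / n^2;   % expected chance agreement
    kappa = (OA - pe) / (1 - pe);            % Cohen's kappa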
Conclusion

Hyperspectral images have both spatial resolution and inter-spectral resolution, which play an important role in material recognition. Aiming at the difficulty of hyperspectral band selection, a deep adversarial subspace clustering network is constructed to select representative bands. From the perspective of psychology, a color perception model is constructed to highlight the significant areas. Experiments show that the proposed algorithm achieves good results. On this basis, it can carry out material recognition of typical targets and hidden targets.

Figure 1: Flow chart of the band selection algorithm. Figure 5: Display of spectral curves of typical ground objects: (a) spectral curve of leaves; (b) spectral curve of sky.
3,155.6
2021-09-13T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science", "Materials Science" ]
Emergence of rabbit haemorrhagic disease virus 2 in China in 2020

Abstract Rabbit haemorrhagic disease (RHD) is an acute fatal disease caused by the Lagovirus rabbit haemorrhagic disease virus (RHDV), which was first reported in 1984 in China. Strains of two different genotypes (GI.1a and GI.1c) have been detected in China to date. In 2010, a new RHDV variant with a unique genetic and antigenic profile was identified in France, designated RHDV2, which rapidly spread throughout continental Europe and nearby islands. Here, we report the first outbreak of RHD induced by RHDV2 (GI.2) in rabbit farms in the Sichuan province of China. We conducted haemagglutination tests and phylogenetic analysis of the new RHDV isolate SC2020/04, which was identified as a non-haemagglutinating strain belonging to the RHDV2 (GI.2) genogroup. Considering the serious risk of RHDV2 to the Chinese rabbit industry, the circulation of RHDV2 in the population should be carefully monitored in China.

1 | INTRODUCTION

China is highly ranked in the global rabbit industry, accounting for 43% of the worldwide slaughtered rabbits and 44% of the global rabbit meat output (Wu, Seema, & Huang, 2016). Rabbit haemorrhagic disease virus (RHDV) of the family Caliciviridae, genus Lagovirus, causes high morbidity and mortality in rabbits. Over 90% of RHDV-infected adult rabbits die owing to fulminant hepatic failure within 3 days of infection (Park, Lee, & Itakura, 1995). RHDV was first reported in China in 1984. However, a new RHDV-related virus designated RHDV2 was detected, for the first time, in France in 2010 (Le Gall-Recule et al., 2013), and subsequently spread to other countries in Europe, Australia, America and Africa (Abrantes et al., 2013; Lopes, Rouco, Esteves, & Abrantes, 2019; Mahar et al., 2018; Puggioni et al., 2013; Rouco et al., 2018).

2 | MATERIALS AND METHODS

| Haemagglutination test

Liver samples collected from three infected rabbits were frozen and stored at −70°C. Liver samples were homogenized (20% in phosphate-buffered saline [PBS]), frozen at −70°C and thawed twice. The haemagglutination test (Hu et al., 2016) was carried out in U-shaped microtitre plates containing 50 μl of PBS (pH 6.5). Fifty-microlitre suspensions of homogenized liver samples were twofold serially diluted and placed in the U-shaped plates; they were then further incubated.

| Reverse transcription-polymerase chain reaction

The full-length vp60 gene sequence was amplified by reverse transcription-polymerase chain reaction (RT-PCR) using the Reverse Transcriptase XL (AMV) kit (Takara Bio) and the Ex Taq kit (Takara Bio).

| Phylogenetic analysis

Phylogenetic analysis of vp60 gene sequences was performed using MEGA 7 (Kumar, Stecher, & Tamura, 2016) with the maximum-likelihood approach based on the Kimura two-parameter model. The reliability of the nodes was assessed with a bootstrap resampling procedure consisting of 1,000 replicates.

| RESULTS AND DISCUSSION

The clinical symptoms and pathological changes in the dead rabbits were similar to those of rabbit haemorrhagic disease. The mortality rate was more than 70% (approximately 1,300 rabbits died), although the weaned rabbits had been immunized with a commercial inactivated RHD vaccine. Importantly, most of the unweaned rabbits died of the disease, indicating that RHDV2 might be the causal pathogen, because RHDV2 is able to fatally affect a high proportion of young rabbits.
Given that the haemagglutination test remains the routine diagnostic method for RHDV in China, this non-haemagglutinating characteristic warrants further attention in the detection of clinical samples. The new isolate exhibits the highest nucleotide sequence identity with the NL2016 strain from the Netherlands (98.3%; GenBank accession number: MN061492), which corresponds to RHDV2. Phylogenetic analysis was employed to determine the evolution of the new isolate. As shown in Figure 1, the new isolate falls in the same branch as the other RHDV2 strains. These results support the conclusion that the isolate collected from the Sichuan province of China in 2020 belongs to the RHDV2 (GI.2) genogroup; it was designated strain SC2020/04 (GenBank accession number: MT383749). This represents the first outbreak of RHDV2-induced RHD in rabbit farms in China. We previously classified all RHDV isolates in China collected before 2017 in GI.1 (Hu et al., 2017); therefore, the present finding indicates the potential for co-circulation of RHDV and RHDV2 in China. Indeed, RHDV2 (GI.2) was reported to replace RHDV (GI.1) in some countries, including Portugal, Sweden and Australia (Lopes et al., 2014; Mahar et al., 2018; Neimanis et al., 2018). In addition, recombination events between GI.2 and other genotypes have been reported (Almeida et al., 2015; Lopes et al., 2015; Silverio et al., 2018). Considering the distinct serotype from RHDV (GI.1), the high risk of RHDV2 (GI.2) to the Chinese rabbit industry, and the limited level of cross-protection induced by RHDV/RHDVa vaccines against RHDV2, ongoing surveillance and vaccine formulation updates are the most imminent requirements for control of the disease induced by RHDV2 in China.

FIGURE 1 Maximum-likelihood phylogenetic tree for the complete nucleotide sequences of RHDV vp60 genes. Bootstrap probability values above 50% with 1,000 replicates are indicated at the nodes. The branch lengths are proportional to the genetic distance. European brown hare syndrome virus (EBHSV) strain BS89 was used as the outgroup to root the tree.

ACKNOWLEDGEMENT This work was supported by the funds earmarked for the China Agriculture Research System (No. CARS-43-C-1) and the National Key R&D Program of China (No. 2018YFD0502203). We would like to thank Editage for English language editing.

CONFLICT OF INTEREST The authors declare no conflict of interest regarding the publication of this manuscript. All authors have read and approved the final manuscript.

ETHICAL APPROVAL The authors confirm that the ethical policies of the journal have been adhered to. The collection of the liver samples was performed in strict accordance with the guidelines of the Jiangsu Province Animal Regulations (Government Decree No. 45).

PEER REVIEW The peer review history for this article is available at https://publons.com/publon/10.1002/vms3.332.

DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request.
1,444.8
2020-05-06T00:00:00.000
[ "Biology", "Medicine" ]
Toxic Gas and Smoke Generation and Flammability of Flame-Retardant Plywood

Limited by their flammability, wood and wood-based materials face challenges in distinguishing themselves as structural or finishing materials. Once burning, they can produce toxic gases detrimental to humans and the environment. Therefore, it is critical to clarify whether fire-retardant wood construction materials are insusceptible to fire and are not sources of toxic gases. This study aimed to evaluate flame-retardant plywood from the aspects of flammability and of toxic gas and smoke generation during combustion. The flame-retardant plywood was manufactured by impregnating a flame-retardant resin in line with International Maritime Organization (IMO) standards. The research results indicate that seven of the eight kinds of toxic gases listed by the IMO, i.e., all other than CO, were not detected during the combustion of the flame-retardant plywood. While CO was detected, its quantities under the three test conditions were below the corresponding thresholds. Therefore, unlike synthetic resin products, flame-retardant plywood is a promising finishing material that can reduce the damage from toxic gases in the event of a fire. In the smoke generation tests, the mass reduction rate of the flame-retardant plywood increased from 13% to 18% and then to 20% as the test condition became more severe. Under the same circumstances, the average maximum specific optical density also followed an upward trend, but its values (75.70, 81.00, and 191.20) still met the IMO standard of below 200. This reflects that the flame-retardant plywood is competent as a finishing material. Further, the flammability was evaluated, and the critical flux at extinguishment (CFE), total heat release (Qt), and peak heat release rate (Qp) were determined to be 49.5 kW/m², 0.21 MJ, and 0.66 kW, respectively, none of which reached the corresponding thresholds given by the IMO. To sum up, flame-retardant plywood has satisfactory flame-retardant performance and meets fire safety standards, showing the potential to be an attractive finishing material for building and construction.

Introduction

Wood and wood-based materials have been widely used as structural and finishing materials for construction. However, their properties (constituents) render them vulnerable to fire [1]. In 2019, 40,103 cases of fire occurred in South Korea, which led to 2,515 casualties and property damage of KRW 858.4 billion, as reported. Among the places where fires occurred, residential facilities ranked first in frequency (11,058 cases, 27.6%), followed by industrial facilities (5,429 cases, 13.5%). Among the initial igniting materials, paper, wood, and hay were the most common (9,484 cases, 23.6%), followed by electricity-related materials and electronics (20.5%), garbage (11.3%), synthetic resins (11.2%), and food (7.9%). In addition, it was reported that about 75.8% of deaths (216 people) and about 79.7% of injuries (1,777 people) occurred in residential and non-residential facilities [2].
Notably, the number of large-scale fires is also on the rise every year. The soaring energy consumption and the utilization of miscellaneous combustible interior materials following rapid economic growth account for the increase in the number of fires and the aggravation of the resulting damage, which will probably intensify further in the future [3]. With inspiring technological and industrial advancements, various finishing materials have been developed and applied to buildings; these, however, can emit large amounts of toxic gases while burning. The most serious harm to humans in the event of a fire is asphyxiation after the inhalation of toxic gases rather than direct contact with flames [4]. The risk of such asphyxiation needs more vigilance nowadays, as a growing variety of combustible synthetic polymer materials are developed and used. Many studies have been conducted on the types and characteristics of the toxic gases generated in fires and on the effects of these gases on the human body [3,5-7]. The International Maritime Organization (IMO) has determined eight kinds of toxic gases from fire that can cause fatal damage to the human body, namely carbon monoxide (CO), hydrogen bromide (HBr), hydrogen chloride (HCl), hydrogen cyanide (HCN), hydrogen fluoride (HF), nitric oxide (NO), nitrogen dioxide (NO2), and sulfur dioxide (SO2), for which thresholds have been set to facilitate strict management [8]. In addition, the IMO has provided flammability evaluation methods and safety standards for finishing materials and enacted relevant regulations so that only products that conform to the standards can be selected for construction [9]. The following regulation was stipulated for the interior finishes and trims of buildings in the United States: interior finishing materials that give off smoke or gases denser or more toxic than those given off by untreated wood or untreated paper under comparable exposure to heat or flame shall not be permitted [10].

Besides being an important energy source, wood, along with its derived products, has also been used as a major component of buildings [7]. However, wood emits a variety of compounds when combusted [11]. By probing into the combustion behavior of flooring materials and the toxicity of the generated gases, Lee et al. reported that the CO and CO2 emitted from wood-based medium-density fiberboard (MDF) flooring were less than those from polyester flooring and polyvinyl chloride (PVC) flooring [12]. In addition, they also evaluated the gas hazards from flame-retardant wood and the toxicity index of the combustion gases and determined that the gas toxicity indexes of untreated wood and flame-retardant wood were 0.183 and 0.196-0.251, respectively, both lower than those of PVC (4.13) and urethane flooring (7.2) [13].
Wood and wood-based materials are sometimes restricted in application because they are vulnerable to fire. More seriously, fatal toxic gases are generated upon their combustion and pose threats to human health, which deserves more consideration. Many research efforts are made to lessen the vulnerability of wood and wood-based materials to combustion [1,14-16], and numerous valuable results have been achieved so far, including flame-retardant wood that does not burn at all. On this basis, it is necessary to study the types and amounts of the toxic gases generated when flame-retardant wood is combusted, for a better understanding of wood and wood-based materials that are safer and can reduce harm to humans.

Phosphorus-based flame retardants attract attention because of their high flame-retardancy efficiency. On the one hand, they can capture free radicals in the gas phase; on the other hand, they promote carbonization in the condensed phase, which prevents heat exchange and the release of pyrolysis volatiles. Therefore, phosphorus-based flame retardants are applied extensively in various polymers. Ammonium polyphosphate (APP) and guanylurea phosphate (GUP) are proven effective fire-retardant chemicals. During combustion, in this intumescent fire-retardant system, APP and GUP act as the acid and blowing agents, while the wood materials provide the carbon source because of their ability to form a carbon layer when degraded.

This study was intended to evaluate the flammability and the toxic gas and smoke generation of flame-retardant plywood that is manufactured and applied as a finishing material for construction in South Korea, using combustion tests. The results are expected to contribute to the development of wood-based finishing materials with a higher safety level and reduced harm to humans upon combustion.

Flame-Retardant Plywood

The flame-retardant plywood used for the evaluation of toxic gas and smoke generation and combustion characteristics was prepared through vacuum pressure impregnation with a flame-retardant resin (NF200+) under a pressure of 17 kgf/cm² for 20 min. The impregnation amount of the flame retardant was more than 300 kg/m³, and the flame-retardant plywood passed the flame-retardant performance test according to the KS F ISO 5660-1 standard [17]. The uses, specifications, and appearance of the test specimens before and after the tests are shown in Tables 1-4.

Table 2. Test methods of toxic gas and smoke generation upon combustion.

Toxic gas generation:
1. Remove all dirty layers and particles in the test chamber, and clean the internal probe.
2. Maintain the filters, gas sampling line, valves, and gas cell at 150-180 °C for at least 10 min prior to the test.
3. During the smoke density test, start sampling by opening the sampling valve to introduce the gas in the chamber into the sampling line at the moment of maximum smoke density.

Smoke generation:
- Prepare the test chamber: set up the chamber with the cone set at 25 kW/m² or 50 kW/m², set the distance between the cone heater and the specimen to 50 mm, and position the pilot burner 15 mm below the bottom edge of the cone heater.
- Tests with pilot flame: put the burner in position, turn on the gas and air supplies to ignite the burner, and check the flow rates.
- Preparation of the photometric system: perform zero setting, open the shutter to set the full-scale 100% transmission reading, recheck the 100% setting, and repeat the operations until accurate zero and 100% readings are obtained on the amplifier and recorder when the shutters are opened and closed.
- Loading the specimen: place the holder and specimen on the supporting framework below the radiator cone, remove the radiation shield from below the cone, and simultaneously start the data recording system and close the inlet vent. The test chamber door and the inlet vent must be closed immediately after the test starts.
- Recording of light transmission: record the light transmission and time continuously from the start of the test.
- Termination of test: the initial test in each test condition must last for 20 min to verify the possible existence of a second minimum transmittance.
- Conditioning of specimens: before measurement, the test specimens must be conditioned to a constant mass at 23 ± 2 °C and 50% ± 2% relative humidity.

Table 3. Toxic gases generated upon combustion and their characteristics and criteria [5,6,8] (columns: toxic gas; criterion in ppm).

Under the three test conditions, 232 ppm, 293 ppm, and 1444 ppm of CO were detected, lower than the threshold of 1450 ppm set by the IMO. Therefore, the flame-retardant plywood tested in this study was determined to be applicable as a finishing material for ships. In other cases, the detected toxic gases included 704 ppm CO, 663.6 ppm NO, 11 ppm SO2, 63 ppm HCl, and 70 ppm HF when polyurethane was combusted, and 1830 ppm CO, 232.3 ppm NO, 7.0 ppm SO2, 282 ppm HCl, and 70 ppm HF when PVC was combusted [3]. Their distinct differences from the results of this study demonstrate that plywood made of wood is a safer material. As is well known, wood is mainly constituted by the elements carbon (C, 50%), oxygen (O, 44%), and hydrogen (H, 6%). Consequently, only CO and CO2 are generated, while the other toxic gases are basically not released during wood combustion. In addition, the flame-retardant resin used in the manufacturing of the flame-retardant plywood in this study was soluble in water, and its main components were APP, GUP, phosphonic acid, an acrylamide-acrylic acid-N-(3-(dimethylamino)propyl)methacrylamide copolymer, 1,2-benzisothiazolin-3-one, and a small amount of additives [15]. In the combustion test of the flame-retardant plywood into which such a water-soluble flame-retardant resin was introduced via vacuum pressure impregnation, no toxic gases other than CO among the eight kinds listed by the IMO were detected, and the amount of CO released was below the threshold. In other words, besides the gases generated from wood combustion, the emission of toxic gases due to the introduction of the flame-retardant resin also does not need excessive concern.
Test Methods

Test Equipment: Toxic gas generation was measured by the Korea Marine Equipment Research Institute according to the test regulations of IMO Res. MSC.307(88): 2010/ANNEX 1/Part 2. Figures 1 and 2 show the block diagrams of the test equipment applied to the measurement of toxic gas generation and smoke generation, respectively.

Toxic gas and smoke generation during the combustion of flame-retardant plywood was measured according to the test regulations of IMO Res. MSC.307(88): 2010/ANNEX 1/Part 2, as shown in Figure 3. Before measurement, the test specimens were conditioned at a temperature of 23 ± 2 °C and relative humidity of 50 ± 5% for 816 h. The test methods and procedures for toxic gas and smoke generation upon combustion are shown in Table 2. In addition, the flammability of the flame-retardant plywood was measured according to the test regulations of IMO Res. MSC.307(88): 2010/ANNEX 1/Part 5. Before this measurement, the test specimens were conditioned for 72 h under the same temperature and relative humidity conditions.

Toxic Gas Generation

In the event of a building fire, a wide variety of toxic gases are generated in the form of single gases or mixed gases depending on the combustion materials [4,6]. The IMO has determined eight kinds of toxic gases that may be generated when finishing materials for bulkheads, linings, or ceilings are combusted and set a criterion for each of them. Table 3 lists the eight kinds of toxic gases, their criteria, and their effects on the human body.

In this study, toxic gas and smoke generation from flame-retardant plywood upon combustion was measured according to the IMO fire safety standards for interior finishing materials (bulkhead, lining, and ceiling materials). Table 4 displays the appearance of the specimens before and after the 800 s tests.
Table 5 lists the emission results of the toxic gases from flame-retardant plywood upon combustion under three conditions (irradiance of 25 kW/m² in the absence of a pilot flame, irradiance of 25 kW/m² in the presence of a pilot flame, and irradiance of 50 kW/m² in the absence of a pilot flame). Only CO was detected, while the others were not. In the three test conditions, 232 ppm, 293 ppm, and 1444 ppm CO were detected, lower than the threshold of 1450 ppm set by the IMO. Therefore, the flame-retardant plywood tested in this study was determined to be applicable as a finishing material for ships. In other cases, the detected toxic gases included 704 ppm CO, 663.6 ppm NO, 11 ppm SO₂, 63 ppm HCl, and 70 ppm HF when polyurethane was combusted, and 1830 ppm CO, 232.3 ppm NO, 7.0 ppm SO₂, 282 ppm HCl, and 70 ppm HF when PVC was combusted [3]. Their distinct differences from the results of this study demonstrate that plywood made of wood is a safer material. As is well known, wood is mainly constituted of the elements carbon (C, 50%), oxygen (O, 44%), and hydrogen (H, 6%). Consequently, only CO and CO₂ are generated, while the other toxic gases are essentially not released during wood combustion. In addition, the flame-retardant resin used in the manufacture of the flame-retardant plywood in this study was soluble in water, and its main components were APP, GUP, phosphonic acid, acrylamide acrylic acid-N-(3-(dimethylamino)propyl)methacrylamide copolymer, 1,2-benzisothiazolin-3-one, and a small amount of additives [15]. In the combustion test of the flame-retardant plywood into which such a water-soluble flame-retardant resin was introduced via vacuum pressure impregnation, no toxic gases other than CO among the eight kinds listed by the IMO were detected, and the amount of CO released was below the threshold. In other words, besides the gases generated from wood combustion, the emission of toxic gases due to the introduction of the flame-retardant resin does not need excessive concern either.

When selecting finishing materials, one should consider the above results to evaluate the effects of such materials on the environment and human bodies in case of a fire. Regarding wood and wood products, those that emit the smallest possible amount of toxic substances are good choices, because people may suffer minimal harm upon the combustion of these materials [7]. Flammability should not become the stumbling block in the application of wood. Since various types of flame-retardant wood, flame-retardant plywood, etc., are currently under development and production, it is expected that fire-retardant products have broad application prospects in providing buildings and residential environments that are safer from toxic gases generated by fire and can minimize the harm to humans once a fire occurs.

Smoke Generation

In this study, smoke generation during the combustion of flame-retardant plywood was measured according to the IMO fire safety standards for interior finishing materials (bulkhead, lining, and ceiling materials). Table 6 shows the average results of smoke generation from three test specimens measured in each condition. Notes: The criterion is set for bulkheads, linings, or ceilings. Each value is the average result of three tests in each condition.
The mass loss and maximum specific optical density (Dm) of the test specimens were evaluated under each test condition. The average mass of the test specimens was 87.48 g before the test and 72.30 g after the test, i.e., an average mass decrease of 15.18 g under the test conditions. The average mass reduction rate was therefore calculated as 17%. Specifically, as the test conditions became more severe, the mass reduction rate tended to increase (from 13% to 18% and then to 20%; Figure 4). The average Dm showed different values (75.70, 81.00, and 191.20) under the three test conditions. The IMO stipulates that the average Dm should not exceed 300 under these conditions. All the measured average Dm values in this study were below 200 (Figure 5), indicating that the flame-retardant plywood meets the IMO smoke generation standard.

When flame-retardant plywood is applied as a finishing material, its Dm and toxic gas generation meet the finishing material standards presented by the IMO. Therefore, flame-retardant plywood can contribute to protecting humans from the considerable harm of toxic gases and smoke generated upon combustion.
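The pass/fail logic behind these results is simple arithmetic; the short Python sketch below reproduces it using only the averages and IMO thresholds quoted in the text (the condition labels are informal shorthand, not the standard's wording):

```python
# Pass/fail arithmetic for the reported averages; values and thresholds
# are those quoted in the text (CO <= 1450 ppm, average Dm <= 300).
mass_before_g, mass_after_g = 87.48, 72.30
mass_loss_g = mass_before_g - mass_after_g                 # 15.18 g
mass_reduction_pct = 100.0 * mass_loss_g / mass_before_g   # ~17%

co_ppm = {"25 kW/m2, no flame": 232, "25 kW/m2, pilot flame": 293,
          "50 kW/m2, no flame": 1444}
dm = {"25 kW/m2, no flame": 75.70, "25 kW/m2, pilot flame": 81.00,
      "50 kW/m2, no flame": 191.20}
CO_LIMIT_PPM, DM_LIMIT = 1450, 300

print(f"average mass reduction: {mass_reduction_pct:.0f}%")
for cond in co_ppm:
    ok = co_ppm[cond] <= CO_LIMIT_PPM and dm[cond] <= DM_LIMIT
    print(f"{cond}: CO = {co_ppm[cond]} ppm, Dm = {dm[cond]}"
          f" -> {'pass' if ok else 'fail'}")
```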
Flammability

Table 7 lists the results of the flammability test on the flame-retardant plywood according to the standard under IMO Res. MSC.307(88): 2010/ANNEX 1/Part 5. The test specimens were the flame-retardant plywood manufactured through vacuum pressure impregnation of the water-soluble flame-retardant resin described above. The test was intended to examine the applicability of this material as a finishing material for ships. As shown in Table 7, the critical flux at extinguishment (CFE) was 49.5 kW/m² on average, more than twice the threshold of 20 kW/m² given by the IMO. The average total heat release (Qt) was 0.21 MJ, corresponding to about 30% of the upper threshold of 0.70 MJ presented by the IMO, and the peak heat release rate (Qp) was 0.66 kW on average, about 17% of the upper threshold of 4.00 kW given by the IMO, both indicative of excellent flame-retardant performance. A previous study evaluated the flame-retardant performance of Korean pine wall panels according to the test method under ISO 5660-1 [14].
Test conditions. Toxic gas and smoke generation (IMO, Part 2): irradiance of 25 kW/m² in the absence of a pilot flame; irradiance of 25 kW/m² in the presence of a pilot flame; irradiance of 50 kW/m² in the absence of a pilot flame. Flammability (IMO, Part 5): heat flux of 50.5 kW/m² (at the 50 mm position) and 23.9 kW/m² (at the 350 mm position).
Figure 4. Mass reduction rate in each test condition.
Figure 5. Average Dm in each test condition.
Table 4. Appearance of test materials under different test conditions (before and after the test).
Table 5. Toxic gas generation of flame-retardant plywood upon combustion.
Table 6. Smoke generation of flame-retardant plywood upon combustion.
Table 7. Flammability of flame-retardant plywood. Notes: These criteria are set for surface materials (bulkhead, wall, and ceiling linings). Since there was no ignition, no values were recorded.
7,707
2024-02-01T00:00:00.000
[ "Environmental Science", "Materials Science" ]
Total wave power absorption by a multi-float wave energy converter and a semi-submersible wind platform with a fast far field model for arrays

Wave energy converters absorb wave power by mechanical damping for conversion into electricity, and multi-float systems may have high capture widths. The kinetic energy of the floats causes waves to be radiated, generating radiation damping. The total wave power absorbed is thus due to mechanical and radiation damping. A floating offshore wind turbine platform also responds dynamically, and damping plates are generally employed on semi-submersible configurations to reduce motion, generating substantial drag which absorbs additional wave power. Total wave power absorption is analysed here by linear wave diffraction–radiation–drag models for a multi-float wave energy converter and an idealised wind turbine platform, with response and mechanical power in the wave energy case compared with wave basin experiments, including some directional spread wave cases, and accelerations compared in the wind platform case. The total power absorption defined by capture width is input into a far field array model with directional wave spreading. Wave power transmission due to a typical wind turbine array is only reduced slightly (less than 5% for a 10 × 10 platform array) but may be reduced significantly by rows of wave energy converters (by up to about 50%).

Introduction

Floating platforms for offshore renewable energy are becoming established for wind energy and are in early-stage development for wave energy. We consider here the total wave power absorption by platforms, necessary to determine wave fields due to arrays comprising wind or wave farms. Offshore wind farms are expanding rapidly in many parts of the world. Most platforms to date (2021) have fixed foundations of monopile or jacket structure form, suitable for relatively shallow water, less than about 30 m deep. Floating foundations or platforms are required for deeper water, markedly increasing the available energy resource. Offshore wind speeds are also higher and less intermittent than onshore or nearshore. However, floating platform design is less mature than for fixed platforms, with several configurations under consideration, including semi-submersible, spar, tension-leg, barge types and hybrids; see reviews in Koo et al. (2014), Carbon Trust (2015), Leimeister et al. (2018) and design criteria in DNVGL (2019). As floating wind platforms are developed, economic and operational advantages over fixed platforms may become apparent in shallower as well as deeper water. The influence of wind platforms on the regional wave conditions has received little attention. For fixed platforms, monopiles of about 4-5 m diameter will cause negligible wave diffraction, and jacket structures even less. For floating platforms, submerged components are of similar size but they now respond dynamically due to wave action. Power from the onset wave is converted into kinetic energy of the platform, which is in turn converted into radiated wave power, absorbing power from the wave field. Wave power absorption due to radiation damping appears not to have been considered to date and measurement is certainly difficult. However, calculation through a computational model is relatively straightforward.
This is one aim of this paper, based on an idealised semi-submersible wind platform which has been studied experimentally and by linear diffraction modelling. Floating wave energy platforms are intended to convert wave power into mechanical energy and then electricity. There have been many concepts, e.g. Falcão (2010), Babarit et al. (2012), without convergence of design, but it is becoming apparent that multi-float systems with multiple PTOs may have capacity similar to wind turbines in some locations (Stansby et al. 2017; Carpintero Moreno and Stansby 2019). Again wave power is converted into platform kinetic energy as well as mechanical energy, absorbing power due to radiation damping from the incoming waves. It is well known that, for the classical case of the point absorber resonating at maximum efficiency in regular waves, the mechanical power is equal to the radiated power (Falnes 2002). To determine wave power propagation through an array, the total wave power absorption by each platform (due to radiation as well as mechanical damping, and possibly small drag damping) is required. Another aim is thus to evaluate total wave power absorption as a capture width for multi-float wave energy systems; M4 is chosen, which has been studied experimentally and computationally (Stansby et al. 2017; Carpintero Moreno and Stansby 2019). There has been limited experimental investigation with directional spread waves and some computational modelling is presented here. Combining wave energy conversion by arrays with coastal protection has been considered previously, e.g. Abanades et al. (2015), Rodriguez-Delgado et al. (2019), Bergillos et al. (2020), while the effect of floating wind platform arrays appears not to have been considered. It should be noted that the objectives for wave energy and wind platforms are different; the motion of a wind platform should be as small as possible to support the turbine, while the wave energy platform needs to respond optimally for power conversion. For wind platform arrays, wave power absorption is thus a secondary benefit. If total wave power absorption by wave energy converters (WECs) is significant, there could also be benefit in using arrays or rows of WECs to reduce wave power transmitted to floating (or fixed) wind farms. The prediction of wave power absorption (and mechanical conversion) in arrays is complicated by radiated waves from different platforms interacting with the incoming wave field. However, some analysis on clusters of point absorbers is relevant here (Göteman et al. 2015). In regular waves, it was shown that radiated wave amplitudes have decayed to less than 4% over a distance of about 5 wavelengths, as radiated waves from different point absorbers have different phase. In irregular waves with frequency components also of different phase, this will be further reduced, and directionally spread waves will reduce this again, e.g. Weller et al. (2010). A cluster of point absorbers is hydrodynamically similar to a multi-float WEC, such as M4, and to a multi-float semi-sub wind platform. Typically semi-sub platforms have 3 or 4 columns, and the one we consider here has 4 columns. In this paper, we propose a far field wave propagation model for arrays considering the total power absorbed, without far field radiated effects, which are expected to be negligible. The far field model is for directional spread waves and assumes the onset wave power to be uniform in constant depth, providing a fast model suitable for evaluation of many array configurations.
Previous coastal wave propagation modelling has used a regional spectral wave model such as SWAN, with mechanical power absorption represented as a sink term, e.g. Chang et al. (2016), McNatt et al. (2020). The remainder of the paper is laid out as follows. Section 2 describes the platform configurations. The linear diffraction/radiation/drag model follows in Sect. 3, then the modification for directional waves in Sect. 4. The far field wave power array model is presented in Sect. 5. Results for response and capture width follow in Sect. 6, comparing with experiment where possible, estimating total capture width due to mechanical and radiation damping for the multi-float wave energy converters and due to radiation and drag damping for the semi-sub wind platform. Results for wave propagation through rows of wave energy converters and an array of wind floaters are then presented. These are discussed in Sect. 7 and some conclusions are drawn in Sect. 8.

Wave energy conversion and wind platform configurations

The multi-float wave energy system M4 tested has six floats (Carpintero Moreno and Stansby 2019), with one bow float and three mid-floats in an effectively rigid frame, and two stern floats connected to the two outer mid-floats by beams with hinges above the mid-floats for power take-off (PTO) due to relative rotation, as shown in Fig. 1a. This configuration is termed 132. The mass distribution is given in Table 3. This is modelled, and a 134 configuration is also modelled with four stern floats and four PTOs, with the same characteristics as for the 132 configuration. The wind platform is created by removing the stern floats and beams and replacing the bow and mid-floats with cylindrical floats with a flat base and damping plates, as shown in Fig. 1b. The wind turbine and column mass and inertia of the NREL 5 MW wind turbine (Jonkman et al. 2009) are represented by a mass at the hub position. The mass distribution is given in Table 4. Figure 2 shows the elevations. The upwards kink in the stern beam in Fig. 2a is to avoid clashing on the mid-float deck in extreme conditions. Also, the PTO in the form of a simple pneumatic damper is positioned for convenience of attachment; at full scale, it would be hinged just above deck level and the mast above hinge level would not be necessary. The inclined member in Fig. 2b is to maintain rigidity of the turbine mass support. Figure 3 shows the plan dimensions and Fig. 4 snapshots of videos during wave basin testing.

Mathematical formulation and model

The multi-float time-domain formulation applies to both the WEC M4 and the wind turbine platform. The wind turbine platform has no hydrodynamic power take-off, but has additional turbine and drag forces. A general form based on linear hydrodynamics is presented. The hydrodynamic forces are due to linear wave excitation or diffraction, added mass, radiation damping, restoring, drag and mooring forces. Excitation, added mass and radiation damping are defined using WAMIT coefficients (Lee and Newman 2013), using the Cummins (1962) method for irregular waves. The standard form of the uni-directional JONSWAP wave spectrum will be used. The model is basically that presented in Stansby and Carpintero Moreno (2020a) for the WEC M4 and in Stansby et al. (2019) for the wind platform, unified here.

Fig. 5 Plan view of the 132 WEC configuration with body A (bow and mid-floats, in red) and body B (two stern floats, in black), with hinge O shown as a solid line and, on the right-hand side, notation h, v and θ relative to O in a vertical plane; θ_A is for body A and θ_Bi is for each stern float B. The wind turbine platform corresponds to body A only.
The modification for directional waves is considered separately. The main modes are heave, surge and pitch, but roll and sway motion are included for directional waves and because mechanical damping for the WEC is asymmetric about the centreline. Yaw is not included as the platform aligns with the mean wave direction. The wind platform is symmetric about the centreline and roll and sway are not considered.

Notation

Mathematical notation is shown in Fig. 5. Angular rotation θ is clockwise positive, h is the longitudinal horizontal distance from O to a float (positive in the stern direction), v is the vertical distance from O to a float (positive below O), and t is the transverse horizontal distance from O (positive on the starboard side). H, V and T are the total hydrodynamic forces in the conventional x, z, y directions, M is the pitch moment about O and M_R is the roll moment. The general form can be reduced for either the wave or wind platform. Although there are multiple floats (N), body A may be considered as a single body (N_A floats) with floats B (N_B floats) acting individually, as shown in Fig. 5. There are N_m masses (floats, ballast, beams, turbine and support) with N_mA and N_mB corresponding to the A and B floats.

Equations of motion

For body A, moments are taken about the y axis through O, accounting for the mooring force, where the mechanical (PTO) moment at each hinge is M_mech,i = -B_mech θ̇_ri, the total is M_mech = Σ M_mech,i summed over i = 1 + N_A, ..., N, and the relative angle is θ_ri = θ_A - θ_Bi. Roll about O is taken for floats A and B combined. For the WEC system, there is no net force or moment on the hinge. Equations are also formed in the longitudinal horizontal direction, in the vertical direction and in the transverse horizontal direction. The positions of the centres of gravity of each mass, linearised for small angles, are defined for the body A masses, i = 1, ..., N_mA, and correspondingly for the B masses. We thus have seven equations for seven unknowns for the WEC case, θ̈_A, θ̈_Bi (i = 1 + N_A, ..., N), θ̈_R, ẍ_O, z̈_O, ÿ_O, Eqs. (1)-(6), respectively, and three equations for ẍ_O, z̈_O, θ̈_A for the wind turbine case with no roll or sway. The equations may be rearranged more conveniently into equations for floats A, for roll of A and B combined, and for the whole system in the longitudinal horizontal, vertical and transverse directions (Eqs. (8)-(13)). We thus have, for the most general case, equations for θ̈_A, θ̈_Bi, θ̈_R, ẍ_O, z̈_O and ÿ_O, with the hydrodynamic forces and moments (Sect. 3.4) also being functions of these parameters and of the hydrodynamic (WAMIT) coefficients. To restrain the sway mode, y_O may be set to zero.

Wave spectrum

We are concerned with irregular waves with the standard JONSWAP spectrum S(f), defined by a significant wave height H_s, a peak frequency f_p = 1/T_p where T_p is the peak period, and a spectral peakedness factor γ. Although the measured spectrum was always close to the target in the experiments, the measured spectrum was input into the model. The surface elevation η at the mid-float may be defined by linear superposition of the discretised wave amplitude components,

η(t) = Σ_{k=1}^{K} a_k cos(2π f_k t + φ_r,k),   (14)

where the upper limit on frequency was generally f_max = 4.0 Hz (between 3 and 8 times f_p), Δf = f_max/K, a_k = √(2 S(f_k) Δf), and φ_r is a phase from a uniform random distribution between 0 and 2π. K is generally set to 200 (400 produced almost identical results).
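As a concrete illustration of the superposition in Eq. (14), the minimal Python sketch below synthesises an elevation time series from a JONSWAP spectrum. It is a sketch under stated assumptions: the closed-form JONSWAP shape and the renormalisation to the target H_s are standard textbook choices standing in for the measured spectrum that the model actually uses, and the parameter values are the nominal ones quoted in the text.

```python
import numpy as np

def jonswap(f, hs, tp, gamma=3.3):
    # Standard JONSWAP shape, renormalised so 4*sqrt(m0) equals the target Hs.
    fp = 1.0 / tp
    sigma = np.where(f <= fp, 0.07, 0.09)
    peak = gamma ** np.exp(-((f - fp) ** 2) / (2.0 * sigma**2 * fp**2))
    s = f ** -5 * np.exp(-1.25 * (fp / f) ** 4) * peak
    m0 = np.sum(s) * (f[1] - f[0])          # zeroth moment on a uniform grid
    return s * (hs / (4.0 * np.sqrt(m0))) ** 2

def elevation(t, hs=0.04, tp=1.1, gamma=3.3, fmax=4.0, K=200, seed=0):
    # Linear superposition, Eq. (14): eta = sum a_k cos(2*pi*f_k*t + phi_k),
    # with a_k = sqrt(2*S(f_k)*df) and uniformly random phases phi_k.
    rng = np.random.default_rng(seed)
    df = fmax / K
    f = df * np.arange(1, K + 1)
    a = np.sqrt(2.0 * jonswap(f, hs, tp, gamma) * df)
    phi = rng.uniform(0.0, 2.0 * np.pi, K)
    return (a[:, None] * np.cos(2.0 * np.pi * f[:, None] * t[None, :]
                                + phi[:, None])).sum(axis=0)

t = np.arange(0.0, 200.0, 1.1 / 200.0)    # dt = Tp/200, as used in the model
eta = elevation(t)
print("Hs recovered from series:", 4.0 * np.std(eta))   # approaches 0.04 m
```

The recovered H_s approaches the target as the record length grows, which is a convenient sanity check on the discretisation.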
Hydrodynamic forces and moments

Hydrodynamic moments and forces are defined using WAMIT notation, as shown in Table 1. Linear diffraction forces and moments for each float are defined by frequency-dependent coefficients for amplitude F and phase φ, as given in Stansby and Carpintero Moreno (2020a). For each float, i = 1, ..., N, expressions of the form of Eq. (15) define the pitch moment, roll moment, vertical force, longitudinal horizontal force and transverse horizontal force.

Added mass and radiation damping forces and moments are defined by frequency-dependent coefficients A and B, respectively, using the Cummins method. With a single body and one degree of freedom x, we have

(m + A_∞) ẍ(t) + ∫_{-∞}^{t} K(t - τ) ẋ(τ) dτ = f(t),

where f includes forces due to excitation, restoring and PTO, A_∞ is the added mass for infinite frequency, and the impulse response function for radiation damping is given by

K(t) = (2/π) ∫_0^∞ B(ω) cos(ωt) dω.

In discrete form, with time step Δt and time t = nΔt, K is precomputed and the convolution is evaluated as a discrete sum with Δτ = Δt and M = T_p/Δt. The lower limit (m - 2M) was generally used to represent -∞, with almost identical results given by (m - 4M). The RHS is generalised for each float with six modes. For each float i = 1, ..., N, the pitch moment is defined including the restoring moment (subscript rest), defined below; there is an equivalent expression for the roll moment. As an example of force, the vertical force for each float i = 1, ..., N is defined including the drag force (subscript drag), described below; there are equivalent expressions for the longitudinal horizontal forces H_i and transverse horizontal forces T_i. There is an additional mean longitudinal force due to hydrodynamic drift.

The restoring heave force for a single float is given simply by V_rest = -ρgπr²z, where r is the float radius (H_rest = T_rest = 0). For pitch, the restoring moment about O, M_rest = cθ, is due to the components of weight and buoyancy and the water-plane restoring moment -ρgπ(r⁴/4)θ, although the heave restoring force dominates markedly. Values of the factor c, and the metacentric height, are shown in Table 2 for the float combinations.

Drag and wind thrust

The heave drag force is given by V_drag,i = -0.5 ρ π r_i² C_D |ż_i| ż_i. Note that float velocity relative to flow velocity is not considered and the drag coefficient C_D is effectively a viscous tuning parameter. For damping plates, Tao and Thiagarajan (2003) showed C_D > 4 for heave and C_D ≈ 6 was a representative value, although slightly dependent on the amplitude of motion. For heave, C_D = 6 is assumed, and C_D = 0 for surge and sway since the horizontal cross-section of each float is circular and zero proved effective for WEC simulations with rounded-base floats; CFD also showed the drag coefficient to be very small (Gu et al. 2018). Tao and Thiagarajan (2003) also produced a simple formula for added mass in heave, and that obtained from WAMIT was within 1%, providing a useful cross-check.

The wind thrust is given by

F_thrust = 0.5 ρ_air A_turb C_T (U_hub - ẋ_hub) |U_hub - ẋ_hub|,

where U_hub is the wind speed at the hub, ẋ_hub is the hub velocity, ρ_air is the air density and A_turb = πr_turb² is the swept area for a rotor of radius r_turb. The thrust coefficient C_T is dependent on the wind speed and is determined from blade element momentum theory using the NREL 5 MW turbine characteristics (Jonkman et al. 2009). The force is assumed to be quasi-steady and defined by the relative velocity (U_hub - ẋ_hub); note the rotation speed is also dependent on wind speed. The quasi-steady behaviour has been shown to be a close approximation by Apsley and Stansby (2020). The C_T-U_hub curve is shown in Fig. 6 at full scale, with a cut-in of 3 m/s and a cut-out of 25 m/s. We are concerned with the range 5-20 m/s shown. For the purposes of this demonstration, the wind velocity at the hub is assumed uniform across the swept area.
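The radiation-force convolution of the Cummins method above is the main computational kernel of the model; the sketch below shows one way to precompute K(t) and evaluate the truncated convolution with the (n - 2M) lower limit just described. It is a single-mode sketch with a made-up Gaussian damping curve B(ω) standing in for a WAMIT coefficient table.

```python
import numpy as np

# Made-up radiation damping curve standing in for WAMIT output.
omega = np.linspace(0.01, 30.0, 600)             # rad/s
B = 5.0 * np.exp(-((omega - 6.0) / 2.0) ** 2)    # kg/s, illustrative only

dt = 1.1 / 200.0                  # time step Tp/200
M = int(round(1.1 / dt))          # M = Tp/dt
tk = dt * np.arange(2 * M)        # kernel support, matching the (n - 2M) limit

# K(t) = (2/pi) * integral_0^inf B(w) cos(w t) dw, evaluated on the grid.
dw = omega[1] - omega[0]
K = (2.0 / np.pi) * (B[None, :] * np.cos(tk[:, None] * omega[None, :])).sum(axis=1) * dw

def radiation_force(xdot, n):
    # F_rad(n) = -sum_{m = n-2M+1}^{n} K((n - m) dt) * xdot(m) * dt
    m0 = max(0, n - 2 * M + 1)
    seg = xdot[m0:n + 1][::-1]            # xdot(n), xdot(n-1), ...
    return -(K[:seg.size] * seg).sum() * dt

xdot = 0.1 * np.sin(2.0 * np.pi * np.arange(0.0, 20.0, dt) / 1.1)  # toy velocity history
print(radiation_force(xdot, xdot.size - 1))
```

Precomputing K once and truncating the velocity history to 2M steps keeps the cost per time step constant, consistent with the small run times reported below.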
The corresponding moment about O appears in Eq. (1).

Power calculations

The total mechanical power for the WEC case is given by the sum over the hinges of the time-averaged damper power, P_mech = Σ_i B_mech θ̇²_ri averaged over time. The averaged radiated wave power results from all floats driven by the pitch moment M_rad, roll moment M_R,rad, heave force V_rad, surge force H_rad and sway force T_rad, and is given by the time-averaged sum of the products of these radiation forces and moments with the corresponding velocities. The power absorbed by drag occurs only in the heave direction and is given by P_drag = Σ_i 0.5 ρ π r_i² C_D |ż_i|³ averaged over time, and the power absorbed by the wind turbine is given by the time-averaged product of the thrust force and the hub velocity.

In addition, there are second-order hydrodynamic forces associated with a fixed body due to sum and difference frequencies, which are small, but the zero difference frequencies generate a mean drift force. There are additional horizontal mean forces due to the time-averaged mechanical power absorption P_mech, the power required by float motion to radiate waves P_rad and the power absorbed by drag P_drag. This power absorbed from the oncoming waves is balanced by the horizontal energy flux with a representative wave speed, giving a horizontal force, also of second order. This argument was described for two-dimensional problems in Mei (1999) (Sect. 7.10). The power absorbed is first determined from a linear computation giving motions without moorings, which have negligible effect (Stansby and Carpintero Moreno 2020a). The total mean force may thus be calculated. This underestimated the measured mean mooring force (by up to

Time stepping

The parameters θ_A, θ_B, θ_R, x_O, z_O, y_O and their time derivatives were advanced in time with step Δt, with the example in Eq. (27) for θ_A. Note θ_B does not apply to the wind platform case, for which θ_R and y_O are also not considered. For the wave platform, there are two θ_B parameters for the 132 case and four for the 134 case. The WAMIT coefficients are for all cross-coupled terms between floats as well as for the directly coupled (diagonal) terms, which have the greatest magnitude. Forming a direct formulation for each of θ̈_A, θ̈_B, θ̈_R, ẍ_O, z̈_O, ÿ_O (for the most general wave platform case) with all cross-coupled terms is difficult to generalise. However, the dominant diagonal terms in added mass for each of θ̈_A, θ̈_B, θ̈_R, ẍ_O, z̈_O, ÿ_O may be removed from each of H_i, V_i, M_i, M_Ri and added to the LHS of Eqs. (8)-(13). This proved desirable for numerical stability. Iteration is still required with updated values of θ̈_A, θ̈_B, θ̈_R, ẍ_O, z̈_O, ÿ_O for terms on the RHS, but this showed fast convergence, with fewer than 10 iterations (default value). The radiation damping and diffraction force terms were not modified in the iteration. A time step size of T_p/200 was sufficiently small to give converged results (to plotting accuracy). The equation set with numerical solution is thus complete and proved stable and convergent. The computer time for a run is small, of order one minute on a laptop.
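Once a simulation has run, the time-averaged powers defined above reduce to simple means over the stored series; a minimal sketch follows, assuming the P_mech and heave-drag forms written out above, with placeholder values for B_mech, the float radius and the motion amplitudes (these are not the experimental settings).

```python
import numpy as np

rho = 1000.0      # water density, kg/m^3
B_mech = 5.0      # linear PTO damping, N m s/rad (placeholder)
r_f = 0.15        # float radius, m (placeholder)
C_D = 6.0         # heave drag coefficient, as assumed for damping plates

dt = 1.1 / 200.0
t = np.arange(0.0, 200.0, dt)
theta_r_dot = 0.2 * np.sin(2.0 * np.pi * t / 1.1)      # toy relative angular velocity
z_dot = 0.05 * np.sin(2.0 * np.pi * t / 1.1 + 0.3)     # toy heave velocity

P_mech = B_mech * np.mean(theta_r_dot ** 2)                                # one hinge
P_drag = 0.5 * rho * np.pi * r_f ** 2 * C_D * np.mean(np.abs(z_dot) ** 3)  # one float
print(f"P_mech = {P_mech:.4f} W, P_drag = {P_drag:.6f} W")
```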
Modification for directional spread waves

There are various options for generating directional waves, e.g. Latheef et al. (2017). The directional wave spectrum is usually defined by

S(f, θ) = S(f) G(θ), G(θ) = α cos^{2s}(θ/2),   (28)

with the mean wave direction given by θ = 0, for -π < θ < π, and s the spreading parameter. α is defined by the requirement ∫_{-π}^{π} G(θ) dθ = 1. One approach for generating directional waves is to split each frequency component into directions defined by G(θ), known as the double summation method. However, this means that a specific frequency has several directional components and partial standing waves result; the wave field is non-ergodic (Jefferys 1987). To avoid this, each frequency component may be sub-divided into a number of smaller components with different frequencies which together satisfy the spreading across the original frequency band, known as the single summation method. An equivalent, more efficient approach, often employed experimentally, is known as the random directional method (Latheef et al. 2017). The direction of propagation of any one frequency component is chosen randomly, subject to a weighting function based upon the desired directional spread. This approach also avoids components of the same frequency co-existing and results in ergodic wave fields. The appropriate weighting for choosing the direction of the components is based upon a normal distribution with a standard deviation σ_θ in accordance with the directional distribution, where σ_θ² = 2/(1 + s) as a close approximation to Eq. (28) above. This is applied to each frequency component in the spectrum. The random angle is determined by the Box-Muller method, where two random numbers (u₁, u₂) are first generated from a uniform distribution between 0 and 1 and then converted to a random number u₃ = √(-2 ln u₁) cos(2πu₂) with unit standard deviation and zero mean, giving a random angle u₃σ_θ. This is the approach adopted here to represent the effect of directional spread waves defined by the measured spectrum and a spread factor s. The excitation forces and moments are affected by the heading angle, and the hydrodynamic (WAMIT) coefficients are determined at 2° intervals. The excitation forces and moments are as defined by Eq. (15), except that each frequency component k has a random heading from the normal distribution defined above, defining the excitation coefficients.

Far field wave power array model

Power from the incident wave field is absorbed by damping imparted to wave energy converters or floating wind platforms. With platform dimension small in relation to spacing, each platform can be regarded as a point sink for power; the resultant wave power incident on a platform is the far field wave power less that due to absorption by all other platforms. Here, we assume that the onset far field is uniform and the depth is constant. This is an idealisation enabling fast computation to give the wave field within the farm and down-wave of the farm. A complete analysis at a regional scale would involve spatial and temporal variation of wave propagation with a spectral model, such as SWAN or TOMAWAC, e.g. Ruehl et al. (2014), McNatt et al. (2020).

With directional spreading, the uniform onset wave power/metre is P_onset. The capture width for a given T_p, γ and s defines the power P absorbed, and hence removed from the wave field. This is represented as a point sink of power P which is spread at a distance r such that the wave power/metre in direction θ is P_θ = P G(θ)/r. The wave power/metre in the x direction is thus P_onset - P_θ cos(θ) and in the y direction -P_θ sin(θ) for a single point. This gives the resultant wave power/metre and, for a device, the power absorption from the capture width. If there are N devices, the wave power in the x and y directions becomes P_onset - Σ_i P_θi cos(θ_i) and -Σ_i P_θi sin(θ_i), respectively, i = 1, ..., N. This strictly requires an iterative procedure, as each device affects every other device. However, the up-wave effect is negligible and, if the devices are ordered with increasing distance down-wave, one sweep is sufficient. Note that this does not account for the frequency distribution of power absorption, which could be included if known from a model, while adding to the complexity. There are thus many variables and fast methods are desirable. The highly accurate idealisation is also a useful check for regional-scale models.
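The single down-wave sweep described above is only a few lines of code; the sketch below assumes the cos^{2s}(θ/2) spreading function of Eq. (28) and treats each device as a point sink with P_θ = P G(θ)/r. Positions, capture widths and the onset power are illustrative inputs supplied by the caller, not values from the paper.

```python
import numpy as np

def spreading(theta, s):
    # G(theta) = alpha * cos^{2s}(theta/2), alpha chosen so G integrates to 1.
    th = np.linspace(-np.pi, np.pi, 4001)
    alpha = 1.0 / ((np.cos(th / 2.0) ** (2 * s)).sum() * (th[1] - th[0]))
    return alpha * np.cos(theta / 2.0) ** (2 * s)

def farfield_sweep(xy, capture_width, P_onset=1.0, s=5):
    """One sweep over devices ordered down-wave (+x); up-wave effects neglected.
    Returns power/metre components (Px, Py) at each device and absorbed power."""
    Px = np.full(len(xy), float(P_onset))
    Py = np.zeros(len(xy))
    P_abs = np.zeros(len(xy))
    for i in np.argsort(xy[:, 0]):
        P_abs[i] = capture_width[i] * np.hypot(Px[i], Py[i])  # P = CW * local power/m
        dx = xy[:, 0] - xy[i, 0]
        dy = xy[:, 1] - xy[i, 1]
        down = dx > 0.0                     # only devices down-wave are affected
        r = np.hypot(dx[down], dy[down])
        th = np.arctan2(dy[down], dx[down])
        P_th = P_abs[i] * spreading(th, s) / r     # P_theta = P G(theta) / r
        Px[down] -= P_th * np.cos(th)
        Py[down] -= P_th * np.sin(th)
    return Px, Py, P_abs
```

Ordering by x means each device sees only the sinks up-wave of it, so a single pass reproduces the one-sweep argument above.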
Note that this does not account for the frequency distribution of power absorption which could be included if known from a model while adding to the complexity. There are thus many variables and fast methods are desirable. The highly accurate idealisation is also a useful check for regional scale models. Multi-float wave energy platform in uni-directional waves The 132 M4 was tested experimentally (Carpintero Moreno and Stansby 2019). The damping was almost linear but not equal on the left and right hand sides (although the same part number of pneumatic damper was used). The model prediction of rms relative angle θ rel is shown in Fig. 7 to agree approximately with experiment for H s ≈ 0.04 m and 0.06 m with γ =3.3, although there is some shift in peak values. The specific values of H s and B mech are given in Table 5. The capture width for average mechanical power absorbed normalised by the device width (of 1.75 m) is shown in Fig. 8 and the model now generally underestimates, by up to 35% near the maximum while agreement is close for longer waves (T p > 1.4 s); differences are discussed further in Sect. 7. The total capture width due to mechanical and radiated power absorbed is estimated from the model and shown to be at least twice the mechanical power. The capture width without PTO is also shown to be greater than the total with PTO indicating that this is most effective for wave energy absorption. The capture width due to radiation is shown in Fig. 9 for H s 4 cm with the split between surge, heave and pitch (summed over all floats) and that due to radiation and PTO. This shows that radiation absorption is greater than that from the PTO for T p < 1.4 s and equal for T p ≥ 1.4 s. Heave radiation is greater than surge, while pitch is small. The total due to all effects is also shown. The 132 configuration has been modified to 134, with the 4 stern floats driving PTOs identical to those in the 132 case, shown in Fig. 10. The capture width normalised by device width from the model remains similar but with a device width of 2.45 m rather than 1.75 m, as shown in Fig. 11, which includes the case without PTO. The 134 configuration with Fig. 7 Variation of rms θ rel for right and left sides with T p for H s~4 and 6 cm with γ 3.3: comparison of model with experiment for 132 M4 Fig. 8 Variation of capture width/device width with T p for H s~4 and 6 cm with γ 3.3 for mechanical PTO power, total for mechanical PTO and radiation, total without PTO for 132 M4 with device width of 1.75 m Fig. 9 Variation of capture width/device width estimated from the model with T p for H s 4 cm with γ 3.3: capture width is total due to radiation power with components due to surge, heave and pitch, due to mechanical power, and the total: for 132 M4 with device width of 1.75 m Multi-float wave energy platform in directional spread waves Some data with directional spread waves are available for the 132 M4 case with H s~4 cm and γ 1. The specific values of H s and B mech are given in Table 6. With uni-directional waves (s ∞), the model rms relative angle θ rel is shown in Fig. 12, again with different left and right linear damping, to be close to experiment for the left side but with some difference on the right. The model angle is also shown without PTO. With s 20 in Fig. 13, the model angle is in similar agreement with experiment and also with s 5 in Fig. 14. The model rms roll angle is also shown to be small but significant due to multi-directional waves. 
Fig. 15 Variation of capture width/device width with T_p for H_s ≈ 4 cm with γ = 1 and s = ∞, 20, 5: mechanical PTO power is compared with experiment and total capture due to mechanical and radiated power is estimated from the model, for 132 M4 with device width of 1.75 m.
Fig. 16 Variation of capture width/device width estimated from the model with T_p for H_s = 4 cm with γ = 1 and s = ∞, 20, 5: capture width is for mechanical power, total due to radiated and mechanical power, and due to radiated power without PTO, for 132 M4 with device width of 1.75 m.

The total capture width due to radiation and PTO, estimated from the model, is shown in Fig. 15, with the mechanical PTO capture from model and experiment. The model can again underestimate, by up to about 30% near maximum values. The model capture width is shown in Fig. 16 to be greater without PTO than the total with PTO and radiation combined. This shows that the total capture width can be around 4 times greater than that due to PTO.

Floating wind platform in uni-directional waves

The floating wind platform will also absorb wave power due to radiation and drag effects. For this case, hub and base accelerations have been measured and compared with experiment for H_s ≈ 6 cm with γ = 3.3 in Fig. 17, without any wind effect. The H_s values are as given in Table 5. Agreement between model and experiment is quite close at the base, while the model can overestimate at the hub, by up to 30%. Some sway was measured in the experiment due to an oscillatory vaning effect not present in the model due to symmetry. The capture width from the model normalised by device width (now 1.65 m) is shown in Fig. 18. Power absorbed by radiation is the greater component for lower T_p and mainly due to surge, while becoming similar to heave for larger T_p. The contribution from pitch is negligible. However, capture width from drag dominates for the larger T_p. The net effect is that the total capture width is quite uniform across the range of T_p, although much smaller than the total capture width for the M4 WECs and smaller than the PTO capture widths (about a quarter). Experimental results with directional waves were not available for this case.

These results, presented in Fig. 18, were without wind power absorbed. The effect of representative wind speeds of 8, 12 and 16 m/s at full scale, or 1.13, 1.70 and 2.26 m/s at a model scale of 1:50, is shown in Fig. 19, again with H_s = 6 cm. It is interesting that power absorbed due to wave radiation is always much greater than that due to wind damping (both represented as capture width), and the wave damping is largely due to drag for the larger T_p. Wind damping is largest around the rated wind speed, 8-12 m/s at full scale, when wave damping is somewhat reduced.

Wave power transmission through arrays

We first consider rows of WECs. Spacing should be 5 wavelengths or more for radiation from a multi-float platform to be negligible. With a typical peak period of 1.1 s (7.8 s full scale), the corresponding wavelength is 1.9 m, so the spacing should be at least 9.7 m; we consider 12 and 20 m. The total capture width/platform width is of order unity and we test representative values of 1 and 1.5, noting the model can underestimate and control will increase this further. In general, capture width is defined by T_p, γ and s with linear modelling, and there will be additional non-linear effects. The input to the array model is capture width and s.
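Using the hypothetical farfield_sweep function sketched in the far field model section above, the row configurations just described can be assembled directly; the "probe" devices with zero capture width are simply output points down-wave. Device counts, spacings and nCW follow the text, while the probe layout is illustrative.

```python
import numpy as np

n, spacing, width = 100, 12.0, 2.45        # two staggered rows of 134 WECs, 12 m apart
y = spacing * (np.arange(n) - (n - 1) / 2.0)
row1 = np.column_stack([np.zeros(n), y])
row2 = np.column_stack([np.full(n, spacing), y + 0.5 * spacing])   # staggered row
probes = np.column_stack([np.full(9, 2.0 * spacing),               # output points
                          np.linspace(-4.0, 4.0, 9) * spacing])
xy = np.vstack([row1, row2, probes])
cw = np.concatenate([np.full(2 * n, 1.5 * width),   # nCW = 1.5
                     np.zeros(len(probes))])        # probes absorb nothing

Px, Py, _ = farfield_sweep(xy, cw, P_onset=1.0, s=5)
print("transmitted power fraction:", np.hypot(Px, Py)[-len(probes):].mean())
```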
Results for a single row of one hundred 134 devices (width 2.45 m), and for two staggered rows one spacing apart, are shown in Fig. 20 for s = 5 and 20. The influence of s is negligible. One row with a spacing of 20 m and a capture width of one platform width (normalised capture width nCW = 1) gives a wave power reduction of 12%, and two rows 23%. With a capture width of 1.5 platform widths (nCW = 1.5), the reduction with 20 m spacing is 35%, and with 12 m spacing 48%. For the wind platform, a turbine diameter of 126 m scales to 2.52 m at 1:50 model scale, and turbines are normally assumed to be more than 8 diameters apart to make wake power losses acceptably small. This gives a spacing of 20.2 m and a value of 20 m is assumed. A 100-turbine farm is assumed; with a turbine capacity of 5 MW, the total capacity would be 500 MW. The platforms are placed in a 10 × 10 array and wave power is output one spacing down-wave of the last row (termed near), and 10 spacings down-wave (termed far). A capture width of 0.1 platform width is assumed, which is probably an overestimate as directional spread will reduce this further. Results are shown in Fig. 21 for s = 5 and 20, which now clearly have an effect. Note a row of 10 extends from -4.5 to +4.5 spacings. The wave propagation far down-wave is reduced by less than about 3%.

Discussion

Linear hydrodynamic modelling is used to determine mechanical and radiation power to give the total power absorbed. This model provides an approximation of experimental results, with similar trends, with mechanical power capture underestimated by up to 35% near maximum capture width, while accurate for longer waves. The reason for the underestimation is unknown and, in contrast, linear models for point absorbers tend to over- rather than underestimate power, e.g. Giorgi and Ringwood (2018). In this case, with multiple floats, nonlinear free-surface interaction between floats due to radiation could magnify response and power, representing a limitation of the model. For longer waves, this interaction will be less, and this is consistent with results for T_p > 1.4 s, corresponding to wavelengths about 50% greater than the overall device dimension, being quite accurate. Reflections from the beach, of radiated as well as incident waves, in the wave basin could however also contribute to the discrepancy, representing a weakness in the experiments. An important point for this study is that power and response prediction is conservative, and thus total power prediction, resulting in wave power transmission prediction for arrays, is also conservative. Mechanical power capture, often defined by capture width, has been investigated for many forms of wave energy converters (Falcão 2010; Babarit et al. 2012). It is well known that power radiated is similar to that converted and, for optimal conversion by a point absorber in regular waves at resonance, they are equal (Falnes 2002). There has been no evaluation of total power capture for more general forms of wave energy converter, to our knowledge. This is necessary for determining power capture and absorption for arrays of converters and the down-wave impact on coastlines. Absorption also applies to floating wind farm arrays, where additional power is absorbed due to drag, which can be substantial with damping plates on semi-sub types, as seen in Fig. 18. Diffraction of waves by platforms will be small, as typical column widths are generally less than 20% of a wavelength and thus in the inertia loading regime.
However, radiated waves due to body motion will interact within an array of devices. Linear diffraction-radiation models may estimate this directly, but this becomes very time-consuming for even small arrays, e.g. Sun et al. (2016). Göteman et al. (2015) showed that, for clusters of point absorbers in regular waves in an array (an array of arrays), the effect of radiated waves becomes negligible (wave amplitudes less than 5% of ambient) at distances greater than about 5 wavelengths. The multi-float devices of interest here are hydrodynamically similar to clusters of point absorbers, and the far field radiation effect may be expected to be reduced in irregular waves. Far field wave power is due to the onset or ambient power, less the effect of that absorbed by each device. Determining radiated wave power experimentally is difficult and may be impossible, at least in multi-directional irregular waves, but it is straightforward in a linear model as it is simply due to the product of damping force and float velocity. This has been undertaken here for the multi-float M4 in 132 and 134 form, and for an idealised semi-sub wind platform. For wave energy converters, the total power absorption is at least twice the mechanical power, and the total power absorbed without the PTO engaged is greater than with the PTO engaged. The split of radiated power between the modes of heave, surge and individual float pitch shows that heave and surge dominate, with heave greater but surge still significant. These results were with un-optimised linear mechanical damping. Importantly, it has been shown that control of PTO torque with auto-regressive prediction can increase power capture by between 21% and 83% above optimum linear damping (Liao et al. 2020, 2021). The results estimated for power capture, mechanical and total, are thus an underestimate of what is possible. There were some data available with directional waves, only with γ = 1 for the 132 M4, which show that PTO capture width is reduced by up to 50% for the larger T_p, but this is where control was most effective in uni-directional waves. Clearly this is an area requiring further work. The total capture width was more than twice the PTO capture width, up to four times. For the semi-sub wind platform, the capture width was relatively small, around 10% of platform width, and was only obtained for uni-directional waves. The objective for floating platforms supporting wind turbines is to make motion as small as possible, which is different from WEC platforms where the objective is to make power capture as large as possible, which generally occurs with large motions. Reducing motion of a wind platform by pumping water between floats is thus desirable, and up to 40% reduction has been demonstrated by Stansby (2021), but this would reduce radiation damping further. Interestingly, power capture by drag dominates for larger T_p and by radiation for lower T_p, with surge the main contribution. Power absorption by drag, essentially from the damping plates, is greater than that from wind turbine damping. The far field wave propagation model for directional waves requires only the device capture width, spacing and the directional wave spread factor s. The simplified model is highly efficient. The capture width itself is dependent on platform configuration, T_p, γ and s, assuming linear waves. There will be additional nonlinear effects, particularly in larger waves where, for example, drag and overtopping will generate additional losses.
Incorporation of control will increase PTO capture and motion, and hence the radiation capture width. The values of total capture width normalised by platform width of 1 and 1.5 for the WEC and 0.1 for the semi-sub wind platform are considered representative. With a small WEC platform spacing of 600 m (full scale), wave power is reduced by almost 50% across two long staggered rows. The reduction in wave power by a wind farm is quite small, less than 3%, but could be useful. Experimental data for comparison would of course be desirable. The high absorption by multi-float WECs does suggest that rows would be effective for coastal protection. Since considerable funds are invested to protect vulnerable but high-value coastlines, the combination of WECs for power generation and coastal protection is complementary. Reducing wave propagation through wind farms is also beneficial, for structural and turbine loading and human safety, and rows of WECs around a wind farm would achieve this while also generating additional power. Conclusion The total wave power capture by a multi-float WEC due to radiation as well as mechanical PTO capture is estimated using linear diffraction-radiation modelling. The wave power absorption for a semi-sub wind platform due to drag and radiation is also estimated. Wave basin results for the multi-float WEC M4 in 132 form are compared with the modelling, including some results for directionally spread waves. Mechanical power can be underestimated, by up to about 35% near maximum capture width, while predictions are accurate for longer waves. The reason for the underestimation has yet to be understood. For WECs, the total-power capture width is at least twice that due to the PTO, and radiated power is split mainly between heave and surge, with heave generally greater. The PTO used un-optimised linear dampers, and torque control with auto-regressive forward prediction has been shown to increase power by between 21% and 83% above optimum linear damping. The capture widths are thus underestimates. Magnitudes for the 8-float 134 configuration with 4 PTOs are similar in terms of platform width, which is 48% larger than for the 132 case. For the semi-sub wind platform, power absorption is due mainly to surge radiation and drag on the damping plates, with radiation dominating for lower periods and drag for larger ones. An idealised far-field wave power propagation model for arrays has been proposed for directionally spread waves, where each device is regarded as small relative to the spacing, providing a point sink for wave power defined by its capture width. Using representative capture widths, the reduction of wave power by a floating wind array is quite small, less than 3% for a 10 × 10 array with 1 km spacing. For rows of WECs, power is reduced by up to 35% for 2 staggered rows with 1 km spacing and almost 50% with 600 m spacing. This raises the potential for coastal protection and also for protection of wind farms by rows of WECs around a wind farm.
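The array figures above follow from simple point-sink bookkeeping. As a rough illustration (a minimal sketch under stated assumptions, not the authors' implementation: uni-directional waves, directional spread factor s omitted), each row of devices removes the fraction nCW × width / spacing of the power reaching it:

```python
# Minimal sketch of the point-sink wave-power bookkeeping described above.
# Assumptions (not the paper's implementation): uni-directional waves and
# no directional spread; each row removes nCW * width / spacing of the
# power reaching it. Values follow the 1:50 model-scale WEC example.

def transmitted_fraction(n_rows, platform_width, spacing, n_cw):
    """Fraction of incident wave power remaining after n_rows of absorbers."""
    per_row_loss = n_cw * platform_width / spacing  # capture width / spacing
    return (1.0 - per_row_loss) ** n_rows

width, spacing = 2.45, 20.0  # metres at model scale
for n_cw, rows in [(1.0, 1), (1.0, 2), (1.5, 2)]:
    reduction = 1.0 - transmitted_fraction(rows, width, spacing, n_cw)
    print(f"nCW = {n_cw}, rows = {rows}: power reduced by {reduction:.0%}")
# Prints roughly 12%, 23% and 33%, close to the reported 12%, 23% and 35%.
```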
Deep security analysis of program code Due to the continuing digitalization of our society, distributed and web-based applications are becoming omnipresent, and making them more secure gains paramount relevance. Deep learning (DL) with its representation learning approach is increasingly being proposed for program code analysis, potentially providing a powerful means of making software systems less vulnerable. This systematic literature review (SLR) provides a thorough analysis and comparison of 32 primary studies on DL-based vulnerability analysis of program code. We found a rich variety of proposed analysis approaches, code embeddings and network topologies. We discuss these techniques and alternatives in detail. By compiling commonalities and differences in the approaches, we identify the current state of research in this area and discuss future directions. We also provide an overview of publicly available datasets in order to foster a stronger benchmarking of approaches. This SLR provides an overview and starting point for researchers interested in deep vulnerability analysis of program code. Introduction With the continuous digitalization of our society, an increasing number of software and IT systems is used every day. Known vulnerabilities, defined as a "weakness in an information system, system security procedures, internal controls, or implementation that could be exploited by a threat source" (National Institute of Standards and Technology 2020), are constantly increasing in number. To illustrate the problem, the number of recorded vulnerabilities in the NIST National Vulnerability Database grew from 14,500 records in 2017 to 17,300 records in 2019 (National Vulnerability Database 2020). As distributed and web-based applications are omnipresent in many areas today, making them more secure gains paramount relevance. Furthermore, addressing security early in a development process, by preventing and fixing vulnerabilities, saves high costs, analogous to failure prevention and fixing in general, whose associated costs rise substantially in later development stages (Kumar and Yadav 2017). Machine learning and especially deep learning (DL) methods have gained importance in many research and application domains. In software engineering, for example, deep learning is used for analysis and prediction based on software development artifacts, such as commits, issues, documentation, descriptions, and, of course, program code. Thereby, program code can be either compiled object code or source code as written in a programming language. Models can be trained for different purposes, such as approximate type inference, code completion, and bug localization. One particular application of deep learning on program code is vulnerability analysis. The analysis is typically realized as a binary classification, distinguishing vulnerable from non-vulnerable program code, or as a multi-class classification, additionally distinguishing the type of a vulnerability. These types typically follow the CWE (Common Weakness Enumeration) categorization system by The MITRE Corporation (2020). This systematic literature review explicitly focuses on vulnerability analysis using deep learning approaches that detect patterns in source code or object code. We identify a set of 32 relevant primary studies proposing deep vulnerability analysis of source and object code. The aim of the proposed methods is detecting known types of vulnerabilities in unseen program code rather than discovering new types of vulnerabilities.
We briefly discuss the evolution from shallow networks to deep learning for this application. Further, we compare and contrast the proposed methods along a typical deep learning pipeline, ranging from data gathering and pre-processing to learning and evaluation. Finally, we provide rich discussions on code embeddings, network topologies, available datasets, and future trends in this area. Our results are relevant for researchers in security and software engineering, supporting them in finding new research directions and in conducting their ongoing research. The systematic and concise overview of deep learning approaches to vulnerability analysis on program code will also be helpful for beginners in this research area, as they can use our analysis as a guide through this complex and diverse field and through the tremendously growing body of machine learning literature. The remaining sections of the paper are organized as follows: Section 3 provides a general overview of deep learning on code. Section 4 introduces our research questions and the methodology of this systematic review. In Section 5 we present and discuss findings per research question. We discuss trends and future directions in Section 6 and consider threats to validity in Section 7. Finally, Section 8 concludes the survey. Related Work Surveys on similar topics can already be found in the literature, and we discuss them below. We would like to emphasize that in this work we set ourselves apart from older, existing studies that use machine learning methods or code metrics, but not deep networks. The survey by Allamanis et al. (2018) compares models for the domain of machine learning on code with various applications and discusses the naturalness of code. One application they investigate is bug or code-defect detection. The security of software can suffer as the number of bugs increases, but vulnerability detection in particular targets the security-related bugs that an attacker can exploit. The authors refer to deep learning methods as the third wave of machine learning and consider them future work. The survey by Ucci et al. (2019) categorizes malware analysis methods by their use of machine learning techniques. Our work also covers the analysis of object code, but focuses on vulnerabilities introduced by an engineer rather than on detecting malicious code infiltrated by an attacker. Besides supervised learning methods, they also include unsupervised learning, which is common for detecting abnormal behavior or code such as malware. Lin et al. (2020) published a similar work on source-code-based vulnerability detection by deep learning. Our work partly covers the same primary studies, but in addition discusses the analysis of object code and the differences between the underlying deep learning architectures, while their work focuses on deep neural networks. Computer security issues investigated with deep learning are the topic of the work by Choi et al. (2020). Their range of covered topics is much broader; program analysis is one sub-area, which is discussed in our work in more detail. Similarly, Berman et al. (2019) consider deep learning techniques for the whole cyber-security domain as application, taking a much broader view than software alone. Even broader is the selection of Guan et al. (2018), with a spectrum of technical security issues. Ferrag et al. (2020) look into network attack scenarios with deep learning for intrusion detection; network security and software security are, however, different fields with different requirements on the analysis.
The survey by Ghaffarian and Shahriari (2017) focuses on vulnerability detection similar to our survey, but at the time of its publication deep learning was not yet used for this application, and the authors consider it a future direction. Nevertheless, we recommend that new researchers in this field read this publication, because it covers anomaly detection and software metrics in more detail than we do. Similarly, the survey by Jie et al. (2016) reviews publications based on traditional machine learning methods. Deep Vulnerability Analysis on Code Deep learning (DL) is a subarea of machine learning, specifically concerned with the analysis of complex data using multi-layer neural network topologies. DL algorithms are suitable for supervised as well as unsupervised learning tasks and have been demonstrated useful for the analysis of program code. Current applications on program code are various, including malware identification via anomaly detection (Le et al. 2018; Cakir and Dogdu 2018), prediction of method names and types (Alon et al. 2019; Hellendoorn et al. 2018), semantic code search (Cambronero et al. 2019), and classification of vulnerable program code, which is the focus of this survey. While traditional machine learning employs manually crafted features, created in a step known as feature engineering, DL advances the machine learning concept towards representation learning, i.e., automatically extracting and learning features from the raw input data (LeCun et al. 2015). Given sufficient and representative training data, this methodological advancement allows for superior analysis results and removes the dependence on subject-matter experts for defining features in often rather subjective processes. For text analysis, including program code, feature engineering has been nontrivial, making DL an especially welcome technique (Gao et al. 2018, [S1]). Vulnerabilities are a subset of all software defects that may exist in a program code (Shin and Williams 2013), and the focus of deep vulnerability analysis is not finding new types of vulnerabilities but rather detecting a known type in new and unseen program code. To illustrate this explanation and our discussion in the later sections, we introduce an example of vulnerable source code suffering from an integer overflow (cp. Listing 1). The source code, written in C, operates on a user input provided via a command line argument. This argument is converted into an unsigned long integer (32 bit) using the strtoul() function. The resulting value is passed on to a test() function expecting an unsigned integer (16 bit) as input. Inputs larger than the 16-bit range, e.g., n > 1,073,741,823, result in an integer overflow at the three highlighted positions in the source code. An integer overflow is categorized as vulnerability type CWE-190 and may lead to overwriting of the stack if the buffer is allocated smaller than the amount of data copied into it (cp. line 6 of Listing 1). The CWE aggregates vulnerabilities into classes and sub-classes, e.g., buffer access with incorrect length value (CWE-805) being a subcategory of improper restriction of operations within the bounds of a memory buffer, or buffer overflow (CWE-119). In addition, CVEs (Common Vulnerabilities and Exposures) describe where an instance of a CWE has been discovered, e.g., CVE-2017-1000121 reports an integer overflow found in WebKit. A vulnerability such as the one in this example is hard for a developer to find by hand.
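Listing 1 is not reproduced here; as a stand-in, the following minimal Python sketch (using ctypes to emulate fixed-width C integers; all names are hypothetical) demonstrates the silent wrap-around at the heart of this CWE-190 pattern, where a value parsed into a wider unsigned type is handed to a narrower one:

```python
import ctypes

# Hypothetical re-creation of the CWE-190 pattern from Listing 1 (the
# listing itself is not reproduced in this text): a value parsed into a
# wide unsigned type silently wraps when handed to a narrower one.

def buffer_bytes_needed(n_items: int) -> int:
    # Emulates a C function taking a 16-bit unsigned int: anything beyond
    # 65535 wraps around, so a buffer sized from the result ends up
    # smaller than the amount of data later copied into it.
    return ctypes.c_uint16(n_items).value

user_input = "70000"                        # e.g. taken from argv[1]
n = ctypes.c_uint32(int(user_input)).value  # strtoul()-like 32-bit parse
print(n, "->", buffer_bytes_needed(n))      # 70000 -> 4464: wrapped size
```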
A traditional static application security testing (SAST) method can hardly find such overflows, since there is no way to check automatically whether the calculation was performed correctly. A study ([S2]) shows higher detection rates for a deep-learning-based approach in comparison to the SAST tools Clang, Flawfinder and CppCheck. Traditional program analysis for finding vulnerabilities is based upon logical rule-based inference systems and heuristics. While having advantages, e.g., strong analysis soundness or completeness guarantees, these methods usually suffer from the undecidability of the underlying analysis problems and approach this problem using approximations of program behavior and sophisticated heuristics. Machine learning and in particular DL provide an orthogonal approach by focusing on statistical properties of software, under the assumption that programs are written by humans and therefore follow regular patterns and code idioms (Pradel and Chandra 2021) (cp. the naturalness hypothesis, Hindle et al. 2012; Allamanis et al. 2018). In this way, DL methods for vulnerability analysis can better handle the often fuzzy patterns of software vulnerabilities and integrate natural text, such as comments and identifier names, into the analysis, which is usually omitted in traditional program analysis (Pradel and Chandra 2021). Furthermore, considering their statistical nature, DL methods also promise to cope better with the not well-defined characteristics of software vulnerabilities, which becomes apparent when considering the often reported high numbers of false positives for traditional program analysis (Johnson et al. 2013; Christakis and Bird 2016). The typical training pipeline of a DL model consists of four major phases: data gathering, pre-processing, learning and evaluation (cp. Fig. 1). Below, we provide a brief overview of each of these phases. Data Gathering Phase Essential to the prediction quality of a DL model is the availability of rich and representative training data. Supervised training methods additionally require high-quality labeled training samples in large quantities. Fortunately, software forges like GitHub are an unprecedented source of data exploitable for DL on program code due to their increasing popularity and widespread usage for collaborative software development. However, project selection and data filtering play a vital role in separating large numbers of redundant and toy projects from projects that contain production code and are well suited for training (Lopes et al. 2017; Kalliamvakou et al. 2014). A difficulty in using real program code is the typically large imbalance of vulnerable vs. non-vulnerable code in software projects. Therefore, synthetic program code is often used to overcome the limited availability of labeled code samples and their strong imbalance. The most costly step in preparing a training set is labeling. Many vulnerability analysis approaches therefore employ labels produced by a static code analysis tool. As an alternative to assembling training data from scratch, already existing datasets can be used. Pre-Processing Phase In the pre-processing phase, code samples are prepared and potentially annotated for the subsequent learning phase (cp. Fig. 1). While a major benefit of DL is representation learning, program code analysis still typically includes a certain amount of feature engineering, e.g., by annotating type information along the code.
Extracted code samples are translated into single-token statements or graphs by means of a specific parser or lexer for the respective source or object code. The generated tokens or graphs are further processed into numeric vectors by means of an encoding or embedding. Learning Phase In this phase, the pre-processed training data is used to train a DL model towards performing a vulnerability analysis of program code (cp. Fig. 1). Typically, a binary (vulnerable vs. non-vulnerable) or a multi-class (vulnerability type) classification is trained. Deep neural networks and their training process do not only consist of parameters automatically optimized in the training procedure, but also of many hyper-parameters with substantial impact on the performance of the resulting model. A validation step with careful hyper-parameter optimization is essential for training well-performing models (Komer et al. 2014). To ensure independent training, validation and evaluation results, training datasets are typically split into three respective parts. The majority of the dataset should be used for training, with a typical proportion of the three splits being, e.g., 80:10:10. Evaluation Phase Once training has converged, the resulting model is evaluated with the independent evaluation split of the dataset or with additional datasets and benchmarks. Methods In order to identify a set of relevant primary studies to answer our research questions, we apply Kitchenham and Charters' four-step guidelines for conducting reviews in software engineering: (1) defining research questions and selection criteria, (2) carrying out a comprehensive and exhaustive search for primary studies according to the criteria, (3) extracting data from the primary studies, and (4) answering the research questions with the gained data in a suitable presentation. Research Questions We conduct a systematic literature review on published research in the field of vulnerability analysis of program code using DL methods and thereby aim to answer the following six research questions: RQ1 Data demographics: How has machine-learning-based vulnerability analysis evolved over time? Motivation: This question aims to overview the development of this research field, including a timeline of presented concepts and a categorization of primary studies. RQ2 Training data: How are training sets constructed and how are they structured? Motivation: This question aims to study the utilized training data in detail, their sizes, their class balance, and their methods of construction. RQ3 Code representation and encoding: How is program code made accessible to machine learning models for training? Motivation: This question aims to study program code pre-processing, representations and encodings, also with respect to granularity and arrangement of code samples. RQ4 Proposed and studied models: Which neural network topologies are used and how do they differ? Motivation: This question aims to emphasize the details of the utilized DL topologies and to compare their advantages and disadvantages. RQ5 Evaluation of proposed models: How are proposed models being evaluated? Motivation: This question aims to discuss and compare the evaluation of proposed approaches, their performance and the suitability of utilized metrics for evaluation. RQ6 Model generalizability: Are proposed methods generalizable to new and unseen projects?
Motivation: This question aims to discuss the ability to transfer trained models to other software projects and to reuse trained models when training a model for a new problem. Study Selection Process For the search process, we use databases of computer science publishers and synoptic search engines (cp. Table 1). In a first step, we queried a combined pattern of search terms S1 AND S2 AND S3 AND S4 across the databases, where S1 = (vulnerabilit* OR security); S2 = (analysis OR assessment OR detection OR discovery OR identification OR prediction); S3 = ("deep learning" OR "machine learning" OR supervised); and S4 = (code OR "byte code" OR "program code" OR "object code" OR "source code" OR software). For databases 2-5, we were not able to query the entire search pattern at once since the search engine did not support all necessary operators. In these cases, we constructed individual queries containing only one term per group S1-S4 each and subsequently concatenated the results. Table 1 shows the total number of retrieved results per database. In a second step, we filtered the retrieved publications based on title and abstract, applying a set of selection criteria (cp. Table 2). Retrieved results were sorted by the databases' own relevance criterion, and we terminated the search after 20 successive non-relevant publications due to the high number of results for databases 2-5. In a third step, we carefully read all remaining publications, evaluated our selection criteria again and only accepted primary studies not already in our set. Eventually, we performed an iterative snowballing search through each study's references as listed in the publication, and through citations retrieved via Google Scholar. With this procedure, we retrieved four additional primary studies, resulting in a total of 32. Dataset Collection We also collected datasets useful for researchers who want to evaluate their own vulnerability analysis approaches. We mainly identified these datasets through manual search. We started with the referenced and released datasets across the primary studies. Furthermore, we performed a manual search on Google Scholar, GitHub, and GitLab. We focused on datasets that either directly contain program code or link program code in an external repository and are accompanied by labels referring to vulnerable program code parts. If a primary study did not directly link a utilized dataset or a given link was broken, we manually searched for the download location or a mirror thereof. Discussion In order to get an overview of relevant keywords and their importance in the field of vulnerability analysis of program code, we created a word cloud across all selected primary studies (cp. Fig. 2). The word cloud reflects the most prominent terms, i.e., vulnerability, deep learning, and (source) code, thereby reflecting and confirming our selection criteria. Furthermore, it shows that a function is an important concept for the analysis of code and that neural network, features, and data (source) are especially relevant terms when applying deep learning methods. In the following subsections, we discuss the results that we retrieved from the primary studies in order to answer the research questions outlined in Section 4.1. RQ1: Data Demographics The application of deep learning methods has substantially increased across many research disciplines in recent years. We observe a similar trend for vulnerability analysis of program code (cp. Fig. 3).
Figure 3 presents a timeline depicting the evolution of vulnerability analysis approaches that employ machine learning. We connect primary studies with an arrow if the latter references the former and proposes an improvement of it; studies with an incoming arrow thus explicitly state that they build on the connected previous work or on their own previous work. The progression in this figure can be distinguished into three stages based on the proposed methods: traditional machine learning methods (shaded white), shallow neural networks (shaded light orange), and deep neural networks (shaded dark orange). Initial studies proposing traditional machine learning for vulnerability analysis were published in 2014, while the first approaches proposing deep learning were published in 2017. Current research almost entirely focuses on deep learning approaches, as the increase in studies per year shows. This trend is accompanied by the increasing complexity of software as well as the size of software projects, since DL can better process complex information. Traditional machine learning often has only one processing layer, which cannot abstract meaningful patterns from the underlying complex data. The advantage of DL lies in successive abstraction layers for filtering important information and classifying a code snippet into the correct category. We further observe that authors who beforehand proposed traditional machine learning techniques later proposed DL methods. Note that this is not a complete overview of traditional machine-learning-based work done in this area. We refer the reader to the related surveys mentioned in Section 2. RQ2: Training Data The training of a deep neural model to perform a classification task requires a substantial amount of data representing the classes to be distinguished. Deep neural networks not only train network layers performing the final classification task, but also a cascade of additional layers that extract the most relevant features for performing this task. This approach has been demonstrated to be superior for many tasks, but also means that the selection of suitable training data is crucial. A relevant factor is the representativeness of the training data for the task to be solved. Below, we discuss individual attributes of training data to be considered when performing vulnerability analysis. Size and composition of the dataset In order to not only train a shallow classifier, but also a cascade of problem-specific feature extractors, deep neural networks require substantially more training data. That means the size of the training dataset influences the performance of a trained model. Especially when training deep models on source code, a large number of training examples is needed to generalize over developer-specific code idioms. Furthermore, a dataset should be representative of the intended application domains (Dam et al. 2018, [S3]), e.g., operating systems or web applications, and accordingly combine vulnerability samples across the different domains. Figure 4 shows a histogram of dataset sizes used across the primary studies. We found that sizes vary greatly, from under 10k to more than 1M samples per study. While we observe a trend towards larger datasets for DL on source code in general, e.g., the frequently used public GitHub archive dataset containing around 182,000 software projects amounting to a size of 3 TB (Markovtsev and Long 2018), this trend is not yet visible for vulnerability analysis (cp. Fig. 4).
The largest available dataset contains roughly 1M samples (Russell et al. 2018, [S2]). For the training, validation and testing procedures, these datasets are split into shares, with primary studies reporting the following splitting ratios: 8:1:1, 7:1:2, and 6:2:2 (cp. Section 5.5). Synthetic training data An alternative to real training data is the generation of synthetic code. A potential benefit is the chance to generate representatives of vulnerabilities that occur only rarely and are therefore hard to find in reality. Generating labels Today, deep vulnerability analysis is almost solely approached in a supervised manner, i.e., all primary studies use training sets accompanied by labels. A label annotates a code entity as belonging to one or multiple of the classes to be distinguished. For a binary classifier, these would be vulnerable and non-vulnerable, while a more sophisticated approach could, e.g., use enumerated weaknesses (CWE) as classes (Niu et al. 2020, [S21]). In contrast, unsupervised optimization would train a model to discover and cluster input data automatically according to attributes like vulnerability. While such an approach would have a fundamental benefit by operating without the need for labels, the variability of program code makes training a successful unsupervised approach hard and leaves it a future exercise. Labeling a dataset theoretically means that an expert reviews a given codebase and identifies vulnerable parts. This approach is unrealistic when aiming for a large dataset as required for the training of deep models. An alternative source of labels, often retrievable in an automated manner, are vulnerability collections, e.g., CVE reports in the National Vulnerability Database (2020), and three primary studies apply this strategy. When discovered in an open-source project, listed CVEs can be traced back to the respective code location. Note that a CVE can be associated with a CWE, but this is not necessary and thus not always the case. By analyzing the code versioning system and issue tracker, further information, such as the vulnerability-fixing code change or the introduction of the vulnerability, can also be extracted. Harer et al. propose another automated labeling approach by utilizing static code analysis tools (Harer et al. 2018, [S9]). However, static analysis will only discover what is captured by the applied rule set, and a model trained on the analysis results will essentially approximate this rule set. The authors argue that a pessimistic analysis aiming for high recall could ease a later manual labeling by reducing the amount of code to be evaluated. Nguyen et al. (2019) ([S10]) propose a semi-automated approach by clustering code samples before labeling. Various authors argue that a better labeled dataset would facilitate higher accuracy of their proposed approach and aim to create such datasets in the future ([S2], [S5], [S7]). Scope of labels The granularity of a training sample need not necessarily be the same as the scope of an associated label. The authors of [S7] argue that the labeling should be as accurate as possible, optimally on line-of-code level, but automated labeling processes often do not provide this fine-grained information (see above). Label balance An ideal training represents classes in a balanced manner to the model. In contrast, an imbalanced training typically has negative effects on a model's performance, as the model implicitly learns the distribution of classes.
When trying to distinguish very rare classes like vulnerable code from very dominant classes like non-vulnerable code, it can be a safe and easy-to-learn strategy for a model to always predict the dominant class while still reaching high accuracy. When considering vulnerability analysis on software projects, this imbalance is naturally given, as software typically contains large parts that are non-vulnerable. Figure 5 shows the distribution of vulnerable and non-vulnerable training samples used by the 20 primary studies reporting this information. We observe highly varying proportions of 1% to 52% vulnerable samples per studied dataset. Thereby, 1% of known vulnerable code more realistically reflects the situation in software projects today, but constitutes a highly imbalanced dataset, while values beyond 40% constitute an almost balanced dataset better suited for training. We argue that active measures should be taken to reach a balanced training set. Simple strategies, long known from traditional machine learning, are over- and under-sampling. Over-sampling means that samples of the minority class appear more than once in the training set to counter-balance the dominant class. Three studies apply this strategy (e.g., Liu et al. 2020, [S11]). Another strategy, also applicable in combination with the ones above, is utilizing an objective function that considers and weighs the imbalance during the training process; it was applied by Fang et al. (2019) [S16], Lin et al. (2019) [S17], and [S2]. Vulnerabilities A majority of 22 primary studies build their datasets upon known vulnerability categories: 19 studies utilize the CWE types of The MITRE Corporation (2020) and 3 studies utilize CVE alarms (National Vulnerability Database 2020), while 10 studies use program code samples merely labeled as vulnerable or non-vulnerable. Figure 6 provides an overview of the most commonly utilized vulnerabilities. The common CWE categorization is divided into classes and sub-classes, so one CWE can be contained in another CWE. We found the most commonly analyzed CWE to be CWE-119 "Improper Restriction of Operations within the Bounds of a Memory Buffer", a parent category of the classic buffer overflow. Often, this CWE occurs in combination with CWE-399, the parent category of all resource management errors. Both vulnerability types are typically introduced through multiple statements, making a pattern-based detection an appropriate approach. Other investigated CWE types are injection vulnerabilities such as OS command injection (CWE-78), improper input validation (CWE-20), and NULL pointer dereference (CWE-476). We aggregate roughly 50 other CWE types that occur in only a single study in Fig. 6. We also observed that authors do not necessarily classify all vulnerability types available in their utilized datasets but sometimes rather choose to train a binary classifier distinguishing merely between vulnerable and non-vulnerable code. Benchmark datasets Benchmark datasets are essential when comparing results of competing approaches and for researchers who want to develop new and improved approaches. Across the primary studies and through an additional search (cp. Section 4.3), we discovered 20 datasets publicly available for benchmarking (cp. Table 4). Most of the datasets provide samples as source code, mostly written in one programming language (considering C and C++ as one language), except for SARD, which mixes C/C++ and Java, and VulinOSS, which includes various languages. One sample in these datasets typically spans one method or one file.
All datasets contain either binary labels (vulnerable vs. non-vulnerable) or multi-class labels, typically corresponding to CWEs or CVEs, annotating multiple tokens, a statement, a method or a whole file. RQ3: Code Representation and Encoding With this research question we explore how primary studies represent and encode program code to make it suitable as input when training a deep model. Table 3 provides an overview across all 32 primary studies in terms of input data, program code representation and the trained model. When discussing this research question, we specifically refer to the table's first five columns. Granularity of training samples When training on program code, a fundamental decision is the granularity in which to represent the code to the model. Granularity thereby refers to the extent of one training sample rather than the input granularity of a trained model, which may be smaller, e.g., for sequentially trained recurrent networks. Typical granularities range from statement to file. In general, the granularity should be chosen so that a code entity provides sufficient characteristic information for classifying it as potentially vulnerable or not. An example of a vulnerability on statement level is an incorrect conversion between numeric types (CWE-681), while storing sensitive data in a mechanism without access control (CWE-921) can typically only be identified when analyzing multiple methods together. Given only this observation, one might consider coarse-grained code entities as universally suitable. However, there are also benefits in processing more fine-grained code entities. Splitting a code base into smaller entities results in more training data, which is typically beneficial for the performance of a deep model. Furthermore, if the vulnerable part is small and constitutes only a fraction of a code entity, it is harder to train a model towards identifying this small portion of characteristic information. The first column of Table 3 refers to the code granularity chosen by the primary studies. Primary studies operate on single statements; on multiple statements, either consecutive statements or program slices; on single methods; or on file level containing multiple methods. The table shows that studies often focus on multi-statement and method-level granularities, and we observe a trend towards more fine-grained analysis in more recent studies, potentially to provide users with more precise results (Li et al. (2020) [S18], Zou et al. (2019) [S23]). Source code functions by nature differ a lot in their size, so aggregating functions by combining several semantically related statements into an intermediate non-executable code can help to adjust the length of very long code fragments (Li et al. 2021, [S7]). An interesting approach is function inlining, where method calls within one method are replaced by the code of the called function, resulting in a wider scope of the vulnerability analysis ([S11], [S19]). We observe correlations between the vulnerability type to be identified (cp. Fig. 6) and the sample granularity for analysis: SQL injections (CWE-89) are identifiable on statement-level granularity, while a buffer-overflow-related vulnerability (CWE-119, CWE-121, CWE-122 etc.) typically assembles over multiple statements and requires at least this granularity for identification. One study (Li et al. 2021, [S7]) proposes differing input and output granularities.
Their pipeline receives samples on program level as input and classifies on program slices. Code format and language The second column "Code format" of Table 3 refers to whether a study analyzes object code or source code and lists the analyzed programming languages for primary studies analyzing source code. A majority of 22 primary studies analyze source code, stemming from code written in seven different programming languages. The most often analyzed programming languages are C and C++, possibly due to their wide usage for developing core functionality of connected systems. The languages' versatility and low-level features offer many possibilities for software developers but also allow introducing many vulnerabilities; accordingly, C and C++ are considered among the most insecure programming languages (Michaud and Painchaud 2008). Although other languages like Python and PHP should prevent some vulnerabilities on byte-operation level, they may allow other vulnerabilities, and their libraries may also still be vulnerable, e.g., to buffer overflows (National Vulnerability Database 2014), due to the fact that they are often written in C. PHP, JavaScript and SQL are commonly used for developing web applications and are vulnerable to code injections, such as SQL injections (CWE-89). Li et al. (2019) ([S12]) aim to predict the vulnerability of methods written in C++ and Python. Since they solely analyze method names, a joint model for both programming languages could be trained. A potential reason for focusing primarily on source code for analysis is its richer representation containing, e.g., method names, variable names and comments that can be exploited, also in combination with other data sources such as documentation and issue tracker information, for a vulnerability analysis. Harer et al. (2018) ([S9]) evaluated source code vs. object code for vulnerability analysis, and their results in terms of higher ROC AUC and PR AUC for source code support this assumption (cp. Section 5.5). However, several primary studies focusing on source code concede that they aim to expand their work to object code, arguing that a vulnerability analysis may be conducted not only by developers but also by consumers. Intermediate code representation There exist multiple strategies for preparing program code for the training of a model (cp. column "Representation" of Table 3). Primary studies use formats that range from the given plain code to more complex representations, e.g., the abstract syntax tree (AST), control flow graph (CFG) or program dependence graph (PDG), as known from compilers or static analysis tools. More recent approaches even introduce customized formats, such as the Attributed Control Flow Graph (ACFG), the Code Property Graph (CPG) or a graph combined from several of the former (Xiaomeng et al.; Pechenkin et al. 2018, [S22]). The intermediate transformation of code into a graph representation is often proposed because code is not executed linearly like text is read, but along branches and conditions, which are preferably reflected in a graph representation. Which graph is most suitable depends on the type of vulnerability to be analyzed (Zhou et al. 2019, [S23]). For example, program or system dependence graphs represent control as well as data dependencies within code executions, making them helpful for detecting resource management vulnerabilities (e.g., CWE-399).
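To make the contrast between a flat token sequence and a tree representation concrete, the following minimal sketch uses Python's standard tokenize and ast modules on a toy Python snippet (purely illustrative; most primary studies parse C/C++ with dedicated tooling):

```python
import ast
import io
import tokenize

code = "buf = data[:n]\ncopy(buf, n)\n"  # toy snippet, not from a dataset

# Flat token sequence, as a lexer-based representation would produce it.
tokens = [tok.string
          for tok in tokenize.generate_tokens(io.StringIO(code).readline)
          if tok.string.strip()]
print(tokens)  # ['buf', '=', 'data', '[', ':', 'n', ']', 'copy', '(', ...]

# Abstract syntax tree: the nesting captures the statement structure that
# the flat token view loses.
tree = ast.parse(code)
print(ast.dump(tree.body[0]))  # Assign(targets=[Name(id='buf', ...)], ...)
```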
More than half of the primary studies (19) use at least one graph representation or combine several graphs into one meta-graph, suggesting that treating code as a graph or tree is suitable for deep vulnerability analysis. Also, a combination of input granularities has been proposed in order to combine features of multiple abstraction levels. Serialization The fourth column "Serial." of Table 3 refers to the serialized features from the intermediate representation that are used as input to the deep model. For plain source code, this is often a word sequence created by a lexer, while object code is often disassembled into a sequence of instructions. Graphs are typically translated into paths, e.g., execution paths, or traversed by a depth-first search (DFS). These features need to be further translated into numeric sequences that can be processed by a neural network. This step is called encoding. Encoding program code A neural network expects an n-dimensional vector of numerical values representing a program code sample to be trained or analyzed. This necessitates a transformation of program code given in one representation, e.g., text or graph, into this numerical representation. Ideally, this transformation should preserve all information relevant for the given task, e.g., similar tokens are assigned similar numerical representations (Chen and Monperrus 2019). There are various ways of encoding program code (cp. column "Encoding" of Table 3). A simple approach is building a dictionary and enumerating each token to be encoded in a sparse vector of binary values containing a single one at the position of the given token. This form of encoding is called one-hot encoding and is applied by ten primary studies. A numerical encoding is similar and assigns every word a consecutive number. Cui et al. (2019) ([S14]) propose a more exotic form of encoding by reading tokens of object code bit-wise and representing 8 bits as a pixel in a grayscale image, thereby gaining the ability to use a standard convolutional neural network as frequently applied for computer vision tasks. Li et al. (2019) ([S19]) compared a numeric encoding and a one-hot encoding and found the latter to yield higher accuracy at the cost of an increased training time. Word embeddings A more advanced encoding is an embedding, e.g., learning a transformation of sparse natural language data into a dense, low-dimensional representation. In recent years, several successful embeddings for text, e.g., word2vec (Mikolov et al. 2013), GloVe (Pennington et al. 2014), and fastText (Bojanowski et al. 2017), have been proposed. The most widespread of these embeddings is word2vec (Mikolov et al. 2013), which predicts a word based on its context words (CBOW) or a context word based on the target word (skip-gram). Nine primary studies use word2vec, but not all specify whether they used the CBOW or the skip-gram version. Important parameters of an embedding are the vocabulary size and the dimension of the embedding. Unfortunately, only five primary studies mention these parameters. Duan et al. (2019) ([S25]) found that a vocabulary size of 128 and an embedding dimension of 144 worked best for them. Graph embeddings Word embeddings, such as word2vec, only embed tokens and thereby neglect known dependencies among them. In contrast, graph embeddings aim to encode tokens as well as their dependencies into a dense, low-dimensional representation. Given the graphical nature of program code, graph embeddings appear to be a more suitable encoding.
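Before turning to graph embeddings, a minimal sketch can contrast the two token-level encodings discussed above, a sparse one-hot vector and a dense learned word2vec embedding (here via the gensim library; the corpus and hyper-parameters are illustrative, whereas [S25]'s vocabulary of 128 and dimension of 144 would be configured analogously):

```python
import numpy as np
from gensim.models import Word2Vec

# Tokenised code samples; in the primary studies these come from a lexer.
samples = [["strcpy", "(", "buf", ",", "src", ")", ";"],
           ["memcpy", "(", "buf", ",", "src", ",", "n", ")", ";"]]

# One-hot encoding: one sparse binary vector per vocabulary entry.
vocab = sorted({tok for sample in samples for tok in sample})
one_hot = {tok: np.eye(len(vocab))[i] for i, tok in enumerate(vocab)}

# word2vec in skip-gram mode (sg=1): dense vectors in which tokens that
# appear in similar contexts end up close together. Sizes are illustrative.
w2v = Word2Vec(samples, vector_size=16, window=2, min_count=1, sg=1)
print(one_hot["buf"].shape, w2v.wv["buf"].shape)  # (9,) vs. (16,)
```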
Five primary studies propose graph embeddings, among them the Graph-GRU (GGRU). RQ4: Proposed and Studied Models The aim of this research question is exploring the model topologies proposed by the different primary studies and their characteristics (cp. column "Model topology" of Table 3). Figure 8 illustrates the distribution of network topologies proposed by the primary studies, with quantities referring to the usage across the typically multiple experiments performed per study. Figure 7 provides an overview of the attempted vulnerability analysis per primary study in terms of the vulnerability classes to be distinguished. A majority of 28 primary studies perform binary classification, e.g., differentiating vulnerable from non-vulnerable program code samples. Merely four primary studies aim to distinguish multiple vulnerability types and select 5 to 40 classes to be identified and differentiated. Recurrent topologies Primary studies predominantly propose recurrent neural networks (RNN) for performing vulnerability analysis of program code (cp. red pie in Fig. 8). Recurrent network topologies maintain a memory of previously seen input data by incorporating the previous state of hidden units when computing an updated state. This property makes recurrent topologies especially suitable for sequential data of variable input length, as given when analyzing program code. RNNs were first proposed by Hopfield (1982) and have since evolved substantially. A main driver in this evolution is a problem called vanishing gradients, existing for all deep neural networks but becoming especially relevant for recurrent topologies exposed to long sequences of input data. A major improvement in mitigating the vanishing gradient problem are gated recurrent topologies that allow the model to learn and control which previous information shall be maintained and which other information can be discarded. Gated topologies appear in two main flavors: (1) long short-term memory networks (LSTM) (Hochreiter and Schmidhuber 1997) and (2) gated recurrent units (GRU) (Cho et al. 2014), with the latter being essentially a simplification of the former in order to reduce the necessary parameters and thereby the training time. Systematic comparisons of LSTMs and GRUs found that neither is superior in general, but both showed problem-specific benefits (Zheng et al. 2019, [S8]). In this meta-study, we observe the same trend, with eight primary studies applying LSTMs in their experiments and seven proposing GRUs. RNNs face another potential shortcoming due to their sequential processing. Within a sequential input, they can only reason about the input data that they have been exposed to so far, not about the remaining part of the input. This problem becomes relevant for encoder-decoder (aka sequence-to-sequence) problems like translating text from one language into another, since output is produced synchronously with the input. The solution is a bidirectional topology consisting of two RNNs (e.g., BiRNN, BiLSTM or BiGRU) that process the sequence simultaneously from both sides and combine their current hidden states per step upon inference. When performing a classification task on full input sequences, however, typically only the last hidden state, representing the whole sequence, is used for the classification. Vulnerability analysis is such a classification problem, and one should not expect substantial benefits from bidirectional topologies. However, 11 primary studies utilize a BiLSTM, 3 use a BiGRU and 1 uses a vanilla BiRNN.
Li et al. (2021) ([S7]) found that bidirectional LSTMs (BiLSTM) and GRUs (BiGRU) slightly outperformed their unidirectional equivalents (e.g., LSTM and GRU), reducing error rates by a few tenths of a percentage point for accuracy, precision and F-measure (Zheng et al. 2019, [S8]; Li et al. 2021, [S7]). In the study of Fang et al. (2019) ([S16]), BiLSTM and LSTM perform similarly, having their largest divergence of 1.3% F-measure in one experiment. One study even found that a vanilla BiRNN outperformed BiLSTM and BiGRU topologies on their dataset (Zheng et al. 2019, [S8]). Another powerful concept proposed to overcome the vanishing gradient problem of long inputs in RNNs is attention. It allows a network to learn which former inputs and their respective hidden states are more and which are less relevant for a given task. Attention has been considered by two primary studies (Zhou et al. 2019, [S23]; [S11]). The authors found that using attention resulted in a 14% higher F-measure. Deep recurrent topologies The column "Depth" in Table 3 refers to the number of neural network layers employed in a model topology. We counted the main layers of the best-performing model topology (marked bold in column "Topology"), excluding layers that do not directly contribute to the representation learning ability of a topology, such as embedding, dropout, and classifier layers. Eight primary studies utilize only one main layer, while others propose up to six RNN layers. Li et al. (2018) ([S5]) studied the influence of topology depth in terms of BiLSTM layers. They found that two to three BiLSTM layers yield the highest F-measure and that topologies beyond six layers drop drastically in terms of F-measure. Deep BiLSTM layers also performed best in terms of highest F-measure in Li et al. (2019, [S12]). While inputs are typically processed sequentially, e.g., layer by layer, in these deep topologies, skip connections and short-circuit branches have also been studied (Duan et al. 2019, [S25]). Feedforward topologies In contrast to recurrent topologies, feedforward topologies are designed for processing all inputs at once. Fully connected neural networks, also known as dense networks or multilayer perceptrons (MLP), have connections among all neurons of two successive layers. This fully connected nature makes them ideal classifiers, often used in combination with other feedforward or recurrent topologies. However, full connectedness also goes along with large numbers of parameters, making these networks computationally expensive and memory-intensive and therefore hard to scale in depth and width. A specific form of feedforward network is the convolutional neural network (CNN) that uses convolution operations, which aggregate units in the same spatial region and operate with substantially fewer parameters. CNNs were originally developed for processing n-dimensional arrays (Fukushima et al. 1983), and primary studies frequently propose them in comparison to RNNs. Hybrid topologies, proposed by five primary studies, combine feedforward layers with recurrent layers. However, Wu et al. (2017, [S28]) found that a CNN yielded higher accuracies than a hybrid topology in their study. The depth, i.e., number of hidden layers, and width, i.e., number of neurons per layer, of feedforward topologies vary greatly and are typically chosen in an empirical manner (Abaimov and Bianchi 2019, [S24]; Yan et al. 2018, [S29]). In contrast to recurrent topologies, feedforward topologies, including CNNs, expect a static input size.
However, samples in the respective datasets typically vary in length and may be longer or shorter than the expected input size of the model. A standard strategy to handle these mismatches is padding shorter inputs with a fixed value and truncating longer inputs to the expected input size of the network. This strategy has, e.g., been applied by Wu et al. (2017) and Lin et al. (2019). It still leaves the problem of deciding how large the input of the model shall be: simply choosing the size of the largest training sample does not guarantee that test data will not contain a larger sample, while truncating to the samples' average length means that potentially relevant information will be cut away. Cui et al. (2019) ([S14]) evaluated various input lengths for a CNN and found that models performed better on larger input vectors at the cost of increased resource consumption for training. Topologies for graph processing The majority of primary studies transform their intermediate program code graphs into input sequences or matrices. However, five studies employ a topology that can directly operate on graphs as input. In these studies, the graph is processed in the embedding layer of the respective network (cp. Section 5.3). The structure2vec framework (used by two studies: Xu et al. 2017; Gao et al. 2018) iteratively creates an embedding to be further processed in a fully connected network, the Graph-GRU uses a specific recurrent layer, and the GCN uses a specific convolutional layer as the first processing layer operating on graphical input data. The authors found the Graph-GRU network to be superior when compared to a BiLSTM network with an attention mechanism (Zhou et al. 2019, [S23]). The encoded graph tensor proposed by Duan et al. (2019) ([S25]) uses several convolutional layers for processing. Both networks employ further convolutional and max-pooling layers following the initial embedding to reduce feature dimensionality, and several fully connected layers for the final classification task. Representation learning vs. similarity-based search A majority of 30 primary studies propose an approach called representation learning, meaning that a model is trained towards identifying and extracting the relevant information from input data and then performing a classification. This classification is realized by an algorithm applied in the last or penultimate layers of the trained model. The most common classifier consists of one or several fully connected neural network layers (FC) (cp. column "Output" of Table 3). Alternative classification approaches are the random tree (RT) and the random forest (RF). Random trees try to find the best splitting feature or predictor from a randomly selected subset of features at each node. Random forests use decision trees with a randomly sampled subset of the full dataset. In contrast to representation learning, there exists the possibility to solely extract and potentially store relevant features of known vulnerability instances and to compare these "signatures" with each new sample that is analyzed. This approach is called similarity-based search, since it essentially constitutes a comparison of feature vectors where similar ones are considered a match. Similarity-based search is less computationally intensive than representation learning, since only the feature extractor is needed, which can often be reused from a related task. For example, a pre-trained word embedding may be used as a feature extractor for source code.
However, the drawback of this approach is that no abstraction of the analyzed concept will be learned. Similarity-based search may still be worthwhile when sufficient training data or computational resources are not available. Two studies propose a similarity-based analysis (Gao et al. (2018) [S1], Xu et al. (2017) [S27]). Gao et al. propose a cross-platform binary vulnerability analysis based on the structure2vec embedding (Gao et al. 2018, [S1]). Their model computes an embedding vector per object code file to be analyzed as well as for each vulnerability to be detected. Vulnerabilities are retrieved from the CVE database (National Vulnerability Database 2020), and their implementation is derived from the Genius tool by Feng et al. (2016). The actual analysis is then merely a vector comparison, computed as cosine similarity, between all files and all vulnerability embeddings. Dam et al. (2018) ([S3]) propose deep-learned features to compare methods of different software projects. An LSTM is trained on code snippets represented as token sequences and outputs vectors which represent the distribution of semantics of a code token. The vectors of all code tokens are saved in a codebook as so-called global features and can be compared to others by a centroid assignment. Minimizing generalization error Data augmentation techniques can help to increase the generalization performance and robustness of a trained model, feedforward as well as recurrent, by adding plausible deviations to the training data, e.g., changes to code samples (Cui et al. (2018) [S13], Cui et al. (2019) [S14]) or added noise (Li et al. 2019, [S12]). An alternative and complementary approach to improve generalization performance and to prevent over-fitting of a model is adding a dropout mechanism ([S5]; Fang et al. (2019) [S16]). Models with dropout randomly disable connections among neurons during training, forcing the model to compensate for the missing connections and thereby become more robust. The dropout ratio refers to the proportion of connections to be randomly disabled. Figure 9 shows that 16 primary studies apply dropout with ratios ranging from 20% to 50%, with 50% being the most common choice. Hyper-parameter optimization The adaptation of deep learning methods to a given problem is essential for their performance, but only a quarter of the studies (7 out of 32) perform and describe a systematic hyper-parameter optimization, e.g., a grid search (Cheng et al. 2019, [S26]). RQ5: Evaluation of Proposed Models This research question explores the evaluation of the proposed approaches. The basis for the evaluation is the availability of ground truth labels (cp. column "Ground truth" of Table 3), which we discuss in Section 5.2 and which serve not only in the supervised training process but also as ground truth for computing evaluation metrics. The last column "Artifacts" of Table 3 provides links to code and datasets of the primary studies where available. Quality of the evaluation The training and also the evaluation can only be as precise as the utilized labels. There are several difficulties in generating a sufficient amount of high-quality training and evaluation samples (cp. Section 5.2). For example, labels may not reflect the ground truth due to mistakes in the labeling process. To test the quality of their evaluation, Li et al. (2020) ([S18]) tried to identify vulnerabilities in projects without known ground truth.
RQ5: Evaluation of Proposed Models
This research question explores the evaluation of the proposed approaches. The basis for the evaluation is the availability of ground truth labels (cp. column "Ground truth" of Table 3) that we discuss in Section 5.2; these labels serve not only in the supervised training process but also as ground truth for computing evaluation metrics. The last column "Artifacts" of Table 3 provides links to code and datasets of the primary studies where available.
Quality of the evaluation
The training and also the evaluation can only be as precise as the utilized labels. There are several difficulties in generating a sufficient amount of high-quality training and evaluation samples (cp. Section 5.2). For example, labels may not reflect the ground truth due to mistakes in the labeling process. To test the quality of their evaluation, Li et al. (2020) ([S18]) tried to identify vulnerabilities in projects without known ground truth. They manually labeled 200 program files for a qualitative evaluation and, given mediocre results, manually checked only the false positives as a subset. This emphasizes the need to pay great attention to the labeling process and to invest as a community into high-quality benchmarks.
Metrics
We collected the metrics used across all reported experiments of the primary studies (cp. Fig. 10). An often reported metric is accuracy, measuring the closeness of a measurement to the true value (International Organization for Standardization 2020). While 18 primary studies report accuracy, it is not a proper evaluation metric for imbalanced datasets, such as many of those evaluated by these studies (cp. Figure 5). In a typical evaluation procedure, a dataset is divided into training, validation, and test splits. That is, the test split inherits the imbalance of the overall dataset, resulting in a situation where the more dominantly trained classes are also more dominantly tested. This leads to an overly high accuracy that neglects the performance on minority classes, in our case vulnerable code samples. A simple solution is computing an accuracy per class first and then averaging those accuracies into a so-called class-averaged accuracy (see the sketch below). Three primary studies have reported this metric. Other appropriate metrics for evaluating unbalanced datasets are precision, recall and the F-measure, a weighted harmonic mean of the former two. One or multiple of these measures are reported by 25 primary studies. The precision-recall area under curve (PR AUC) is derived from the precision-recall curve. The receiver operating characteristic (ROC) AUC is the area under the curve of the true positive rate as a function of the false positive rate. The precision-recall curve highlights the skewed data, while the ROC curve concentrates more on the performance (Branco et al. 2016). With ROC, the imbalance is not taken into account: the measure describes the proportion between TPR and FPR, and these rates are ratios of correctly and incorrectly predicted samples calculated independently on one side of the confusion matrix each, meaning that class skew does not influence them (Fawcett 2006). Testing time, reported by three studies, aims to compare resource usage when applying a previously trained model in production. The Matthews correlation coefficient (MCC) is a reliable statistical rate because all four confusion-matrix categories enter its calculation equally (Chicco and Jurman 2020). MCC is also invariant to class swapping, in contrast to the F-measure, which varies when binary classes are accidentally renamed (Chicco and Jurman 2020). This metric is only used once. For a multi-class problem, the confusion matrix between all the vulnerability types is meaningful. For the use case of vulnerability detection, MCC and the confusion matrix are better-suited measures for binary classification than accuracy and the F-measure (Chicco and Jurman 2020). Note that trusting only one measure is often not meaningful; one should always consider several metrics.
Cross-validation
For evaluation, more than half of the studies use cross-validation, either as 1-fold, 5-fold or 10-fold cross-validation. Thereby, the number of folds refers to the number of completely repeated training processes, e.g., a 10-fold cross-validation means 10 completely trained models. Twenty primary studies apply such a cross-validation strategy, while twelve do not explicitly describe or mention it.
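To make the metric discussion concrete, the following dependency-free sketch computes plain accuracy, class-averaged accuracy and MCC on a deliberately imbalanced toy dataset; the data and the trivial "predict non-vulnerable" classifier are our own illustrative assumptions.

```python
import math

def confusion(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def class_averaged_accuracy(tp, tn, fp, fn):
    # Accuracy per class, then averaged: the minority (vulnerable)
    # class carries equal weight.
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

def mcc(tp, tn, fp, fn):
    # All four confusion-matrix cells enter the formula equally.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# 90 non-vulnerable and 10 vulnerable samples; the classifier always
# predicts "non-vulnerable".
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100
tp, tn, fp, fn = confusion(y_true, y_pred)
print((tp + tn) / len(y_true))                  # 0.9  -- misleadingly high
print(class_averaged_accuracy(tp, tn, fp, fn))  # 0.5  -- chance level
print(mcc(tp, tn, fp, fn))                      # 0.0  -- no predictive value
```

The gap between 0.9 and 0.5 is exactly the effect the class-averaged variant is meant to expose.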
Comparative evaluation
Since studies use varying datasets and varying proportions of vulnerable to non-vulnerable examples, we could not compare their raw results to one another. To still be able to contrast the studies' approaches, column "Topology" of Table 3 highlights the best performing model topology across a study's experiments in bold. Li et al.'s study compares a large set of topologies for vulnerability analysis, e.g., FCNN, CNN, LSTM, GRU, BiLSTM, and BiGRU. The comparative study is performed on a dataset of 126 CWE types represented by 811 security-related C/C++ library function calls (Li et al. 2021, [S7]). The authors found that bidirectional recurrent networks (e.g., BiLSTM and BiGRU) trained on source code accompanied by data and control dependencies (cp. Section 5.3) resulted in the highest precision and F-measures, closely followed by feedforward networks (e.g., FCNN and CNN). In total, eight primary studies compare recurrent and feedforward topologies in their experiments. Four of these found recurrent topologies to be superior, while the other four identified feedforward topologies as superior for code analysis. Approaches for graph processing use both types of topologies, e.g., Graph-GRUs are based on recurrent GRU layers, while GCNs are based on feedforward CNN layers. We conclude that both topology types are suitable for code analysis.
RQ6: Model Generalizability
Cross-project learning
Cross-project learning refers to training a model with program code stemming from one set of software projects, while later analyzing program code from new and unseen projects. To successfully train a generalizing model, typically a substantial amount of representative training data stemming from various projects and many developers is needed. There are manifold levels of generalization in this context that can make a model more widely applicable, but also harder to train, e.g., differing application domains or differing development methodologies. Among the primary studies, 18 train and test solely with code samples from one and the same project or generated by the same synthetic sample generator, while seven train on multiple datasets but do not explicitly separate projects for testing. The remaining seven primary studies propose cross-project learning in different ways (cp. Fig. 11). While ten primary studies approach cross-project learning solely from a dataset perspective (Russell et al. 2018, [S2]), e.g., training on multiple projects and testing on others, there are also two studies that propose a specific set of "global" features for cross-project learning (Dam et al. (2018) [S3], Zou et al. (2019) [S6]). Global features shall represent broader semantics across several program slices as a high-level view, while local features shall represent individual statements. Lin et al. (2018) ([S30]) compared cross-project learning methods ([S12], Li et al. (2020) [S18], [S30], Nguyen et al. 2019 [S10], Pechenkin et al. (2018) [S4]). Similar to cross-project learning is cross-version learning, proposed by Dam et al. (2018) ([S3]), where the progression over time is the main subject of investigation and new code is introduced in later versions. A minimal sketch of a project-wise data split follows below.
Fig. 11 Consideration of cross-project prediction
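In the following sketch (our own illustration), entire projects are held out for testing; the project names and sample counts are invented, and real studies would split labeled code samples instead of placeholder strings.

```python
# Toy corpus of (project, sample) pairs; real samples would be labeled code.
samples = ([("openssl", f"fn_{i}") for i in range(6)]
           + [("httpd", f"fn_{i}") for i in range(4)]
           + [("busybox", f"fn_{i}") for i in range(5)])

def cross_project_split(samples, test_projects):
    """Hold out entire projects so that no code from a test project
    leaks into training -- the core idea of cross-project evaluation."""
    train = [s for s in samples if s[0] not in test_projects]
    test = [s for s in samples if s[0] in test_projects]
    return train, test

train, test = cross_project_split(samples, test_projects={"busybox"})
assert not {p for p, _ in train} & {p for p, _ in test}
print(len(train), len(test))  # 10 5
```

The essential property is the assertion: the sets of training and test projects are disjoint, unlike a random sample-level split.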
Transfer learning
Transfer learning refers to a two-stage training procedure originally developed for increasing performance on problems with limited training data. In a first step, a model is pre-trained to convergence with a larger, similar dataset. In a second step, this pre-trained model is taken and then trained with the actual dataset in a procedure called fine-tuning. Thereby, depending on the training data available in the actual dataset, a stronger or softer fine-tuning in terms of retrained parameters and update strength, i.e., learning rate, is chosen. This procedure is very popular for computer vision problems and recently also became popular for natural language processing (Mou et al. 2016). Lin et al. (2018) ([S30]) compare transfer-learning methods and found that representations learned in this manner are more effective than traditional code metrics. In a follow-up study, Lin et al. compared a transfer-learned BiLSTM network with a BiLSTM network trained from scratch, showing that the former outperformed the latter in terms of precision and recall.
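The sketch below illustrates this two-stage idea with a deliberately small PyTorch model; the architecture, sizes and checkpoint name are our own assumptions and do not reproduce any primary study's setup.

```python
import torch
import torch.nn as nn

class CodeClassifier(nn.Module):
    def __init__(self, vocab=5000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)            # pre-trained in stage one
        self.rnn = nn.LSTM(dim, dim, batch_first=True)   # pre-trained in stage one
        self.head = nn.Linear(dim, 2)                    # re-initialised per task

    def forward(self, tokens):
        out, _ = self.rnn(self.embed(tokens))
        return self.head(out[:, -1])                     # last state -> 2 classes

model = CodeClassifier()
# model.load_state_dict(torch.load("pretrained.pt"))    # hypothetical checkpoint

# "Soft" fine-tuning: freeze the embedding and update the remaining
# parameters with a small learning rate.
for p in model.embed.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5)

logits = model(torch.randint(0, 5000, (8, 32)))          # batch of token sequences
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
loss.backward()
optimizer.step()
```

A "stronger" fine-tuning would simply unfreeze more layers and raise the learning rate.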
Trends and Future Directions
Better vulnerability differentiation
Proposed methods should evolve from today's predominant binary vulnerable vs. non-vulnerable classification to a vulnerability type classification or ranking (Harer et al. 2018, [S9]). The CWE catalog currently lists 839 individual vulnerability types, while the most elaborate primary study merely aimed to distinguish 40 vulnerabilities. Primary studies acknowledge this shortcoming and plan to adapt their work to more types of vulnerabilities (Xu et al. 2018 [S31], Wu et al. 2017 [S28], Fang et al. (2019) [S16]) or to introduce patterns uncovering multiple types of vulnerabilities (Zou et al. 2019, [S6]). CWE vulnerabilities are partly defined in taxonomic relationships, i.e., more abstract parent CWEs and derived child CWEs. Note that CWEs are based on a human-defined categorization and may change from time to time. It may be helpful to exploit these relationships for analysis and presentation of results, e.g., results with limited reliability could be propagated to the parent class.
Object code analysis as fallback
On the one hand, vulnerability analysis tools should be used intensively by software developers to make their source code more secure from the beginning; and source code clearly is the richer information source, containing, e.g., identifier names and comments. On the other hand, the ability to analyze binary files enables an analysis of software for which source code is not publicly available. For example, security experts would be able to study a wide variety of software, and users could establish more trust in their tools. Accordingly, the authors of multiple primary studies argue that in a future extension they aim to expand their work towards object code (Liu et al. 2020).
Utilizing unlabeled program code and additional data sources
The success and popularity of open-source software gives researchers access to unprecedented amounts of source code written in different languages, targeting different domains and being of varying quality. Future studies should make more use of this valuable resource. To do so, however, they have to overcome the problem of vulnerability labeling. Manually labeling whole projects seems infeasible, and using other analysis approaches, such as static code analysis, will limit the labeled classes to the capabilities of the respective approach. A potential solution could be unsupervised approaches analyzing program code for uncommon and therefore potentially vulnerable code constructs. Another direction that future studies should explore is utilizing additional data sources besides the program code itself. These can be project-specific data sources, such as version control information, issue tracker information and requirements, in order to identify atypical deviations from specifications and processes. Also, general information such as discussions of programming questions on platforms like Stack Overflow or security bulletins should be utilized.
Towards program code embeddings
We found that primary studies use simple bag-of-words approaches or word embeddings to encode their input data, typically a token stream. These encodings are relatively simple to apply, but have substantial shortcomings. Bag-of-words approaches neglect semantics among tokens. Word embeddings are able to capture these semantics, but require training with a large representative corpus. When analyzing program code, researchers can decide between (1) training their own embedding, meaning that they need to assemble a large corpus of program code, or (2) reusing an embedding pre-trained on large text corpora like Wikipedia, meaning that the embedding is not specifically trained to capture program code semantics. We argue that training specific program code embeddings with large corpora of representative code is an open research question that could substantially improve analysis results. Another shortcoming of word2vec word embeddings is that they can only embed previously trained tokens. While the grammar of a programming language is rather static, user-defined identifiers like function and variable names can vary a lot and admit an unlimited number of neologisms (Chen and Monperrus 2019). The FastText word embedding (Bojanowski et al. 2017) overcomes this shortcoming by training not only the actual word but also its character n-grams, i.e., previously unseen words can still be encoded. FastText was not used in any primary study.
Other topologies for analyzing program code
We found that primary studies prefer recurrent topologies over feedforward topologies in their experiments, which is motivated by the fact that code is of sequential nature and variable length, making recurrent topologies more suitable than standard feedforward topologies, including CNNs. However, even the most advanced recurrent topologies suffer from two fundamental problems: (1) recurrence makes their training in large parts sequential and therefore slow; and (2) long input sequences lead to vanishing gradients in the training process. Feedforward Transformer topologies have been proposed to analyze input data of variable length while overcoming these problems (Parmar et al. 2017). Transformers employ sophisticated attention mechanisms and positional encodings to compensate for the missing recurrence, and have been demonstrated to substantially outperform RNNs in natural language processing regarding runtime and memory (Botha et al. 2017). We argue that Transformers should also be studied for vulnerability analysis. Furthermore, we found little research on processing program code as a graph, omitting the typical transformation into input sequences or matrices. A graph representation seems more natural for program code given the underlying control and data flows. We argue that representing and analyzing program code as graphs should be a major focus of future research; the sketch below illustrates the core operation of one such topology.
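Here, the graph, features and weights are random illustrative assumptions rather than a trained model; the sketch shows a single propagation step of a graph convolutional network over a toy four-statement program graph.

```python
import numpy as np

# Toy program graph: 4 statements; edges stand for control/data flow.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
A_hat = A + A.T + np.eye(4)                       # symmetrise, add self-loops
D_inv_sqrt = np.diag(1 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt          # normalised adjacency

H = np.random.default_rng(0).normal(size=(4, 8))  # initial statement features
W = np.random.default_rng(1).normal(size=(8, 8))  # weights (random, untrained)

# One graph-convolution step: each statement aggregates the features of
# its flow neighbours -- the code is never flattened into a sequence.
H_next = np.maximum(A_norm @ H @ W, 0)            # ReLU(A_norm @ H @ W)
print(H_next.shape)                               # (4, 8)
```

Stacking a few such steps and pooling the node features yields a graph-level representation for classification.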
Explainability of analysis results
Despite its often unprecedented inference quality facilitating more precise analysis results, representation learning with deep neural networks is also criticized as a black box: non-transparent and hard to interpret (Selvaraju et al. 2017). Explainability of model decisions is of high importance for vulnerability analysis in order to support developers in understanding and fixing discovered problems ([S7], Wu et al. 2017 [S28], Zhou et al. (2019) [S23], Lin et al. (2019) [S17]). An essential precondition is having fine-grained analysis results, e.g., highlighting problematic code tokens (Russell et al. 2018, [S2]), rather than declaring an entire code sample vulnerable. For example, layer-wise relevance propagation, an explanation technique propagating the prediction of a neural network layer by layer back to its inputs, could be utilized to report which tokens influenced the current decision of a model (Warnecke et al. 2020). Such methods would allow highlighting the most problematic code locations to a user and guiding further inspection, and should be explored for vulnerability analysis ([S6], [S12]). As a future exercise, it would also be interesting to deliver an actual explanation of how a given code sample is vulnerable and how an identified weakness may be effectively fixed.
Cross-language and cross-architecture learning
Cross-language learning is the abstraction of vulnerability patterns from a specific language in order to facilitate language-agnostic vulnerability analysis of source code (Zou et al. (2019) [S6], Dam et al. (2018) [S3]). None of the primary studies approaches vulnerability analysis in this manner. However, it seems a relevant future research topic as it promises more versatile and universal models, and could uncover a more holistic view on vulnerability types, e.g., which are language-specific and which are language-agnostic. A related concept is cross-architecture learning, aiming to abstract object code from the architecture it was compiled for and producing models that are applicable to a wide variety of object code (Gao et al. (2018) [S1], Xu et al. (2017) [S27]).
Towards usable analysis systems
Methods should evolve from proprietary implementations on tailored datasets to end-to-end software analysis tools applicable in practice and supporting software developers in the development of secure software. There is still room for improvement of DL-based methods as well as SAST tools (Johnson et al. 2013; Christakis and Bird 2016). An end-to-end software analysis tool shall be able to analyze projects created in a large variety of programming languages and with differing development methodologies. This includes delivering pre-trained models that generalize beyond a specific software project and potentially also beyond an application domain. For the training of such models, a substantial and representative training corpus is needed.
Threats to Validity
Below, we discuss threats to the validity of our meta-study grouped according to four commonly used categories: construct, internal, external, and conclusion validity (Easterbrook et al. 2008). The construct validity threats concern the relationship between the theory and the application. We carefully defined our search terms and selection criteria according to Kitchenham and Charters's guidelines, but during their construction we may have missed an important keyword.
We tried to mitigate this threat (i) by performing a search on Google Scholar as a meta search engine with richer and broader results; and (ii) by conducting a citation check forward and backward on the primary studies identified up to this point. Furthermore, we limited our meta-study to English-language publications, as English is the common language for publishing in computer science. Moreover, this field is fast-evolving, so the number of relevant studies can grow quickly. As of the time of completing this meta-study (May 2020), we consider our survey complete. The main threats to internal validity are author bias, author influence and the understandability of the written reports, which can lead to inaccuracies in data analysis and extraction. To overcome these issues, we documented all executed steps in a protocol and cross-checked these steps among the authors. Our search strategy included a filter on publication title and abstract in an early phase of the study search process. We used a predefined search string, ensuring that we only searched for primary studies having their main focus on vulnerability analysis in the deep learning field. Therefore, studies that propose a more general deep-learning-based code classification approach with classes other than vulnerability types may have been excluded by this filter, even though they could be adapted to this application. We consider this exclusion valid because further work would be needed for their adaptation. External validity refers to the representativeness of the results. To overcome potential threats in this direction, we used different representative publication repositories. Given the fact that we study a relatively new field of research and there is not an enormous number of relevant papers, we restricted the quality analysis of primary studies to a minimum. As a consequence, some primary studies are short papers in which important details of the approaches are not reported, making it hard to fully compare them against others. Therefore, we had to infer certain information during the data extraction process, but intensively discussed these cases among the authors of this survey. Due to this missing information, our discussion is not entirely complete: we would have liked to collect additional information on the utilized models, e.g., the number of tokens per input sample or the number of parameters per layer across topologies. Finally, conclusion validity involves the threat of deriving correct conclusions from this literature review. We counteracted this through discussions between the authors to reach consensus on each conclusion. However, readers may come to their own conclusions.
Conclusions
In this meta-study, we surveyed publications on deep learning assisted vulnerability analysis of source code and object code. We present the evolution from traditional machine learning to deep learning approaches for this application that took place in the last three to four years. Training data forms the basis for the studies and was inspected in Section 5.2. We also provide different categorization schemata according to the studies' input pre-processing and the level of granularity in Section 5.3 to show the differences across the primary studies. We compared studies regarding code representation, topology and evaluation, corresponding to the dataset structure. This work also advises which topologies and evaluation metrics are suitable for follow-up studies.
Furthermore, we point out limitations of primary studies and discuss future research directions, such as more extensive dataset labeling, cross-project prediction, processing program code as graphs, and providing explainable code analysis results.
Appendix A: Datasets for Vulnerability Analysis
(Table notes: * No benchmark metric available, because the dataset has been published without analysis results. 2 We only considered the primary publication. National Vulnerability Database 2020.)
Covers and partial transversals of Latin squares
We define a cover of a Latin square to be a set of entries that includes at least one representative of each row, column and symbol. A cover is minimal if it does not contain any smaller cover. A partial transversal is a set of entries that includes at most one representative of each row, column and symbol. A partial transversal is maximal if it is not contained in any larger partial transversal. We explore the relationship between covers and partial transversals. We prove the following: (1) The minimum size of a cover in a Latin square of order n is n + a if and only if the maximum size of a partial transversal is either n − 2a or n − 2a + 1. (2) A minimal cover in a Latin square of order n has size at most μ_n = 3(n + 1/2 − √(n + 1/4)). (3) There are infinitely many orders n for which there exists a Latin square having a minimal cover of every size from n to μ_n. (4) Every Latin square of order n has a minimal cover of a size which is asymptotically equal to μ_n. (5) If 1 ⩽ k ⩽ n/2 and n ⩾ 5 then there is a Latin square of order n with a maximal partial transversal of size n − k.
(6) For any ε > 0, asymptotically almost all Latin squares have no maximal partial transversal of size less than n − n^(2/3+ε).
Introduction
A Latin square of order n is an n × n matrix containing n symbols such that each row and each column contains one copy of each symbol. Unless otherwise specified, we use Z_n as the symbol set and also use Z_n to index the rows and columns. Where convenient (such as when embedding a Latin square inside a larger one), we consider Z_n to be the set of integers {0, . . . , n − 1} rather than a set of congruence classes. For a Latin square L = [L_ij], we define E(L) = {(i, j, L_ij) : i, j ∈ Z_n} to be the set of entries. The set of all entries in a row, all entries in a column or all entries containing a given symbol is called a line. In particular, a Latin square of order n contains exactly 3n lines. We say a line ℓ is represented by an entry e whenever e ∈ ℓ. For a set of entries C ⊆ E(L), we say ℓ is represented by C whenever |C ∩ ℓ| ⩾ 1, and we say it is represented |C ∩ ℓ| times by C. If C ∩ ℓ = {e}, we say that ℓ is uniquely represented by e. We define a c-cover as a c-subset of E(L) in which every line is represented. In order for a Latin square of order n to have a c-cover, we must have c ⩾ n. A partial transversal of deficit d is an (n − d)-subset of E(L) in which every line is represented at most once. Since an entry (r, c, s) in a partial transversal uniquely represents three lines (its row, column and symbol), a partial transversal of deficit d represents exactly 3(n − d) lines. A transversal is a partial transversal of deficit 0. Figure 1 gives examples of a partial transversal, a transversal and a cover. In a Latin square L, we say a cover C of L is minimal if, for all e ∈ C, the set C \ {e} is not a cover. If C is not minimal, then it has a redundant entry e ∈ C for which C \ {e} is also a cover. We also say C is minimum if every cover of L has size at least |C| (for any subset of the entries, size means the number of entries involved). We say a partial transversal T of L is maximal if, for all e ∈ E(L) \ T, the set T ∪ {e} is not a partial transversal. We stress that the maximality of a partial transversal T is always relative to the whole Latin square L, even when we locate T inside some proper subset of E(L). There are many tantalising open questions regarding transversals [21]. One of the more famous problems is Brualdi's Conjecture, which asserts that every Latin square possesses a near transversal, that is, a partial transversal of deficit 1. The current best result in this direction is due to Shor and Hatami [12], who showed that every Latin square has a partial transversal of deficit O(log² n). There are a great many Latin squares that do not possess transversals [5], although no such example of odd order is known. In fact, Ryser [19] conjectured that there is no transversal-free Latin square of odd order. In this paper, we introduce the notion of covers with the primary aim of using them to facilitate the study of transversals.
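The definitions above are straightforward to verify computationally. The following Python sketch (our own illustration, using the Cayley table of Z_n as a convenient example) checks the cover and partial transversal properties for a set of entries.

```python
from itertools import product

def cyclic_latin_square(n):
    """The entry set E(L) of the Cayley table of Z_n as (row, column, symbol)."""
    return {(i, j, (i + j) % n) for i, j in product(range(n), repeat=2)}

def is_cover(entries, n):
    """Every row, column and symbol is represented at least once."""
    rows = {r for r, _, _ in entries}
    cols = {c for _, c, _ in entries}
    syms = {s for _, _, s in entries}
    return rows == cols == syms == set(range(n))

def is_partial_transversal(entries):
    """Every row, column and symbol is represented at most once."""
    rows = [r for r, _, _ in entries]
    cols = [c for _, c, _ in entries]
    syms = [s for _, _, s in entries]
    return all(len(set(x)) == len(x) for x in (rows, cols, syms))

n = 5
L = cyclic_latin_square(n)
diagonal = {(i, i, (2 * i) % n) for i in range(n)}  # a transversal of Z_5
assert diagonal <= L
assert is_partial_transversal(diagonal) and is_cover(diagonal, n)
```

As the assertions confirm, a transversal is simultaneously a deficit-0 partial transversal and an n-cover.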
Pippenger and Spencer [17] showed a very powerful and general result that includes covers of Latin squares as a special case. They showed that as n → ∞, the entries of a Latin square of order n can be decomposed into n − o(n) covers. In particular, this means that all Latin squares have a cover of size n + o(n). A better upper bound on the size of the smallest cover is given in Corollary 1. A Latin square L of order n is equivalent to a tripartite 3-uniform hypergraph with n vertices in each part (corresponding respectively to rows, columns and symbols) and n² hyperedges (corresponding to the entries of L). In this framework, a cover of L is precisely an edge cover (a set of hyperedges whose union covers the vertex set) of this hypergraph. Alternatively, L can be considered as an n-uniform hypergraph of order n² with edges that are precisely the 3n lines of L; a cover of L is precisely a vertex cover (a set of vertices that intersects every edge) of this hypergraph. This relationship with hypergraph covers is one justification for our choice of terminology. Another reason for our terminology is the connection to covering codes. Writing each entry of L as a codeword of length 3 over an alphabet of size n, we obtain a maximum distance separable code (the connection between Latin squares and MDS codes was described by MacWilliams and Sloane [15] as "one of the most fascinating chapters in all of coding theory"). By extending the alphabet to Q = Z_n ∪ {∞}, a cover C ⊆ E(L) of a Latin square L of order n is, in the sense of [3], a special case of a 2-cover in Q³ of the set {(i, ∞, ∞) : i ∈ Z_n} ∪ {(∞, j, ∞) : j ∈ Z_n} ∪ {(∞, ∞, k) : k ∈ Z_n}. It has the additional property that C is contained in E(L) for some Latin square L, which implies that codewords in C do not contain ∞. A Latin square L of order n also has a natural representation as an n²-vertex graph, called a Latin square graph, which we denote Γ_L, with vertex set E(L) and an edge between two distinct entries whenever they share a row, column or symbol. An example is shown in Fig. 2.
Fig. 2 Converting between a Latin square L and the equivalent Latin square graph Γ_L, with an (n + 1)-cover highlighted in both. Edge colours are added to indicate the relationship between neighbouring entries (dotted for the same row, dashed for the same column and solid for the same symbol) (Color figure online)
The graph Γ_L is thus the union of 3n cliques of size n (one for each line), and a cover of L is equivalent to a selection of vertices in Γ_L in which each of these cliques has at least one representative. A cover of L does not necessarily map to a vertex cover of Γ_L (the example in Fig. 2 is not a vertex cover of Γ_L). Any cover of L maps to a dominating set of Γ_L. In fact, any cover of L corresponds to a 3-dominating set of Γ_L, i.e., any entry outside the 3-dominating set has 3 or more neighbours inside the 3-dominating set [13, Sec. 7.1] (see also [6]). The converse is not true, i.e., not every 3-dominating set is a cover: a 3-dominating set (actually a 4-dominating set) is formed in Γ_L by the entries with symbols 0 and 1 in any Latin square L of order n ⩾ 2. Yet, when n ⩾ 3, this 4-dominating set does not cover the symbol 2. A cover therefore corresponds to a special kind of 3-dominating set, where each n-clique (arising from each line in the Latin square) has a representative in the cover. Let L be a Latin square of order n ⩾ 3.
The domination number of Γ_L, i.e., the size of the smallest dominating set of Γ_L, denoted γ(Γ_L), is less than n: to form an (n − 1)-entry dominating set, select all but one of the entries with symbol 0. In fact, γ(Γ_L) will likely be smaller than n − 1, since any maximal partial transversal corresponds to a dominating set in Γ_L. However, for a 3-dominating set of cardinality a to exist in Γ_L, we must have 3(n − 1)a ⩾ 3(n² − a), since each of the a entries in the 3-dominating set dominates at most 3(n − 1) vertices, and there are n² − a entries dominated at least 3 times each. This implies that a ⩾ n, implying that the 3-domination number of Γ_L, denoted γ₃(Γ_L), is strictly greater than the domination number, i.e., γ₃(Γ_L) > γ(Γ_L). (In fact, γ₃(G) > γ(G) holds for all graphs G with minimum degree at least 3 [13, Cor. 7.2].) For each Latin square L there are six conjugate squares obtained by uniformly permuting the three coordinates in E(L). An isotopism of L is a permutation of its rows, a permutation of its columns and a permutation of its symbols. The resulting square is said to be isotopic to L. The isotopism class of L is the set of Latin squares isotopic to L. The autotopism group of L is the group of isotopisms that map L to itself. The species of L is the set of squares that are isotopic to some conjugate of L. A theme in our work is to explore a loose kind of duality between covers and partial transversals. In Sect. 2 we demonstrate some relationships between the size of maximum partial transversals and the size of minimum covers, and between the numbers of these objects. In Sect. 3 we look at the other end of the spectrum, namely small maximal partial transversals and large minimal covers. Here we find less of a connection. We show that Latin squares of a given order have little variation in the size of their largest minimal covers, but can vary significantly in the size of their smallest maximal partial transversals. In Sect. 4 we summarise our achievements and discuss possible directions for future research.
Covers and partial transversals
In this section, we explore some basic relationships between covers and partial transversals. We first consider how to turn a partial transversal into a cover. Throughout, we will use • in an entry when its value is irrelevant to our argument. For example, (i, j, •) is the entry in row i and column j, while (•, •, k) is an arbitrary entry with symbol k.
Theorem 1 In a Latin square L of order n ⩾ 2, any partial transversal T of deficit d is contained in an (n + ⌈d/2⌉)-cover. Moreover, if T is maximal, then the smallest cover containing T has size n + ⌈d/2⌉.
Proof We begin assuming T is maximal, in which case any entry in E(L) \ T covers at most two previously uncovered lines. Let r_1, . . . , r_d, c_1, . . . , c_d and s_1, . . . , s_d denote, respectively, the rows, columns and symbols that are unrepresented in T. Start by setting C = T. Then for i ∈ {1, . . . , ⌊d/2⌋} we add (r_{2i−1}, c_{2i−1}, •), (r_{2i}, •, s_{2i−1}) and (•, c_{2i}, s_{2i}) to C. Finally, if d is odd we add (r_d, c_d, •) and (•, •, s_d) to C. This produces a cover of size n − d + ⌈3d/2⌉ = n + ⌈d/2⌉. As we covered the maximum possible number of uncovered lines at each step, no smaller cover contains T. If T is not maximal, then the above approach gives a cover C of size at most n + ⌈d/2⌉, since there may be duplication among the entries that are added. Assuming n ⩾ 2, we can simply add entries from E(L) \ C until we have a cover of size n + ⌈d/2⌉.
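The construction in the proof of Theorem 1 is explicit enough to implement directly. The sketch below (our own illustration) grows a partial transversal of deficit d into a cover of size at most n + ⌈d/2⌉; the Latin square is stored as a dictionary from cells to symbols.

```python
def extend_to_cover(L_dict, T, n):
    """Extend a partial transversal T to a cover of size at most
    n + ceil(d/2), following the pairing in the proof of Theorem 1."""
    rows = sorted(set(range(n)) - {r for r, _, _ in T})
    cols = sorted(set(range(n)) - {c for _, c, _ in T})
    syms = sorted(set(range(n)) - {s for _, _, s in T})
    col_of = {(r, s): c for (r, c), s in L_dict.items()}  # where s sits in row r
    row_of = {(c, s): r for (r, c), s in L_dict.items()}  # where s sits in column c
    C, d = set(T), len(rows)
    for i in range(d // 2):
        r1, r2 = rows[2 * i], rows[2 * i + 1]
        c1, c2 = cols[2 * i], cols[2 * i + 1]
        s1, s2 = syms[2 * i], syms[2 * i + 1]
        C.add((r1, c1, L_dict[r1, c1]))   # covers row r1 and column c1
        C.add((r2, col_of[r2, s1], s1))   # covers row r2 and symbol s1
        C.add((row_of[c2, s2], c2, s2))   # covers column c2 and symbol s2
    if d % 2:                             # one leftover row, column and symbol
        C.add((rows[-1], cols[-1], L_dict[rows[-1], cols[-1]]))
        C.add((0, col_of[0, syms[-1]], syms[-1]))
    return C

n = 4
L_dict = {(i, j): (i + j) % n for i in range(n) for j in range(n)}
T = {(0, 0, 0), (1, 2, 3)}               # a partial transversal of deficit 2
print(len(extend_to_cover(L_dict, T, n)))  # 5 = n + 1
```

For a maximal T the resulting size is exactly n + ⌈d/2⌉; for a non-maximal T, duplicates may make it smaller, matching the theorem.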
Since Shor and Hatami [12] have shown the existence of a partial transversal with small deficit, we immediately get:
Corollary 1 Every Latin square of order n has a cover of size n + O(log² n).
We now consider how to turn a cover into a partial transversal.
Theorem 2 Let L be a Latin square of order n ⩾ 1. Any (n + a)-cover of L contains a partial transversal of deficit 2a.
Proof Let R, C and S respectively be n-subsets of an (n + a)-cover C in which each row, each column and each symbol is (necessarily uniquely) represented. Note that T = R ∩ C ∩ S is a partial transversal of L. Since |C| = n + a and |C \ R| = |C \ C| = |C \ S| = a, T has size at least n − 2a, and so has deficit at most 2a. Finally, if T has a smaller deficit, we can delete entries to obtain deficit exactly 2a.
For any (n + 1)-cover of a Latin square L, the corresponding n + 1 vertices of the Latin square graph Γ_L induce a subgraph with 3 edges. (This is an example of a partial Latin square graph [10]. A partial Latin square is an n × n matrix containing at most n symbols, possibly with some empty cells, such that each row and each column contains at most one copy of each symbol. Alternatively, a partial Latin square can be viewed as a set of triples where no two triples agree in more than one coordinate.) Ignoring isolated vertices and edge colours, there are only 5 such graphs, which we denote G_1, . . . , G_5, depicted in Fig. 3. We will refer to these graphs as being the graph induced by the cover (specifically, this terminology ignores isolated vertices). Taking a conjugate of L permutes the edge colours in the graph induced by a cover, which does not change the type of graph according to our classification. A consequence of Theorem 1 is that any Latin square of order n ⩾ 2 with a partial transversal of deficit 1 has an (n + 1)-cover. Thus, if Brualdi's Conjecture is true, then all Latin squares of order n ⩾ 2 have an (n + 1)-cover. A converse of this statement is not immediate since we cannot always delete 2 entries from an (n + 1)-cover to give a partial transversal of deficit 1; see Fig. 3 (under graph G_1) for an example. However, Theorems 1 and 2 imply that a Latin square L of order n ⩾ 2 has a partial transversal of deficit 2 if and only if it has an (n + 1)-cover. We now extend this observation to minimum covers.
Theorem 3 The minimum size of a cover of a Latin square L of order n is n + a if and only if the minimum deficit of a partial transversal of L is either 2a or 2a − 1.
Proof First suppose that L has an (n + a)-cover and no smaller cover. By Theorem 2, there is a partial transversal of deficit 2a. If L has a partial transversal of deficit at most 2a − 2, then Theorem 1 implies there is a cover of size at most n + a − 1, which we are assuming is not the case. Hence, the minimum deficit of a partial transversal is either 2a or 2a − 1. For the converse, suppose the minimum deficit of a partial transversal is either 2a or 2a − 1. By Theorem 1, there is an (n + a)-cover. If there is a cover of size at most n + a − 1, then Theorem 2 implies there is a partial transversal of deficit at most 2a − 2, which we are assuming is not the case.
For the a = 0 case in Theorem 3, a transversal of a Latin square of order n is also an n-cover. For the a = 1 case, cyclic group tables of even order are examples for which the minimum size of a cover is n + 1 and the minimum deficit of a partial transversal is 1. Brualdi's Conjecture implies the minimum size of a cover of an order-n Latin square is n or n + 1. Figure 3 also includes an example of a Latin square of order 5 in which all five of the possible induced subgraphs are achieved by different (n + 1)-covers.
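The graph induced by a cover is also easy to compute. The sketch below (our own illustration) lists the edges among the entries of an (n + 1)-cover; the example cover of the Cayley table of Z_4 induces a path with 3 edges, one of the five types in Fig. 3.

```python
from itertools import combinations

def induced_edges(cover):
    """Edges of the Latin square graph restricted to a cover: two entries
    are adjacent when they share a row, a column or a symbol."""
    return [(e, f) for e, f in combinations(sorted(cover), 2)
            if e[0] == f[0] or e[1] == f[1] or e[2] == f[2]]

# A 5-entry cover of the Cayley table of Z_4 (n = 4, so an (n+1)-cover):
# a maximal partial transversal of deficit 1, extended as in Theorem 1.
cover = {(0, 0, 0), (1, 1, 2), (2, 3, 1), (3, 2, 1), (0, 3, 3)}
edges = induced_edges(cover)
print(len(edges))   # 3
for e, f in edges:
    print(e, f)
```

Since an (n + 1)-cover has n + 1 entries representing 3n lines, exactly three line-representations are duplicated, which is why exactly three edges arise.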
We make the following observations about deleting vertices from the graphs in Fig. 3.
- For graph G_4, we can delete one vertex to create an edgeless graph, so deleting the corresponding entry from the (n + 1)-cover gives a transversal. Thus (n + 1)-covers that induce G_4 are not minimal, unlike those that induce the other four graphs (G_1, G_2, G_3 and G_5).
- For graphs G_2, . . . , G_5, we can delete two vertices to create an edgeless graph, and deleting the corresponding entries from the (n + 1)-cover gives a near-transversal.
- For graph G_1, we must delete at least 3 vertices to create an edgeless graph.
- For any vertex v of any of the five graphs G_1, . . . , G_5, it is possible to delete 3 or fewer vertices to create an edgeless graph without deleting v. Thus when n ⩾ 2, every entry in an (n + 1)-cover belongs to a partial transversal of deficit 2.
We define q_i = q_i(L) to be the number of (n + 1)-covers that induce G_i in a Latin square L. Across all species of order n ⩽ 8, we found all (n + 1)-covers. Table 1 lists the average number (rounded off) of (n + 1)-covers that induce each graph across these species. Table 1 also gives the least number of (n + 1)-covers found of each of the 5 types. It is interesting to note that for each n, the number of all (n + 1)-covers is fairly consistent across the Latin squares of order n (in the sense that the range is small compared to the average). This is not true, for example, for the number of transversals.
Table 1 The number of (n + 1)-covers that induce G_i, averaged over species of Latin squares of order n. We also give the minimum and maximum numbers of (n + 1)-covers found in a Latin square. The columns headed "All" refer to the count of all (n + 1)-covers irrespective of which G_i they induce.
Theorems 2 and 3 leave open some possibilities, e.g., a Latin square might have two minimum covers that differ in terms of the smallest deficit of the partial transversals that they contain. The following theorem gives some restrictions in this context.
Theorem 4 Let L be a Latin square of order n ⩾ 2 in which the minimum deficit of a partial transversal is d. Then:
1. the minimum size of a cover of L is n + ⌈d/2⌉,
2. any partial transversal T of deficit d is contained in a minimum cover of L,
3. any minimum cover contains a partial transversal of deficit d if d is even, or of deficit d + 1 if d is odd, and
4. if d is even, any minimum cover that contains an entry e contains a minimum-deficit partial transversal that contains e.
Proof If L has a cover of size less than n + ⌈d/2⌉, then Theorem 2 implies it has a partial transversal of deficit less than d, contradicting the assumption that d is the minimum deficit. So any cover has size at least n + ⌈d/2⌉, and Theorem 1 implies T is contained in some cover of size n + ⌈d/2⌉. Theorem 3 implies that any (n + ⌈d/2⌉)-cover contains a partial transversal of deficit 2⌈d/2⌉ or 2⌈d/2⌉ − 1. When d is odd, these equal d + 1 and d, respectively, and we can delete an entry from a partial transversal of deficit d to obtain one of deficit d + 1. When d is even, Theorem 2 implies the cover contains a partial transversal of deficit d, and we can ensure e belongs to this partial transversal by choosing e ∈ R ∩ C ∩ S in the proof of Theorem 2.
Theorem 4 implies nothing consequential when d = 0. Cyclic groups of even order have minimum deficit d = 1, and are thus a convenient example to verify that the conditions of Theorem 4 cannot be tightened.
To date, we only have examples of Latin squares where the minimum deficit of a partial transversal is d ∈ {0, 1}, with Brualdi's Conjecture implying that this is always the case, so we cannot inspect cases with d ⩾ 2. By inspecting cyclic group tables of even order, we make the following observations:
- A minimum cover might not contain a partial transversal of minimum deficit d, but instead have one of deficit d + 1. Figure 4 gives an example of this; we give four covers of the Cayley table of Z_8, one of which contains no partial transversal of deficit 1.
- A minimum cover might contain two maximal partial transversals, one of deficit d and one of deficit d + 1. The cover that induces G_2 in Fig. 4 has this property.
- A minimum cover might contain no partial transversals of minimum deficit d, with the partial transversals of deficit d + 1 it contains all being maximal. For the Cayley table of Z_10, Fig. 5 shows an 11-cover that contains no partial transversals of deficit 1, and the 8 partial transversals of deficit 2 it contains are all maximal.
Fig. 4 Minimum covers of the Cayley table of Z_8 that induce graphs isomorphic to G_1, G_2, G_3 and G_5, respectively. For the three right-most covers, we can delete 2 entries from the highlighted cover to give a partial transversal of deficit 1, but we must delete at least 3 entries from the left-most cover to obtain a partial transversal, which will have deficit at least 2.
Given the five possible graph structures of (n + 1)-covers in Fig. 3, we can enumerate the number of partial transversals of deficit d they contain, which we do in Table 2.
Table 2 The number of distinct partial transversals of deficit d ∈ {0, 1, 2} contained in an (n + 1)-cover that induces the subgraph G_i.
The terms in Table 2 are derived as follows: for each way we can delete b + 1 vertices from the graph G_i to form an edgeless graph, we can form a partial transversal of deficit d by deleting them along with a further d − b entries that are not involved in G_i. Each partial transversal of deficit d generated this way is distinct. Generally, these are not maximal partial transversals, but they are maximal partial transversals when d = 0, or when d = 1 for graphs other than G_4. Table 2 shows that the number of partial transversals of deficit d in an (n + 1)-cover depends significantly on its structure, e.g., when d = 2, the number varies from Θ(1) to Θ(n²). We therefore do not anticipate a simple relationship between the number of partial transversals and the number of (n + 1)-covers in general. However, we have the following results for (n + 1)-covers.
Theorem 5 Let T be a maximal partial transversal of deficit d in a Latin square L of order n ⩾ 3. Then the number of (n + 1)-covers of L that contain T is n² − n if d = 0, 3(n − 1) if d = 1, 8 if d = 2, and 0 if d ⩾ 3.
Proof The d = 0 case is trivial, so we begin with the case d = 1. Assume row i, column j and symbol k are unrepresented by T. Since T is maximal, (i, j, k) ∉ E(L), so to extend T to an (n + 1)-cover we must add a pair of entries of the form {(i, j, •), (•, •, k)}, {(i, •, k), (•, j, •)} or {(•, j, k), (i, •, •)}. This gives 3n − 3 distinct ways to extend T to an (n + 1)-cover. Each such cover induces a graph of type G_2, G_3 or G_5 (cf. Fig. 3). In particular, as G_4 does not arise, these (n + 1)-covers are minimal. Now assume d = 2. Assume rows i and i′, columns j and j′ and symbols k and k′ are unrepresented by T. Since T is maximal, there is no entry whose row, column and symbol are all unrepresented by T. One choice of three added entries extends T to an (n + 1)-cover, and we obtain all others by some combination of swapping i and i′, swapping j and j′, and/or swapping k and k′.
When d ⩾ 3, Theorem 1 implies that T does not extend to an (n + 1)-cover. In the d = 2 case of Theorem 5, the 8 distinct (n + 1)-covers may or may not be minimal, depending on the structure of L. For example, when n = 3, they are all non-minimal (since Latin squares of order 3 have no minimal 4-covers).
Theorem 6 Let L be a Latin square of order n ⩾ 3. Let p_max be the number of maximal partial transversals of deficit 1 in L. Let q_min be the number of minimal (n + 1)-covers in L. Then q_min = q_1 + q_2 + q_3 + q_5 and
q_1 + (3/2)(n − 2) p_max ⩽ q_min ⩽ q_1 + (1/2)(3n − 4) p_max.    (1)
If t is the number of transversals in L, then the number p of (not necessarily maximal) partial transversals of deficit 1 of L and the number q of (not necessarily minimal) (n + 1)-covers of L satisfy
q_1 + (3/2)(n − 2)(p − nt) ⩽ q − n(n − 1)t ⩽ q_1 + (1/2)(3n − 4)(p − nt).    (2)
Proof The theorem is easily checked when n = 3, since p_max = q_min = q_1 = 0 in this case, so assume n ⩾ 4. Theorem 5 implies that each maximal partial transversal T of deficit 1 embeds in exactly 3(n − 1) distinct minimal (n + 1)-covers. Moreover, in the proof of Theorem 5, we observed that these (n + 1)-covers are of type G_2, G_3 or G_5, which contain exactly 2, 3 and 3 maximal partial transversals of deficit 1, respectively. Thus,
3(n − 1) p_max = 2q_2 + 3q_3 + 3q_5.    (3)
We also know q_min = q_1 + q_2 + q_3 + q_5, since (n + 1)-covers are minimal if and only if they do not induce G_4. Hence
q_min = q_1 + (3(n − 1) p_max − q_3 − q_5)/2.    (4)
Let T be a maximal partial transversal of L of deficit 1. Up to isotopism of L, we may assume that T = {(i, i, i) : i ∈ Z_n \ {z}}, where z = n − 1, and L_zz = 0. Define r such that L_rz = z and define c such that L_zc = z. Among the 3(n − 1) distinct minimal (n + 1)-covers containing T, there are three families of covers inducing G_2, and each family accounts for at least n − 4 distinct minimal (n + 1)-covers containing T. Since there are 3(n − 1) minimal (n + 1)-covers containing T, there can be at most 9 that do not induce G_2, and hence either induce G_3 or G_5. We note that T is contained in at least 3 distinct minimal (n + 1)-covers that do not induce G_2, corresponding to the three choices of two entries from {(r, z, z), (z, c, z), (z, z, 0)}. This means there are between 3 and 9 distinct (n + 1)-covers that induce G_3 or G_5 and contain T. Also, recall that each (n + 1)-cover that induces G_3 or G_5 contains exactly 3 maximal partial transversals of deficit 1. This gives 3 p_max ⩽ 3q_3 + 3q_5 ⩽ 9 p_max, or simply p_max ⩽ q_3 + q_5 ⩽ 3 p_max, which we substitute into (4) to obtain (1). The number p of (not necessarily maximal) partial transversals of deficit 1 of L is p = p_max + nt, and the number q of (not necessarily minimal) (n + 1)-covers of L is q = q_min + q_4 = q_min + n(n − 1)t. Combining this with (1), we get (2).
Our next result is motivated by the work of Belyavskaya and Russu (see [7, p. 179]), who showed that Cayley tables of certain groups do not have maximal partial transversals of deficit 1, in which case p_max = q_2 = q_3 = q_5 = 0. This is an obstacle to finding a non-trivial lower bound on p_max that is only a function of q_min and n.
Lemma 1 Let L be the Cayley table of an abelian group G of order n. If the Sylow 2-subgroups of G are trivial or non-cyclic then L has no maximal partial transversal of deficit 1 (and hence has no (n + 1)-cover inducing G_2, G_3 or G_5). On the other hand, if the Sylow 2-subgroups of G are non-trivial and cyclic then L has no transversal (and hence has no (n + 1)-cover inducing G_4).
Proof Let X_G denote the sum of the elements of G.
It is well known (see, for example, [7, p. 9]) that X_G is the identity if the Sylow 2-subgroups of G are trivial or non-cyclic, and is otherwise equal to the unique element of order 2 in G. In the latter case there are no transversals in L ([7, p. 8]), as claimed, so we concentrate on the former case. Suppose T is a partial transversal of deficit 1 in L and that r, c and s are respectively the row, column and symbol that are not represented in T. Since X_G is the identity, summing the rows, columns and symbols of the entries of T shows that s = r + c, so (r, c, s) ∈ E(L) and T ∪ {(r, c, s)} is a partial transversal. Hence T is not maximal, from which the result follows.
Table 3 gives the value of q_i for the Cayley table of Z_n. The zeroes in Table 3 are all explained by Lemma 1, except that q_5 = 0 for Z_6, which may just be a small-order quirk.
Table 3 The number q_i of (n + 1)-covers of Z_n that induce G_i (columns q_1, q_2, q_3, q_4, q_5 and All).
The maximal partial transversal highlighted in the accompanying Latin square is contained in exactly 9 (necessarily minimal) (n + 1)-covers that do not induce G_2. We also saw during the proof of Theorem 6 that every maximal partial transversal of deficit 1 is contained in at least 3 (necessarily minimal) (n + 1)-covers that do not induce G_2. These observations present some obstacles to improving the bounds given in (1). We also observe that in a general Latin square, (3) implies that q_2 ≡ 0 (mod 3). Akbari and Alipour [1] showed that p_max ≡ 0 (mod 4). Thus, 2q_2 ≡ q_3 + q_5 (mod 4) by (3). We also note that q_4 = n(n − 1)t, where t is the number of transversals, and t is even when the order n is even [4]. Also, we can delete an entry from any (n + 1)-cover of type G_3 (when n ⩾ 5) or type G_5 (when n ⩾ 4) and add another entry to obtain an (n + 1)-cover of type G_2, implying that if q_2 = 0, then q_3 = q_5 = 0 (which occurs for the odd-order cyclic group tables).
The question of which entries within Latin squares belong to transversals has also been studied. The parallel topic for covers plays a role throughout this paper, so we mention the following theorem, which can be derived from [8, 22].
Theorem 7 For every n ⩾ 5, there exists a Latin square of order n that has transversals, but also has an entry that is not in any transversal. Consequently, for every n ⩾ 5, there exists a Latin square of order n that contains an entry that is not in any minimum cover nor in any partial transversal of minimum deficit.
Theorem 7 does not extend to any order n ⩽ 4, since all Latin squares of those orders are isotopic to the Cayley table of a group. Such Latin squares have autotopism groups that act transitively on entries, and hence every entry will be in a partial transversal of minimum deficit and also every entry will be in a minimum cover. Consider a Latin square L in which the minimum deficit of a partial transversal is d. By Theorem 4, every entry of L that is in a partial transversal of deficit d is also in a minimum cover. It is not clear if the converse holds when d is odd (although Theorem 4 shows the converse holds when d is even). There is no known Latin square of order n ⩾ 2 that has an entry that is not in a partial transversal of deficit 1. If this property holds in general, then every entry that is in a minimum cover is also in a partial transversal of minimum deficit. To finish this section, we observe that if a Latin square L of order n ⩾ 5 has a transversal, then any entry of L belongs to a minimal (n + 1)-cover. Therefore, the entries in Theorem 7 that are not in minimum covers do belong to minimal covers of size one larger than the minimum.
Theorem 8 If a Latin square L of order n ⩾ 5 has a transversal T, then each entry of L belongs to some minimal (n + 1)-cover.
Proof A computer search reveals that any entry in any Latin square of order n ∈ {5, 6} belongs to a minimal (n + 1)-cover. Now assume n ⩾ 7. By applying an isotopism, we may assume that T = {(i, i, i) : i ∈ Z_n}. Let (a, b, c) be an arbitrary entry of L. Now we may assume that (a, b, c) ∈ T and that a = b = c = 0. Let i be such that i ∉ {0, 1} and L_1i = 0. Consider a first candidate set C_1: by a similar argument as before, if j, j′ and j″ are not all the same, then C_1 is a minimal (n + 1)-cover containing (a, b, c). If j = j′ = j″, then note that j ∉ {0, 1, i}, and let k be such that k ∉ {0, 1, i, j} and L_1k ∉ {0, i} (this choice of k is possible since n ⩾ 7). Consider a second candidate set C_2: by a similar argument as before, if ℓ, ℓ′ and ℓ″ are not all the same, then C_2 is a minimal (n + 1)-cover containing (a, b, c). If ℓ = ℓ′ = ℓ″, then note that ℓ ∉ {0, 1, j, k}, and L must have a particular structure. By removing the entries containing the symbols 1, i, j, k and ℓ from T and adding the shaded entries, we have a minimal (n + 1)-cover containing (a, b, c).
Theorem 8 does not hold for orders n ∈ {1, 3, 4}, as the Latin squares of those orders that have transversals do not have minimal (n + 1)-covers (and Theorem 8 is vacuously true when n = 2).
Large minimal covers
In this section, we consider the question of how large a minimal cover in a Latin square of order n can be. When it exists, a transversal (which has size n) is the smallest minimal cover possible in a Latin square of order n. The size of the largest minimal cover is harder to establish. It is clear that it cannot be larger than 3n, since there are only 3n lines and each entry in a minimal cover uniquely represents at least one line. Perhaps surprisingly, this is close to the true answer. We will show that every Latin square of order n has a minimal cover with size asymptotically equal to 3n as n → ∞. To work towards finding the size of the largest minimal covers, we begin with a simple observation.
Lemma 2 Every Latin square L of order n ⩾ 1 contains a minimal cover of size 2n − 1. Furthermore, any entry of L belongs to a minimal cover of size 2n − 1.
Proof Take all entries that are either in the rth row or in the cth column. This gives a set of 2n − 1 entries in which every line is represented. The entry (r, c, L_rc) uniquely represents its symbol. The other entries in row r uniquely represent their respective columns, and the other entries in column c uniquely represent their respective rows. Hence the cover is minimal.
We consider a more general problem that will be easier to handle. If an n × n partial Latin square on the symbol set Z_n has each row, column and symbol represented at least once, we call it a potential cover of order n. By definition, a cover admits a completion to a Latin square, whereas not all potential covers admit a completion. Figure 6 gives an example of two potential covers, one of which is a cover. A potential cover C of order n is minimal if, for all e ∈ C, the set C \ {e} is not a potential cover of order n. We will bound the maximum size of minimal potential covers, thereby giving an upper bound on the cardinality of minimal covers.
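Before bounding minimal potential covers, the small constructions above can be checked mechanically. The sketch below (our own illustration) builds the row-plus-column cover from the proof of Lemma 2 and verifies that it is a minimal cover.

```python
def row_plus_column_cover(L_dict, n, r=0, c=0):
    """The (2n-1)-entry cover of Lemma 2: all of row r and column c."""
    return ({(r, j, L_dict[r, j]) for j in range(n)}
            | {(i, c, L_dict[i, c]) for i in range(n)})

def is_minimal_cover(cover, n):
    def covers(entries):
        return ({r for r, _, _ in entries} == {c for _, c, _ in entries}
                == {s for _, _, s in entries} == set(range(n)))
    # Minimal: a cover from which no single entry can be removed.
    return covers(cover) and all(not covers(cover - {e}) for e in cover)

n = 5
L_dict = {(i, j): (i + j) % n for i in range(n) for j in range(n)}
C = row_plus_column_cover(L_dict, n)
print(len(C), is_minimal_cover(C, n))   # 9 True
```

Deleting any entry loses the unique representation of some line, exactly as argued in the proof.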
Given a potential cover C, define U_R = U_R(C) to be the set of all entries that uniquely represent a row but no other line, U_RC = U_RC(C) to be the set of all entries that uniquely represent a row and a column but no other line, U_RCS = U_RCS(C) to be the set of all entries that uniquely represent a row, column and symbol, and define U_C, U_S, U_RS and U_CS accordingly. An example is given in Fig. 7. If an entry does not uniquely represent a row, column or symbol, then it can be deleted to give a smaller potential cover, i.e., the potential cover is not minimal. If a potential cover C is minimal, then C = U_R ∪ U_C ∪ U_S ∪ U_RC ∪ U_RS ∪ U_CS ∪ U_RCS.
Throughout the next proof, we edit a minimal potential cover C by deleting a few entries from it, and adding others, which creates a modified minimal potential cover. After such edits, to verify the result is indeed a minimal potential cover, we need to check the following three properties:
1. Partial Latin square. When adding entries, we must ensure we do not violate the partial Latin square property by adding an entry to an already filled cell, or by adding a symbol to a row or column that already contains that symbol.
2. Potential cover. After deleting entries from a minimal potential cover, we necessarily end up with some rows, columns and/or symbols unrepresented. These rows, columns and/or symbols must be represented by newly added entries.
3. Minimality. We need to verify that each entry uniquely represents some row, column or symbol. We need only check this for the newly added entries and any entries that share a row, column or symbol with a newly added entry.
This last point is the most subtle: it is easy to overlook that adding an entry might make another entry redundant. We omit details of such routine checks without further comment.
Lemma 3 Let n ⩾ 2. There exists a minimal potential cover M of order n which is at least as large as all other minimal potential covers of order n, and which additionally satisfies U_RC = U_RS = U_CS = U_RCS = ∅ (so that |M| = |U_R| + |U_C| + |U_S|) and |U_S| ⩽ (n − |U_R|)(n − |U_C|), together with the symmetric bounds on |U_R| and |U_C|.
Proof We assume that C is some minimal potential cover of order n of the largest possible size. If |C| ⩽ 2n − 1, then the cover described in the proof of Lemma 2 satisfies the required conditions. So we may assume that |C| ⩾ 2n (which implies that n ⩾ 3). We first argue that U_RCS = ∅. If (r, c, s) ∈ U_RCS, then since |C| ⩾ 2n there is a row r′ ≠ r that contains at least two entries of C and, similarly, there is some column c′ ≠ c that contains at least two entries of C. But then (C \ {(r, c, s)}) ∪ {(r′, c, s), (r, c′, s)} is a larger potential cover, contradicting the choice of C. So U_RCS = ∅. Next, we explain how we can modify C in such a way that U_S and/or U_RC becomes empty, without decreasing the size of C nor violating the minimal potential cover property. Suppose that there exist (r_0, c_0, s_0) ∈ U_S and (r_1, c_1, s_1) ∈ U_RC. Note that the entries (r_0, c_0, s_0) and (r_1, c_1, s_1) cannot agree in any coordinate. We now split into three cases.
Case I: Symbol s_1 does not appear in row r_0 nor in column c_0. In this case, C can be modified into a larger minimal potential cover, contradicting the choice of C.
Case II: Symbol s_1 is represented at most twice in C. Since (r_1, c_1, s_1) ∈ U_RC, we know that s_1 must be represented exactly twice in C. It follows that s_1 cannot occur in both row r_0 and column c_0. Suppose s_1 does not occur in row r_0 (the case when s_1 does not occur in column c_0 can be resolved symmetrically). Since |C| ⩾ 2n, there exists a column c′ ≠ c_0 that is not uniquely represented in C.
Since Case I does not apply, s_1 occurs in column c_0 and hence does not occur in column c′. Thus, replacing (r_1, c_1, s_1) with entries of the form (•, c_1, s_1) and (r_1, c′, s_1) gives a larger potential cover than C, contradicting the choice of C.

Case III: Symbol s_1 is represented at least three times in C. In this case, applying the switching (5) gives another minimal potential cover with the same cardinality as C. The switching (5) removes one entry from each of U_S and U_RC, and replaces them with new entries in U_C and U_R respectively.

By iteration, we can reach a point where at least one of U_S and U_RC is empty. A similar process of switchings allows us to reach a point where one of U_R and U_CS is empty, and also one of U_C and U_RS is empty. We continue this process until at least one set from each pair is empty. Note that when making the switch (5), we will increase the size of the two sets in question. However, no matter which switching we perform, the number of entries in U_RC ∪ U_RS ∪ U_CS decreases, so the process terminates. Call the resulting minimal potential cover M.

Note that M satisfies |U_R| + |U_RC| + |U_RS| < n, since there are only n rows and not all of them are uniquely represented. Similarly, |U_C| + |U_RC| + |U_CS| < n. If U_S = ∅, then |M| ≤ (|U_R| + |U_RC| + |U_RS|) + (|U_C| + |U_RC| + |U_CS|) < 2n, which contradicts the assumption that |C| ≥ 2n. Therefore U_S ≠ ∅. By similar arguments, U_R ≠ ∅ and U_C ≠ ∅. By the deductions above, we have U_CS = U_RS = U_RC = ∅, implying that |M| = |U_R| + |U_C| + |U_S|. The entries in U_S cannot share a row with any entry in U_R, nor share a column with any entry in U_C, so they lie in an (n − |U_R|) × (n − |U_C|) submatrix, implying that |U_S| ≤ (n − |U_R|)(n − |U_C|). Symmetric results hold for U_R and U_C, which completes the proof.

Theorem 9 Every minimal cover of a Latin square of order n has size at most 3(n + 1/2 − √(n + 1/4)).

We note that −1/2 + √(n + 1/4) is a positive integer t when n = t² + t. We next show that the bound in Theorem 9 is achieved for orders n of this form, and therefore by infinitely many covers. Moreover, we show that all theoretically possible minimal cover sizes are simultaneously achieved by different covers in a single Latin square of order t² + t.

Lemma 4 Let t ≥ 2 and let L be a Latin square of order n = t² + t with a transversal T and a minimal cover C of size 3t² such that |U_R| = |U_C| = |U_S| = t² and |U_R ∩ T| = t. Then L contains a minimal c-cover for all c ∈ {t² + t, . . . , 3t²}.

Proof Since |U_R| = t², all elements of U_C ∪ U_S must be contained in t rows. Similarly, U_R ∪ U_S must be contained in t columns, and thus U_S lies in a t × t submatrix. We now argue that U_S ∩ T = ∅. Similarly, U_C ∩ T = ∅, since at most n − |U_S| = t distinct symbols occur in U_R ∪ U_C and |U_R ∩ T| = t. Permute the rows, columns and symbols of L in such a way that (a) T = {(i, i, i) : 0 ≤ i < t² + t}, (b) the entries in U_S comprise the bottom-left t × t submatrix, and (c) the symbol in the bottom-left entry is t² − 1 (this simplifies Case III below). Thus, L has the following structure: Clearly, T itself provides a minimal (t² + t)-cover, and we also know that L has a minimal (t² + t + 1)-cover by Theorem 8. For c ∈ {t² + t + 2, . . . , 3t²}, we break into three cases. In each of these cases, a set of entries from T is added to C, and then entries that have become redundant are removed. For each entry added that is not in the first t columns nor in the last t rows, three redundant entries will be removed (one from each of U_R, U_C and U_S). These entries correspond to the set Y below.
For each entry added in the last t rows, two redundant entries will be removed (one from each of U C and U S ). These entries correspond to the set X below. The 3t lines that are not uniquely represented by C are (a) the first t columns, (b) the last t rows and (c) the symbols in U R ∩ T . In all cases the modifications that we make leave U R ∩ T in the resulting cover, so the lines in (a) and (c) will still be represented. The representatives of the last t rows will be addressed in each case. The other checks required to show that the resulting set of entries is a minimal c-cover are straightforward and will be omitted. If Z ⊆ Z n , we define V R (Z ) = {(i, •, •) ∈ U R : i ∈ Z }, and we define V C and V S similarly. Whenever we use this notation, the elements in Z will be in one-to-one correspondence with elements of V R (similarly for V C or V S ). In each case, will be a minimal cover of the appropriate size. Note that in each case, and Y = ∅. Note that since |X | + 2|Y | < t, the elements of C (X, Y ) in the bottom-left t × t submatrix cover the last t rows of L. Thus, C (X, Y ) is a minimal c-cover. Proof For each order, we will give an example of a square that satisfies the properties required in Lemma 4. When t = 2, the following Latin square satisfies the requirements: and when t = 6, the Latin square given in Fig. 12 in the Appendix satisfies the requirements. We may now assume that t / ∈ {2, 6}, so there exists a pair (A, B) of orthogonal Latin squares of order t. Define a (t 2 + t) × (t 2 + t) matrix D by filling cell (αt + r, βt + c), for α, β ∈ {0, . . . , t} and r, c ∈ {0, . . . , t − 1}, with the symbol This means, for example, that row 0 has cell (0, βt +c) filled with symbol (A 0c −(β +1), B 0c ) whenever 0 β t and 0 c t − 1. Thus each symbol in Z t+1 × Z t occurs exactly once as we iterate over β and c, so the first row is Latin. A similar argument holds for each row and each column, so D is a Latin square. An example of this construction when t = 4 is given in Fig. 8. Consider the set of entries in the bottom-left t × t submatrix of D: We next argue that C = D R ∪D C ∪D S is a minimal cover of D, where U R = D R , U C = D C , and U S = D S . Each symbol is covered by C , as described above. The first t columns are covered by D S . For any other column βt + c (with β ∈ {1, . . . , t} and c ∈ {0, . . . , t − 1}), let r be such that A rc = β − 1. The entry (t 2 + r, βt + c, •) ∈ D C covers column βt + c. Since there were t 2 such columns to cover and |D C | = t 2 , no entries in D C are redundant (all entries in D R and D S are contained in the first t columns). A similar argument holds for covering the rows. Thus, C is a minimal cover of size 3t 2 with |U R | = |U C | = |U S | = t 2 . Before we can apply Lemma 4, we must now find a transversal T in D such that |U R ∩ T | = t. Our next goal is to show that all Latin squares have a minimal cover that is asymptotically equal to the bound in Theorem 9. To do so, we introduce the notion of a partial minimal cover. If L is a partial Latin square and P ⊆ E(L) such that, for some e ∈ P, both P and P \ {e} represent the same lines, then we call e redundant. An entry (r, c, s) ∈ P is redundant if and only if there exists three other entries of the form (r, •, •), (•, c, •) and (•, •, s) in P. We define a partial minimal cover as any P ⊆ E(L) that has no redundant entries. We can iteratively delete redundant entries from any P ⊆ E(L) to obtain a partial minimal cover of size no more than |P| in which the same lines are represented. 
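The iterative deletion just described is straightforward to phrase as code. The Python sketch below (ours; the function name is hypothetical) repeatedly removes redundant entries, using the characterisation above: (r, c, s) is redundant exactly when P contains three other entries of the form (r, •, •), (•, c, •) and (•, •, s).

```python
def prune_to_partial_minimal(P):
    """Delete redundant entries from a set P of (r, c, s) triples until none
    remain; the result is a partial minimal cover of size at most |P| that
    represents exactly the same lines as P."""
    P = set(P)
    changed = True
    while changed:
        changed = False
        for e in list(P):
            r, c, s = e
            rest = P - {e}
            if (any(x[0] == r for x in rest) and any(x[1] == c for x in rest)
                    and any(x[2] == s for x in rest)):
                P.remove(e)  # e uniquely represents no line
                changed = True
                break  # re-examine from scratch after each deletion
    return P
```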
It is important to note that not every partial minimal cover can be extended to a minimal cover; Fig. 9 gives two examples of partial minimal covers that cannot be extended to a minimal cover (nor even to a larger partial minimal cover). Even though a partial minimal cover does not necessarily extend to a minimal cover, we can find a minimal cover that is at least as large as any partial minimal cover.

Lemma 5 Let L be a Latin square of order n and P be a partial minimal cover of L. Then L contains a minimal cover of size at least |P|.

Proof If |P| ≤ 2n − 1, then the cover described in the proof of Lemma 2 satisfies the constraints, so assume |P| ≥ 2n. If P is a minimal cover, then the statement is trivial, so suppose there is some line, say row r, that is not covered by P. Since |P| ≥ 2n, there exists a column c that is represented at least twice in P. Define P′ = P ∪ {(r, c, s)}, where s = L_rc. If P′ is not a partial minimal cover, then there must be some entry in row r, in column c or with symbol s that is redundant. Since (r, c, s) is the only entry in row r, it is not redundant. Since there are at least three entries in column c in P′, no entry in column c is redundant in P′ (otherwise we contradict the minimality of P). However, if s is represented exactly once in P, by (r_0, c_0, s) say, then that entry is redundant in P′ if and only if there are other entries covering row r_0 and column c_0. In this case, we define P″ = P′ \ {(r_0, c_0, s)}; otherwise, we define P″ = P′. Note that in either case, P″ is a partial minimal cover that covers strictly more lines than P and is at least as big as P. We repeat the above process until all lines are covered.

Next, we need a technical lemma.

Lemma 6 Let ε > 0 and let G be a bipartite graph with parts A and B, where |B| = n, that has n^{3/2+ε} − O(n^{1+2ε}) edges and maximum degree at most n^{1/2+ε}. Then there is a set U ⊆ A with |U| = n^{1/2+ε} whose neighbourhood contains all but O(n^{1/2+2ε}) vertices of B.

Proof Let B′ ⊆ B be the set of vertices in B of degree at least (1/2)n^{1/2+ε}. Counting edges shows that |B′| ≥ n − O(n^{1/2+2ε}). Consider choosing a set U ⊆ A of size |U| = n^{1/2+ε} uniformly at random. For any v ∈ B′ the probability that v has no neighbour in U is o(1). It follows that for large n there is some choice of U whose neighbourhood includes B′, and we are done.

Theorem 11 Let ε > 0 be fixed. Every Latin square of order n has a minimal cover of size at least 3n − O(n^{1/2+ε}).

Proof If ε ≥ 1/2, then the theorem follows from Lemma 2, so assume that ε < 1/2. Suppose L is a Latin square of order n and let ψ = n^{1/2+ε}. We gradually build a large partial minimal cover C for L. Define S to be a set of n − O(ψ) entries with distinct symbols, occupying O(ψ) rows and columns, and (provisionally) initialise C to be this set of entries. By removing at most ψ entries from C if necessary, we identify a set Σ of ψ symbols that are not yet represented in C. Next we form a bipartite graph B_2. The vertices of B_2 correspond to the rows and columns of L that do not intersect S. We place an edge from row vertex r to column vertex c if and only if L_rc ∈ Σ. Since S has O(n^{1/2+ε}) rows and O(n^{1/2+ε}) columns, B_2 has nψ − O(n^{1/2+ε}ψ) = n^{3/2+ε} − O(n^{1+2ε}) edges and maximum degree at most ψ. Hence we can apply Lemma 6 twice to find a set U_2 of rows and a set U_3 of columns with desired properties that we now describe. First, they do not intersect S. Second, they are small enough that |U_2| = O(n^{1/2+ε}) and |U_3| = O(n^{1/2+ε}). We (provisionally) include in C any entry containing a symbol in Σ in the rows of U_2 and/or the columns of U_3. Lemma 6 implies that these entries cover a set U_4 of n − O(n^{1/2+2ε}) rows and a set U_5 of n − O(n^{1/2+2ε}) columns. At this point, C may not be a partial minimal cover, so we iteratively remove redundant entries from C.
Afterwards, the following three sets, each comprising n − O(n^{1/2+2ε}) lines, are covered, and no entry in C can cover more than one of the following lines:
- the rows in U_4 that are not in U_2;
- the columns in U_5 that are not in U_3;
- the symbols other than those in Σ.

Thus C is a partial minimal cover of size at least 3n − O(n^{1/2+2ε}). By Lemma 5, there is a minimal cover of L of size 3n − O(n^{1/2+2ε}). We replace ε by ε/2 to complete the proof.

Next, we report on some computations of sizes of minimal covers for small Latin squares. The Cayley tables of the groups Z_3 and Z_2 × Z_2 have transversals and (n + 2)-covers, but do not have any minimal (n + 1)-covers. Thus the spectrum of sizes of minimal covers is not continuous in these two cases. However, these two Latin squares may be small-case anomalies, since we found no other Latin squares of order up to 8 with a gap in their spectrum. For orders n ≤ 5, the minimal cover constructed in Lemma 2 meets the bound in Theorem 9 and hence has maximum possible size. For each order in the range 6 ≤ n ≤ 9, we found a Latin square that has no minimal cover meeting the bound in Theorem 9. Our computations were exhaustive for 6 ≤ n ≤ 8, where there is a gap of only 1 between the size of the cover in Lemma 2 and the bound in Theorem 9. For n = 6, there are 6 species that meet the bound and 6 that do not; neither group table meets the bound. For n = 7, there are 145 species that meet the bound. The 2 species that do not meet the bound contain the group Z_7 and the Steiner quasigroup. For n = 8 there are 283654 species that meet the bound. The 3 species that do not meet the bound contain the dihedral group, the elementary abelian group, and the Latin square obtained by turning an intercalate in the elementary abelian group (that is, by replacing a 2 × 2 Latin subsquare with the other possible subsquare on the same two symbols). Note that the autotopism group of the elementary abelian group acts transitively on the intercalates, so it does not matter which intercalate gets turned. We could not do exhaustive computations for all Latin squares of order 9, but we confirmed that Z_3 × Z_3 meets the bound in Theorem 9, whilst Z_9 does not. The largest minimal cover in Z_9 has size 18, which is one more than the size of the example in Lemma 2 but one less than the bound in Theorem 9.

In Sect. 2, we showed a kind of duality between minimal covers and maximal partial transversals. However, we next reveal a distinction between the behaviours of these objects. We begin with the following theorem, which gives the values of k and n for which there exists a Latin square of order n ≥ 5 with a maximal partial transversal of deficit k. The n = 2k case of the theorem is immediate, by simply taking a direct product of an idempotent Latin square of order k with a Latin square of order 2. So we may assume that k ≤ n − k − 1. Also, note that n − k ≥ ⌈n/2⌉ ≥ 3. If n − k = 6 and k ∈ {4, 5}, we define M as given in Fig. 10. In all other relevant cases, we can find a Latin square of order n − k with k + 1 disjoint transversals [21]. Applying an isotopism, we get an idempotent Latin square M = [M_ij] of order n − k. It has k disjoint transversals, denoted d_σ for σ ∈ {n − k, . . . , n − 1}, which do not intersect the main diagonal. We replace the symbols in {k, . . . , n − k − 1} in each d_σ by the symbol σ, and call the result M′. We give an example of this construction in Fig. 11. Thus, M′ is idempotent and contains n − k copies of each symbol in {0, . . . , k − 1} and n − 2k copies of each symbol in {k, . . . , n − 1}.
Ryser's Theorem [18] implies that M′ embeds in a Latin square L of order n; this is illustrated for the example in Fig. 11. Moreover, since M′ contains each symbol in {0, . . . , k − 1} exactly n − k times, the intersection of the k rows and columns indexed by {n − k, . . . , n − 1} in L must be a subsquare on the symbols {0, . . . , k − 1}.

Any partial transversal of deficit exceeding ⌊n/2⌋ can be extended. Thus, a consequence of Theorem 12 is that among all Latin squares of order n ≥ 5, the smallest maximal partial transversal has deficit ⌊n/2⌋. Theorem 11 shows that the upper bound on minimal covers described in Theorem 9 is achieved asymptotically by all Latin squares of order n. However, as we establish in the following theorem, most Latin squares do not come close to achieving a maximal partial transversal of deficit ⌊n/2⌋. While minimum covers directly relate to maximum partial transversals (see Theorems 1 and 2), maximum minimal covers seem not to have a direct relationship with minimum maximal partial transversals.

Theorem 13 Fix ε > 0. With probability approaching 1 as n → ∞, a Latin square of order n chosen uniformly at random has no maximal partial transversal of deficit exceeding n^{2/3+ε}.

Proof Let L be a random Latin square of order n. Suppose that L has a maximal partial transversal T of deficit d. Let S be the d × d submatrix of L induced by the rows and columns that are not represented in T. By the maximality of T, we know that S contains none of the d symbols that are not represented in T. However, if this is the case and d = n^{2/3+ε}, then [14, Thm 2] would imply that n^{1+3ε} = d³/n = O(n^{1+3ε/2} log n), which is a contradiction, so no such submatrix S exists in L.

Concluding remarks

We have introduced covers of Latin squares with the aim of using them to understand partial transversals better, focusing primarily on topics relating to extremal sizes. We found that some properties of covers have analogous properties for partial transversals, while others do not. For example, the maximum size of partial transversals is closely related to the minimum size of covers. However, the smallest possible maximal partial transversal has deficit ⌊n/2⌋, which most Latin squares do not come close to achieving (see Theorem 13). In contrast, the maximum size of a minimal cover is 3n − O(n^{1/2}), which is asymptotically achieved by all Latin squares (see Theorem 11).

There are (n + 1)-covers that contain no partial transversals of deficit 0 or 1. The error in the upper bound on the number of partial transversals in Theorem 5 grows with the number of such (n + 1)-covers. Also, while Brualdi's Conjecture implies the existence of (n + 1)-covers in all Latin squares of order n, we have not established the converse. Instead, a weaker form of the converse is true: if every Latin square of order n ≥ 2 has an (n + 1)-cover, then every Latin square of order n ≥ 2 has a partial transversal of deficit 2. Relating the enumeration of partial transversals with small deficit (d ∈ {1, 2}) to the enumeration of (n + 1)-covers is also difficult, because the number of embeddings of a maximal partial transversal of deficit d within an (n + 1)-cover depends on the structure of the Latin square. There are switches that can be performed among (n + 1)-covers, such as the one displayed, which converts an (n + 1)-cover inducing G_5 into an (n + 1)-cover inducing G_3.
However, we did not succeed in making switchings work for converting (n + 1)-covers inducing G_1 into the other structures, which would yield a partial transversal of deficit 1. It is possible that more complicated switching patterns might succeed in changing the graph structure in (n + 1)-covers inducing G_1, but it is also possible that identifying such switchings would not be possible without, say, proving Brualdi's Conjecture.

In the case of minimal covers of maximum size, the results in Sect. 3 make significant progress, finding an explicit upper bound that is achieved infinitely often, and that is achieved asymptotically by all Latin squares. Within the proof of Theorem 11 we showed the following result, which may be of independent interest: every Latin square of order n contains a submatrix with O(n^{1/2+ε}) rows and columns that includes at least n − O(n^{1/2+ε}) distinct symbols. This raises the question as to whether stronger results in this direction hold. Not every n² × n² Latin square contains an n × n submatrix that includes every symbol. The 4 × 4 Latin squares each have 2 × 2 submatrices containing all four symbols, but the 9 × 9 Latin square found by White [23] has the property that no 3 × 3 submatrix contains all nine symbols. It would be of some interest to find more precise results for general Latin squares as to how small a submatrix can contain every symbol, and/or how many distinct symbols we can be sure to find in at least one submatrix of given dimensions.

There are multiple directions in which the study of covers could be extended; we describe some below. Some of the results here could be extended to Latin rectangles or even to special kinds of partial Latin rectangles such as plexes [21]. It would also be interesting to extend the investigation to Latin hypercubes, sets of mutually orthogonal Latin squares, or to MDS codes more generally. The Cayley tables of groups are of particular interest, since transversals in them are equivalent to orthomorphisms, and problems such as the enumeration of orthomorphisms (particularly for cyclic groups) have been studied [16]. Moreover, cyclic group tables have a lot of structure (see, e.g., Lemma 1) that may permit a more successful study of switchings than in general Latin squares.

Each of the five structurally distinct (n + 1)-covers can be embedded in a Latin square of order 5, as shown in Fig. 3, so by replacing the 5 × 5 subsquares in the k = 5 case of Theorem 12, we find that every potential (n + 1)-cover embeds in a Latin square of order n, for all n ≥ 10. In fact, the same is easily found to be true for orders in {5, . . . , 9} (by searching random Latin squares of these orders). It would be interesting to resolve the general case of this embedding problem, i.e., for which orders n does every potential (n + a)-cover complete to a Latin square? A famous problem along these lines is the Evans Conjecture [9], now a theorem [2,11,20], which states that a partial Latin square of order n with at most n − 1 entries can be completed.

Balasubramanian [4] showed that Latin squares of even order have an even number of transversals. Exhaustive computations for orders n ≤ 8 suggest the following:

Conjecture 1 Let L be a Latin square of even order n, with t transversals and q covers of size n + 1, of which q_min are minimal. Then t ≡ 2q ≡ 2q_min (mod 4).

We do know that 2q ≡ 2q_min (mod 4) for any order n, because q = q_min + n(n − 1)t ≡ q_min (mod 2). Another curious observation from our computations is that the number of (n + 1)-covers in every Latin square of order 7 is divisible by 3.
Finally, we mention that the data in Table 1 shows approximate consistency in the number of (n + 1)-covers that Latin squares of order n have. If this is a pattern, it might be worth investigating as a means to prove a weakened form of Brualdi's Conjecture (via Theorem 2). Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
16,578.8
2018-06-20T00:00:00.000
[ "Mathematics" ]
Study of local anaesthetics. Part 172*: Comparison of the influence of auxiliary substances on the physicochemical parameters of two phenylcarbamic acid derivatives

In this work, the influence of polyols (propylene glycol, glycerol and sorbitol) on the physicochemical parameters (partition coefficient and surface tension) of two derivatives of phenylcarbamic acid, and on their liberation from aqueous solutions containing propylene glycol, was studied. The studied parameters and the liberation were influenced by the structure of the derivatives and by the type and concentration of the polyols.

Introduction

The partition coefficient and the surface tension are physicochemical parameters that considerably influence the absorption and the effect of a drug. The determination of the partition coefficient has been studied in many papers. Besides the classical shake-flask method, which is still in practical use, several new experimental methods have been developed [1,2,3,4,5,6,7]. The physicochemical properties, and the relationship between structure, physicochemical properties and biological activity, of potential local anaesthetic drugs from the group of phenylcarbamic acid derivatives have been the subject of many studies published in several papers [8,9,10,11,12,13,14,15,16,17,18].

The dosage forms in which local anaesthetics are often applied are hydrogels. They are used as lubricants, e.g. for the insertion of catheters. Because humectants are necessary components of hydrogels, the physicochemical properties were determined in aqueous solutions of the humectants propylene glycol (PG), glycerol (GL) and sorbitol (SO) at concentrations of 5, 10, 15 and 20%.

Results and Discussion

Two derivatives of phenylcarbamic acid with a local anaesthetic effect, with the operative names substance XXIII Z and substance XIV, were compared. Substance XXIII Z, N-[2-(3-octyloxyphenylcarbamoyloxy)ethyl]pyrrolidinium chloride, is 16 times more effective in surface anaesthesia than cocaine and 7 times more effective in infiltration anaesthesia than procaine as standard [19]. Substance XIV, N-[2-(3-pentyloxyphenylcarbamoyloxy)ethyl]piperidinium chloride, is 80 times more effective in surface anaesthesia than cocaine as standard and 135 times more effective in infiltration anaesthesia than procaine as standard [20]. The two substances differ in molecular structure, i.e., in the substitution of the aromatic ring and in the length of the alkoxy substituent, whereby the lipophilicity is affected. It has been assumed that the different structures of the substances under investigation can affect the examined physicochemical parameters. The partition coefficient (P) and the surface tension (γ) were determined, because both parameters can influence the liberation, absorption and action of drugs.

From the results (Table 1) it appears that the structure of the potential drug, the auxiliary substances used, and their concentrations influence both physicochemical parameters. The influence on the partition coefficient is bigger than that on the surface tension. The experimentally determined partition coefficients of substance XXIII Z were about 13 to 15 times higher than those of substance XIV, confirming the higher lipophilicity of substance XXIII Z. The differences between the surface tensions of the compared substances were smaller: the γ of substance XXIII Z was about 1.3 times lower than that of substance XIV. Generally, with substance XXIII Z the humectants reduce P and increase γ, while with substance XIV they reduce both P and γ.
The influence of the polyols and their concentrations on the determined parameters differed. With increasing concentrations of PG and GL, the value of P of substance XXIII Z decreased; on the contrary, SO increased the value of P. With increasing concentration of PG, the γ value increased, while the influence of GL and SO was insignificant. With increasing concentrations of the polyols, P of substance XIV decreased insignificantly. The γ value decreased with PG and SO, but only at 15-20% concentrations; GL was without any influence.

From the results it can be concluded that, of the humectants, PG has the biggest influence on the studied parameters; therefore it was used in the study of the in vitro liberation of the potential drugs from the solutions. The results of the liberation experiments indicate that 3 to 5 times less of substance XXIII Z was liberated in comparison with substance XIV (Tables 2 and 3). The presence of PG in the sample increased the liberation of substance XXIII Z and decreased the liberation of substance XIV in comparison with the sample without the humectant. The work confirmed the influence of the molecular structure of the potential drugs on the partition coefficient and surface tension, and the effect of the type and concentration of auxiliary substances on the physicochemical parameters and on the liberation of the potential drugs from aqueous solutions of PG.

Determination of partition coefficient

Partition coefficient values (P) were determined in a system consisting of n-octanol/aqueous phase at 37°C [11]. The aqueous phase comprised either pure water or water with 5, 10, 15 or 20 wt.% of polyol added. The potential drug was determined in the aqueous phase by UV spectrophotometry (XXIII Z at λ = 235 nm, XIV at λ = 236 nm) against a similarly prepared reference containing no drug.

Determination of surface tension

Surface tension (γ) was evaluated 24 h after preparation of the solution, at a temperature of 20°C, in a stalagmometer by weighing the drops. 0.1 wt.% aqueous solutions of the potential drug, with or without 5, 10, 15 or 20 wt.% of polyol added, were used.

Determination of liberation

The potential drug was left to permeate at 37°C through a hydrophilic membrane (19.6 cm²) (Nephrophan, Filmfabrik Wolfen, Germany) into isotonic NaCl solution. Released drug amounts were determined by spectrophotometry (Philips Pye UV VIS, Unicam Ltd., UK) at the respective intervals [21].

*Part 171: Die Pharmazie, in press
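Following up on the determination of the partition coefficient above, the short Python sketch below (our illustration; the numbers are invented, not data from this paper) shows the underlying shake-flask arithmetic. It assumes absorbance is proportional to concentration (Beer-Lambert), so P follows directly from the absorbance of the aqueous phase before and after equilibration with n-octanol.

```python
def partition_coefficient(a_before, a_after, v_aq=1.0, v_oct=1.0):
    """Shake-flask estimate of the n-octanol/water partition coefficient P.

    a_before : UV absorbance of the aqueous phase before partitioning
    a_after  : UV absorbance of the aqueous phase at equilibrium
    v_aq, v_oct : phase volumes (same units)
    Absorbance is taken as proportional to concentration, so the
    proportionality constant cancels from the ratio.
    """
    transferred = a_before - a_after      # proportional to drug moved to octanol
    return (transferred / a_after) * (v_aq / v_oct)

# Example: equal phase volumes, 80 % of the drug moving into the octanol phase
print(partition_coefficient(1.00, 0.20))  # P = 4.0
```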
1,213.8
2007-06-29T00:00:00.000
[ "Chemistry", "Medicine" ]
Adaptive tuning of Majorana fermions in a quantum dot chain We suggest a way to overcome the obstacles that disorder and high density of states pose to the creation of unpaired Majorana fermions in one-dimensional systems. This is achieved by splitting the system into a chain of quantum dots, which are then tuned to the conditions under which the chain can be viewed as an effective Kitaev model, so that it is in a robust topological phase with well-localized Majorana states in the outermost dots. The tuning algorithm that we develop involves controlling the gate voltages and the superconducting phases. Resonant Andreev spectroscopy allows us to make the tuning adaptive, so that each pair of dots may be tuned independently of the other. The calculated quantized zero bias conductance serves then as a natural proof of the topological nature of the tuned phase. Introduction Majorana fermions are the simplest quasiparticles predicted to have non-Abelian statistics [1,2]. These topologically protected states can be realized in condensed matter systems, by making use of a combination of strong spin-orbit coupling, superconductivity and broken time-reversal symmetry [3][4][5][6][7][8]. Recently, a series of experiments have reported the possible observation of Majorana fermions in semiconducting nanowires [9][10][11][12], attracting much attention in the condensed matter community. Associating the observed experimental signatures exclusively with these non-Abelian quasiparticles, however, is not trivial. The most straightforward signature, the zero bias peak in Andreev conductance [13,14], is not unique to Majorana fermions, but can appear as a result of various physical mechanisms [15][16][17][18][19][20][21], such as the Kondo effect or weak anti-localization. It has also been pointed out that disorder has a detrimental effect on the robustness of the topological phase, since in the absence of time-reversal symmetry it may close the induced superconducting gap [22]. This requires experiments to be performed with very clean systems. Additionally, the presence of multiple transmitting modes reduces the amount of control one has over such systems [23][24][25][26], and the contribution of extra modes to conductance hinders the observation of Majorana fermions [27]. Thus, nanowire experiments need setups in which only a few modes contribute to conductance. In this work, we approach the problem of realizing systems in a non-trivial topological phase from a different angle. Following the work by Sau and Das Sarma [28], we wish to emulate the Kitaev chain model [29], which is the simplest model exhibiting unpaired Majorana bound states. The proposed system consists of a chain of quantum dots (QDs) defined in a twodimensional electron gas (2DEG) with spin-orbit coupling, in proximity to superconductors and subjected to an external magnetic field. Our geometry enables us to control the parameters of the system to a great extent by varying the gate potentials and superconducting phases. We will show how to fine tune the system to the so-called 'sweet spot' in parameter space, where the Majorana fermions are well localized at the ends of the system, making the topological phase maximally robust. A sketch of our proposed setup is presented in figure 1(a). The setup we propose and the tuning algorithm are not restricted solely to systems created in a 2DEG. The essential components are the ability to form a chain of QDs and tune each dot separately. 
In semiconducting nanowires the dots can be formed from wire segments (a) A chain of QDs in a 2DEG. The QDs are connected to each other, and to superconductors (labeled SC), by means of quantum point contacts (QPCs). The first and the last dots are also coupled to external leads. The normal state conductance of QPCs between adjacent dots or between the end dots and the leads is G , and of the QPCs linking a dot to a superconductor is G ⊥ . The confinement energy inside each QD can be controlled by varying the potential V gate . (b) Realization of the same setup using a nanowire, with the difference that each dot is coupled to two superconductors in order to control the strength of the superconducting proximity effect without the use of QPCs. separated by gate-controlled tunnel barriers, and all the tuning can be done by gates, except for the coupling to a superconductor. This coupling, in turn, can be controlled by coupling two superconductors to each dot and applying a phase difference to these superconductors. The layout of a nanowire implementation of our proposal is shown in figure 1 This geometry has the advantage of eliminating many of the problems mentioned above. By using single-level QDs, and also quantum point contacts (QPCs) in the tunneling regime, we solve issues related to multiple transmitting modes. Additional problems, such as accidental closings of the induced superconducting gap due to disorder, are solved because our setup allows us to tune the system to a point where the topological phase is most robust, as we will show. We present a step-by-step tuning procedure which follows the behavior of the system in parallel to that expected for the Kitaev chain. As feedback required to control every step we use the resonant Andreev conductance, which allows us to track the evolution of the system's energy levels. We expect that the step-by-step structure of the tuning algorithm should eliminate the large number of non-Majorana explanations of the zero bias peaks. A related layout together with the idea of simulating a Kitaev chain was proposed recently by Sau and Das Sarma [28]. (See also the two-dot limit of that proposal, analyzed in [30].) Although similar in nature, the geometry which we consider has several advantages. First of all, coupling the superconductors to the QDs in parallel allows us to not rely on crossed Andreev reflection. More importantly, being able to control inter-dot coupling separately from all the other properties allows us to address each dot or each segment of the chain electrically. This in turn makes it possible to perform the tuning of the system to the sweet spot regime in a scalable manner. This can be achieved by opening all the QPCs except for the ones that contact the desired dots. This setup can also be extended to more complicated geometries which include the T-junctions of such chains. Benefiting from the high tunability of the system and the localization of the Majorana fermions, it might then be possible to implement braiding [31,32] and demonstrate their non-Abelian nature. The rest of this work is organized as follows. In section 2, we briefly review a generalized model of the Kitaev chain, and identify the 'sweet spot' in parameter space in which the Majorana fermions are the most localized. The system of coupled QDs is described in section 3. For the purpose of making apparent the resemblance of the system to the Kitaev chain, we present a simple model that treats each dot as having a single spinful level. 
We then come up with a detailed tuning procedure describing how one can control the parameters of the simple model, in order to bring it to the desired point in parameter space. In section 4, our tuning prescription is applied to the suggested system of a chain of QDs defined in a 2DEG, and it is shown using numerical simulations that at the end of the process the system is indeed in a robust topological phase. We conclude in section 5.

Generalized Kitaev chain

In order to realize unpaired Majorana bound states, we start from the Kitaev chain [29] generalized to the case where the on-site energies as well as the hopping terms are not uniform and can vary from site to site. The generalized Kitaev chain Hamiltonian is defined as

H = Σ_n ( t_n e^{iθ_n} a†_{n+1} a_n + Δ_n e^{iφ_n} a†_{n+1} a†_n + h.c. ) + Σ_n ε_n a†_n a_n,

where a_n are fermion annihilation operators, ε_n are the on-site energies of these fermions, t_n exp(iθ_n) are the hopping terms, and Δ_n exp(iφ_n) are the p-wave pairing terms. The chain supports two Majorana bound states localized entirely on the first and the last sites when (i) ε_n = 0, (ii) Δ_n = t_n, and (iii) φ_{n+1} − φ_n − θ_{n+1} − θ_n = 0. Larger values of t_n lead to a larger excitation gap. Condition (iii) is equivalent, up to a gauge transformation, to the case where the hopping terms are all real and the phases of the p-wave terms are uniform. The energy gap separating the Majorana modes from the first excited state then equals 2 min_n t_n.

The above conditions (i)-(iii) constitute the 'sweet spot' in parameter space to which we would like to tune our system. Since all of these conditions are local and only involve one or two sites, our tuning procedure includes isolating different parts of the system and monitoring their energy levels. For that future purpose we will use the expression for the excitation energies of a chain of only two sites with ε_1 = ε_2 = 0: E_± = |t_1 ± Δ_1|.

Exactly at the sweet spot, in order to couple the Majorana fermions formed at the ends of the chain, one needs to change at least L Hamiltonian parameters, where L is the length of the chain. This happens because any local perturbation would only delocalize the Majorana fermions between the dots on which it acts. Hence, if a typical imperfection due to the presence of noise or to the tuning itself is of order δ, then the residual coupling between the Majoranas will be of the order of (δ/t)^L. Quadratic protection from noise for two such dots in the sweet spot regime was reported in [30]. While for quantum computation applications the length of chains required for sufficient noise tolerance may be relatively large, as we show in section 4, in order to detect robust signatures of Majorana fermions three dots may be sufficient.

System description and the tuning algorithm

The most straightforward way to emulate the Kitaev chain is to create an array of spinful QDs and apply a sufficiently strong Zeeman field such that only one spin state stays close to the Fermi level. Then the operators of these spin states span the basis of the Hilbert space of the Kitaev chain. If we require normal hopping between the dots and do not utilize crossed Andreev reflection, then in order to have both t_n and Δ_n non-zero we need to break particle number conservation and spin conservation. The former is achieved by coupling each dot to a superconductor; the latter can be achieved by a spatially varying Zeeman coupling [33,34] or, more conventionally, by using a material with a sufficiently strong spin-orbit coupling.
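The sweet-spot conditions are easy to verify numerically. The following Python sketch (our own illustration, not code from this work) builds the Bogoliubov-de Gennes matrix of a spinless Kitaev chain with ε_n = 0, Δ_n = t_n and real phases, and confirms that two zero-energy Majorana modes appear and that the excitation gap equals 2 min_n t_n even for non-uniform couplings.

```python
import numpy as np

def kitaev_bdg(eps, t, delta):
    """BdG matrix of the generalized Kitaev chain in the (a, a†) basis,
    with all phases gauged to zero (real hoppings and pairings)."""
    N = len(eps)
    h = np.diag(np.asarray(eps, dtype=float))
    d = np.zeros((N, N))
    for n in range(N - 1):
        h[n + 1, n] = h[n, n + 1] = t[n]                # hopping t_n
        d[n + 1, n], d[n, n + 1] = delta[n], -delta[n]  # antisymmetric pairing
    return np.block([[h, d], [-d, -h]])

# Sweet spot: eps_n = 0 and delta_n = t_n, here with non-uniform t_n
t = np.array([1.0, 0.7, 1.2, 0.9, 1.1, 0.8, 1.0])
H = kitaev_bdg(np.zeros(8), t, t)
E = np.sort(np.abs(np.linalg.eigvalsh(H)))
print(E[:4])  # two zero modes, then the gap 2*min(t_n) = 1.4 (doubly listed)
```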
Examples of implementation of such a chain of QDs in a 2DEG and in semiconducting nanowires are shown in figure 1. We neglect all the levels in the dots except for the one closest to the Fermi level, which is justified if the level spacing in the dot is larger than all the other Hamiltonian terms. We neglect the Coulomb blockade, since we assume that the conductance from the dot to the superconductor is larger than the conductance quantum [35]. We consider a single Kramers doublet per dot with creation and annihilation operators c † n,s and c n,s , with n the dot number and s the spin degree of freedom. Since we consider dots with spin-orbit interaction, c n,s is not an eigenstate of spin. Despite that, only singlet superconducting pairing is possible between c n,s and c n,s as long as the time-reversal symmetry breaking in a single dot is weak. By applying a proper SU (2) rotation in the s-s space we may choose the Zeeman field to point in the z-direction in each dot. As long as the Zeeman field does not change the wave functions of the spin states the superconducting coupling stays s-wave. The general form of the BdG Hamiltonian describing such a chain of spinful single-level dots is thus given by where σ i are Pauli matrices in spin space. The physical quantities entering this Hamiltonian are the chemical potential µ n , the Zeeman energy V z , the proximity-induced pairing ind,n exp(i n ) and the inter-dot hopping w n . The vector λ n characterizes the amount of spin rotation happening during a hopping between the two neighboring dots (the spin rotates by a 2|λ| angle). This term may be generated either by a spin-orbit coupling, or by a position-dependent spin rotation, required to make the Zeeman field point in the local z-direction [33,34,36]. The induced pairing in each dot ind,n exp(i n ) is not to be confused with the p-wave pairing term n exp(iφ n ) appearing in the Kitaev chain Hamiltonian (1). In order for the dot chain to mimic the behavior of the Kitaev chain in the sweet spot, each dot should have a single fermion level with zero energy, so that ε n = 0. Diagonalizing a single dot Hamiltonian yields the condition for this to happen When this condition is fulfilled, each dot has two fermionic excitations The energy of a n is zero, and the energy of b n is 2V z . If the hopping is much smaller than the energy of the excited state, w n V z , we may project the Hamiltonian (4) onto the Hilbert space spanned by a n . The resulting projected Hamiltonian is identical to the Kitaev chain Hamiltonian of equation (1), with the following effective parameters: t n e iθ n = w n (cos λ n + i sin λ n cos ρ n ) [sin (α n+1 + α n ) cos(δ n /2) n e iφ n = iw n sin λ n sin ρ n e iξ n [cos (α n+1 + α n ) cos (δ n /2) + i sin (α n+1 − α n ) sin (δ n /2)] , where λ n = λ n (sin ρ n cos ξ n , sin ρ n sin ξ n , cos ρ n ) T and δ n = n − n+1 . It is possible to extract most of the parameters of the dot Hamiltonian from level spectroscopy, and then tune the effective Kitaev chain Hamiltonian to the sweet spot. The tuning, however, becomes much simpler if two out of three of the dot linear dimensions are much smaller than the spin-orbit coupling length. Then the direction of spin-orbit coupling does not depend on the dot number, and as long as the magnetic field is perpendicular to the spin-orbit field, the phase of the prefactors in equations (8) becomes position independent. 
Additionally, if the dot size is not significantly larger than the spin-orbit length, the signs of these prefactors are constant. This ensures that if δ n = 0, the phase matching condition of the Kitaev chain is fulfilled. Since δ n = 0 leads to both t n and n having a minimum or maximum as a function of δ n , this point is straightforward to find. The only remaining condition, t n = n at δ = 0, requires that α n + α n+1 = λ n . The above calculation leads to the following tuning algorithm: 1. Open all the QPCs, except for two contacting a single dot. By measuring conductance while tuning the gate voltage of a nearby gate, ensure that there is a resonant level at zero bias. After repeating for each dot the condition ε n = 0 is fulfilled. 2. Open all the QPCs except the ones near a pair of neighboring dots. Keeping the gate voltages tuned such that ε n = 0, vary the phase difference between the neighboring superconductors until the lowest resonant level is at its minimum as a function of phase difference, and the next excited level at a maximum. This ensures that the phase tuning condition φ n+1 − φ n − θ n+1 − θ n = 0 is fulfilled. Repeat for every pair of neighboring dots. 3. Start from one end of the chain, and isolate pairs of dots like in the previous step. In the pair of nth and (n + 1st) dots tune simultaneously the coupling of the (n + 1st) dot to the superconductor and the chemical potential in this dot, such that ε n+1 stays equal to 0. Find the values of these parameters such that a level at zero appears in two dots when they are coupled. After that proceed to the following pair. Having performed the above procedures, the coupling between all of the dots in the chain is resumed, at which point we expect the system to be in a robust topological phase, with two Majorana fermions located on the first and last dots. In practice one can also resume the coupling gradually by, for instance, isolating triplets of adjacent dots, making sure they contain a zeroenergy state, and making fine-tuning corrections if necessary and so on. Testing the tuning procedure by numerical simulations We now test the tuning procedure by applying it to a numerical simulation of a chain of three QDs in a 2DEG. The 2D BdG Hamiltonian describing the entire system of the QD chain reads Here, σ i and τ i are Pauli matrices acting on the spin and particle-hole degrees of freedom, respectively. The term V (x, y) describes both potential fluctuations due to disorder, and the confinement potential introduced by the gates. The second term represents Rashba spin-orbit coupling, ind (x, y) exp ( (x, y)) is the s-wave superconductivity induced by the coupled superconductors and V z is the Zeeman splitting due to the magnetic field. A full description of the tight-binding equations used in the simulation is presented in the appendix. The chemical potential of the dot levels µ n is tuned by changing the potential V (x, y). For simplicity, we used a constant potential V n added to the disorder potential, such that V (x, y) = V n + V 0 (x, y) in each dot. Varying the magnitude of ind,n is done by changing conductance G ⊥ of the QPCs, which control the coupling between the dots and the superconductors. Finally, varying the superconducting phase (x, y) directly controls the parameter n of the dot to which the superconductor is coupled, although they need not be the same. The tuning algorithm requires monitoring of the energy levels of different parts of the system. 
This can be achieved by measuring the resonant Andreev conductance from one of the leads. The Andreev conductance is given by [37,38]

G/G_0 = N − tr(r_ee r_ee†) + tr(r_he r_he†),

where G_0 = e²/h, N is the number of modes in a given lead, and r_ee and r_he are the normal and Andreev reflection matrices. Accessing parts of the chain (such as a single dot or a pair of dots) can be done by opening all inter-dot QPCs and closing all the ones between dots and superconductors, except for the part of the system that is of interest.

We begin by finding such widths of the QPCs that G_∥ ≈ 0.02G_0 and G_⊥ ≈ 4G_0. This ensures that the conductance between adjacent dots is in the tunneling regime and that the dots are strongly coupled to the superconductors, such that the effect of the Coulomb blockade is reduced [35]. The detailed properties of the QPCs are described in the appendix, and their conductance is shown in figure A.3.

First step: tuning the chemical potential. We sequentially isolate each dot and change the dot potential V_n. The Andreev conductance as a function of V_n and bias voltage for the second dot is shown in figure 2. We tune V_n to the value where a conductance resonance exists at zero bias. This is repeated for each of the dots and ensures that µ_n = 0.

Second step: tuning the superconducting phases. We now set the phases of the induced pairing potentials Φ_n as constant. As explained in the previous section, this occurs when Δ_n and t_n attain their maximal and minimal values. According to equation (3), this happens when the separation between the energy levels of the pair of dots under consideration is maximal. Figure 3 shows the evolution of these levels as a function of the phase difference between the two superconductors. The condition δ_1 = 0 is then satisfied at the point where their separation is maximal.

Figure 4. The arrow indicates the evolution of the first peak upon tuning; the number of successive changes of G_⊥ and V_n is shown for each curve. By bringing the first peak to zero, the third tuning step is achieved.

Third step: tuning the couplings. Finally, we tune t_n = Δ_n. This is achieved by varying G_⊥, while tracking the Andreev conductance peak corresponding to the t_n − Δ_n eigenvalue of the Kitaev chain we are emulating. After every change of G_⊥ we readjust V_n in order to make sure that the condition ε_n = 0 (or equivalently V_z² = Δ_n² + µ_n²) is maintained. This is necessary because not just Δ_n but also µ_n depends on G_⊥. Therefore, successive changes of G_⊥ and V_n are performed until the smallest bias peak is located at zero bias. The tuning steps of the first two dots are shown in figure 4.

Figure 5. Conductance as a function of bias voltage for a system composed of three tuned QDs (dashed line). The zero bias peak signals the presence of Majorana bound states at the ends of the chain. The first and second excited states are consistent with those expected for a three-site Kitaev chain, namely E_1 = 2t_1 and E_2 = 2t_2 (vertical dashed lines), given the measured values of t_1 = Δ_1 and t_2 = Δ_2 obtained after finalizing the two-dot tuning process. As described in the main text, after increasing the transparency of the QPC to the lead, we get a zero bias peak of height G = 1.98G_0 (solid line).

We repeat steps 2 and 3 for each pair of dots in the system. Finally, having fulfilled all three conditions required for a robust topologically non-trivial phase, we probe the presence of localized Majorana bound states in the full three-dot system by measuring the Andreev conductance (see figure 5).
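Before quoting the measured peak heights, it may help to see how the conductance formula above is evaluated; the snippet below is our own minimal sketch (in practice r_ee and r_he come from a scattering-matrix solver). Perfect resonant Andreev reflection in a single-mode lead gives the quantized value G = 2e²/h = 2G_0, which is the ceiling the peak heights reported next approach.

```python
import numpy as np

def andreev_conductance(r_ee, r_he):
    """G/G0 = N - tr(r_ee r_ee†) + tr(r_he r_he†) for a single N-mode lead."""
    N = r_ee.shape[0]
    return (N - np.trace(r_ee @ r_ee.conj().T).real
              + np.trace(r_he @ r_he.conj().T).real)

# Single-mode lead with perfect Andreev reflection (|r_he| = 1, r_ee = 0)
print(andreev_conductance(np.zeros((1, 1)), np.eye(1)))  # 2.0, i.e. G = 2e^2/h
```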
In this specific case, the height of the zero bias peak is approximately 1.85G 0 , signaling that the end states are well but not completely decoupled. Increasing the transparency of the QPC connecting the first dot to the lead brings this value to G = 1.98G 0 . Conclusion In conclusion, we have demonstrated how to tune a linear array of QDs coupled to superconductors in the presence of a Zeeman field and spin-orbit coupling to resemble the Kitaev chain that hosts Majorana bound states at its ends. Furthermore, we have presented a detailed procedure by which the system is brought to the so-called 'sweet spot' in parameter space, where the Majorana bound states are the most localized. This procedure involves varying the gate potentials and superconducting phases, as well as monitoring the excitation spectrum of the system by means of resonant Andreev conductance. We have tested our procedure using numerical simulations of a system of three QDs, defined in a 2DEG, and found that it works in systems with experimentally reachable parameters. It can also be applied to systems where QDs are defined by other means, for example formed in a one-dimensional InAs or InSb wire. The characteristic length and energy scales of this system are the spin-orbit length l SO = h 2 /mα and the spin-orbit energy E SO = mα 2 /h 2 . We simulate an InAs system in which the effective electron mass is m = 0.015 m e , where m e is the bare electron mass, taking values of E SO = 1 K = 86 µeV and l SO = 250 nm. We consider a setup composed of three QDs, like the one shown in figure A.1. Each of the three dots has a length of L DOT = 208 nm and a width W DOT = 104 nm. QPCs have a longitudinal dimension of L QPC = 42 nm, which is the same as the Fermi wavelength at quarter filling. The value of the hopping integral becomes t =h 2 /(2ma 2 ) = 55.8 meV, with a = 7 nm. Disorder is introduced in the form of random uncorrelated on-site potential fluctuations, leading to a mean free path l mfp = 218.8 nm. The system is placed in a perpendicular magnetic field characterized by a Zeeman splitting V z = 336 µeV, which, given a g-factor of 35 K T −1 , corresponds to a magnetic field B z = 111 mT. Each dot is additionally connected to a superconductor characterized by a pairing potential | SC | = 0.86 meV. The potential profile across a QPC is given by where x ∈ [−L/2,L/2] is the transverse coordinate across the QPC,h is the maximum height of V QPC ,s fixes the slope at which the potential changes andw is used to tune the QPC transparency. Two examples of potential profiles are shown in figure A.2.
5,948.6
2012-12-06T00:00:00.000
[ "Physics" ]
Better Incentives, Better Marks: A Synthetic Control Evaluation of the Educational Policies in Ceará, Brazil This article evaluates the effects of two educational policies implemented in the Brazilian state of Ceará. The first was a tax incentive (TI) for mayors to improve municipal education. Under this policy, municipal tax transfers were conditioned on educational achievement. The second was a program to offer educational technical assistance (TA) to municipalities. The impact of these policies was estimated by employing the synthetic control method to create a synthetic Ceará not affected by TI and TA. When the two policies were combined, the results were consistent with a 12 percent increase in Portuguese test scores in primary education and a 6.5 percent increase in lower secondary education. There were similar increases in mathematics test scores; however, these were not statistically significant. This study also investigates the impact of educational interventions on upper secondary schools, which, despite not being directly affected by the new policies, received better-prepared students from lower secondary schools. The findings show no effect on this level of education, highlighting the need for debate on how to extend the benefits of educational policies to upper secondary schools, as well as to other Brazilian states. This research is the first to analyze the impacts of the policies in Ceará on primary, lower secondary, and upper secondary schools using data from 1995 to 2019. Considering this literature into account, I assume that the policies implemented in Ceará increased the scores of local students in mathematics and Portuguese tests.Furthermore, I assume that, although not directly affected by the new policies, upper secondary schools also experienced improvements because they received better-prepared students from lower levels schools. To gauge the effect of the interventions in Ceará, I employ the synthetic control method (henceforth referred to as SCM).In the context of this investigation, SCM is an algorithm that selects a set of Brazilian states not affected by the educational interventions to create a control unit.Each selected control state contributes to the synthetic control unit according to a specific weight.Simply put, SCM estimates a synthetic Ceará whose performance in education is a weighted average of the performance of a set of chosen control states.This method provides transparency and a data-driven tool to select an adequate control unit (ABADIE, 2021). When TI is combined with TA, the findings are consistent with increases of 12 and 6.5 percent in Portuguese test scores in primary and lower secondary schools, respectively.Regarding mathematics, the effects were similar, but not statistically significant.There was no evidence of impact of the new policies on upper secondary education in Ceará. These findings are in line with the literature highlighting the positive impact of technical assistance and incentives on educational outcomes (ANGRIST and LAVY, 2001;BRANDÃO, 2014;BRESSOUX et al., 2009;CARNEIRO and IFFI, 2018;FREDRIKSEN et al., 2015;FUJE and TANDON, 2015;LAUTHARTE et al., 2021;McEWAN, 2015).They also seem to support the model of educational production proposed by Bishop and Woessmann (2004), which links higher political priority for education with better student performance.In the final section, I highlight my main conclusions. 
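To fix ideas, the following Python sketch (our illustration; the donor data and variable names are hypothetical) fits synthetic-control weights in their simplest form: nonnegative weights summing to one, chosen so that the weighted donor states track the treated state's pre-treatment outcomes. The canonical estimator additionally weights predictors through a matrix V, which this stripped-down version omits.

```python
import numpy as np
from scipy.optimize import minimize

def scm_weights(x_treated, x_donors):
    """Synthetic-control weights for one treated unit.

    x_treated : (T,) pre-treatment outcomes of the treated state
    x_donors  : (T, J) pre-treatment outcomes of J untreated donor states
    Returns nonnegative weights summing to one that minimize the
    pre-treatment mean squared discrepancy.
    """
    T, J = x_donors.shape
    loss = lambda w: np.sum((x_treated - x_donors @ w) ** 2)
    res = minimize(loss, np.full(J, 1.0 / J), bounds=[(0.0, 1.0)] * J,
                   constraints={'type': 'eq', 'fun': lambda w: w.sum() - 1.0})
    return res.x

# Toy example: 10 pre-treatment periods, 4 donor states
rng = np.random.default_rng(0)
donors = rng.normal(5.0, 1.0, (10, 4))
treated = donors @ np.array([0.5, 0.3, 0.2, 0.0]) + rng.normal(0, 0.05, 10)
print(scm_weights(treated, donors).round(2))  # close to [0.5, 0.3, 0.2, 0.0]
```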
The new educational policies devised in Ceará

Ceará is located in the northeastern region of Brazil and had an estimated population of approximately 9.2 million inhabitants in 2020. Its area is 148,895 km², slightly larger than Greece. In 2019, the state's monthly per capita income remained below the Brazilian average.

In this article, TI and TA are regarded as incentives to increase educational quality. TI is clearly an incentive, since mayors receive higher revenues as a reward for improving education. On the other hand, PAIC's nature as an incentive is less obvious. Bonamino et al. (2019) see PAIC as a complex arrangement with a high capacity to articulate cooperation between the state government and the municipalities. In a way, TA offers a set of incentives for municipal governments to improve learning quality. Training for teachers and civil servants working in school management can be understood as an incentive to improve their teaching and management skills.

Moreover, the Escola Nota Dez prize, an initiative related to PAIC, awards the best schools in learning achievement. These schools are granted financial resources, but only receive the complete prize if they offer support to a lower-performing school (CRUZ et al., 2020; SUMIYA et al., 2017). Thus, Escola Nota Dez is an incentive for schools to achieve better results and to cooperate with other schools. The following sections explore in detail each of the new educational policies implemented in Ceará.

Programa Alfabetização na Idade Certa (TA)

In Brazil, federated states and municipalities collaboratively organize their educational systems (BRASIL, 1996). As shown in Table 01, both municipalities and states are responsible for providing primary and lower secondary education. This leads to an overlap of responsibilities and ambiguity in the role of each sphere of the federation. (Table 01 source: created by the author, based on Brasil, LDB 1996.)

Unlike other Brazilian states, Ceará began addressing this problem decades ago: 96 percent of lower secondary schools were managed by municipalities in Ceará, compared to only 50.5 percent in Brazil (LOUREIRO et al., 2020). With clear-cut competencies for the municipalities and for the state, collaboration between the two spheres became smoother. By not providing primary and lower secondary education, the state could focus on offering TA to the municipalities. The collaboration between the state and the municipalities was institutionalized by Law 14.026 of 2007, which established PAIC (SEGATTO, 2015).

PAIC resulted from the articulation of several organizations and actors (SEGATTO and ABRUCIO, 2018). In 2004, the state parliament of Ceará created the committee for the elimination of illiteracy in Ceará, with the aim of investigating the quality of education in the state. This initiative received support from the United Nations International Children's Emergency Fund (UNICEF), the association of mayors of Ceará, the union of municipal educational leaders (Undime), the state and federal universities of Ceará, private universities, and specialized civil servants working in education (BONAMINO et al., 2019; SEGATTO, 2015).
The committee's investigation showed that only 40 percent of the students in the analytical sample were literate. To change this reality, PAIC was implemented as a state program starting in 2007 (SEGATTO, 2015). Although the program was optional, it was adopted by all of the state's 184 municipalities since its beginning (SUMIYA et al., 2017). PAIC established technical and instrumental standards that defined the responsibilities of each stakeholder in the educational process (CRUZ et al., 2020).

The main actions of the program were: the training of teachers focused on classroom practice; the provision of literacy materials to schools; the promotion of workshops to disseminate best practices; the strengthening of the state system of evaluation of primary and lower secondary education (SPAECE); and the training of municipal civil servants with a focus on the management of school systems (LAUTHARTE et al., 2021). These activities were carried out through agreements between municipalities and the State Secretariat of Education (SEDUC).

To facilitate the cooperation between the state government and municipalities, Ceará established the 'Coordenadoria de Cooperação com os Municípios' (COPEM), the 'Coordination for Cooperation with Municipalities'. Figure 02 provides an overview of COPEM. At the state level, experts were hired to train 'specialist teachers', the name given to teachers responsible for disseminating skills and good practices in their municipalities. Each municipality had one local manager and several specialist teachers. The local manager was responsible for managing the actions and establishing communication with the SEDUC. Both local managers and specialist teachers could apply for financial support to improve their qualifications and skills (CRUZ et al., 2020).

The tax incentive (TI)

The Brazilian Constitution states that revenues from the 'Imposto sobre Circulação de Mercadorias e Serviços' (ICMS), the state consumption tax, shall be divided between states (75 percent) and municipalities (25 percent). Furthermore, from the 25 percent of revenues reserved for municipalities, 75 percent should be distributed based on the value-added criterion, which means that municipalities producing and selling more will receive more resources. The Constitution grants the federated states discretion to define how to distribute the remaining 25 percent among their municipalities [6].

[6] Constitutional Amendment 108, from 2020, increased the proportion of ICMS over which states have discretion from 25 percent to 35 percent. The Amendment also requires that states condition at least 10 percentage points on performance in education.

Ceará chose to allocate its discretionary share according to Equation 01, in which the education quality index of municipality m, the health quality index of municipality m, and a dummy indicating whether municipality m has an operational solid waste management system enter with statutory weights (LOUREIRO et al., 2020); a schematic form of Equations 01 and 02 is given below [7].

[7] After the passing of Constitutional Amendment 108, from 2020, Ceará approved the following new criteria: education (18 percent), health (15 percent), and environment (02 percent). This change does not affect the period studied in this investigation.

The methodology to calculate the education quality index (EQI) is shown in Equation 02 [8]. It was reformulated in 2011 to focus on the lower tail of the distribution of performance; that is, municipalities that improve the outcomes of students lagging behind benefit more than others (LAUTHARTE et al., 2021). Equation 02 combines the Literacy Quality Index for municipality m; an index measuring the quality of so-called 'fundamental schools' in municipality m (in Brazil, fundamental schools comprise primary and lower secondary schools); and, finally, the average passing rate in primary school for municipality m. Appendix A presents how each component of Equation 02 is calculated.

[8] For details about the health and solid waste components, consult LOUREIRO et al. (2020).

One caveat of the EQI computation is that it considers current and past educational performance. Thus, in the first years of his or her term, a given mayor's incentives are dependent on the past administration. Mayors will only receive incentives integrally dependent on their own performance near the end of their term.
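Equations 01 and 02 themselves do not survive in this text. The following is a schematic reconstruction, with the statutory weights left as symbols (ω, φ) because their values are not reproduced here; the component definitions follow the surrounding prose and LOUREIRO et al. (2020), and the additive form is an assumption.

$$ Q_m \;=\; \omega_E\,\mathrm{EQI}_m \;+\; \omega_H\,\mathrm{HQI}_m \;+\; \omega_S\,\mathrm{SW}_m \qquad (01) $$

$$ \mathrm{EQI}_m \;=\; \varphi_1\,\mathrm{LQI}_m \;+\; \varphi_2\,\mathrm{FQI}_m \;+\; \varphi_3\,\mathrm{PR}_m \qquad (02) $$

Here $Q_m$ is the index governing municipality m's share of the discretionary ICMS quota, $\mathrm{EQI}_m$ and $\mathrm{HQI}_m$ are the education and health quality indices, $\mathrm{SW}_m$ is the solid-waste dummy, $\mathrm{LQI}_m$ is the Literacy Quality Index, $\mathrm{FQI}_m$ the fundamental-school quality index, and $\mathrm{PR}_m$ the average primary-school passing rate.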
Incentives for the actors involved in the educational process

A growing literature explores the impact of incentives on the quality of education. Incentives might target several actors involved in the educational process. Regarding incentives for teachers, results are mixed, since effectiveness relies on appropriate incentive design (IMBERMAN, 2015). In a randomized controlled experiment in Tanzania, teacher salary bonuses dependent on student performance increased schooling quality (MBITI et al., 2019). Similar incentives implemented in schools in India and Israel also appear to have positively impacted student performance (LAVY, 2009; MURALIDHARAN and SUNDARARAMAN, 2011). However, not all studies found statistically significant impacts; some even found negative effects of financial incentives for teachers (FRYER, 2013; FRYER et al., 2012).

Incentives for students have also been a topic of research. Although some interventions had a positive effect on student attendance, the effects on performance are less clear (BARRERA-OSORIO et al., 2011; GALIANI and McEWAN, 2013). One intervention in the United States provided cash transfers for students who successfully completed standardized tests. Results showed improvements in mathematics, but no impact on reading and science scores (BETTINGER, 2011).

Studies have also shown that central exams incentivize students to increase their performance, since scores might be seen by future employers or by educational institutions (BISHOP, 1997; WOESSMANN, 2018). However, there is also evidence that excessive focus on central exam contents might negatively impact student achievement (COLLIER, 2012).

With regard to incentives for city mayors, Lautharte et al. (2021) studied the interventions in Ceará, employing a regression with year and city fixed effects. They restricted their sample to schools located at the border between Ceará and neighboring states to make control and treatment groups more similar. The findings showed that TI combined with TA improved student scores in mathematics and Portuguese tests. Carneiro and Irffi (2018) employed a difference-in-differences model to investigate the impact of TI in Ceará between 2007 and 2009. The findings are consistent with an increase of approximately 04 percent in mathematics and Portuguese test scores in primary education. Brandão (2014) and Petterini and Irffi (2013) employed the same methodology to analyze the policy and found positive impacts in mathematics and Portuguese scores.

In a more theoretical approach, Bishop and Woessmann (2004) devised a basic model of educational production. According to their model, giving political priority to education has positive effects on student achievement.

Technical assistance in education

Several works have examined the effect of TA on academic achievement. In a randomized control trial in Mongolia, researchers found that the provision of textbooks increased student scores, and that this improvement was intensified when textbooks were combined with teacher training (FUJE and TANDON, 2015). There is also evidence that the provision of textbooks improves student achievement (FREDRIKSEN et al., 2015; McEWAN, 2015). However, a randomized trial in Kenya showed that textbooks had a positive impact only for the best-performing students (GLEWWE et al., 2009). Textbook choice also appears to have an impact on student scores. More engaging and demanding textbooks seem to increase scores more than less challenging ones (HADAR, 2017; HAM, 2018). Regarding teacher training, Angrist and Lavy (2001) found positive effects of teachers' in-service training on student scores in reading and mathematics tests. In line with these findings are those of Bressoux et al. (2009), who studied the effects of teacher training in French schools. Their estimates showed an increase of 0.25 standard deviations in mathematics scores, but no improvement in reading.

Methods and analytical sample

The synthetic control method

The fundamental problem of causal inference is that once a policy intervention is implemented in a particular space and time, one can no longer assess how the outcome of interest would have developed in the absence of that intervention. SCM is employed in this investigation to overcome this limitation.

SCM is a causal inference method that has gained popularity over the last two decades. It has been called "arguably the most important innovation in the evaluation literature in the last fifteen years" (ATHEY and IMBENS, 2017, p. 09).
This method was developed to estimate causal effects when there are few aggregate units, with one unit being treated while the others are not. In this context, a combination of non-treated units provides a better control than any single non-treated unit (ABADIE, 2021).

To understand how SCM is estimated, let us consider that we have data for J + 1 units, j = 1, 2, ..., J + 1. In this research, j varies from 01 to 27, since Brazil has 27 federative units, 26 of which are states and the other of which is the Federal District. j = 1 is the treated unit, Ceará. The non-treated units constitute the donor pool, that is, all the candidate control states, j = 2, ..., J + 1. For each time t and unit j, we observe the outcome of interest (student performance), $Y_{jt}$. Considering that we have T periods and that T0 refers to the pre-intervention periods, the effect of the reforms in Ceará when t > T0 is given by the gap between the observed outcome and its synthetic counterpart,

$$ \tau_{1t} = Y_{1t} - \sum_{j=2}^{J+1} w_j\, Y_{jt} , $$

where $w_j$ is the weight assigned to donor state j. The method is concerned with how to optimize the choice of these weights. Suppose $W = (w_2, \ldots, w_{J+1})$ is a vector of weights assigned to each donor-pool unit and $V$ is a vector of weights assigned to each predictor k. $W$ is defined, dependent on $V$, so that the mean squared prediction error (MSPE) between the treated unit and its synthetic version is minimized in the pre-intervention period. The weights are chosen so that the synthetic control most closely resembles Ceará's outcome before the intervention. The ability of SCM to estimate the counterfactual depends on how well it predicts the outcome of interest of the treated unit before the intervention. Importantly, SCM provides a transparent and data-driven methodology for choosing the control unit while avoiding specification searches (ABADIE, 2021). In a different context, this method was employed to estimate the effects of homicide prevention measures in the Brazilian state of São Paulo (FREIRE, 2018).

Analytical sample: summary and descriptive statistics

Saeb is the main source of data for this research. It consists of Portuguese and mathematics exams, and assesses students every two years: at the end of primary school (5th grade), lower secondary school (9th grade), and upper secondary school (12th grade). The exams are carried out by the 'Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira' (INEP), an institute linked to Brazil's Ministry of Education. Saeb was created in 1990 and, in 1995, started to employ Item Response Theory (IRT) to allow comparisons across time. Therefore, this study uses data from 1995 until 2019 (INSTITUTO NACIONAL DE ESTUDOS E PESQUISAS EDUCACIONAIS ANÍSIO TEIXEIRA, 2020b). During this period, a random sample of the population was assessed (INSTITUTO NACIONAL DE ESTUDOS E PESQUISAS EDUCACIONAIS ANÍSIO TEIXEIRA, 2006). From 2007 onwards, all public schools with at least 30 students were tested (LAUTHARTE et al., 2021). Only 0.8 percent of basic education is managed by Brazil's federal government, most of which consists of upper secondary schools (INSTITUTO NACIONAL DE ESTUDOS E PESQUISAS EDUCACIONAIS ANÍSIO TEIXEIRA, 2020a). Since these schools receive greater financial and technical support, and since their teachers are usually better qualified than those working in state and municipal schools, they were not included in the sample. For the same reasons, private schools were also excluded. The sample thus only consists of schools run by states and municipalities. Each observation in the data set contains the average score of a state in a particular year, grade, and subject.

I also created a dataset to control for Brazilian states' social, demographic, and economic characteristics. These indicators come from different sources. The population estimates are from the Ministry of Health. Data on investment in education and industrial electricity consumption are provided by the 'Instituto de Pesquisa Econômica Aplicada' (IPEA). Real values for investment in education and culture were determined by deflating nominal values against the consumer price index, using 2020 as the base year. The real values were then divided by the population to generate a figure for investment per capita for each state. Homicides per 100,000 inhabitants were compiled with data from IPEA and Brazil's Ministry of Health. Finally, the unemployment rate was compiled with data from IPEA and the 'Instituto Brasileiro de Geografia e Estatística' (IBGE). Linear interpolation was used to impute two missing values in electricity consumption and investment in education and culture. Table 02 presents the analytical sample statistics with standard deviations in parentheses.

Before the synthetic control findings are presented, exploratory plots provide an overview of the analytical sample. Figure 03 presents the distribution of scores in mathematics and Portuguese tests before and after the intervention. Other states were more likely to have the best scores before the intervention. However, Ceará had better scores in the post-intervention period for both primary and lower secondary education. The same improvements were not observed in upper secondary education. Figure 04 presents the change in test scores versus the average per capita investment in education and culture between 2007 and 2019. In primary and lower secondary education, Ceará achieves the highest score increase with a relatively low level of investment per capita. The plot suggests that Ceará was more efficient than other states. Once again, upper secondary education does not exhibit the same positive results.

The synthetic control findings

In this section, SCM findings will be analyzed to further investigate the indications provided by the exploratory plots. I estimated one synthetic control for each level of education and subject, resulting in six models; SCM was estimated with the R library 'Synth' (ABADIE et al., 2011). The donor pool comprised all Brazilian states, except for Ceará. The predictors are the states' characteristics, as presented in Table 02.

Table 03 shows the 'W' vector of each synthetic control. It indicates the four states that contributed to the models: Bahia, Pernambuco, Piauí, and Rio Grande do Sul. The first three states are situated in the same region and share economic, social, and historical characteristics with Ceará. Therefore, it seems reasonable that these states contributed the most to the models. Rio Grande do Sul has distinct socioeconomic characteristics, but it only contributed substantially to the models for upper secondary education, where no statistically significant effects were found.

Table 04 shows the 'V' vector. It indicates to what extent each predictor contributed to defining the synthetic controls. Industrial electricity consumption, homicides, and population seem to be the predictors with the strongest influence in the models.

In Figure 06, mathematics and Portuguese scores for Ceará and synthetic Ceará are presented. Before the intervention (left of the dashed lines), the synthetic control can emulate the performance of Ceará quite well. For all models, in the post-intervention period, a gap between Ceará and its synthetic version becomes progressively larger, indicating that the intervention had an effect. The yellow lines indicate the performance Ceará would have had in the absence of the reforms, while the green lines indicate the actual performance attained by the state. The graphs in Figure 07 show the difference between the score of Ceará and its respective synthetic version.

Table 06 shows the results of TI alone and of TI combined with TA. Even in lower secondary education, where TI had more time to develop its effect, the effects only increased substantially when TI was combined with TA. In primary education, over the period between 2011 and 2019, scores increased 16.9 and 18.5 points on average in mathematics and Portuguese, respectively.

Robustness checks

The in-time placebo synthetic control tests the robustness of the findings provided by SCM (ABADIE, 2021).
In this test, an intervention starting in 1999 is artificially created to estimate whether SCM still shows any effect from this 'false' intervention. If this were the case, the validity of the results could be put into question. Figure 10 shows no significant difference between the trends after the artificial intervention. Moreover, even when the intervention is artificially backdated by nine years, the effects appear shortly after 2008 with magnitudes very similar to the ones presented in Figure 06.

A second recommended test is a leave-one-out re-analysis, which checks whether the results are sensitive to any of the units selected to create the synthetic control (ABADIE, 2021; ABADIE et al., 2015). Table 03 showed that SCM selected four Brazilian states: Bahia, Pernambuco, Piauí, and Rio Grande do Sul. To check whether eliminating one of these states affects my results, I estimate four synthetic controls, excluding one of the contributing states from the sample at a time. In Figure 11, the leave-one-out synthetic controls are shown in gray. They are very similar to the synthetic control estimated using the complete donor pool. All of them indicate a positive effect of the reforms; in some cases, they point to even larger effects.

Finally, the generalized synthetic control method (GSCM) was employed to test the robustness of the findings. This method unifies SCM with linear fixed effects models to improve efficiency and interpretability. It also avoids specification searches and provides p-values for inference (XU, 2017). The results are presented in Appendix C. Average effects provided by traditional SCM are within the confidence interval of effects provided by GSCM in primary and lower secondary education. However, point estimates and the statistical significance of GSCM findings in primary education are sensitive to the choice of predictors. Moreover, the method suggests statistically significant effects in mathematics, which are further analyzed in the results discussion. The method does not provide statistically significant improvements in upper secondary education.

Inference for the SCM findings

Abadie (2021) proposes a mode of inference based on permutation methods to assess inferential aspects of synthetic control estimates. In this mode, a permutation distribution is obtained by reassigning treatment to each unit in the donor pool one at a time. Each of these estimated synthetic controls produces a 'placebo effect'. All the placebo effects can then be compared to the effect estimated for the truly treated unit. The effect is only considered significant if it is extreme relative to the permutation distribution (ABADIE, 2021).

In Figure 12, the effects obtained with the treatment artificially reassigned to each donor pool unit are presented. The effect in Ceará is highlighted in green, while the effect in control states is shown in gray. The effects for Ceará are always positive and continuously increase between 2008 and 2015. By contrast, most of the other models move randomly and show smaller effects compared to the ones observed in Ceará.

Many of the synthetic controls do not fit the pre-intervention data as well as the control estimated for Ceará. Therefore, to compare my synthetic control only with the ones that had similar pre-intervention mean squared prediction errors (MSPE), I excluded cases in which the MSPE was more than twice the MSPE of the synthetic control for Ceará. The results of this procedure can be seen in Figure 13. For Portuguese scores, the findings are unusually large compared to the estimations for other states. For mathematics scores, however, the rarity of the effects remains unclear. To further investigate the significance of my findings for mathematics, I carried out a post/pre-intervention MSPE test suggested by Abadie et al. (2010). In this test, the post/pre-MSPE ratio distribution is plotted for all placebo gaps. This approach eliminates the need to choose an MSPE cut-off for evaluation. The idea is that a good synthetic control has a low error before the intervention because it closely fits the data. On the other hand, for the treated unit, the error is large after the intervention because there is an intervention effect. Therefore, I expect my estimated synthetic control post/pre ratio to be unusually high compared to the controls of non-treated states; a numerical sketch of this ratio test is given below.
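The ratio test just described can be transcribed directly into Python; the function is the post/pre-MSPE ratio idea itself, while the data, the effect size, and the number of placebo units (26 donors) are placeholders.

```python
import numpy as np

def post_pre_mspe_ratio(actual, synthetic, T0):
    """Post/pre-intervention MSPE ratio for one unit (Abadie et al., 2010)."""
    gap_sq = (np.asarray(actual) - np.asarray(synthetic)) ** 2
    return gap_sq[T0:].mean() / gap_sq[:T0].mean()

# Hypothetical example: 13 biennial periods, 7 of them pre-intervention.
rng = np.random.default_rng(1)
T, T0 = 13, 7
effect = np.r_[np.zeros(T0), np.full(T - T0, 3.0)]       # treated unit only
treated = post_pre_mspe_ratio(rng.normal(size=T) + effect, np.zeros(T), T0)
placebos = [post_pre_mspe_ratio(rng.normal(size=T), np.zeros(T), T0)
            for _ in range(26)]

# Share of all 27 units with a ratio at least as large as the treated one.
ratios = np.array(placebos + [treated])
p_value = (ratios >= treated).mean()
print(f"treated ratio = {treated:.1f}, permutation p-value = {p_value:.3f}")
```

With 27 units, the smallest attainable permutation p-value is 1/27 ≈ 0.037, which is exactly the value reported for Portuguese in the text.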
As Figure 14 shows, mathematics ratios are not sufficiently rare in the distributions. If one were to assign the intervention randomly in this data, the probability of obtaining a post/pre-intervention MSPE ratio as large as the one for Ceará in primary education mathematics would be 8/27, or 0.30. For mathematics in lower secondary school, this probability is 5/27 = 0.19. Thus, the results are not statistically significant for mathematics. For Portuguese, the probability is 0.037 for both levels of education, and the estimates are statistically significant.

In primary education, TI was implemented alone for a short period, and perhaps its effect would have increased even without TA. However, in lower secondary education, TI had more time to develop its effects (2008–2014), but still did not reach half of the effect observed when TI is combined with TA. This suggests that TA is a significant driver of the effects and that the policies produce better results when implemented together.

Results in mathematics

SCM indicates that improvements in mathematics scores are not statistically significant, as highlighted in Figure 14. On the other hand, the robustness checks (Appendix C) and a previous investigation suggest statistically significant effects also in mathematics (LAUTHARTE et al., 2021). It is important to highlight, however, that both the robustness check and Lautharte et al. (2021) employ estimation strategies other than SCM. Additionally, Lautharte et al. (2021) employ a reduced geographic and time span (2007 to 2017) compared to this study. One explanation for this divergence is that SCM fails to precisely fit Ceará's performance in mathematics before the intervention, which leads to a post/pre-intervention MSPE ratio that is not sufficiently unlikely in the placebo post/pre-intervention MSPE distribution (Figure 14) [19].

[19] Data on education performance in Brazil is available only since the beginning of the 1990s. SCM might not perform well with a small number of pre-intervention periods. Classic applications of SCM, like the estimation of the effect of California's tobacco control program (ABADIE et al., 2010), employ around 18 pre-intervention periods, while this study had only seven periods, from 1995 to 2007, every two years. For a comprehensive discussion of this issue, please consult Abadie (2021).

Results in upper secondary schools

The absence of statistically significant impacts on upper secondary education is plausible because, while TI functions as an incentive for mayors, upper secondary education is under the state governor's responsibility. Moreover, the first students affected by TA only reached upper secondary examinations in 2019 and were only partially affected by TA [17]. It is also reasonable that the effects dissipate over the 03 years of upper secondary education. Further research is needed to evaluate the long-term effects of the interventions on upper secondary education, as well as on the later life of students.

[16] They were in the 9th grade in 2008 (TI start) and reached the 12th grade (Saeb exam) in 2011, assuming they did not fail any grade.
[17] They were in the 8th grade in 2015 (TA starts in lower secondary education) and reached the 12th grade in 2019.

Further limitations

One concern in this study is a pre-intervention upward trend in scores between 2005 and 2007 (Figure 01). This trend could indicate that a third factor, originating in 2005, could be impacting education in Ceará. However, this upward trend is also seen in the control states. A possible explanation for it is the rise in investment in education which occurred between 2005 and 2011 in all of Brazil. This should not bias the results, since factors that impact both treated and untreated units are already accounted for in the synthetic control. Moreover, investment in education is included as a predictor in the models.
An additional issue is that TI might have led mayors to exert pressure over teachers to train students specifically to perform well in Saeb tests. If this happened, the effects described here would not indicate real improvement in schooling. However, I argue that this is not the case because this study employed Saeb scores (a national assessment) to estimate the causal effects. In contrast, the government of Ceará employs SPAECE (a state assessment) to grant TI.

The mechanisms driving the performance increase

TI and TA appear to provide better results when implemented together. Impacts of TI more than doubled when it was implemented in combination with TA. This suggests TA drives a substantial share of the observed effects. Possible mechanisms behind TA are the training for teachers and school civil servants, the collaboration between schools, and the provision of textbooks. Further research is needed to confirm to what extent each of TA's actions contributed to the effects.

Regarding TI, it is plausible that the program's incentives increased the level of political priority accorded to education because mayors wanted to maximize their tax revenues. Local politicians might thus have felt more encouraged to improve school infrastructure, including libraries, science labs, and sports facilities. These amenities, in turn, led to better performance. SPAECE, the state system of evaluation of primary and secondary education, understood as an annual central exam, could also drive part of the effects by increasing students' reward for studying and by strengthening the monitoring of schools (BISHOP, 1997; WOESSMANN, 2018). Finally, Figure 04 shows that Ceará is not among the states with the highest average spending on education, suggesting that higher spending is not a critical channel driving these effects.

Conclusion

This study provides evidence that incentives and technical assistance can effectively improve educational outcomes. It has been shown that TI and TA have led Ceará to experience substantial and robust improvements in test scores for Portuguese. Compared to baseline scores, there was an increase of around 12 and 6.5 percent in Portuguese scores in primary and lower secondary education, respectively. These findings present a promising alternative for other Brazilian states pursuing a better quality of education. The new educational policies combined provided a performance gain equivalent to approximately 12.4 and 9.7 months of effective schooling in primary and lower secondary education, respectively [18]. These improvements were achieved without increasing public spending on education relative to other Brazilian states.

[18] To make this comparison, the score increase of students between the 5th and 9th grades was calculated. Only students in states other than Ceará were considered. It was assumed that the students in the 5th grade in 2007 reached the 9th grade in 2011. The same procedure was performed for 2011 and 2015, and for 2015 and 2019. These increases were then averaged, resulting in a 14.9-point increase on average per year. Since one year of schooling in Brazil consists of 10 months (200 days) of effective schooling, a 14.9-point increase was associated with 10 months of effective schooling.

This study contributes to the literature on the connection between technical assistance, notably the provision of textbooks and teacher training, and the quality of education. Furthermore, it provides empirical evidence of the effect of political priority for education on the level of schooling offered to students.

Recently, Brazil's Congress passed a constitutional amendment [20] demanding that all Brazilian states condition tax transfers on educational outcomes. The results presented here appear to support this constitutional change. However, the lower municipalization rate observed in other Brazilian states might jeopardize the positive effects of TI, since fewer schools are under municipal administration than in Ceará. Another challenge is to overcome the resistance of mayors who might be wary of losing municipal revenues.

[20] Constitutional Amendment 108, 2020, available at <http://www.planalto.gov.br/ccivil_03/constituicao/emendas/emc/emc108.htm>.

From a policymaking perspective, this study raises a relevant issue regarding the absence of improvements at the level of upper secondary schools. Even though students had a better quality of schooling in primary and lower secondary schools, they did not experience improvements in upper secondary schools. Policymakers should debate incentives directed at governors, the authorities responsible for managing upper secondary education in Brazil. The distribution of a federal tax could be conditioned on the educational outcomes of states in the same way that the distribution of the state tax, ICMS, is conditioned on the educational outcomes of municipalities.
Regarding the national system of education [21], technical assistance and collaboration between the federal government, states, and municipalities could be tools to replicate the successful experience of Ceará. It is important to note, however, that most Brazilian states do not have a history of collaboration between the state government and municipalities comparable to that of Ceará.

[21] PLP 235, 2019, is a bill currently being discussed in the Brazilian Congress. It aims to establish a national system of education to improve governance and collaboration between the federal government, states, and municipalities.

Figure 01. Scores of students in primary and lower secondary schools from 1995 to 2019.
Figure 02. Collaborative arrangements between the state government and municipalities.
Figure 03. Density plots of the scores in mathematics and Portuguese.
Figure 04. Score change by average investment between 2007 and 2019.
Figure 06. Performance of Ceará vs. synthetic Ceará in primary and lower secondary education.
Figure 07. Gap between Ceará and synthetic Ceará in primary and lower secondary education.
Figure 08. Performance of Ceará vs. synthetic Ceará in upper secondary education.
Figure 09. Gap between Ceará and synthetic Ceará in upper secondary education.
Figure 10. In-time placebo test with artificial intervention in 1999.
Figure 11. Leave-one-out test.
Figure 12. Score gaps in Ceará and placebo gaps in the 26 control states.
Figure 13. Score gaps comparison with MSPE up to twice the MSPE for Ceará.
Figure 14. Post/pre-intervention MSPE ratio.

Table 01. Responsibility for education in Brazil according to the national educational guidelines law. Source: Created by the author, based on Brasil – LDB (1996).
Table 02. Summary statistics of the analytical sample. Source: Created by the author, based on data provided by IBGE, INEP, IPEA and SUS.
Table 03. Vector W showing the contribution of each state to each synthetic control. Source: Created by the author, based on data provided by IBGE, INEP, IPEA and SUS.
Table 04. Vector V showing the contribution of each predictor to each synthetic control. Source: Created by the author, based on data provided by IBGE, INEP, IPEA and SUS.
Table 06. Results by policy, level, and subject. Source: Created by the author, based on data provided by IBGE, INEP, IPEA and SUS.
8,412.6
2023-04-01T00:00:00.000
[ "Economics", "Education" ]
The Cosmological Tension of Ultralight Axion Dark Matter and its Solutions

A number of proposed and ongoing experiments search for axion dark matter with a mass nearing the limit set by small scale structure ($\mathcal{O}(10^{-21}\,\mathrm{eV})$). We consider the late universe cosmology of these models, showing that requiring the axion to have a matter-power spectrum that matches that of cold dark matter constrains the magnitude of the axion couplings to the visible sector. Comparing these limits to current and future experimental efforts, we find that many searches require axions with an abnormally large coupling to Standard Model fields, independently of how the axion was populated in the early universe. We survey mechanisms that can alleviate the bounds, namely, the introduction of large charges, various forms of kinetic mixing, a clockwork structure, and imposing a discrete symmetry. We provide an explicit model for each case and explore their phenomenology and viability to produce detectable ultralight axion dark matter.

I. INTRODUCTION

Axions with masses well below the electroweak scale are simple dark matter candidates with novel experimental signatures [1-3], a potential solution to the apparent incompatibility of cold dark matter with small scale structure [4, 5], and a common prediction of string theory [6, 7]. Axions can arise as pseudo-Goldstone bosons of a spontaneously broken global symmetry or as zero modes of antisymmetric tensor fields after compactification of extra dimensions. In either case, their parametrically suppressed mass can result in a length scale comparable to the size of dwarf galaxies, the so-called fuzzy dark matter regime. Dark matter with a macroscopic Compton wavelength allows for novel detection opportunities, and many ongoing and future experimental efforts search for ultralight axion relics with masses around this limit. Purely gravitational searches look for erasure of structure on small scales as a consequence of the axion's sizable wavelength, limiting the axion mass to $m_a \gtrsim \mathcal{O}(10^{-21}\,\mathrm{eV})$ [8-12]. If axion dark matter has a sizable non-gravitational coupling to the visible sector, there are additional detection strategies dependent on the nature of the coupling.

Axion couplings to the Standard Model fields are intimately related to the shape of the axion potential. For a given axion decay constant, $f_a$, axion couplings to matter are suppressed by $\sim 1/f_a$, while the ratio of the axion mass to its quartic coupling is typically at least on the order of $\sim f_a$. Such non-quadratic contributions to the axion potential play an important role in cosmology, and the interplay between the axion's couplings and potential is the focus of this work.

Cosmological observations find that the cosmic energy density of dark matter redshifts as $\propto R^{-3}$, where $R$ is the scale factor, until well before recombination. Such a scaling is satisfied for an oscillating scalar field if (and only if) the scalar potential is solely composed of a mass term, $\frac{1}{2}m_a^2 a^2$; a standard sketch of this behavior is given below. This potential is typically only a good approximation if the field amplitude is sufficiently small and may not hold for ultralight axions in the early universe. Deviation from a simple quadratic term results in a perturbation spectrum that is no longer scale-invariant, constraining the axion potential. This in turn places a powerful bound on conventional ultralight axion dark matter candidates due to the relationship between $f_a$ and the axion-matter couplings.
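For reference, a textbook sketch (assumed here, not quoted from the paper) of why the quadratic potential gives matter-like redshifting. The homogeneous field obeys

$$ \ddot{a} + 3H\dot{a} + m_a^2\, a = 0 . $$

For $m_a \gg H$ the WKB solution is $a(t) \simeq a_0(t)\cos(m_a t)$ with envelope $a_0 \propto R^{-3/2}$, so that

$$ \rho_a = \tfrac{1}{2}\dot{a}^2 + \tfrac{1}{2}m_a^2 a^2 \simeq \tfrac{1}{2}m_a^2 a_0^2 \propto R^{-3}, \qquad \langle p_a \rangle \simeq 0 , $$

i.e., the oscillating field behaves as pressureless matter. Higher-order terms in the potential spoil this scaling once $a_0/f_a$ is no longer small.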
While this point is implicitly acknowledged in some of the axion literature, its significance is not widely emphasized, and its implications for current and future searches are missing altogether. In this work, we provide the constraints from the matter-power spectrum and thereby motivate a natural region of mass and coupling values wherein the axion can constitute dark matter without any additional model building.

Circumventing the cosmological bounds requires breaking the parametric relationships between the axion potential and couplings. There are only a few techniques that can successfully accomplish this task. We consider the possibility of large charges, kinetic mixing, a clockwork structure, and discrete symmetries in the context of ultralight dark matter. These models typically predict additional light states in the spectrum, and we survey their phenomenology.

The outline of the paper is as follows. In section II we review features of axion models, with particular emphasis on the coupling of axions to visible matter and the axion potential. In section III we study the impact of ultralight axion dark matter on the matter-power spectrum and derive the associated bound. In section IV we examine the axion detection prospects of various experiments in light of the bounds. In section V we study the robustness of the constraints by exploring ways to disrupt the relationship between axion-matter couplings and the axion potential. Finally, we conclude in section VI.

II. AXION MASS AND COUPLING

The axion decay constant, $f_a$, relates the terms of the axion potential to its couplings with Standard Model fields. The potential arises from non-perturbative contributions of gauge or string theories and explicitly breaks the continuous shift symmetry of the axion. We use the standard parametrization of the potential, which is a simple cosine of the form

$$ V(a) = \mu^4\left[1 - \cos\left(\frac{a}{f_a}\right)\right], \qquad (1) $$

where $\mu$ is a scale associated with the explicit breaking of the global symmetry, so that $m_a = \mu^2/f_a$. If the potential arises from a composite sector (as in the case of the QCD axion), the explicit breaking scale corresponds to the maximum scale at which states must show up in the spectrum. The full axion potential is expected to be more complicated than the simple cosine above, but we can consider (1) to be the first term in a Fourier decomposition of the potential. An important feature of (1) is the existence of terms beyond the mass term, whose coefficients are not arbitrary. The size of the quartic determines the point at which the quadratic approximation breaks down and has significance for axion cosmology.

Axions may also couple to Standard Model fields, with the leading operators obeying the axion shift symmetry. In this work we focus on two types of operators for axion dark matter, the prospective photon and nucleon couplings, which (in a standard normalization) take the schematic form

$$ \mathcal{L} \supset \frac{\alpha}{8\pi}\,\frac{C_{a\gamma}}{f_a}\, a\, F\tilde{F} + \frac{C_{aN}}{2 f_a}\,\partial_\mu a\, \bar{N}\gamma^\mu\gamma^5 N, \qquad (2) $$

where here and throughout we suppress the Lorentz indices on the gauge interactions, $F\tilde F \equiv F_{\mu\nu}\tilde F^{\mu\nu}$ and $\tilde F^{\mu\nu} \equiv \frac{1}{2}\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}$. The parameters $C_{a\gamma}$ and $C_{aN}$ represent combinations of couplings in the UV theory and are $\mathcal{O}(1)$ for generic axions. Demanding that the theory be invariant under axion discrete shift transformations requires the coefficient $C_{a\gamma}$ to be an integer (footnote 3 below), and hence it cannot represent a large ratio of scales without additional model building. There may also be contributions to the above couplings from the IR if the axion mixes with dark sector particles, similar to the QCD axion-meson mixing, but such contributions will be unimportant for our considerations.
The relationship between the axion potential and its coupling to matter is made manifest in (1) and (2). To make contact with other studies, we define the physical couplings $g_{a\gamma} \propto C_{a\gamma}/f_a$ and $g_{aN} \propto C_{aN}/f_a$.

Footnote 2: Other couplings of ultralight axions with matter are the gluon operator, $a\,G_{\mu\nu}\tilde{G}^{\mu\nu}$, the electron operator, $\partial_\alpha a\,\bar{e}\gamma^\alpha e$, and the muon operator, $\partial_\alpha a\,\bar{\mu}\gamma^\alpha \mu$. The gluon operator requires tuning to be sizable around the fuzzy dark matter regime and has other constraints [18], the electron coupling can be probed using torsion pendulums [19], and the muon operator has other strong constraints making it difficult to see experimentally [20, 21].

Footnote 3: If there is additional axion coupling in the phase of the mass matrix of some new fermions, $C_{a\gamma}$ only needs to sum to an integer with the coefficient of the coupling; see, e.g., [22].

In principle it is possible that the particle searched for by dark matter experiments is not a true axion, in the sense that it is not shift symmetric, but a light pseudoscalar with a pure mass-term potential, $V(a) = \frac{1}{2}m_a^2 a^2$. In this case the corresponding coefficients in front of the terms in (2) do not correspond to any symmetry breaking scale, but are instead completely free parameters associated with the scale of integrating out heavy fields. This would prevent us from using the arguments of section III to restrict the dark matter parameter space. While such models seem viable, they are highly fine-tuned and do not exhibit the desirable features of axion models. One way to see the tuning is to consider the additional terms in the effective theory that arise when integrating out the heavy fields that lead to the couplings in (2). For example, in addition to the $aF\tilde{F}$ term, the low energy theory of a simple pseudoscalar will include terms such as $a^2 F F$, $a^3 F\tilde{F}$, etc. These terms will always be generated, as they are no longer forbidden by any symmetry, and they lead to corrections to the scalar potential which destabilize the light scalar. Thus any simple pseudoscalar becomes unnatural, and the motivation to consider such a particle as dark matter is rendered null. Therefore, we take the position that the target particles of experimental searches are indeed ultralight axions with a full trigonometric potential, and we now examine the cosmological limitations of such dark matter candidates.

III. AXION MATTER-POWER SPECTRUM

A scalar field evolving in a purely quadratic potential has a scale-invariant matter-power spectrum, matching that of ΛCDM. However, if the potential contains higher order terms, the scalar equation of motion will possess non-linear terms which impact the growth of perturbations, with positive (negative) contributions wiping out (enhancing) small scale structure. For an axion with field amplitude $a_0(z)$ at redshift $z$, the condition for the axion fluid to behave like cold dark matter is $a_0(z)/f_a \ll 1$.

The cosmic microwave background is the most sensitive probe of the matter-power spectrum, measuring deviations at a part per thousand, and sets a bound around recombination on any additional energy density fluctuations, $\delta\rho/\rho \lesssim 10^{-3}$, corresponding to $a_0(z_{\rm rec})/f_a \lesssim 10^{-3}$. It is important to note that this bound does not rely on the specific production mechanism and must be satisfied for any light axion making up the entirety of dark matter. This constraint was studied quantitatively for misaligned axions in a trigonometric potential in [23] (see also [24] for related discussions).
The authors considered an axion with the potential in (1), with a field value frozen by Hubble friction until $z_c$, the redshift at which the axion mass is comparable to Hubble and it begins oscillations. The matter-power spectrum then constrains the fraction of dark matter made up by axions as a function of $z_c$. The authors of [23] find that in order for the axion to constitute all of dark matter, $z_c$ must be $\gtrsim 9\times 10^4$. This can be translated into a constraint on $f_a$ by noting that the axion field amplitude is fixed today by the measured dark matter energy density, with $\rho_{\rm DM}(z) = \frac{1}{2}m_a^2\, a_0(z)^2$. Since the amplitude redshifts as $a_0(z) \propto (1+z)^{3/2}$, requiring the axion to oscillate before it exceeds its field range, $a_0(z_c) \lesssim f_a$, requires

$$ f_a \gtrsim \frac{\sqrt{2\,\rho_{\rm DM,0}}\,(1+z_c)^{3/2}}{m_a}. \qquad (5) $$

The rough expression motivated above, $a_0(z_{\rm rec})/f_a \lesssim 10^{-3}$, gives a similar result. Note that while [23] assumed a misalignment mechanism, the bound is more general and will apply (approximately) to any axion dark matter production mechanism, as suggested by the rough estimate.

The constraint proposed in this work utilizes the matter-power spectrum and is distinct from the work of [25], which presented a bound assuming the misalignment mechanism. The limit in [25] is derived by noting that the maximum energy stored in the axion potential is $\sim \mu^4$ and, assuming a simple cosmology from the start of oscillations to recombination, demanding that the dark matter energy density at $z_c$ not exceed it: $\rho_{\rm DM}(z_c) \lesssim \mu^4$. This restricts $\mu^4 \gtrsim {\rm eV}^4\,(z_c/z_{\rm eq})^3$, or equivalently, using $\mu^2 = m_a f_a$,

$$ f_a \gtrsim \frac{{\rm eV}^2}{m_a}\left(\frac{z_c}{z_{\rm eq}}\right)^{3/2}. \qquad (6) $$

The authors of [25] also consider temperature-dependent axion masses, which relax the misalignment constraint. While both types of bounds in [25] are more stringent than the matter-power spectrum bound, they are also less robust, since they rely on misalignment and on having a simple cosmology from $z_c$ to recombination.

The bound on $f_a$ can be translated into a bound on the coupling to photons and nucleons using the relations in (2) for different values of $C_{a\gamma}$ and $C_{aN}$. The results are displayed in Figs. 1 and 2 in black for different values of the coefficients; a numerical evaluation of the bound is sketched below. Since generic axion models predict $C_{a\gamma}$, $C_{aN}$ that are at most $\mathcal{O}(1)$, this is a powerful bound on the ultralight axion parameter space. Additional model building beyond the minimal scenario is required to access regions with larger coupling. For comparison, we include the regions constrained assuming misalignment as wavy gray lines for different values of $C_{a\gamma}$ and $C_{aN}$.

The constraints we derive here assumed that the axion field makes up dark matter prior to recombination. An alternative scenario is the case where an axion is only produced at late times, such as through the decay of a heavier state. While an intriguing possibility, decay of heavy states will produce relativistic axions which will in turn modify the equation of state of the universe. Thus evading the matter-power spectrum bound by tweaking cosmology at late times is a formidable task.

IV. COMPARISON WITH EXPERIMENTS

We now consider the prospects of ultralight axion dark matter searches in light of the implications of the matter-power spectrum as derived above, beginning by summarizing the current experimental constraints. Firstly, the Lyman-α flux power spectra set a bound on the axion mass, independent of the size of non-linear terms in the potential.
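A quick numerical reading of Eq. (5), under two assumptions not fixed by the text — a present dark matter density of roughly $10^{-11}\,\mathrm{eV}^4$ and the common convention $g_{a\gamma} = \alpha\, C_{a\gamma}/(2\pi f_a)$:

```python
import numpy as np

RHO_DM0 = 1.0e-11      # present dark matter energy density [eV^4] (approx.)
Z_C = 9.0e4            # minimum oscillation redshift required in [23]
ALPHA = 1.0 / 137.036  # fine-structure constant

def fa_min_GeV(ma_eV):
    """Eq. (5): f_a >= sqrt(2 rho_DM(z_c)) / m_a, converted to GeV."""
    fa_eV = np.sqrt(2.0 * RHO_DM0 * (1.0 + Z_C) ** 3) / ma_eV
    return fa_eV / 1.0e9

for ma in (1e-21, 1e-20, 1e-19):
    fa = fa_min_GeV(ma)
    g = ALPHA / (2.0 * np.pi) / fa   # photon coupling for C_agamma = 1 [GeV^-1]
    print(f"m_a = {ma:.0e} eV  ->  f_a >~ {fa:.1e} GeV,  g_agamma <~ {g:.1e} GeV^-1")
```

For $m_a = 10^{-20}$ eV this gives $f_a \gtrsim 10^{13}$ GeV and a natural photon coupling ceiling of order $10^{-16}\,\mathrm{GeV}^{-1}$, illustrating why many proposed searches require $C_{a\gamma} \gg 1$.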
These measurements are sensitive to sharp features in the matter-power spectrum on small scales, which would be present if the axion has a mass comparable to the size of dwarf galaxies, setting a bound on the axion mass of $m_a \gtrsim 10^{-21}\,\mathrm{eV}$ [8-12]. In addition, there are astrophysical bounds on axions that are independent of their energy density. Axions released during supernova (SN) 1987A would have produced a flux of axions that could convert to photons as they passed through the galactic magnetic fields [26-28], setting the strongest bounds on low mass axions coupled to photons. For the axion-nucleon coupling, the strongest dark matter-independent bounds arise from excess cooling of SN 1987A [29] and neutron stars [30, 31].

There are a large number of searches looking for axions that rely on their relic abundance. Efforts to discover a photon coupling include looking for deviations in the polarization spectrum of the cosmic microwave background [32] (with updated bounds in [33]), for axion-induced changes in the polarization of light from astrophysical sources [34-36], and terrestrial experiments [37, 38]. Searches for an axion-nucleon coupling focused on the ultralight regime include axion-wind spin precession [39], nuclear magnetic resonance [40-43], and proton storage rings [20]. Several spin precession experimental setups are considered in [19].

The photon bounds are compiled in Fig. 1 and the nucleon bounds in Fig. 2. The matter-power spectrum bound derived in section III is displayed in both figures. We use solid (dashed) lines to denote current (prospective) bounds. We conclude that many experimental proposals in this ultralight regime are inconsistent with a generic axion dark matter candidate and require $C_{a\gamma} \gg 1$ or $C_{aN} \gg 1$. Reaching the large couplings considered in various experiments is an issue of additional model building, and is the focus of the next section.

V. ENHANCED AXION COUPLINGS

We have presented stringent bounds on axions arising from the relationship between the scale of their potential, $f_a$, and their coupling to photons or nucleons. However, there exist model-building techniques that can relax this relationship, which have often been discussed in the context of axion inflation. These methods may also be applied to ultralight axion dark matter and have distinct low energy phenomenology as a consequence of the lightness of the axion and the requirement of matching the observed matter-power spectrum. In this section we review these mechanisms, provide explicit realizations of such models, and study their phenomenology. We focus on the photon coupling, though similar models can be built for the nucleon coupling.

A. Large Charges

One way to enhance the axion coupling to visible matter is to introduce fermions with large charges or a large number of fermions (see, e.g., [44, 45] for a discussion in the context of inflation). This strategy is limited by the requirement of perturbativity of electromagnetism and the presence of light fermions charged under electromagnetism. To be explicit, consider a KSVZ-like model where a complex scalar $\Phi$ (whose phase will be identified with the axion) has Yukawa couplings with a set of Weyl fermions with an electromagnetic charge $Q_f$. Integrating out the fermions leads to an axion-photon coupling with $C_{a\gamma} \propto \sum_f Q_f^2$. The presence of charged fermions renormalizes the electric charge, as computed through corrections to the photon gauge kinetic term. Demanding that electromagnetism remain perturbative despite these corrections sets a bound $C_{a\gamma} \lesssim 4\pi/\alpha$.
We conclude that large charges can at most enhance the axion-photon coupling by $\mathcal{O}(10^3)$.

Footnote 6: Here we have taken the Peccei-Quinn charges of the fermions to be $\mathcal{O}(1)$. If one chooses larger Peccei-Quinn charges such that the fermion mass only arises through higher dimensional operators, then the photon coupling can be slightly amplified. However, requiring a hierarchy between $f_a$ and the cutoff strongly constrains this possibility [44].

B. Kinetic Mixing

Kinetic mixing of multiple axion fields can raise the axion coupling to visible matter by (potentially) allowing an axion with a large field range to inherit the couplings of an axion with a smaller field range (see [44-52] for discussions in other contexts). As a simple example, consider two axions $a_1$ and $a_2$, where $a_1$ obtains a potential while the lighter axion, $a_2$ (which is massless here), couples to photons:

$$ \mathcal{L} = \frac{1}{2}(\partial a_1)^2 + \frac{1}{2}(\partial a_2)^2 + \varepsilon\,\partial a_1\,\partial a_2 - \mu^4\left[1-\cos\frac{a_1}{F_1}\right] + \frac{\alpha}{8\pi}\frac{a_2}{F_2}F\tilde{F}. $$

The kinetic term can be diagonalized by the shift $a_2 \to a_2 - \varepsilon a_1$, which induces an $a_1$-photon coupling $-\varepsilon\,\frac{\alpha}{8\pi}\,\frac{a_1}{F_2}F\tilde{F}$. Taking $a_1$ to be the axion dark matter candidate, we conclude that kinetic mixing gives $C_{a\gamma} = \varepsilon F_1/F_2$. If $\varepsilon$ is held fixed and the decay constants have a large hierarchy ($F_1 \gg F_2$), then $a_1$ will have $C_{a\gamma} \gg 1$.

While this appears to be a simple solution, it is not possible to have $C_{a\gamma} \gtrsim 1$ within most field theories. This is a consequence of axions arising as Goldstone bosons of an extended scalar sector, so that the axion kinetic mixing is not a free parameter but must be generated. There are two possible sources for $\varepsilon$: renormalization group flow ("IR") and higher dimensional operator ("UV") contributions. To see the suppression from IR contributions, consider a theory of two axions that both couple derivatively to a fermion $\chi$, through $(\partial_\mu a_1/F_1)\,\bar\chi\gamma^\mu\gamma^5\chi$ and $(\partial_\mu a_2/F_2)\,\bar\chi\gamma^\mu\gamma^5\chi$. The induced kinetic mixing of the axions is quadratically divergent and goes as $\varepsilon \sim \Lambda^2/(4\pi F_1 F_2)$, where $\Lambda$ represents the cutoff scale. Since $\Lambda \lesssim F_{1,2}$ (otherwise the effective theory is inconsistent), the kinetic mixing is bounded by $\varepsilon \lesssim F_2/4\pi F_1$ and will result in $C_{a\gamma} \lesssim 1$.

Alternatively, it is possible to induce an axion kinetic mixing through higher dimensional operators (see, e.g., [46, 49]). Taking $a_1$ and $a_2$ to be the phases of complex scalar fields $\Phi_1$ and $\Phi_2$, there can be an operator of the form $(\Phi_1^\dagger\partial_\mu\Phi_1)(\Phi_2^\dagger\partial^\mu\Phi_2)/M^2 + {\rm h.c.}$ Once the scalar fields take on their vacuum values, the axions get a mixing term with $\varepsilon = F_1 F_2/M^2$. This is again suppressed, since consistency of the effective theory requires $M \gtrsim F_{1,2}$, and cannot result in $C_{a\gamma} \gtrsim 1$. While these examples show that kinetic mixing is not typically sizable for field theory axions, it has been suggested that certain string constructions allow for sizable mixing coefficients [44].

FIG. 1. Ultralight axion dark matter mass vs photon-coupling parameter space. The cosmological bound requiring ultralight axions to exhibit a matter-power spectrum consistent with that of ΛCDM is shown in black, with the regions below the $C_{a\gamma} = 1$ line permitting natural axions without any additional model building (see text). The bound from Lyman-α is shown in purple [8-12], and that from the lack of axion-to-photon conversion of axions produced during supernova 1987A [28] in green. Additional bounds from current (solid) and proposed (dashed) searches: active galactic nuclei [34] (red), protoplanetary disk polarimetry [35] (light blue), CMB birefringence [33] (brown), pulsars [36] (orange), optical rings [37] (dark blue), and heterodyne superconductors [38] (olive). The misalignment bounds for $C_{a\gamma} = 10^2, 10^4$ are displayed by the wavy contours (grey).
While we are not aware of a concrete string construction where this is true, this may be a way to have $C_{a\gamma} \gtrsim 1$. Interestingly, for ultralight axion dark matter, kinetic mixing has additional phenomenological implications. In order for the Lagrangian above to result in a photon coupling for $a_1$ that is not suppressed by a ratio of axion masses, $a_2$ must be lighter than $a_1$. Since the $a_2$-photon coupling is not suppressed by factors of $\varepsilon$, it may be more detectable than $a_1$ and drastically influence direct constraints, such as from supernova axion cooling or conversion. This would need to be studied with care for a particular realization of a value of $\varepsilon$.

In addition to axion mixing, kinetic mixing of abelian gauge fields can boost the axion-photon coupling, as considered in [53]. In this case the coupling may be enhanced if the axion-photon coupling inherits the dark photon gauge coupling. To see this explicitly, we consider an axion coupled to a dark U(1) gauge field, $A'$, which kinetically mixes with electromagnetism through a term $\frac{\epsilon}{2}F F'$, the axion coupling to the dark sector as $\frac{\alpha'}{8\pi}\frac{a}{f_a}F'\tilde{F}'$, where $\alpha'$ is the dark gauge coupling. If $A'$ has a mass below the photon plasma mass, then a basis rotation can be performed to diagonalize the kinetic terms through $A' \to A' - \epsilon A$. This transformation leaves the dark photon approximately massless and gives the axion a coupling to photons such that $C_{a\gamma} = \epsilon^2\,\alpha'/\alpha$. Direct constraints on dark photons permit $\epsilon \sim 1$ (see [54-56] for the bounds on ultralight dark photons), while $\alpha'$ can be $\sim 1$. Taken together, gauge kinetic mixing permits an amplification factor $C_{a\gamma} \sim \mathcal{O}(10^2)$.

So far we have considered the cases of axion-axion and vector-vector mixing. It is also possible for axions to mix with a vector if the axions transform under the gauge symmetry, as is the case for Stückelberg axions (see, e.g., [48, 51] for discussions in the context of inflationary model building, as well as [22, 57]). As a simple model, we consider the case of two Stückelberg axions that have gauge interactions with a dark U(1) gauge field, $A'$, and nearly identical interactions with electromagnetism and a dark confining gauge sector. Gauge invariance requires $q_2 = -q_1 \equiv -q$.

Footnote 7: This Lagrangian is invariant under the U(1) gauge transformation $a_1 \to a_1 + q_1 F_1\alpha$, $a_2 \to a_2 + q_2 F_2\alpha$, and $A_\mu \to A_\mu + \partial_\mu\alpha$ if $q_1 + q_2 = 0$. The Lagrangian we consider is a simplified version of the setups in [22, 48, 51], but the conclusions are unchanged in the more general scenarios.

We perform a field redefinition into the gauge-invariant combination $a = (F_2 a_1 + F_1 a_2)/(F_1^2+F_2^2)^{1/2}$ and its orthogonal partner $b$, so that the physical axion interactions involve only $a/\tilde{F}$, with $\tilde{F} = F_1 F_2/(F_1^2 + F_2^2)^{1/2}$. The axion $b$ remains charged and provides a mass for the dark gauge boson. The surviving axion, $a$, is neutral under the dark U(1) and is the dark matter candidate. Since $\tilde{F}$ is smaller than $F_1$ and $F_2$, $a$ is more strongly coupled to photons than either of the original axions [22]. Nevertheless, this does not result in $C_{a\gamma} \gtrsim 1$. This is because the decay constant of the surviving axion, $\tilde{F}$, appears in the anomalous coupling to both the non-abelian and electromagnetic gauge sectors, and so the canonical relationship between mass and matter coupling is maintained.

FIG. 2. Ultralight axion dark matter mass vs nucleon-coupling parameter space. The bounds from excess cooling of SN 1987A and neutron stars [29-31] are shown in green. Additional bounds for nucleon couplings are shown from projections of the CASPEr-ZULF experiment [42, 43] (dark blue), atom interferometry [19] (brown), atomic magnetometers [19] (light blue), and storage rings [20] (pink). The misalignment bounds for $C_{aN} = 10^2, 10^4$ are displayed by the wavy contours (grey).
We conclude that axion-vector mixing cannot be used to evade the cosmological bounds on ultralight dark matter axions. C. Clockwork Clockwork models provide a means to disturb the canonical relationship between the axion mass and photon coupling by introducing a large number of axions, each interacting with both its own confining gauge sector and its "neighbor". After a rotation to the axion mass basis, the potential of the lightest axion can be exponentially suppressed without introducing an exponential number of fields, and the lightest axion can be understood as a Goldstone boson of an additional global symmetry between the scalars in the UV (see, e.g., [44,45,58-60] for discussions in different contexts). As an explicit model, we consider a set of N axions, ai, with couplings to N SU(ni) gauge sectors with field strengths Gi, and a photon coupling only for aN. The βi factors are integers greater than or equal to unity and we have omitted a bare θ term. Upon confinement, the gauge sectors give rise to a potential for the axions in which µi is the confinement scale and represents the maximum possible mass for the dark composite states. To get an enhanced photon coupling, we require µ1 ≪ µi≥2, and we take the Fi's to be comparable to each other. In this case, up to O(µ1/µi) corrections, integrating out the heavy axions corresponds to iteratively substituting each heavy axion in favor of its lighter neighbor. This transformation produces an effective Lagrangian in which the aN-potential is exponentially suppressed by ∏iβi while the photon coupling remains unchanged, resulting in an axion potential exponentially flatter than the naive estimate. Redefining the axion decay constant as in (1) then gives Caγ = ∏iβi, thereby boosting the photon coupling relative to a generic axion. We now consider the phenomenological implications of clockworked axions as dark matter. Firstly, in addition to the light axion, there exist N − 1 axions with masses proportional to µi≥2 (the only bound on these is from unitarity, requiring µi ≲ Fi [45]). These would be populated in the early universe if the new non-abelian gauge groups confine after reheating (from their own misalignment mechanisms) or if they are thermalized. Assuming the confinement scales µi≥2 are comparable, the energy density of the heaviest axion would dominate. However, the photon couplings of the N − 1 heavy axions are suppressed by products of βi relative to the coupling of the lightest axion, and so they cannot be the target particles of the above experimental searches. Furthermore, if the lightest clockwork axion is to be dark matter and the experimental target, the heavier axions must decay into Standard Model particles (if the axions decayed into lighter axions, they would produce excess dark radiation in conflict with measurements of ∆Neff). This mandates substantial couplings of the heavy axions to the Standard Model and may lead to observable effects in terrestrial experiments. In addition to the heavy axions, the clockwork model predicts the existence of a non-abelian gauge sector with composite states well below the electroweak scale, with masses below µ1. Demanding that FN < Mpl results in a confinement scale of ∼ 10 keV × (∏iβi)^(1/2) for ultralight axion dark matter with a mass of O(10⁻²⁰) eV. Depending on the type of interactions this light gauge sector has with the Standard Model, it may be possible to observe these states in terrestrial experiments. From the low energy perspective, the clockwork model we have described appears to permit arbitrarily large Caγ values.
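Since the clockwork Lagrangian itself is not reproduced in this excerpt, the short numerical sketch below (our own construction, with uniform β and F and a single soft scale µ1 standing in for the lightest sector) illustrates the mechanism: the lightest mass eigenstate is almost entirely aN, yet its mass² is suppressed by β^(2(N−1)), so its photon coupling is enhanced by Caγ ∼ β^(N−1) relative to a generic axion of the same mass:

```python
# Clockwork suppression in the quadratic (small-field) approximation:
# heavy sectors give mass to the combinations (a_i + beta*a_{i-1}),
# a soft scale mu1^4 gives a_1 a small potential.  Units: F = 1.
import numpy as np

N, beta = 10, 3.0
mu4, mu1_4 = 1.0, 1e-6           # heavy scale^4 and soft (lightest) scale^4

M = np.zeros((N, N))
M[0, 0] += mu1_4                  # soft potential for a_1 only
for i in range(1, N):             # heavy sectors fix a_i = -beta * a_{i-1}
    v = np.zeros(N)
    v[i], v[i - 1] = 1.0, beta
    M += mu4 * np.outer(v, v)

w, U = np.linalg.eigh(M)          # eigenvalues in ascending order
print("m_light^2          :", w[0])
print("perturbative est.  :", mu1_4 * (beta**2 - 1) / beta**(2 * N))
print("|a_N| in light mode:", abs(U[-1, 0]))  # O(1): photon coupling unsuppressed
```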
However, there may be limitations on this enhancement factor if one attempts to embed the model into a string construction. In heterotic string models, the 4-dimensional gauge groups descend from the rank 16 gauge groups E8 × E8 or SO(32). Demanding that the Standard Model's rank 4 gauge group be present in the low energy theory restricts the rank of the dark sector to be ≤ 12, and so N is severely limited [61]. We leave an extensive study of string compactification restrictions on clockwork models to future work. D. Discrete Symmetry Finally, axion couplings to visible matter can be augmented by introducing multiple non-abelian gauge sectors related by a discrete symmetry [62]. When the axion potential from confinement of each gauge sector is summed together, one finds the potential may be exponentially suppressed compared to the naive expectation. As an example, we consider a theory with a single axion, a, that couples to N confining gauge sectors with field strengths G⁽ⁿ⁾, and impose a discrete symmetry under which a → a + 2πFa/N and G⁽ⁿ⁾ → G⁽ⁿ⁺¹⁾. The symmetry forces all the non-abelian gauge sectors (which may or may not include QCD) to share a common gauge coupling and fermion content. Including an axion-photon coupling, the Lagrangian consistent with the symmetry contains, in addition to the G⁽ⁿ⁾G̃⁽ⁿ⁾ terms, a photon term ∝ β aFF̃ (26). In contrast to clockwork, the integer β serves no essential purpose here and can be taken to be unity. Each of the N gauge sectors contributes to the axion potential after it confines. If we were to use the leading contribution to the axion potential from (1) for each sector, the total axion potential would vanish. Therefore we must include corrections associated with higher modes in the Fourier expansion of the potential, which depend on the light fermion content of the theory. For a sector with two fermions with masses m1 and m2 below the composite scale, chiral perturbation theory yields a leading-order potential of the form −µ⁴√(1 − z sin²(a/2Fa)) per sector (see, e.g., [63]), with z = 4m1m2/(m1 + m2)²; eq. (27) denotes the sum of these contributions over the N sectors. After the sum in (27) is carried out, one finds the axion mass is exponentially suppressed if there is a small hierarchy between the light quark masses. Taking m2 > m1, the axion mass falls approximately as (m1/m2)^(N/2) for large N. Canonically normalizing the decay constant, we get Caγ ∼ (m2/m1)^(N/2), breaking the relation between the axion mass and photon coupling for N ≫ 1. While discrete symmetries produce axions with Caγ ≫ 1, they do not evade the bounds from the matter-power spectrum. This is a consequence of the axion potential from (27) giving unusually large higher-order axion terms. Unlike clockwork, which keeps the axion potential of the form in (1) but just extends the field range, discrete symmetries break this relationship entirely. To see this behavior, we expand (27) about one of its minima, giving a potential of the form V(a) = µ⁴ [C2 (a/Fa)² + C4 (a/Fa)⁴ + · · ·], where the Ci's are constants that arise from the sum in (27). The coefficient C2 determines the exponential suppression of the axion mass and C4 fulfills a similar role for the quartic. It is convenient to recast the mass suppression factor into an axion-photon coupling enhancement factor via fa ≡ Fa/√C2, such that ma = µ²/fa, λ = C4µ⁴/(C2 fa⁴), and Caγ = 1/√C2. The key observation is that the dependence on N is different for the two constants C2 and C4, as displayed in Fig. 3: C4 decreases more slowly than C2 with increasing N.
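This behavior can be checked numerically. The sketch below (our own code, using the per-sector χPT potential above with the sector angles shifted by πn/N; the shift pattern and overall µ⁴ normalization are our assumptions following the discrete symmetry) locates the minimum of the summed potential and extracts the quadratic and quartic coefficients by finite differences; the ratio |C4|/C2 indeed grows with N:

```python
# Z_N-style summed potential: V(a) = -mu^4 sum_n sqrt(1 - z sin^2(a/2F + pi n/N)).
import numpy as np

def V(a, N, z, F=1.0):
    n = np.arange(N)
    return -np.sum(np.sqrt(1.0 - z * np.sin(a / (2 * F) + np.pi * n / N) ** 2))

def coeffs(N, z, F=1.0, h=5e-3):
    # locate the minimum on a grid, then central finite differences there
    grid = np.linspace(0.0, 4 * np.pi * F, 8001)
    a0 = grid[np.argmin([V(a, N, z, F) for a in grid])]
    s = np.array([V(a0 + k * h, N, z, F) for k in range(-3, 4)])
    d2 = (s[4] - 2 * s[3] + s[2]) / h**2
    d4 = (s[5] - 4 * s[4] + 6 * s[3] - 4 * s[2] + s[1]) / h**4
    return d2 * F**2, d4 * F**4        # ~ C2 and ~ C4 up to O(1) conventions

z = 0.8                                 # z = 4 m1 m2 / (m1 + m2)^2 < 1
for N in (5, 9, 13, 17):
    C2, C4 = coeffs(N, z)
    print(f"N={N:2d}  C2={C2:10.3e}  |C4|={abs(C4):10.3e}  |C4|/C2={abs(C4)/C2:9.1f}")
```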
The approximate condition presented above for the axion to behave sufficiently like cold dark matter then involves the factor eV⁴C²aγ/(m²a f²a) weighted by C4/C2. The factor eV⁴C²aγ/(m²a f²a) is restricted to be greater than unity in order to get a large enhancement in the photon coupling, and from Fig. 3 we see that C4/C2 will also be greater than unity, so the bound cannot be satisfied. We conclude that this variety of model cannot be used to boost the axion-photon coupling for ultralight axion dark matter. VI. CONCLUSIONS In this work, we considered the experimental prospects of detecting ultralight axion dark matter through its couplings to the visible sector, focusing on photon and nucleon interactions. We presented a stringent bound on axions from requiring that their matter-power spectrum match that of ΛCDM, and concluded that generic axions are constrained to have couplings significantly smaller than often assumed. This bound makes use of the relationship between the axion-matter couplings and the axion potential and is independent of the dark matter production mechanism. This discussion displays a tension between experimental projections and cosmological bounds that has not been widely emphasized in previous literature. Given the upcoming experimental program, the need to understand the landscape of ultralight axion dark matter models with detectable couplings is clear. As such, we studied various strategies to boost axion couplings introduced previously in the literature and applied them to ultralight axion dark matter. In particular, we considered models with large charges, diverse forms of kinetic mixing, a clockwork mechanism, and a discrete symmetry. We examined the extent to which axion couplings can be boosted by each mechanism, if at all, and explored their distinct predictions and phenomenology. In brief, O(10²-10³) coupling enhancements are possible by introducing large charges or vector kinetic mixing. Significantly larger enhancements are possible with clockwork models if one takes an agnostic view towards UV completions, but arbitrarily large amplifications may be stymied in string embeddings. Conversely, axion-axion kinetic mixing can only be effective if some string construction allows one to bypass the field theory arguments presented above. Finally, discrete symmetries and axion-photon kinetic mixing are ineffective in raising the axion coupling to visible matter. If a discovery of ultralight axion dark matter is made by a search in the near future, it would be a clear sign of new dynamics, with possible implications for other low energy terrestrial experiments. We note that while we have focused on ultralight axion dark matter and similar ideas, the mechanisms discussed here may be applied in other contexts where large axion couplings to the visible sector are desirable. Some examples include inflation (where most of these mechanisms first arose; see text for references), searches for parametric resonance during axion minicluster mergers [64,65], and monodromy axions [66,67]. Lastly, while we focused primarily on the axion-photon and axion-nucleon couplings, similar bounds can be constructed for axion-electron couplings and, potentially, ultralight neutrinophilic scalars [68] (whose potential likely also needs to arise from the breaking of a shift symmetry to be protected from quantum corrections from gravity). We leave a study of such scalars to future work.
8,199.8
2020-08-05T00:00:00.000
[ "Physics" ]
Predicting and Analyzing Law-Making in Kenya Modelling and analyzing parliamentary legislation, roll-call votes and order of proceedings in developed countries has received significant attention in recent years. In this paper, we focused on understanding the bills introduced in a developing democracy, the Kenyan bicameral parliament. We developed and trained machine learning models on a combination of features extracted from the bills to predict the outcome - whether a bill will be enacted or not. We observed that the texts in a bill are not as relevant as the year and month the bill was introduced and the category the bill belongs to. Introduction Policy development and law-making affect millions of people. It is important that there is transparency and openness in this decision making process. The rationale behind this work is to give insights into what happens in the Kenyan parliament and possible factors that might influence the verdict of bills. In Kenya's bicameral parliament (the Senate and National Assembly), the legislative process goes through five phases: the proposed bill is published in the Kenya Gazette 1, the first reading, the second reading, a committee stage in which the appropriate committee meets to consider amendments and, finally, the third and last reading (Goitom, 2017). [1 Kenya Gazette is an official publication of the government of the Republic of Kenya] After these phases, it is signed into law (or not) by the President of Kenya. Previous works have used word vectors and machine learning to estimate the probability that a United States congressional bill will survive the congressional committee and become law, and to predict policy changes in China (Nay, 2017; Tae et al., 2012; Chan & Zhong, 2018). Although data from debates and votes could not be obtained for this work, hand-crafted features and word vector representations of the texts in bills were used to predict whether they would be enacted or not. Data and Methodology 460 Kenyan National Assembly and Senate bills introduced between 2009 and 2019 were downloaded from the Kenya Gazette website 2 and the corresponding metadata scraped. [2 http://kenyalaw.org/kl/index.php?id=9091] Of these bills, 395 were not enacted while only 65 were passed into law. The highest number of bills introduced in a single year is 88, in 2012, and Aden Duale, the Majority Leader of the National Assembly of Kenya under the Jubilee Party, introduced about 24% of the bills retrieved. Data Pre-processing and Feature Engineering To develop our model, we extracted information from the dataset and engineered new features. Some of the features we used are: the category of a bill - inspired by the socio-economic labels in (Akinfaderin & Wahab, 2019); election year - a binary feature that represents whether a bill was
introduced in an election year or not; sponsor 1 - a binary feature for whether a bill was sponsored by Aden Duale or by others; sponsor 2 - a binary feature distinguishing bills sponsored by members of parliament from those sponsored by attorneys general and ministers; the year and month a bill was introduced; the length of the bill title; word vectors - word vectors for the bill titles and texts using 100-dimensional pre-trained GloVe word vectors (Pennington et al., 2014); and the difference between the current year and the year the bill was introduced. Figure 1 represents the distribution of the bills for each category label and the percentage of bills enacted and not enacted for each category. [Figure 1: Percent distribution of bills and their corresponding enacted and not enacted percentages. L1 to L8 represent the bill labels. This is part of the features in our model.] To solve the class imbalance problem caused by the ratio of bills enacted to not enacted, we oversampled the minority class in our training set (bills enacted) using SMOTE: Synthetic Minority Over-sampling Technique (Chawla et al., 2002). Model With a 70:30 train-test split on our data, we employed Logistic Regression and Support Vector Machine models to obtain baseline results, before proceeding to stack these models as base learners with another Logistic Regression classifier as the meta-learner in a bid to obtain better results. Although the accuracy obtained from all three models was very impressive, we focused on other metrics such as the F1-score, precision, recall, AUC (area under the curve) and Brier score to analyse our results. The results are displayed in Table 1. While handling the data imbalance problem improved the base models with a 5% and 11% increase in the precision of the two classes, the results obtained for the enacted class after oversampling remained the same. However, we obtained impressive results for predicting that a bill will not be enacted. For better context, this means that when a new bill is proposed and fed to the final model with all the relevant aforementioned features, the model is 81% accurate in predicting whether the bill will be passed into law or not. The precision and recall are 65% and 71% respectively. By inspecting the model to understand the different features, we observed that the most important features contributing to the final verdict were the month, category and year introduced, respectively (Figure 2). Surprisingly, the title of a bill was a more important feature than the entire text, which is the least important feature. This raises the speculation that not all bills might be read thoroughly by members: since many bills have similar titles, voting parties might treat them as bills previously introduced to the parliament. In addition, further experiments carried out using bag of words as an alternative representation for the textual features only confirmed that the text of a bill is not very pertinent to decision making. This suggests that there might be other factors considered in the parliament that are not accounted for here. Conclusion We presented simple baselines for predicting the chance that a newly introduced bill will be enacted; the models can adequately predict that a bill introduced into the Kenyan parliament will not be enacted (0.86 and 0.41 F1 scores for bills not enacted and bills enacted respectively). Avenues for improving this work include gathering more bills from earlier years, exploring other metadata like parliamentary debates and identifying dynamic structural factors in the behavioral patterns of the Kenyan legislature.
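For concreteness, here is a minimal sketch of the modelling pipeline described above (not the authors' code; X and y below are synthetic stand-ins for the engineered bill features and enacted/not-enacted labels, which are not reproduced here):

```python
# SMOTE oversampling on the training split only, then a stack of Logistic
# Regression and SVM base learners with a Logistic Regression meta-learner.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(460, 105))               # e.g. 5 hand-crafted + 100-d GloVe dims
y = (rng.random(460) < 65 / 460).astype(int)  # ~65 of 460 bills enacted

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)   # 70:30 split

X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance training set

stack = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC(probability=True))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)

print(classification_report(y_te, stack.predict(X_te),
                            target_names=["not enacted", "enacted"]))
```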
1,449
2020-06-09T00:00:00.000
[ "Law", "Political Science", "Computer Science" ]
Dynamics of dissipative coupled spins: decoherence, relaxation and effects of a spin-boson bath We study the reduced dynamics of interacting spins, each coupled to its own bath of bosons. We derive the solution in analytic form in the white-noise limit and analyze the rich behaviors in diverse limits, ranging from weak coupling and/or low temperature to strong coupling and/or high temperature. We also view the single spin as being coupled to a spin-boson environment and consider the regimes in which it is effectively nonlinear and in which it can be regarded as a resonant bosonic environment. Introduction Comprehension of the phenomenon of decoherence in open quantum systems has always attracted much attention, in particular as a prerequisite to understanding the transition from quantum to classical behavior. The dissipative two-state or spin-boson model has been thoroughly studied in wide regions of the parameter space with diverse methods and techniques since the 1980s [1,2]. In the last decade, the subject of decoherence has experienced a renaissance following the growing interest in the field of quantum state manipulation and quantum computation [3]. Any noise source leads to a narrowing of the quantum coherence domain. This entails severe limitations on the ability of coupled qubits to perform quantum logic operations. For this reason, an extensive understanding of the decoherence mechanisms is indispensable. In this work, we focus upon a model which is a generalization of the single spin-boson model to the case of two spins which mutually interact via an Ising-type coupling and are coupled to independent environments made up of bosons. The first analysis of this model relying on the influence functional method was given by Dubé and Stamp [4]. They obtained results for the dynamics in analytic form in restricted regions of the parameter space by omitting certain classes of path contributions and bath correlations. Several other previous studies on the same or related models relied on the master equation and/or perturbative Redfield approach [5,6,7]. Besides the weak-coupling assumption, often the secular approximation [8] is made, which breaks down, however, when the spectrum becomes degenerate. The model allows one, for instance, to study decoherence and relaxation of two coupled qubits [5,7], or the influence of a bistable impurity on the qubit dynamics [9]. The latter may significantly degrade coherence in Josephson phase qubits [10]. Another possible application is the study of coherence effects in coupled molecular magnets [11]. In earlier works, the model has been analyzed in the pure dephasing regime both by the Feynman-Vernon method [12] and the Lindblad approach [13]. Here we extend the work in Ref. [12] beyond the pure dephasing regime and include the full dynamics of the qubit. In particular, we are interested in the competition between decoherence and relaxation to the equilibrium state, focusing on the white-noise regime. We shall derive the exact solution for the reduced density matrix without restriction on the parameters of the model and analyze it in the coherent and incoherent domains and in the crossover regions in between. The model and relevant quantities of the reduced dynamics are introduced in section 2. Section 3 deals with the exact formal solution for the reduced dynamics. In section 4, the path sum is carried out in the white-noise domain without any further approximation, and analytic expressions for the relevant expectation values in Laplace space are presented.
After an overview of the qualitative features of the dynamics in section 5, we present in section 6 explicit expressions for decoherence and relaxation in the various parameter regimes, ranging from low temperature and/or weak coupling to high temperature and/or strong coupling. Finally, we study in section 7 the influence of a nonlinear spin-boson environment on the second spin in the various limits. We demonstrate that in the weak-coupling limit it behaves as a bosonic (linear) bath with a resonant spectral structure. Model We consider two two-state systems which are coupled to each other via an Ising-type coupling and to independent bosonic environments. In pseudospin representation, we choose the generalized spin-boson Hamiltonian given in eq. (1) (we use units where ℏ = kB = 1). In the basis formed by the localized eigenstates |R> and |L> of σz and τz, respectively, ∆1 and ∆2 represent the tunneling couplings between the localized states, and the coupling term −(1/2)vσzτz acts as a mutual bias energy of strength v. The collective bath modes Xζ(t) = Σα cζ,α [bζ,α(t) + b†ζ,α(t)] (ζ = 1, 2) represent fluctuating bias forces. The Hamiltonian is very rich in content and may model diverse physical situations. It may describe two coupled qubits, or a qubit σ in contact with a complex environment formed by a bistable dissipative impurity τ. Other possible realizations are coupled molecular magnets, of which the low-energy states can be viewed as a spin [11]. For the model (1), all effects of the environments are captured by the power spectrum of the collective bath modes, with the spectral density of the coupling given in eq. (3) [1,2]; the second form in that expression represents the Ohmic case with a high-frequency cut-off ωc. Alternatively, one may choose that the two spins are coupled to a common bath [7]. Here we study the effects of independent environments; this case is realistic in most physical systems of actual interest. The density matrix of a single spin has four matrix elements, the two populations, which we shall label as RR ≡ 1 and LL ≡ 3, and the two coherences, with labels LR ≡ 2 and RL ≡ 4. The two-spin density matrix has 16 matrix elements ρn,m(t). We choose for convenience that the first (second) index refers to the states n = 1, · · · , 4 (m = 1, · · · , 4) of the σ-spin (τ-spin). The matrix elements can be expressed in terms of expectation values of 15 operators, ⟨σi ⊗ 1⟩t = ⟨σi⟩t, ⟨1 ⊗ τi⟩t = ⟨τi⟩t and ⟨σi ⊗ τj⟩t = ⟨σiτj⟩t (i = 1, 2, 3 and j = 1, 2, 3). The 4 pure populations may then be written in terms of these expectation values; corresponding expressions hold for the 4 pure coherences and the 8 hybrid states. Here we are predominantly interested in the populations. Throughout, we will choose that the reduced system starts out from the initial state ρ1,1(t = 0) = 1 while the heat reservoirs are in thermal equilibrium at temperature T. In the absence of the environment, the Hamiltonian H = H0 can easily be transformed into diagonal form; the resulting eigenfrequencies obey the Vieta relations. The Liouville equations Ẇj(t) = −i[H, Wj(t)] (j = 1, · · · , 15), where the set {Wj(t)} represents the above 15 operators, yield 15 coupled equations. These are conveniently solved in Laplace space. Here we are interested in the evolution of the two-spin system without restricting ourselves to weak damping. Therefore, we refrain from employing the perturbative Redfield approach. Rather, we calculate the reduced dynamics with use of the Feynman-Vernon influence functional method.
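Since eq. (1) is not reproduced in this excerpt, the following minimal numpy sketch (our own construction; the −(∆/2)σx form of the tunneling terms and the parameter values are assumptions on our part) builds the bare two-spin part of the Hamiltonian described above and extracts its transition frequencies:

```python
# Bare two-spin Hamiltonian H0 = -(D1/2) sx@1 - (D2/2) 1@tx - (v/2) sz@tz
# and its spectrum; level differences give the eigenfrequencies Omega, delta.
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

D1, D2, v = 1.0, 1.5, 0.8       # illustrative parameter values

H0 = (-0.5 * D1 * np.kron(sx, I2)
      - 0.5 * D2 * np.kron(I2, sx)
      - 0.5 * v * np.kron(sz, sz))

E = np.sort(np.linalg.eigvalsh(H0))
print("levels:", np.round(E, 4))
freqs = sorted({round(abs(Ei - Ej), 6) for i, Ei in enumerate(E) for Ej in E[:i]})
print("transition frequencies:", freqs)
```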
We show that the solution is available in analytic form in the white-noise limit for general parameters ∆1, ∆2, v and T. Formal solution for the reduced density matrix Within the Feynman-Vernon method, the exact formal expression for the RDM of the two-spin system is the quadruple path integral (11) with appropriately chosen boundary values for the spin paths. Here, each of the paths σ(t′), σ′(t′), τ(t′), τ′(t′) starts out from the localized state |R> at time zero, and they end up at time t in the states fixed by the matrix element under study. In eq. (11), the first factor is the amplitude for the free spin σ to follow the path σ(t′), the functional B[σ, σ′; τ, τ′] represents the coupling of the two spins (see below), and the functional F[σ, σ′; τ, τ′] introduces the environmental influences, factorizing for uncorrelated baths. Here we have introduced symmetric and antisymmetric spin paths. The correlator Qζ(t) = Q′ζ(t) + iQ″ζ(t) is the second integral of the force autocorrelation function ⟨Xζ(t)Xζ(0)⟩β (see eq. (2)). In the Ohmic scaling limit, we have Qζ(t) = 2Kζ ln[(βωc/π) sinh(π|t|/β)] + iπKζ sgn(t), where Kζ is the usual dimensionless Ohmic coupling strength for the spin ζ, and β = 1/T is the inverse temperature. To handle the quadruple path integral (11), we follow the procedure for the single spin-boson problem [1,2] and write it as an integral over two paths, one for each spin. Each such path visits the diagonal "sojourn" states and the off-diagonal "blip" states of the respective spin. A path which starts and ends in a sojourn state must contain an even number of transitions, with amplitude ∓i∆ζ/2 for each flip of spin ζ. The flips occur at times tj for spin 1 and at times sj for spin 2. Upon labeling the sojourn and blip states with charges η1,j, ξ1,j (for spin 1) and η2,j, ξ2,j (for spin 2), each with values ±1, the paths with 2n1 and 2n2 transitions, respectively, may be written as in eq. (13). Upon introducing the notation Qζ;j,k = Qζ(tj − tk), we may write the bath correlations between the blip pair {j, k} of spin ζ in the compact form Λζ;j,k, so that the influence functional for the paths (13) takes the form (20), in which the first and second terms represent the intrablip and interblip correlations, respectively. The phase term is specific to the Ohmic scaling limit and represents correlations of the sojourns with their subsequent blips. The sum over all paths now means (i) to sum over all possible intermediate sojourn and blip states of the two spins that the paths with a given number of transitions can visit, (ii) to integrate over the (for each spin) time-ordered jumps of these paths, and (iii) to sum over the possible number of transitions the two spins can take. Combination of the above expressions yields the exact formal solution for the dynamics of the RDM of the two-spin system in the Ohmic scaling limit. Evidently, because of the nonconvolutive form of the bath correlations in the influence functional (20), the path sum cannot be performed in analytic form. Alternatively, one may recast the exact formal series expression for the populations in the form of generalized master equations in which the kernels, by definition, are the irreducible components of path segments with diagonal initial and final states [2]. In the general case, the kernels are given by an infinite series in ∆1 and ∆2, with the time integrals in each summand being again in nonconvolutive form. Additional difficulties in performing the path sum (21) arise from the spin-spin coupling (22).
Exact solution in analytic form in the white-noise limit In the white-noise limit of eq. (14), the bath correlation function takes the form Qζ(t) = 2Kζ ln(βωc/2π) + ϑζ|t| + iπKζ sgn(t) (24), where ϑζ = 2πKζT is a scaled thermal energy. [§ This expression emerges directly from eq. (14) in the high-temperature or long-time limit t/β ≫ 1.] The first term in eq. (24) leads to an adiabatic (Franck-Condon-type) renormalization factor made up by modes in the frequency range 2πT < ω < ωc. It is natural to assimilate this term, together with the phase term, into an effective temperature-dependent tunneling matrix element ∆̄ζ, built on the standard renormalized tunneling matrix element ∆r,ζ. All dynamical effects of the environmental coupling are captured by the second term ϑζ|t| in eq. (24). From this we see that the weight of the thermal energy relative to the system's energies ∆̄ζ and v is assessed by the scaled thermal energy ϑζ. Based on experience with the single spin-boson system, we should expect that the form (24) is a remarkably good approximation in the parameter range given in [2], and this is indeed corroborated by our study. For a single unbiased spin with Ohmic damping K ≪ 1, the coherent-incoherent "phase" transition is at temperature T = T* with T* = ∆r/(πK) [2]. Therefore we should expect that, for Kζ ≪ 1, the white-noise form (24) is valid not only in the incoherent regime but also in a sizeable domain of the coherent regime. This is confirmed subsequently. For the form (24) of Qζ(t), the interblip correlations cancel out exactly in eq. (19), Λζ;j,k = 0. As a result, each term of the infinite series for the RDM becomes a convolution. This makes the path sum tractable. We now demonstrate, taking ⟨σz⟩t and ⟨σzτz⟩t as examples, that the path sum for the Laplace transform of the RDM can be carried out exactly in analytic form. This is achieved by first calculating the kernels and then summing up the respective geometrical series of these objects. The expectations ⟨σz⟩t and ⟨τz⟩t By definition, the kernels represent irreducible path segments which interpolate between pure sojourn states. Irreducibility means that these segments cannot be separated into uncorrelated pieces without removing bath correlations. The analysis gives that every contribution to the kernel K(λ) of ⟨σz⟩(λ) displays initially and finally a transition of spin σ, with any even number of hops of spin τ at intermediate times, as shown in the diagrams of Fig. 1. [§ The term Υζ accounts for the deviation of the actual high-frequency behavior of Gζ(ω) from the exponential cut-off form in eq. (3) [2].] There are no other contributions: for all other irreducible diagrams one may think of, e.g., those where either the first or the last flip, or both of them, are flips of spin τ, the respective contributions from the two different final states of spin τ cancel each other. The sum over all spin states of order (1, n) yields expressions in which we have taken into account that, for the white-noise form (24), a correlation of bath ζ stretching over an interval between neighboring hops effectively leads to a shift of the Laplace variable in the respective time integral, λ → λ + ϑζ. It is convenient to split the kernels into the contributions which are even and odd in the coupling v, K(λ) = K⁽⁺⁾(λ) + K⁽⁻⁾(λ). Paths which visit a pure sojourn state at intermediate times yield reducible contributions.
Taking into account all possibilities of such visits yields a geometrical series in the kernel K⁽⁺⁾(λ), while K⁽⁻⁾(λ) occurs only once, as the initial irreducible contribution. Thus we get for ⟨σz⟩(λ) the concise form (31). Algebraic manipulation gives ⟨σz⟩(λ) in the form of a simple fraction with denominator and numerator in the form of polynomials of degree four, eq. (32), in which we have introduced the eigenfrequencies Ω, δ of the undamped coupled two-spin system. The bar denotes adiabatic renormalization in eq. (7) according to eq. (25). The pole at λ = 0 in eq. (32) yields the equilibrium value ⟨σz⟩eq. Hence ⟨σz⟩eq is negligibly small for ∆̄2 ≠ 0, while, as ∆̄2 → 0, it takes the proper (white-noise) equilibrium value v/(2T) of the single biased spin-boson system. The dynamical poles λj (j = 1, · · · , 4) are given by a quartic equation with real coefficients. Upon collecting the various pole contributions, we get ⟨σz⟩t in the time domain as a sum of exponential pole contributions. Evidently, the expectation ⟨τz⟩(λ) takes a similar form; the polynomials N2(λ) and D2(λ) follow from the expressions (33) and (34) by interchange of the indices 1 and 2. [Fig. 2 caption fragment: ... 1(λ) (right). The intervals are dressed by self-energy contributions of spin σ (black circle) and spin τ (black square), as sketched in Fig. 3.] The expectation ⟨σzτz⟩t Consider first contributions to the kernel of ⟨σzτz⟩(λ) in which the first and last flip are made by spin σ, as sketched in Fig. 2. In the intervals of the bare diagrams, either spin σ or spin τ or both stay off-diagonal. Every interval in which spin ζ dwells in a sojourn state is dressed by self-energy contributions, schematically given for spin σ (circle) and spin τ (square) in Fig. 3. Diagram 2(a) yields an expression involving the functions αζ(λ) (ζ = 1, 2). The nested diagram in Fig. 2(b) produces the additional factor α2(λ)α1(λ), and the higher-order nested diagrams of the type sketched in Fig. 2 fall into a geometrical series. All these terms are readily summed up to the contribution A1(λ). Similarly, we find that diagrams (a) and (b) in Fig. 4 yield expressions which, with all higher-order nested diagrams of this type added, again form a geometrical series in α1(λ)α2(λ), summing to B1(λ). Clearly, we must add those contributions resulting from the terms A1(λ) and B1(λ) by interchange of the spins σ and τ. The analysis shows that there are no other contributions. Again, we split the kernel C(λ) into the parts which are even and odd in the coupling v. These expressions represent the entirety of irreducible path segments. Next, we observe that the sum of two-spin paths with any number of interim visits of pure sojourn states yields a geometrical series of these objects. In the part which is odd in v, the first irreducible path section is again described by the kernel C⁽⁻⁾(λ). Thus we get ⟨σzτz⟩(λ), whose second form is a simple fraction with polynomials N(λ) and D(λ) of sixth order. The odd powers of the pole function can be removed with a shift: putting D[λ = x − (ϑ1 + ϑ2)/2] ≡ D̃(x), we obtain a polynomial with even powers. Thus we have in the time domain a sum over the pole contributions, where the {λj} are the zeros of D(λ), together with the equilibrium value ⟨σzτz⟩eq. The expressions (32)-(37) and (47)-(50) are the main results of this work. They represent the exact analytical solutions for ⟨σz⟩(λ), ⟨τz⟩(λ), and ⟨σzτz⟩(λ) in the white-noise limit, for general coupling v and general effective reservoir couplings ϑ1 and ϑ2. Except for use of the form (24), no other approximation has been made.
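To illustrate the final step from Laplace space to the time domain, the sketch below (our own code; the polynomial coefficients are hypothetical placeholders, since the paper's N1 and D1 are not reproduced in this excerpt) finds the four dynamical poles as roots of the quartic denominator and sums the residues:

```python
# <sigma_z>(lambda) = N1(lambda) / (lambda * D1(lambda)):
# pole at lambda = 0 gives the equilibrium value, the roots of D1 the dynamics.
import numpy as np

N1 = np.poly1d([0.2, 1.0, 0.9, 2.0, 0.05])   # placeholder quartic numerator
D1 = np.poly1d([1.0, 1.2, 2.5, 1.1, 1.3])    # placeholder quartic pole polynomial

poles = D1.roots                              # the four dynamical poles lambda_j
equilibrium = N1(0.0) / D1(0.0)               # residue of the pole at lambda = 0

def sigma_z(t):
    """<sigma_z>_t = <sigma_z>_eq + sum_j Res_j exp(lambda_j t)."""
    total = equilibrium
    for lam in poles:
        # residue of N1/(lambda*D1) at a simple root lam of D1
        res = N1(lam) / (lam * np.polyval(np.polyder(D1), lam))
        total = total + res * np.exp(lam * t)
    return total.real

print([round(float(sigma_z(t)), 4) for t in np.linspace(0.0, 20.0, 5)])
# Complex-conjugate pole pairs give damped oscillations; real poles give
# incoherent relaxation toward <sigma_z>_eq, matching the text's description.
```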
We remark that for all other initial and final states of the RDM we would find the same pole functions (34) and (49); only the numerator function would be different. Qualitative features The behaviors of the four dynamical poles of ⟨σz⟩t and the six dynamical poles of ⟨σzτz⟩t, and the respective amplitudes, are quite multifarious. In this section, we sketch the characteristics for the symmetric system, ∆̄1 = ∆̄2 ≡ ∆̄ and ϑ1 = ϑ2 ≡ ϑ. ⟨σz⟩t: In the coupling range v < vcr = ∆̄/√2, there are three crossover temperatures, denoted by ϑ*0, ϑ*1, and ϑ*2 (see Figs. 5 and 6). In the regime ϑ < ϑ*1 the dynamics is coherent and described by a superposition of two damped oscillations. For ϑ < ϑ*0, the oscillations have different frequencies and the same damping rate, and the amplitudes are comparable in magnitude. On the other hand, in the range ϑ*0 < ϑ < ϑ*1, they have the same frequency but different decrements, and the amplitude belonging to the larger decrement is negligibly small. In the temperature regime ϑ > ϑ*1, the dynamics is incoherent. In the regime ϑ*1 < ϑ < ϑ*2, the four poles are real, and the two smallest rates have the largest amplitudes and dominate the relaxation process. In the so-called Kondo regime ϑ > ϑ*2, the dominant pole is real and approaches −∆̄²/ϑ, and its residuum goes to 1 − ⟨σz⟩eq as temperature is increased. The other real pole takes the value −2ϑ, while its residuum drops to zero. There is also a damped oscillation whose frequency and rate approach asymptotically v and ϑ, but its amplitude becomes negligibly small. The phenomenon that, in the Kondo regime for K < 1/2, incoherent relaxation slows down with increasing temperature is already well known in the single spin-boson problem [2]. Fig. 7 shows the transition from coherent to incoherent dynamics as ϑ is raised. One can also see that at high ϑ the effective damping decreases with increasing ϑ. For v > vcr, there is only one crossover, at ϑ*. It separates the regime with two pairs of complex conjugate poles from the regime with one pair of complex conjugate poles and two real poles. Above ϑ*, the relaxation is governed by the Kondo pole. ⟨σzτz⟩t: The characteristic behavior of the poles as a function of the scaled temperature ϑ is shown in Fig. 8. At low ϑ, all poles contribute to the dynamics. In the Kondo regime ϑ ≳ 3∆̄, the leading pole behaves as −2∆̄²/ϑ, and the amplitudes of the other contributions are negligibly small. 6. Dynamics in the various parameter regimes for differing spins 6.1. Low temperature behavior Below the first crossover temperature, Ω̄± ≲ T ≤ T*0, the real parts of all poles vary linearly with T. In detail, the poles and amplitudes are as follows. 6.1.1. ⟨σz⟩t: There is a superposition of two damped oscillations with explicitly known frequencies and amplitudes. 6.1.2. ⟨σzτz⟩t: There is a superposition of two damped oscillations and two contributions describing incoherent relaxation towards the equilibrium value ⟨σzτz⟩eq, where terms of order O(ϑ²) are disregarded. In leading order, the amplitudes are given by eq. (58). 6.2. The regimes of large coupling and/or high temperature When the coupling v and/or the scaled temperatures ϑ1,2 are large compared to the other frequencies, the amplitudes of three (five) pole contributions to ⟨σz⟩t (⟨σzτz⟩t) are negligibly small, and among the dynamical poles only the real pole with smallest modulus is relevant. Hence the two spins essentially behave as a single spin which relaxes incoherently to the equilibrium state according to a single-exponential law. 6.2.1.
⟨σz⟩t: The relaxation rate γσ is found from the pole equation D1(λ) = 0 with the form (34). In the parameter regime ϑ1 ≫ v, ∆̄2 it reduces to γσ = ∆̄1²/ϑ1. In this regime, ⟨σz⟩t is independent of the coupling v and hence independent of the dynamics of the τ-spin. The temperature dependence γσ ∝ T^(2K1−1) distinguishes the so-called Kondo regime, in which, for K1 < 1/2, the relaxation dynamics slows down as temperature is increased. On the other hand, when v ≫ ∆̄1,2, the two spins are locked together [4], and the effective tunneling matrix element is δ̄ = ∆̄1∆̄2/v, as follows from (7) with (8). We then get from eq. (60) the limiting expressions for small and large ϑ1: the former is the relaxation rate of the biased single spin-boson system at low ϑ1, and the latter describes Kondo-like joint relaxation of the locked spins. ⟨σzτz⟩t: In the incoherent regime, the relaxation rate γστ of the effective single spin receives rate contributions from both the σ- and the τ-spin, as if these were independent biased spins in contact with their own heat reservoirs. In the large-coupling limit, v ≫ ∆̄1,2, ϑ1,2, the relaxation rate is found as a sum of individual contributions, which are single-spin rates in the large-bias regime. In the high temperature limit ϑ1,2 ≫ v, ∆̄1,2, on the other hand, both rate contributions are Kondo-like. Consider next the regime ϑ1 ≫ ∆̄1, ∆̄2, v, in which spin σ behaves Kondo-like, as in eq. (61). Hence the dynamics of the σ-spin is slow compared to that of the τ-spin. Thus we should expect that ⟨σzτz⟩t approaches the dynamics of the biased single spin-boson case as ϑ1 is increased. Taking into account terms of linear order in γσ in the pole equation, the expression (47) with (48) and (49) assumes a correspondingly simplified form. Indeed, in the limit γσ → 0, this form reduces just to the analytic expression for ⟨τz⟩(λ) of the biased single spin-boson system in the white-noise limit [2]. 7. Dynamics of a spin coupled to a spin-boson environment Let us now view spin τ with reservoir 2 as an environment for spin σ. This complex environment is in general non-Gaussian and non-Markovian [14]. Recently, the same model has been studied numerically using a Markovian master equation approach [15]. To proceed, we first note that in the absence of bath 1, ϑ1 = 0, the pole equation D1(λ) = 0 is still of fourth order; there is no reduction in the general case. In Fig. 9 we show plots of the four poles as functions of ϑ2 for a particular set of parameters. [Figure 9. ⟨σz⟩ with spin-boson environment: real (a) and imaginary (b) parts of λj. At low ϑ2, ⟨σz⟩t is a superposition of two damped oscillations. At high ϑ2, there is one damped oscillation and one relevant relaxation contribution. The parameters are v = 0.8, ∆̄1 = 1, ∆̄2 = 1.5.] 7.1. High temperature limit Simplification occurs, however, when ϑ2 is very large compared to the other frequencies. In this regime, the kernel (30) reduces to a form in which γτ = ∆̄2²/ϑ2 is the relaxation rate of spin τ in the Kondo regime. With this high-temperature expression for the kernel, the quantity ⟨σz⟩(λ) is found in the closed form (67). This expression describes the dynamics of spin σ coupled to a spin-boson environment, where the latter is in the Kondo regime. To leading order in γτ, the poles of the expression (67) behave as shown in Fig. 9 at large ϑ2. The expressions (67)-(69) may now be compared with the corresponding ones of a fictive single biased spin-boson system with parameters ∆̄1 and v in the white-noise limit at scaled temperature θ̃.
The part that is symmetric in the bias is given by eq. (70). We see that with the identification θ̃ = γτ = ∆̄2²/ϑ2 the expressions (67) and (70) are quite similar. Observe, however, that the damping rate of the oscillation is somewhat different because of the term 2θ̃λ² in eq. (70) instead of θ̃λ² in eq. (67) [2]. Most importantly and interestingly, in this correspondence temperature maps onto its inverse. 7.2. Linear response limit: spin-boson environment as a structured bosonic bath We should expect that, in the weak-coupling limit, the spin-boson environment is Gaussian and can be represented by a resonant power spectrum of a bath of bosons. The Gaussian approximation of the spin-boson environment is found by matching the power spectrum of the coupling of the σ-spin to the spin-boson environment [14] (with normalization as in eq. (2)) with that of a harmonic oscillator bath. In the white-noise limit of an unbiased spin, the symmetrized equilibrium correlation function Re⟨τz(t)τz(0)⟩ coincides with the expectation ⟨τz⟩t. Thus we obtain Sτ(ω) = (2/π) v² Re⟨τz⟩(λ = −iω) with ⟨τz⟩(λ) = (λ + ϑ2)/[λ(λ + ϑ2) + ∆̄2²]. (72) The resulting power spectrum is that of a structured bath of bosons with a resonance of width ϑ2 at frequency ω = ∆̄2. Due to the coupling to the spin-boson environment, the spin σ performs a damped oscillation, ⟨σz⟩t = cos(∆̄1t) e^(−γdec t). Upon calculating the decoherence rate γdec to order v², i.e., the so-called one-boson-exchange contribution of the effective boson bath, we obtain a closed expression. The analysis is completed by observing that this form also emerges upon calculating γdec directly from the pole equation D1(λ) = 0 with the form (34). Conclusions We have studied the dynamics of a spin or qubit σ coupled to another spin, which could be, for instance, another qubit, a bistable impurity, or a measuring device. We have solved the dynamics exactly for white-noise reservoir couplings, and we have studied the rich behaviors of the dynamics in diverse limits, ranging from weak coupling and/or low temperature to strong coupling and/or high temperature. We have also analyzed the effects of a spin-boson environment on the spin dynamics in the Gaussian and non-Gaussian domains. This paper has not attempted to perform applications to already available experiments; instead, we have tried to make some general points on complementary regimes and on the crossovers in between. One possible simple generalization beyond the white-noise limit would be to replace the white-noise bath correlations, in time intervals in which the Laplace variable is irrelevant, by the full quantum noise correlation. The advantage would be twofold: (i) the noise integral is known in analytic form for the Ohmic correlation function (14) [2], and (ii) with this substitution the algebraic form of the pole equation and residua would be left unchanged. One further generalization of the Hamiltonian (1) is an applied bias acting on one or both of the spins, e.g., of the form ε1σz and ε2τz. This important extra ingredient could be taken into account exactly in the white-noise regime, and would lead to additional shifts of the Laplace variable in all blip states of the σ- and τ-spin. Then one would end up with expressions of the form (32) and (47) with polynomials augmented by bias terms. This extension will be discussed elsewhere.
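Eq. (72) is explicit enough to evaluate numerically. Below is a minimal sketch (our own code; the parameter values are those quoted in the Fig. 9 caption) confirming the resonance at ω ≈ ∆̄2 with width of order ϑ2:

```python
# Structured bath spectrum from eq. (72):
# S_tau(omega) = (2/pi) v^2 Re <tau_z>(lambda = -i omega),
# <tau_z>(lambda) = (lambda + theta2) / (lambda*(lambda + theta2) + Delta2**2).
import numpy as np

v, Delta2, theta2 = 0.8, 1.5, 0.1

def S_tau(omega):
    lam = -1j * omega
    tau_z = (lam + theta2) / (lam * (lam + theta2) + Delta2**2)
    return (2.0 / np.pi) * v**2 * tau_z.real

w = np.linspace(0.01, 3.0, 3000)
S = S_tau(w)
w_peak = w[np.argmax(S)]
half = S >= S.max() / 2
fwhm = w[half][-1] - w[half][0]
print(f"peak at omega = {w_peak:.3f} (Delta2 = {Delta2}); "
      f"FWHM = {fwhm:.3f} (~ theta2 = {theta2})")
```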
Finally, extension of the analysis of the dynamics to the regime T ≲ Ω̄± requires reverting to the original expression (2) and computing its effect perturbatively in the one-boson-exchange approximation. One then finds, e.g., that the actual equilibrium value ⟨σzτz⟩eq for Kζ ≪ 1 takes a modified form, which reduces for T > Ω̄± to the previous form (52) found in the white-noise regime. The corresponding extension of the analysis, and of the results given in Subsection 6.1, to the domain T < Ω̄± will be reported elsewhere.
7,271.6
2008-11-01T00:00:00.000
[ "Physics" ]
Linguistic Landscape of Languages Used in Signboards in Larkana, Sindh The present study investigates the use of local, official and national languages, and the incessant use of English in a localized Roman Sindhi script. Linguistic landscape is the study of written language on public road signs, advertisements, billboards and shop fronts. Bilingualism is very common on the public signboards of Larkana city, where English is used as a market language. Many local people consider it a foreign language, yet English is used on almost every local and public signboard of Larkana city. The study used semi-structured interviews with different businesspeople, shopkeepers and owners of institutions. The results show that Romanized Sindhi ("Sindhlish") and bilingualism dominate the linguistic landscape of Larkana. In comparison with English, the local/native languages of this particular area of Larkana city appear to be missing or least used on signboards. Introduction Linguistically, Pakistan is a vast country with more than 60 spoken languages, six of them major. Since the independence of Pakistan, Urdu as the national language and English as the official language have enjoyed great privilege and high status in Pakistan. The influence of a language can be measured in many forms, and the linguistic landscape is one important means of measuring the impact of a written language on public and local signboards. The representation of written language on government boards and buildings, academic institutes, hotels and restaurants, marts and bakeries, or local shops is known as the linguistic landscape. The study of linguistic landscape began in the 1970s and has gradually spread into sociolinguistics, the social sciences, sociology and media studies. A major contribution to LL was made by the seminal work of Landry and Bourhis (1997). Hult (2009) states that the basic role of linguistic landscape is the visible representation of languages in local and public spaces and areas. Linguistic landscape functions as an important instrument connecting us with our everyday places like streets, parks, shops and buildings (Ben-Rafael, 2008). It holds a bond among groups, nations and communities. This study addresses a research gap and serves the field of linguistic landscape in the context of Pakistan. Akram (2007) maintains that Pakistan treats English as its official language; from this notion it is obvious that English preserves a significant status, being used in Pakistan's law and courts, educational policies, science, technology, media, etc.
The dominance of English in everything, even on the public and local signboards of Pakistan's cities, displaces the local and indigenous languages of Pakistan. As Shohamy (2006) maintains, the presence or absence of a language on public places and local signboards reflects the importance of certain languages. This study examines the role of official, national and local/indigenous languages on the public and local signboards of Pakistan in the context of Larkana city. Literature Review This section presents an overview of studies on linguistic landscape and also highlights some studies of linguistic landscape in the context of Pakistan. Landry and Bourhis (1997) stated that linguistic landscape means the language of public signage on every kind of signboard, such as street names, shop advertisements, governmental or private buildings, and advertising billboards. Blommaert (2014) declared that LL offers us an insight that lets us behold signs as indexes pointing towards the social, cultural, ideological and material context of a society. The visibility of signs represents not only the story of a language but also a cultural, social and political story; through LL study we see signs as indexes that point. Each sign indicates the production and surroundings of the particular area it has influenced (Blommaert, 2014). Yavari (2012) investigated linguistic landscape policies in a comparative case study of Linkoping University and ETH Zurich; her study contributed to knowing how language policies are realized at two different universities. With the help of LL, researchers can easily estimate the number of languages used at a university. The study identified the distribution of languages in the linguistic landscape as maintained through top-down and bottom-up signs of the universities. The results showed similarities in the use of languages at both universities: at Linkoping University Swedish and English were the dominant languages, and at ETH Zurich German and English; at both universities the national language was dominant. Ying (2019) researched three Chinese schools to examine the beliefs of EFL learners about the pedagogical value of linguistic landscape. The study examined the attitudes of learners belonging to three groups: high school, undergraduate and graduate students. A mixed method was applied to survey the schools, and the findings revealed very positive attitudes of Chinese students towards English in the linguistic landscape, although learners at different levels demanded more work on LL for better learning. Alfaifi (2015) investigated the LL of two areas of Khamis Mushait, Saudi Arabia: a tourist destination (TD) and a commercial zone (CZ). His study aimed to examine the use of English and Arabic in the two locations. More than 200 photographs were taken in the two areas, but only 150 photos were analyzed, through a quantitative method similar to Ben-Rafael (2006) and Backhaus (2007). The findings of the study showed that Arabic is more dominant in the tourist destination (TD) and that, due to globalization, English has a great impact on the commercial zone (CZ); still, Arabic seems more dominant in the CZ as well. The study of linguistic landscape in Pakistan Manan and Channa (2017) investigated "The glocalization of English in the Pakistan linguistic landscape."
The study focused on a particular area of Pakistan, Quetta; the authors explored the use of languages and the ubiquitous use of English in a localized non-Roman script. Although English has remained a foreign language for many parts of Pakistan, it covers most of the signboards of Pakistan. The researchers used the framework of Gorter and Cenoz (2008: 343) for the analysis of the linguistic landscape. They used multiple data collection instruments, such as interviews with businesspeople and photographs of signboards and billboards. The results showed the great impact of Englishized Urdu and Urduized English on the linguistic landscape of Quetta, Pakistan, where the impact of the local and indigenous languages was missing and absent from the signboards. Kirk (2018) examined the interplay between multilingualism, orthographic shifts, the urban built environment, and cinema-going practices in Lahore, Pakistan. As Urdu is the national language and English the international and official language of Pakistan, these languages have great influence in education and are considered prestige languages in Lahore, while the local language, Punjabi, is considered a language of rustic crudity. The linguistic hierarchy is clearly displayed in the cinemas of Lahore, which show the attitudes of people towards English, Urdu and local Punjabi films. English and Urdu films are shown in elite and posh cinema theatres, like all the new multiplexes, while Punjabi and old Urdu films are shown in old theatres for working-class audiences. Apart from cinematic trends, the study aimed to find out the influence of languages on the signboards and advertisement pamphlets of those areas: whether the signs used Urdu, English, Punjabi, Romanized Urdu or Punjabi, or any other language, and what kind of information is delivered in which language and why. Hence the study reveals the relationship between the public and the built environment of cinemas, as well as the role of languages in showing the attitudes of the public and their relation to social spaces, such as the enjoyment of films. Research Questions For a better understanding of the research problem, the present research focuses on the following questions: 1. What is the role of languages in the designs of local and public signboards of Larkana city, given the dominance of the English language? 2. What is the impact of the linguistic landscape on the local residents of Larkana city? Research Methodology In line with the nature of the research objectives and questions, the researcher used a qualitative approach in the present study. Creswell (1994) maintains that qualitative research offers natural phenomena to the researcher, helping to access and interact with any participant individually and easily. Qualitative research encompasses interviews (unstructured, semi-structured and group interviews), observations, documentary material, etc. The present study used semi-structured interviews. Dörnyei (2007) maintains that semi-structured interviews are adopted to bridge the gap between the extremes of structured and unstructured interviews. 3.1 Sampling and Participants of the study The present research used purposive sampling to select the participants for the study.
Berg (2001) maintains that purposive sampling helps the researcher select participants according to their knowledge and experience, which serves the purpose of the study. The participants were local shopkeepers and local residents holding the position of owner of a restaurant, bakery, mart or academic institution. All the participants were chosen from a well-off area of Larkana known as Sachal Colony. Interviews were conducted with a total of 5 local residents of the colony, aged from 23 to 50 years. All the participants were experienced in their businesses, and each had a different experience of marketing corresponding to the differences in their ages. Data Collection Instruments The data for the present study were collected through a single source: semi-structured interviews. An interview protocol was designed for the ease and confidentiality of the participants. The interview protocol form has two sections: in section (a), security and confidentiality were assured to the participants, and section (b) contained the questions related to the influence of the linguistic landscape and its importance for the businesses of local people in the area of Larkana. The interviews were conducted after obtaining the participants' consent. McNamara (1999) maintains that interviews are a good source for collecting the exact experiences of participants. A total of seven questions were asked of each participant individually. All the interviews were recorded in written form and on a Vivo V20 smartphone. All the responses of the participants were analyzed through thematic analysis. Findings and Discussion Backhaus (2007) maintains that the most significant role of linguistic landscape studies is to indicate the representative value of languages on the signboards of public spaces. Accordingly, all five participants' interviews were analyzed and their responses organized into themes. The shopkeepers' and businesspeople's views show a preference for the official language: to accommodate diversity, for better business and marketing, to keep up with modernization, and to influence the local masses of Larkana city. Language plays a very important role in everyday life, and people have an emotional attachment to languages. Language represents culture, and cultures are a significant asset of any nation. But in today's modern world, multilingualism has become very common. The use of a single language is found only in particular countries and nations; multilingualism is dominant in many countries. Multilingualism reflects widespread interaction among communities and is bound up with globalization. To understand the concept of the linguistic landscape of Larkana city in the context of sociolinguistics, it is necessary to understand the preferences for languages on the signboards of Larkana. As Pakistan is a multilingual country, every city of Pakistan has multilingual discourse. Likewise, Larkana city is multilingual, and the signboards also bear the impact of multilingualism. Most of the participants agreed that the use of multiple languages on signboards is very common in Larkana. They said that they know the importance of the official language in Pakistan. Using the official language for their business or on advertisements for their shops increases their business prospects and attracts local people towards their business.
Most of the participants agreed that multilingual or bilingual signboards always help them earn a good profit through good publicity; therefore, they always prefer multilingual signboards over monolingual ones. Signboards are very important for any business, and the language on a signboard is equally important to the growth of that business. As one participant put it: "We know the worth of languages in connecting with people and advertising on signboards. The pure local language of our area does not give a standard name to our shops, institutes, marts, bakeries, or hotels, but mixing the local and official languages gives our businesses a standardized name. To compete in a modernizing, competitive world, the official language is necessary in business and the marketplace. Because English is a lingua franca, it helps us compete with the world outside." Most of the participants favoured linguistic diversity; they believe that the diverse or multiple usage of languages creates ease and connectivity among the common masses. They said that Larkana is a multilingual city whose people have multiple mother languages. In their words: "When people see their mother languages on signboards, they feel more emotionally connected and are more attracted towards our shops or any advertised business. Therefore, we use multiple languages on signboards. We know that people like diversity, and we know the worth of languages — the worth of the official, local, and national languages. For the sake of ease, we business people all prefer multiple languages on the signboards; we also believe in standardization and globalization, and we prefer to make things easy for common people, so we opt for multilingualism on the signboards. The use of the official language also increases profit and speeds up publicity among the public." The 21st century is the era of modernism, and modernism has transformed competitive marketing strategies in every kind of business; language plays a vital role in the marketing and publicity of any business. All the participants agreed about the influence and impact of the official language on business: "We choose the language for signboards after complete planning. If we need to increase business, sales, and marketing in anything, we use the official language on the signboards of our shops, schools, restaurants, marts, and bakeries, because this language attracts more customers and helps us profit financially. In today's modern era, most people know the official language of Pakistan; therefore, we make the signboards bilingual, which sharpens competition in the market and leaves a great impression on a diverse public." Undoubtedly, multilingualism has spread everywhere globally, and linguistic landscape studies provide a clear window onto this linguistic phenomenon, helping to investigate the deep connection between people and languages better than any other method of study. Conclusion This study is designed to contribute to the field of linguistic landscape and, most importantly, to do so in the context of Pakistan. The findings of the study indicate that the local shopkeepers and businesspeople of Larkana city prefer multilingual or bilingual signboards over monolingual ones and avoid using the local language alone on signboards. Very few signboards in the Sachal Colony area were found in the pure local language.
Meanwhile, English, with its status as the official language of Pakistan, appears more dominant and visible on the signboards of Sachal Colony, Larkana. Although the local people are not properly or completely acquainted with English, it is nonetheless used everywhere on the signboards, because the local shopkeepers and businesspeople prefer English for better sales and marketing of their businesses. The results of the study thus point to the mindset of local people towards English as an international language, lingua franca, and market language. This mindset encourages them to use Romanized Sindhi or bilingual signage instead of only the local language of the area. All these factors affect the socio-psychological mindset of common people, causing the absence and replacement of the local and indigenous languages of the area.
3,858.4
2021-06-01T00:00:00.000
[ "Linguistics" ]
Tissue expression and antibacterial activity of host defense peptides in chicken Background Host defence peptides are a diverse group of small, cationic peptides and are important elements of the first line of defense against pathogens in animals. Expression and functional analysis of host defense peptides has been evaluated in chicken, but there are no direct, comprehensive comparisons across all gene families and individual genes. Results We examined the expression patterns of all known cathelicidins, β-defensins and NK-lysin in multiple selected tissues from chickens. CATH1 through 3 were predominantly expressed in the bone marrow, whereas CATHB1 was predominant in the bursa of Fabricius. The tissue-specific pattern of β-defensins generally fell into two groups. β-defensin1-7 expression was predominantly in bone marrow, whereas β-defensin8-10 and β-defensin13 were highly expressed in liver. NK-lysin expression was highest in spleen. We synthesized peptide products of these gene families and analysed their antibacterial efficacy. Most of the host defense peptides showed antibacterial activity against E. coli with dose-dependent efficacy. β-defensin4 and CATH3 displayed the strongest antibacterial activity among all tested chicken HDPs. Microscopic analyses revealed killing of the bacterium by membrane disruption upon peptide treatment. Conclusions These results demonstrate dose-dependent antimicrobial effects of chicken HDPs mediated by membrane damage, and demonstrate the differential tissue expression pattern of bioactive HDPs in chicken and the relative antimicrobial potency of the peptides they encode. Electronic supplementary material The online version of this article (doi:10.1186/s12917-016-0866-6) contains supplementary material, which is available to authorized users. Background Bacterial infections in chickens are important not only for the health and productivity of the animals but also as a reservoir of foodborne human pathogens such as Salmonella enterica. Innate immunity is important in controlling bacterial infection, particularly at mucosal surfaces such as the gastrointestinal, respiratory and reproductive tracts. Innate immune agents include antimicrobial secretions such as lysozyme, mucociliary clearance, the acid environment of the gizzard and proventriculus, and tight cellular junctions at epithelial layers [1]. Host defense peptides (HDPs) are a diverse group of small, cationic peptides present in a wide variety of organisms including both animals and plants [2][3][4][5][6]. HDPs are an important first line of defense, particularly in those species whose adaptive immune system is lacking or primitive. A majority of HDPs are strategically synthesized in the host phagocytic and mucosal epithelial cells that regularly encounter microorganisms from the environment. Mature HDPs are broadly active against Gram-negative and Gram-positive bacteria, mycobacteria, fungi, viruses and even cancerous cells [7][8][9]. Several classification schemes have been proposed for AMPs; however, most AMPs are generally categorized into four clusters based on their secondary structures: peptides with a linear α-helical structure [10][11][12], cyclic peptides with a β-sheet structure [13][14][15][16][17], peptides with a β-hairpin structure [18], and peptides with an extended linear structure [19,20]. It has become clear that HDPs are important and significant components of host defence against infection. The killing of bacteria appears to be very fast, ranging from 10 to 30 min for killing of S.
enteritidis [21] and 30-60 min for killing of E. coli [22]. We have demonstrated chicken NK-lysin to destroy E. coli cell membranes within 5 min [23]. Furthermore, HDPs kill bacteria primarily through physical electrostatic interactions and membrane disruption; therefore, it is difficult for microbes to gain resistance to HDPs [7,24]. At the same time, most HDPs have the capacity to recruit and activate immune cells and facilitate the resolution of inflammation [24,25]. It is therefore easy to appreciate the therapeutic potential of HDPs, particularly against antibiotic-resistant bacteria. A highly promising approach to overcome drug resistance is to explore and exploit the huge diversity of innovative bioactive molecules provided by nature to fight pathogens. These include HDPs, natural products involved in defense systems. So, given their obvious potential as novel therapeutic agents, understanding HDPs, including the relationships between the structure and mode of action of these molecules, is essential for the development of novel peptide-based antibiotics and immunotherapeutic tools. Three major groups of HDPs, namely cathelicidins (CATH), defensins and NK-lysin, are present in vertebrate animals. Defensins constitute a large family of small, cysteine-rich, cationic peptides that are capable of killing a broad spectrum of pathogens [26][27][28][29]. Vertebrate defensins are classified into three subfamilies, the α-, β-, and θ-defensins, characterized by different spacing of the six conserved cysteines. Cathelicidins are recognized by the presence of cathelin-like domains. The signal peptide and cathelin-like domains are well conserved across species, but the mature peptide sequences at the C-terminal regions are highly diverse [30]. Whereas the defensin structure is based on a common β-sheet core stabilized by three disulfide bonds [2], CATHs are highly heterogeneous. NK-lysin is a member of the saposin-like protein (SAPLIP) family and is orthologous with human granulysin, with an α-helical structure [9]. The first avian HDPs discovered were β-defensins from chicken and turkey, reported in the mid-1990s [31], and increasing information about HDPs in other avian species is becoming available [32]. The sequencing of the chicken (Gallus gallus) genome revealed the presence of a cluster of 14 different genes on chromosome 3 coding for avian defensins (AvBD), designated AvBD1 to -14 [33,34], and 4 CATHs densely clustered at the proximal end of chromosome 2 [35,36]. NK-lysin was recently mapped to the distal end of chromosome 22 [37]. The highly inbred Leghorn Ghs-6 line has been used in many studies of immune function, including serving as a parental line of an advanced intercross line used to identify the association of genetic variants in the AvBD gene cluster with colonization of the cecum with Salmonella enterica serovar Enteritidis [38]. The bursa of Fabricius, a specialized immune organ in birds, arises from bursal epithelial cells around embryonic day 4, reaches a maximum size at 6-12 weeks after hatching [39], and previously demonstrated high expression of several AvBDs [34]. The gene expression and antibacterial efficacy of all four CATHs and several AvBDs have been evaluated individually, but there are no reports comparing the full spectrum of tissue expression and antimicrobial activity of chicken HDPs concordantly.
Here, we have examined the expression patterns of 14 AvBDs, 4 CATHs and NK-lysin in the highly inbred Leghorn Ghs-6 line and compared the antimicrobial activity of the encoded peptides against E. coli. Morphological change of E. coli membranes by CATH peptide treatment was also examined. Birds Chicks of the highly inbred Leghorn Ghs-6 line were produced and maintained in the Poultry Genetics Program at Iowa State University (Ames, IA). Birds were raised in light- and temperature-controlled pens with wood-shaving bedding and continual access to water and food meeting all NRC nutritional requirements. At 7 weeks of age, birds were euthanized according to the approved Institutional Animal Care and Use Committee protocol (Log #4-03-5425-G) and tissues immediately dissected. Bursa of Fabricius, thymus, spleen, bone marrow, cecal tonsil, duodenal loop, and liver tissue were collected. Samples consisting of either the entire tissue, or sections totalling approximately 1.0 cubic cm from larger tissues, were harvested. The cecal tonsil included the lymphoid aggregates and surrounding tissue at the intersection of the two ceca and the gastrointestinal tract. Bone marrow was collected by expressing the marrow from both tibias of each bird with a narrow sterile wooden rod. Tissues were placed into RNAlater until used for isolation of mRNA. RNA extraction and quantitative reverse-transcription polymerase chain reaction (RT-PCR) RNA extraction was performed using the RNeasy Mini Kit (Qiagen) according to the manufacturer's instructions. Total RNA samples were extracted from the tissues of 5 birds and used as templates for reverse transcription. cDNA was obtained with the reverse transcriptase SuperScript® III First-Strand Synthesis System using 2 μg total RNA. The relative abundance of mRNA was assessed by real-time reverse-transcription (RT)-PCR using a LightCycler 480 (Bio-Rad) and LightCycler 480 SYBR Green I master mix (Bio-Rad). Primer pairs specific for the amplification of AvBD, cathelicidin and NK-lysin genes are shown in Additional file 1: Table S1. PCR products were subjected to melt curve analysis and sequenced to confirm amplification of the correct gene. Data were analyzed by the ΔΔCt method. The mean threshold cycle value (Ct) of each sample was normalized to the internal control, GAPDH, and expression profiles were obtained by comparing the normalized Ct value with that of the calibrator sample, in which the gene exhibited the lowest expression level. Each analysis was performed in triplicate. Quantification of each sample was calculated from the cycle threshold values and standard curve information using the LightCycler 480 version 1.5.0 software.
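As a concrete illustration of the ΔΔCt normalization just described, the short sketch below computes a fold-change from hypothetical Ct values; the gene, tissue and numbers are illustrative assumptions, not measurements from this study, and the formula assumes roughly 100% PCR efficiency.

```python
# Minimal sketch of the delta-delta-Ct relative expression calculation
# described above. Ct values are hypothetical; GAPDH is the internal
# control and the calibrator is the sample with the lowest expression
# of the target gene, as in the study.

def relative_expression(ct_target, ct_gapdh, ct_target_cal, ct_gapdh_cal):
    """Return fold-change of the target gene relative to the calibrator."""
    dct_sample = ct_target - ct_gapdh              # normalize to GAPDH
    dct_calibrator = ct_target_cal - ct_gapdh_cal
    ddct = dct_sample - dct_calibrator
    return 2 ** (-ddct)                            # assumes ~100% efficiency

# Hypothetical example: a bone-marrow sample vs. a low-expressing tissue.
fold = relative_expression(ct_target=18.2, ct_gapdh=16.5,
                           ct_target_cal=27.9, ct_gapdh_cal=16.8)
print(f"Relative expression: {fold:.1f}-fold over calibrator")
```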
Peptide synthesis Nineteen synthetic linear peptides (Table 1) corresponding to chicken defensins, cathelicidins and NK-lysin were synthesized and purified to >95% purity through reverse-phase high-pressure liquid chromatography (Abclon, Seoul, Korea). Lyophilized peptide (1 mg each) was stored in desiccant at −20 °C and dissolved in phosphate buffer (pH 7.2) before use. Cell viability analysis of Escherichia coli after treatment with peptides The Gram-negative bacterium Escherichia coli ATCC 25922 was purchased from the Korean Collection for Type Cultures and tested against each peptide. Cell viability analysis was carried out as previously reported [23]. Briefly, 90 μl of bacterial suspension at 6 × 10⁶ colony-forming units (CFU)/ml was placed into 96-well plates, followed by the addition of 10 μl of serially diluted peptide (final 0, 0.5, 1, 2.5, 5 μM) in triplicate. After 2 h incubation at 37 °C, an equal volume of BacTiter-Glo™ Reagent (Promega) was added and incubated for 5 min, after which luminescence was measured with a GloMax-Multi Detection System (Promega). Detection of damaged E. coli membranes after treatment with synthetic cathelicidin peptides To visualize E. coli membrane damage, 6.5 × 10⁶ CFU of E. coli were incubated with 5 μM CATH1, CATH2, CATH3, or CATHB1 in 10 mM phosphate buffer (pH 7.0), respectively, at 37 °C for 2 h, and the membranes were observed by confocal laser scanning microscopy (Carl Zeiss, Oberkochen, Germany) after staining with the LIVE/DEAD BacLight bacterial viability kit (Invitrogen) according to the manufacturer's protocol. Statistical analysis GraphPad Prism software was used for cell viability and gene expression analyses, and data are expressed as mean ± SD. Statistical significance between groups or conditions was analysed by two-way or one-way ANOVA followed by Bonferroni's post hoc test unless stated otherwise. Differences were considered statistically significant when p < 0.05. Tissue expression patterns Quantitative RT-PCR was performed to examine the expression patterns of CATH, AvBD and NK-lysin genes in various chicken tissues. The chicken AvBD gene family has a unique expression pattern. AvBD1 through 7 are predominantly expressed in bone marrow and weakly expressed in thymus; AvBD5 is an exception, with strong expression in the thymus. The other AvBDs, AvBD8 through 10 and AvBD13, are predominantly expressed in liver. AvBD11, AvBD12 and AvBD14 are expressed in all tissues tested (Fig. 1). Chicken CATH1, -2, and -3 are predominantly expressed in the bone marrow and to a lesser extent in bursa of Fabricius and thymus. CATHB1, however, showed abundant expression in bursa of Fabricius with low levels of expression in thymus and cecal tonsils. NK-lysin was predominantly expressed in spleen and in the duodenal loop, with lesser expression in thymus and bone marrow. Antimicrobial activity of chicken HDPs To address and compare the relative antibacterial activity of chicken HDPs, 14 AvBD, 4 CATH and one NK-lysin peptide were synthesized and tested for antimicrobial activity as previously described [23]. Most of the tested HDPs showed strong antibacterial activity against E. coli (Fig. 2a), but 4 peptides, AvBD5, -8, -10 and -12, showed very weak lytic activity at 5 μM. The majority of the peptides, however, killed more than 80% of E. coli under the test conditions. We selected peptides that exhibited strong antimicrobial activity at 5 μM (less than 15% survival rate) and tested their antibacterial activity at lower concentrations (0.5, 1, 2.5 and 5 μM). These HDPs killed bacteria in a dose-dependent manner. Most peptides produced less than 20% bacterial survival at a low micromolar concentration (2.5 μM), and AvBD4, AvBD6, AvBD7, CATH1, CATH2, CATH3 and cNK3 showed very strong antibacterial activity at all concentrations (Fig. 2b). These peptides killed 50% of bacteria at a very low (0.5 μM) micromolar concentration. AvBD4 and CATH3 displayed the strongest antibacterial effect among all tested chicken HDPs under the test conditions. These results suggest that the functional peptides from chicken HDPs have effective antibacterial activities over a broad range of peptide concentrations.
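The survival percentages reported above follow directly from the luminescence readings; the sketch below shows the calculation under the assumption of hypothetical relative-light-unit (RLU) values, with untreated wells defining 100% viability.

```python
# Sketch of the luminescence-based viability calculation. RLU readings
# are hypothetical; untreated wells define 100% viability and each
# peptide concentration is read in triplicate, as in the assay above.

from statistics import mean, stdev

untreated = [95200, 97800, 96100]               # hypothetical RLU values
treated = {0.5: [52100, 49800, 50900],          # uM -> triplicate RLU
           1.0: [31500, 33200, 30100],
           2.5: [12200, 11800, 13500],
           5.0: [3900, 4400, 4100]}

baseline = mean(untreated)
for conc, wells in treated.items():
    viability = [100 * w / baseline for w in wells]
    print(f"{conc} uM: {mean(viability):.1f} +/- {stdev(viability):.1f} % viable")
```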
Membrane damage by synthetic peptides To determine whether chicken HDPs altered the morphology and viability of E. coli, we used the four cathelicidin peptides that have strong antimicrobial activity (Fig. 2). The damage to E. coli cell membranes after treatment with peptide was determined with confocal laser scanning microscopy. In the absence of peptide, most of the E. coli cells were stained green, indicating an intact membrane. After treatment with 5 μM peptide, the majority of E. coli cells were stained red, indicating membrane damage (Fig. 3). The membrane damage was greater with CATH2 or CATH3 than with CATH1 or CATHB1, consistent with the dose-dependent cell killing data. Scanning electron microscopy showed that untreated E. coli cells had a normal, intact shape and uniform membrane surface (Fig. 4a and b). However, after treatment with each CATH an obvious difference in morphology was observed (Fig. 4c-j). The treated cells showed shrinkage, were rumpled, and lost their regularly arranged surface layer. Burst and crushed-appearing cells were surrounded by debris. These findings suggest that cathelicidins destroy bacterial cells via membrane damage. Fig. 2 Antimicrobial activity of chicken HDPs against E. coli. E. coli (6 × 10⁶) were incubated with the indicated amounts of peptide for 2 h and cell viability was compared to untreated cells. Nineteen chicken HDPs were tested at 5 μM (a), and 12 HDPs that showed more than 90% killing activity at 5 μM were selected for assay at lower concentrations (0.5, 1, 2.5, 5 μM) (b). Each bar represents the mean ± S.D. of at least three independent experiments. Untreated cells were taken as 100% viability, and the viability of peptide-treated cells was calculated as a percentage. *** indicates p ≤ 0.001, ** indicates p ≤ 0.01 and * indicates p ≤ 0.05. Discussion The aim of this study was to enhance understanding of the tissue expression and antibacterial activity of chicken HDPs and to determine whether chickens can be a source of bioactive HDPs. As an initial step, we examined their tissue expression profile, mainly in immune organs such as bursa of Fabricius, bone marrow, spleen and thymus. Cecal tonsil, duodenal loop and liver were also tested. Like their mammalian counterparts, avian cathelicidins and β-defensins are derived from bone marrow and/or epithelial cells. Chicken cathelicidins CATH1, -2 and -3 are highly expressed in bone marrow with little to no expression in other tissues. In agreement with their myeloid origin, CATH1-3 mRNA has been found abundantly in the bone marrow [36]. On the other hand, chicken CATHB1 mRNA shows a more restricted expression pattern, with preferential expression in bursa of Fabricius. AvBD expression is seen mainly in bone marrow for AvBD1 through 7, which originate from myeloid cells, and in liver for AvBD8 through 10 and -13. Other lymphoid tissues did not express AvBDs in significant amounts. AvBD2 and -4 expression is especially weak and limited to bone marrow, and AvBD13 is weakly detected in liver but hardly detected in other tissues, even with increased PCR cycles relative to the others. These results are consistent with previous reports [36,40]. Only AvBD5 was strongly expressed in thymus. These results show that the tissue-specific pattern varies across the defensin gene family, with some members showing expression in all tested tissues, whereas the majority demonstrate more limited expression patterns which can be divided into three groups. Seven genes (AvBD1 through 7) are predominantly expressed in bone marrow, four genes (AvBD8 through 10 and 13) are restricted primarily to liver, and three (AvBD11, -12 and -14) are expressed in all tested tissues.
NK-lysin showed strong expression in spleen with intermediate expression in bone marrow, intestine and thymus. Consistent with the role of cathelicidins, defensins and NK-lysin in the first line of host defense, abundant expression of these genes was detected in bursa and bone marrow. The transcriptional regulatory mechanism of these genes during development and under pathogen infection remains to be demonstrated. The antibacterial efficacy of several defensins, cathelicidins and NK-lysin has been evaluated [2,9,23,36,40]. Like their mammalian counterparts, most chicken AvBDs, CATHs and NK-lysin are capable of killing bacteria. Cuperus et al. reviewed the antimicrobial activity of avian HDPs against selected pathogens [41]. Zhang and Sunkara also reviewed the expression, antimicrobial and immunomodulatory activities of HDPs, but there are no direct, comprehensive comparisons of the major HDPs [30]. Here, we synthesized 19 chicken HDPs and analyzed their antibacterial activity against E. coli in a comprehensive direct comparison. These peptides differ in net charge from 0.1 to 10 and vary in length from 28 to 44 amino acids. They also vary in expected hydrophobicity from 24 to 61% (Table 1). Even though many tested chicken HDPs show varying efficiencies against pathogens, the majority kill bacteria at low concentration. AvBD5, -8, -10 and -12 show minimal killing activity among tested HDPs at less than 5 μM. This is consistent with the previous report that AvBD8 showed a lethal dose (LD)50 of 27 μM against E. coli [42]. These four peptides have a very low net charge, from 0.1 to 1.8. AvBD4 has 24% hydrophobicity with a 5.8 net charge and kills bacteria very well compared with AvBD5, -8 and -13, which have the same hydrophobicity but lower cationicity of 0.1 to 4. This suggests that a low net charge results in inefficient antibacterial efficacy, even with suitable hydrophobicity. (Fig. 4 Scanning electron micrographs of cathelicidin-induced cell membrane damage: E. coli with no treatment (control, a and b) or treated with cathelicidin1 (CATH1, c and d), cathelicidin2 (CATH2, e and f), cathelicidin3 (CATH3, g and h), or cathelicidinB1 (CATHB1, i and j).) However, AvBD3, -4 and -6 have the same net charge (5.8) and kill effectively over all tested concentrations, although AvBD4, with 33% hydrophobicity, has the strongest activity among the three. Also, there is a 6% gap in hydrophobicity between AvBD2 and cNK3, which share a 3.9 net charge but demonstrate different activity. This result suggests that hydrophobicity is an important factor in the antibacterial activity of peptides. A structural effect cannot be ruled out, but these results reveal that antimicrobial activity is strongly influenced by cationicity and hydrophobicity. Although the CATH family has more overall cationicity and hydrophobicity than the AvBD family, this does not translate to higher antibacterial activity. The C-terminus of some of our synthesized peptides was abbreviated relative to the natural peptides. We recognize that this could affect cationicity and hydrophobicity and hence antibacterial activity; this potential discrepancy should be clarified in future experiments. In the present study, all four CATHs reduced E. coli cell viability and severely damaged E. coli cell membranes, with CATH2 and CATH3 showing the highest efficiency.
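The net-charge and hydrophobicity descriptors discussed above can be approximated directly from a sequence. The sketch below uses a simple residue-counting convention (K/R as +1, H as +0.1, D/E as −1) and a placeholder sequence; it is not an actual chicken HDP sequence, and real values also depend on terminal groups and pKa corrections not modelled here.

```python
# Rough sketch of the two peptide descriptors discussed above: net
# charge at neutral pH and percent hydrophobic residues. The sequence
# is a made-up placeholder, not a real chicken HDP.

HYDROPHOBIC = set("AVILMFWYC")

def net_charge(seq):
    # Count basic residues as positive, acidic as negative; His ~ +0.1.
    return (seq.count("K") + seq.count("R") + 0.1 * seq.count("H")
            - seq.count("D") - seq.count("E"))

def pct_hydrophobic(seq):
    return 100 * sum(aa in HYDROPHOBIC for aa in seq) / len(seq)

peptide = "GRKSDCFRKSGFCAFLKCPSLTLISGKCSRFYLCCKRIW"   # placeholder only
print(f"net charge ~ {net_charge(peptide):+.1f}, "
      f"hydrophobic {pct_hydrophobic(peptide):.0f}%")
```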
In previous studies, the variable domains of CATH1, CATH2, and CATH3 were clearly delineated, but the variable domain of CATHB1 was not identified [35,36]. Here, we predicted the variable domain sequence of CATHB1 and demonstrated its weak antibacterial activity compared with the others. Mechanisms of bacterial killing by β-defensins are thought to be similar to those of other cationic HDPs, in which positively charged residues interact with negatively charged membrane components, after which hydrophobic residues insert into the membrane, disrupting it and killing the cells [43,44]. It is generally accepted that increasing the hydrophobicity of the nonpolar face of amphipathic α-helical peptides will also increase antimicrobial activity [45,46]. Increased cationicity also helps to enhance antibacterial activity [9]. Cationicity is important for killing bacteria [47], but simply increasing the net charge does not improve antimicrobial potency [48]. Hydrophobicity likewise has an optimal range for enhancing antibacterial activity [45]. Disruption of structural integrity is another important factor for high efficiency in bacterial killing [49,50]. Our results indicate that the antibiotic action of HDPs requires a balance between cationicity and hydrophobicity to optimize bacterial killing activity, but the relationships between these two important factors and the antimicrobial efficacy of HDPs remain to be determined. The antimicrobial efficacy of the 19 peptides against E. coli in this study is consistent with previous reports [35]. Future studies against other bacteria that cause disease in birds will help to improve our understanding of the role of these genes in immunity to bacteria in chickens. Conclusions The antimicrobial activity and differential tissue expression patterns of 19 chicken HDPs were analyzed. In summary, we confirmed that most of the HDPs showed antibacterial activity against E. coli and demonstrated their differential tissue expression patterns. These studies highlight the dose-dependent antimicrobial effects mediated by membrane damage and the importance of the balance between cationicity and hydrophobicity. Gene expression of chicken HDPs is variable, and the AvBD gene family can be divided into distinct expression groups.
4,877.6
2016-10-13T00:00:00.000
[ "Biology" ]
Optical bonding with fast sol-gel We investigate here the properties of fast sol-gel for optical bonding. The precursors of the fast sol-gel material are organically modified alkoxides generating a transparent hybrid (organic-inorganic) substance with silica glass-like properties whose index of refraction can be modified by the addition of various metal oxides. The fast sol-gel method consists of rapid fabrication of a viscous resin and its subsequent dilution for long shelf life use. This material, when used as an adhesive, offers the option of either a thermal or UV curing procedure. We demonstrate a bonding strength of ∼10 MPa when a 15 μm layer is applied between two glass elements. The bonding remained stable after extensive −40 °C to 120 °C temperature cycling, with minimal residual solvent evaporation at 150 °C. The fast sol-gel material was tested for optical bonding between silica bulks, between silica bulk and silicon wafers, and as an adhesive in silica fibre couplers. [DOI: 10.2971/jeos.2009.09026] INTRODUCTION Robust optical bonding is essential for advanced optical systems. Most optical adhesive materials that are in use today are based on organic constituents such as epoxy, UV-cured acrylic and silicone polymers, and suffer therefore from poor thermal and irradiation stability and limited transparency [1]-[4]. On the other hand, physical bonding methods such as optical contact bonding or diffusion bonding are stronger and more durable but require higher working temperatures and high-level surface flatness and cleanliness, which make them impractical for many applications [5]-[9]. A novel approach to overcome this problem is to use sol-gel based materials which combine the thermal and optical power stability of inorganic materials with the ease and flexibility of applying organic materials. The use of sol-gel based materials for adhesive bonding was already demonstrated in some studies [10]-[12] but with limited success. We present here an alternative bonding method, based on fast sol-gel, which successfully combines the simplicity of organic bonding materials with the optical and physical properties of glass-like materials. This method provides high optical quality bonding with high thermal and irradiation power stability. The adhesive material is easy to process and its optical and physical properties can be adapted to specific requirements. The sol-gel method is a well known process for preparing glass-like materials at low temperatures (25 °C – 80 °C) [13]-[15]. Using the sol-gel technique, thin films can be fabricated with refractive indices over a broad range (1.2 – 2.0). Sol-gel materials with high refractive indices (> 1.46) can be obtained by manufacturing a silica skeleton with higher refractive index additives, such as metal oxides (alumina, titania, zirconia etc.). In this case the increase in the refractive index is a linear function of the additive concentration [16]-[21]. Low refractive indices can be obtained by controlling the amount of porosity in the matrix [22,23].
A drawback of preparing materials by the conventional sol-gel process (using only alkoxides as precursors) is the formation of cracks, which limits the achievable bulk size to a few cm, or the achievable film thickness to < 1 µm [24,25]. The fast sol-gel method allows preparation of crack-free bulks or films without shrinkage and with low residual organic content (∼20 wt%) in a relatively short process [26]-[28]. These materials exhibit excellent optical qualities, are thermally stable and have good adhesive properties. The fast sol-gel method uses a combination of organically modified alkoxides with traditional alkoxides as precursors, obtaining a final product which is an organic-inorganic hybrid with properties that vary from silicone rubber to silica glass. We have adapted this method for manufacturing optical bonding materials. METHODOLOGY A detailed description of the fast sol-gel method can be found in [26]-[28]. Briefly, sol-gel precursors (alkoxides and organically modified alkoxides) are mixed and undergo hydrolysis and condensation. The fast sol-gel reaction is performed at a temperature of about 100 °C under time-varying pressure conditions (from several atmospheres to vacuum). In this way a viscous sol-gel resin is quickly produced which, after a fast and simple curing process, leads to the final glass-like product. However, for optical bonding applications a long shelf-life is required. By diluting the viscous sol-gel resin with an appropriate solvent shortly after preparation, the diluted material can be kept for several months as a solution until required for use. The diluents can be removed by moderate heating or evaporation. The bonding procedure consists of spreading a thin film of the material on each surface by using either a spinner, a dipping technique, or spraying, and applying pressure on the surfaces. The sol-gel layer is subsequently cured either thermally or by UV irradiation (see following Section 3). CURING The fast sol-gel method requires only a simple curing process. After a short preparation time (about 15 minutes) the hydrolysis and condensation reactions are completed and a viscous resin with only about 4% residual liquid is obtained. Therefore, just a short time of low temperature curing (< 100 °C) is required to achieve complete residual liquid evacuation and full solidification. Due to the additional flexible organic tails in the silica skeleton of the fast sol-gel, stresses are released, crack formation is avoided and crack-free bulk monoliths or films are produced. We studied the thermal curing process using FTIR spectroscopy. The measurements were performed on thin sol-gel films coated on a silicon substrate in order to overcome the strong absorption in this range (400 – 4000 cm⁻¹).
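One way to reduce such an FTIR monitoring run to a curing endpoint is to track the transmission of a marker band until it stops rising, as the following paragraph describes for the OH-stretch and Si-OH bands. The sketch below illustrates this with a hypothetical time series and an arbitrarily chosen tolerance.

```python
# Sketch of deciding when thermal curing is complete from an FTIR
# trend: polymerization is taken as finished once the transmission at
# the OH-stretch band (~3400 cm^-1) stops rising. The time series of
# (hours -> transmission) below is hypothetical.

t3400 = {0: 0.62, 4: 0.71, 8: 0.77, 16: 0.82, 24: 0.84, 48: 0.845}

TOL = 0.01  # minimum rise still counted as ongoing polymerization
times = sorted(t3400)
for t_prev, t_next in zip(times, times[1:]):
    if t3400[t_next] - t3400[t_prev] < TOL:
        print(f"curing effectively complete by ~{t_prev} h")
        break
```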
Figure 1 presents the FTIR spectrum as a function of thermal curing time for a sample with 18 wt% organic residuals. The spectrum is divided into two absorption ranges, 400 – 1800 cm⁻¹ (a) and 2500 – 4000 cm⁻¹ (b). An increase in the transmission at 3400 cm⁻¹ and at 900 cm⁻¹ as a function of thermal-curing time was observed, corresponding to OH stretching [29,30] and Si-OH bending [29]-[31] modes respectively. This increase is representative of the progress of the polymerization process. The decline of the peaks at 3400 cm⁻¹ and 900 cm⁻¹ halted after 24 hours, indicating that the polymerization was completed. The residue left at 3400 cm⁻¹ and 900 cm⁻¹ is due to surface Si-OH groups and can be completely removed only at high temperature [32]. Some applications in electro-optics require a shorter curing procedure, where UV-curing techniques are applied. These techniques are very common and well established in optical bonding and photo-lithography. During the last decade, UV-curing of sol-gel materials with a high content of organic residuals was demonstrated and applied using organically modified silicates (ORMOSILs) of the class of organically modified ceramics (ORMOCERs) [33]-[38] and epoxy based hybrids [39,40]. Here we demonstrate the capability of UV-curing of the low organic residual content fast sol-gel material. To enable UV-curing, a photo-initiator for initiation of the polymerization process is added to the fast sol-gel material with the diluent solution. This sol-gel solution was coated on a silicon substrate using a spin coating technique, followed by a few seconds of UV exposure through a lithographic mask, and washing of the un-polymerized section using a developer. Patterns with thickness from sub-micron up to several hundred microns can be prepared in this way. Figure 3 presents the FTIR spectrum as a function of UV-curing time. The increase in transmission is observed at 3400 cm⁻¹, at 900 cm⁻¹ and at 1180 cm⁻¹ (corresponding to Si-O-CH₃). These spectra reveal that the polymerization is already completed after 60 seconds. PHYSICAL PROPERTIES The optical properties of the fast sol-gel material can be found in [26]-[28]. The fast sol-gel bonding material presents excellent optical transmittance in the visible range (400 – 1100 nm) with optical loss less than 0.05 cm⁻¹. There are some absorption peaks in the 1100 – 1700 nm range due to vibration of Si-O, O-H and C-H bonds, with an optical loss less than 1 cm⁻¹. A major aspect of the optical bonding performance is the ability to match the refractive index of the bonding material to the optical components. The sol-gel's refractive index can be controlled as a linear function of added metal oxides such as alumina, titania or zirconia. Figure 4 shows the refractive index value as a function of the amount of titania in the fast sol-gel. By controlling the refractive index of the bonding material we succeeded in matching its refractive index to bonded fibres and were able to transfer several hundred watts from one fibre to the other with efficiency greater than 95%.
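Index matching can be read straight off the linear relation just described. The sketch below solves for the titania loading that reaches the fused-silica index quoted later in the paper; the base index and slope are hypothetical placeholders for the actual Figure 4 calibration.

```python
# Sketch of using the linear index-vs-additive relation described
# above to pick a titania loading matching fused silica (n = 1.458).
# n_base and slope are hypothetical calibration values, not data
# taken from Figure 4.

n_base = 1.430          # hypothetical index of the undoped fast sol-gel
slope = 0.004           # hypothetical index increase per wt% titania

def index(titania_wt_pct):
    return n_base + slope * titania_wt_pct

target = 1.458          # fused silica fibre index quoted in the paper
needed = (target - n_base) / slope
print(f"~{needed:.1f} wt% titania gives n = {index(needed):.3f}")
```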
Another important characteristic is the viscosity of the material; each application will require its own optimal viscosity for the adhesive. In order to avoid solidification of the fast sol-gel resin and to enable a long shelf life, a dilution procedure was developed [41]. A standard organic polar solvent was used for dilution to prevent full polymerization. The dilution was done shortly after preparation of the viscous sol-gel resin. The diluting solvent dissolves the sol-gel polymer particles and this suspension can be kept in a solution state for several months. In these suspensions any required viscosity can be achieved, as shown in Figure 5. After dilution, the sol-gel solution was easily filtered and did not polymerize for several months. The diluted material can be prepared for use for optical bonding by removal of the diluent by moderate heating or by solvent evacuation, until the appropriate viscosity is achieved. An additional issue with optical bonding applications is the amount of residual volatile solvent left in the final product. The final fast sol-gel samples were tested using TGA to examine the weight loss due to evaporation of residual solvents [41]. The weight loss up to 150 °C, 1% (shown in Figure 6), is due to water and alcohol evaporation; the further loss at higher temperatures, 4%, is due to the breaking off of organic tails. These values are much lower than values reported for other sol-gel hybrid materials [42]-[44], demonstrating that by the fast sol-gel method very dry and stable final bulk monoliths or films can be fabricated in a short time. APPLICATIONS Optical bonding using the fast sol-gel material was demonstrated for bonding silica elements to each other, silica elements to a silicon wafer, and in silica fibre couplers. In each case the bonding was tested for transmission efficiency after thermal cycles and for adhesive strength. FIG. 6 TGA curve of fast sol-gel material for the fast heating process up to 400 °C, showing a total weight loss of 4%. Bonding of silica elements Two silica rectangular blocks, each 17 × 12 × 6 mm, were bonded using a fast sol-gel resin. The fast sol-gel material was applied by spinner on one surface of each block and, after removal of bubbles, the two elements were attached. A 24 hour thermal curing at 65 °C was used. The thickness of the bonding layer between the blocks can be controlled through the method of applying the resin, such as the spinner or dipping. A 15 µm thick fast sol-gel layer produced strong optical bonding. Figure 7 presents a side view of two bonded silica blocks with a 15 µm fast sol-gel bonding layer. Figure 9 shows the applied load force as a function of stretch length. A value of at least 10 MPa was observed for the fast sol-gel materials, at which value a failure in the slides was observed while the bonding area remained intact.
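As a sanity check on the quoted strength, tensile stress is simply the failure load divided by the bonded area; for the 17 × 12 mm blocks above, 10 MPa corresponds to roughly a 2 kN load. The load value in the sketch below is illustrative, not a measured number.

```python
# Back-of-the-envelope check of the quoted bonding strength:
# stress = load / bonded area, for the 17 x 12 mm silica blocks above.

area_m2 = 17e-3 * 12e-3            # bonded face of the silica blocks
load_n = 2100.0                    # hypothetical failure load from the tester
stress_mpa = load_n / area_m2 / 1e6
print(f"bond stress ~ {stress_mpa:.1f} MPa over {area_m2 * 1e6:.0f} mm^2")
```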
Bonding between silica element and silicon wafer A system which combines transparent silica optical elements and semiconductor-based detectors requires the ability to directly bond the silica elements to the detector wafer. We have demonstrated this possibility by bonding a silica rectangular block to a silicon wafer in a procedure similar to the one presented in the previous section. In this case the samples were tested under temperature cycles in the range −10 °C to +120 °C without any damage to the fast sol-gel bonding layer. Bonding of silica fibres Silica fibres were bonded in several configurations, tip to tip and side attachment, where the intermediate region is the fast sol-gel material. An adhesive strength of about 7 MPa was found for tip to tip bonded fibres. With fibres, matching the refractive index of the optical bonding material to the fibre is essential for the highest transmission performance. By matching the refractive index of the fast sol-gel to the refractive index of fused silica fibres (n = 1.458) we observed more than 95% transmission efficiency through the bonded fibres with irradiation of several hundred watts. The environmental stability of the bonded fibres was tested by temperature cycling in the range −40 °C to +85 °C without change in the transmission efficiency. In addition, the bonded fibres withstood 5 kW/cm² light radiation without decrease in transmission. CONCLUSION A sol-gel organic/inorganic hybrid with a low organic content (∼20 wt%) and controllable refractive index was developed for strong optical bonding. It exhibits excellent optical properties with glass-like mechanical characteristics. Due to the low organic content the dominant properties are glass-like, allowing the material to withstand high temperatures and high light power. The fast sol-gel material was proven to be useful for optical bonding of silica elements, silica elements to silicon wafers, and silica fibres. It was tested under temperature cycles in the range −40 °C up to 120 °C and several hundred watts illumination (5 kW/cm²), without any damage to the bonding, while retaining a transmission efficiency of ∼95%. The optical and physical characterization of the adhesive material was performed as follows. Thermo-gravimetric analyses (TGA) were conducted with three different commercial instruments: Pyris-1 (Perkin Elmer), SDT 2960 (TA Instruments), and SDTA 851 (Mettler-Toledo). Two kinds of measurements were done: fast scanning (50 °C/minute) up to 400 °C and slow scanning (10 °C/minute) up to 150 °C with a dwell time of one hour at 150 °C. The optical UV-NIR transmission spectrum was measured with a Jasco model V-570 spectrometer and the IR transmission spectrum with a Bruker model Vertex 70 FTIR spectrometer. The refractive index at 589 nm was measured with an Abbe refractometer, Kruss AR-4D (resolution ±0.001). Viscosity was measured with a Brookfield viscometer, model LVD1. Temperature cycling tests were conducted with a homemade system consisting of a cell heated by a hot plate and cooled by liquid nitrogen, controlled by a Eurotherm controller allowing temperature changes in the range −40 °C to 120 °C. Adhesive tensile strength was measured using a Cometech material testing machine, model QC-506B1. Samples were analyzed and photographed using either a Leica optical microscope or a simple digital camera. FIG. 2 Fast sol-gel stripe pattern (bright stripes) on silicon substrate (dark stripes) prepared by the UV-curing technique. FIG. 7 Side view of two bonded silica blocks with a 15 µm fast sol-gel bonding layer, under an optical microscope.
3,294.2
2009-06-01T00:00:00.000
[ "Materials Science" ]
Pancreatic cancer cells resistance to gemcitabine: the role of MUC4 mucin Background: A major obstacle to the successful management of pancreatic cancer is acquired resistance to the existing chemotherapeutic agents. Resistance to gemcitabine, the standard first-line chemotherapeutic agent for advanced and metastatic pancreatic cancer, is mainly attributed to an altered apoptotic threshold in the pancreatic cancer. The MUC4 transmembrane glycoprotein is aberrantly overexpressed in pancreatic cancer and has recently been shown to increase pancreatic tumour cell growth by the inhibition of apoptosis. Methods: The effect of MUC4 on pancreatic cancer cell resistance to gemcitabine was studied in MUC4-expressing and MUC4-knocked-down pancreatic cancer cell lines after treatment with gemcitabine by Annexin-V staining, DNA fragmentation assay, assessment of mitochondrial cytochrome c release, immunoblotting and co-immunoprecipitation techniques. Results: Annexin-V staining and DNA fragmentation experiments demonstrated that MUC4 protects CD18/HPAF pancreatic cancer cells from gemcitabine-induced apoptosis. In concert with these results, MUC4 also attenuated mitochondrial cytochrome c release and the activation of caspase-9. Further, our results showed that MUC4 exerts its anti-apoptotic function through HER2/extracellular signal-regulated kinase-dependent phosphorylation and inactivation of the pro-apoptotic protein Bad. Conclusion: Our results elucidate the function of MUC4 in imparting resistance to pancreatic cancer cells against gemcitabine through the activation of anti-apoptotic pathways and, thereby, promoting cell survival. Pancreatic adenocarcinoma is among the most common causes of cancer-related deaths in western countries (Keighley, 2003). It is one of the neoplasms with an extremely poor prognosis because of its aggressive invasion, early metastasis, and resistance to existing chemotherapeutic agents and radiation therapy (Bardeesy and DePinho, 2002). Despite an enormous amount of effort spent in the development of chemotherapies for pancreatic cancer, these are effective only in a small proportion of patients. Gemcitabine has become the standard first-line chemotherapeutic agent for advanced and metastatic pancreatic cancer, with a marginal survival advantage and amelioration of disease-related symptoms (El-Rayes and Philip, 2003; Pino et al, 2004). In contrast, resistance to gemcitabine has been increasing in recent years, and the effectiveness of gemcitabine has been reduced to below 20% (Wheatley and McNeish, 2005). It is considered that resistance to gemcitabine treatment is mainly attributed to an altered apoptotic threshold in pancreatic cancer cells (Schniewind et al, 2004). MUC4, a membrane-bound mucin, is involved in the regulation of cell proliferation and inhibition of apoptosis. To date, the aberrant overexpression of MUC4 has been reported in pancreatic malignancies, but not in the normal pancreas, which has made MUC4 a promising therapeutic target for anti-cancer adjuvant therapies (Andrianifahanana et al, 2001). Recently, we have shown that the overexpression of MUC4 in mouse embryonic fibroblast cells confers oncogenic transformation. In addition, studies with overexpression and down-regulation of MUC4 in various pancreatic cancer cells showed its involvement in the development and progression of pancreatic cancer (Singh et al, 2004; Chaturvedi et al, 2007; Moniaux et al, 2007).
Importantly, our recent studies have revealed that MUC4 interacts with HER2, a member of the epidermal growth factor (EGF) receptor family, and regulates its expression by post-translational mechanisms (Chaturvedi et al, 2008). HER2 is an established oncoprotein and is involved in the growth and malignant properties of cancer cells through activation of various intracellular signalling pathways (Hsieh and Moasser, 2007). It has been shown that EGF protects prostate cancer cells from apoptosis by phosphorylating the apoptotic protein Bad through extracellular signal-regulated kinase (ERK) activation (Sastry et al, 2006). In our earlier studies, MUC4 has been shown to increase the phosphorylation of ERK by stabilization of the expression of HER2 in pancreatic cancer cells (Chaturvedi et al, 2008). These findings indicate that MUC4 might be responsible for resistance to gemcitabine treatment by alteration of the apoptotic threshold in pancreatic cancer cells. In this study, we performed a set of experiments to define the function of MUC4 in the activation of anti-apoptotic pathways in response to gemcitabine treatment of pancreatic cancer cells. Previously generated MUC4-down-regulated CD18/HPAF cells (CD18/HPAF/siMUC4) and scrambled siRNA-transfected CD18/HPAF cells (CD18/HPAF/Scr) were treated with gemcitabine for 24 and 48 h and the extent of apoptosis was measured. Annexin-V staining showed that MUC4 inhibited gemcitabine-induced apoptosis of CD18/HPAF/Scr pancreatic cancer cells. CD18/HPAF/Scr cells also showed reduced DNA fragmentation, a hallmark of apoptosis, compared with CD18/HPAF/siMUC4 cells. In concert with this, the release of mitochondrial cytochrome c and the activation of caspase-9 were attenuated in MUC4-expressing CD18/HPAF/Scr cells compared with MUC4-down-regulated CD18/HPAF/siMUC4 cells. Interestingly, the expression of MUC4 was associated with increased levels of phospho-HER2 and -ERK, which further leads to deactivation of the apoptotic protein Bad through enhanced phosphorylation. Taken together, these findings indicate that aberrant overexpression of MUC4 in pancreatic cancer contributes resistance to the chemotherapeutic agent gemcitabine by activation of a MUC4-HER2-mediated anti-apoptotic pathway. Measurement of apoptosis Apoptosis was measured using the Annexin-V Fluos staining kit (Roche Diagnostics, Indianapolis, IN, USA). For this, 1.5 × 10⁶ cells each of CD18/siMUC4 and control CD18/Scr were cultured in 10 cm Petri dishes followed by overnight incubation at 37 °C. The cells were then treated with 1 mM gemcitabine (Symon et al, 2002) in 10% DMEM for 24 and 48 h, respectively, followed by 24 h incubation in 10% DMEM. The induction of apoptosis and necrosis was measured by staining the cells with Annexin-V and propidium iodide solution, followed by fluorescence-activated cell sorting (FACS) analysis.
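The Annexin-V/propidium iodide readout is conventionally reduced to population fractions by quadrant gating. The sketch below illustrates this with hypothetical event intensities and arbitrary gate thresholds; real analysis uses instrument-specific compensation and gating rather than fixed cut-offs.

```python
# Sketch of quadrant gating for Annexin-V/PI flow cytometry data, as
# in the apoptosis assay above. Events, thresholds and the gating
# rule are illustrative only.

events = [  # hypothetical (annexin_v, pi) fluorescence intensities
    (120, 80), (950, 90), (1020, 870), (130, 910), (880, 60), (140, 70),
]
ANNEXIN_GATE, PI_GATE = 500, 500

def classify(annexin, pi):
    if annexin < ANNEXIN_GATE and pi < PI_GATE:
        return "live"
    if annexin >= ANNEXIN_GATE and pi < PI_GATE:
        return "early apoptotic"
    if annexin >= ANNEXIN_GATE and pi >= PI_GATE:
        return "late apoptotic"
    return "necrotic"

counts = {}
for a, p in events:
    label = classify(a, p)
    counts[label] = counts.get(label, 0) + 1
for label, n in counts.items():
    print(f"{label}: {100 * n / len(events):.0f}%")
```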
Assessment of mitochondrial cytochrome c release The cytosolic fraction was prepared as described by Kharbanda et al (1997). Briefly, cells were washed twice with PBS, and a pellet of 1.5 × 10⁶ cells was suspended in 1 ml of ice-cold buffer A (20 mM Hepes pH 7.5, 1.5 mM MgCl₂, 10 mM KCl, 1 mM EDTA, 1 mM EGTA, 1 mM DTT, 0.1 mM phenylmethylsulfonyl fluoride and 1× protease inhibitor cocktail (Roche)) containing 250 mM sucrose. The cells were homogenized by douncing three times in a Dounce homogenizer with a sandpaper-polished pestle. After centrifugation for 5 min at 4 °C, the supernatants were ultracentrifuged at 105,000 × g for 30 min at 4 °C. The resulting supernatant was used as the soluble cytosolic fraction. Protein concentrations in the soluble cytosolic fractions were determined using a Bio-Rad DC protein estimation kit. The same amount of protein from the cytosolic fractions of CD18/Scr and CD18/siMUC4 cells was used to quantify the release of cytochrome c from mitochondria, using a commercially available cytochrome c ELISA kit (Calbiochem, San Diego, CA, USA) according to the manufacturer's instructions. DNA fragmentation assay CD18/HPAF/Scr and CD18/HPAF/siMUC4 cells were cultured in 10% DMEM with and without 1 mM gemcitabine. Treated cells were washed twice with PBS, and DNA was extracted using the Gentra Puregene DNA Isolation Kit (Qiagen, Valencia, CA, USA) protocol. A measure of 5 μg of isolated DNA was resolved on a 1% agarose gel. Co-immunoprecipitation Cells were grown to 50-60% confluency and treated with 1 mM gemcitabine for 48 h in a 5% CO₂ incubator at 37 °C. Cells were washed once with ice-cold PBS and then lysed in extraction buffer containing 1% Triton X-100 in lysis buffer (150 mM NaCl, 2 mM EDTA, 50 mM Tris-Cl (pH 8.0), 1 mM NaF, 1 mM sodium orthovanadate, 1 mM PMSF, 5 μg of aprotinin per ml and 5 μg of leupeptin per ml) for 25-35 min at 4 °C. The lysates were centrifuged at 16,000 × g for 30 min at 4 °C. Protein concentrations were determined using a Bio-Rad DC protein estimation kit. Equal amounts of protein cell lysates were incubated overnight with anti-14-3-3 mAbs or IgG in a 500-μl total volume. Protein G-Sepharose beads (Oncogene Research, Boston, MA, USA) were added to the lysate-antibody mix and incubated on a rotating platform for 2.5-3.5 h at 4 °C, followed by three to four washes with the lysis buffer. The immunoprecipitates or total cell lysates were then immunoblotted with anti-14-3-3 mouse monoclonal antibody and anti-pBad goat polyclonal antibody. MUC4 confers resistance to gemcitabine-induced apoptosis in pancreatic cancer cells Membrane-bound mucin MUC1 and rat Muc4 have been shown to inhibit apoptosis induced by multiple insults in rat 3Y1 fibroblast cells and in human melanoma and breast cancer cells, respectively (Raina et al, 2004; Workman et al, 2009). The anti-apoptotic function of MUC4 in pancreatic cancer cells in response to serum starvation has also been observed earlier in our laboratory. Further, an altered apoptotic threshold is considered to be one of the major attributes for the development of resistance to gemcitabine treatment in pancreatic cancer cells (Schniewind et al, 2004). Therefore, to determine the function of MUC4 in the development of resistance to gemcitabine in pancreatic cancer cells, we assessed the effect of MUC4 down-regulation on gemcitabine-induced apoptosis in CD18/HPAF pancreatic cancer cells. MUC4 was stably down-regulated by MUC4 siRNA in CD18/HPAF pancreatic cancer cells, which express a high level of MUC4 (Figure 1A). Scrambled siRNA-transfected CD18/HPAF cells (CD18/HPAF/Scr) were used as a control. These cell lines were further used to study the effect of MUC4 on gemcitabine-induced apoptosis. To analyse the apoptotic index, the MUC4-expressing and MUC4-silenced cells were treated with gemcitabine for 24 and 48 h and the extent of apoptosis was determined by Annexin-V and propidium iodide staining followed by flow cytometric analysis. The results showed that gemcitabine treatment at both time points was directly associated with apoptosis in CD18/HPAF/siMUC4 cells, and this response was suppressed in CD18/HPAF/Scr cells (Figure 1B).
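Returning to the cytochrome c ELISA above, kit-based quantification typically interpolates sample absorbances on a standard curve. The sketch below does this with piecewise-linear interpolation; the standards, sample names and readings are hypothetical, not values from this study.

```python
# Sketch of quantifying cytochrome c from ELISA absorbance via a
# standard curve. All standards and readings are hypothetical.

standards = [(0.0, 0.05), (5.0, 0.22), (10.0, 0.41),   # (ng/ml, OD450)
             (20.0, 0.78), (40.0, 1.49)]

def concentration(od):
    # Piecewise-linear interpolation between bracketing standards.
    for (c0, a0), (c1, a1) in zip(standards, standards[1:]):
        if a0 <= od <= a1:
            return c0 + (c1 - c0) * (od - a0) / (a1 - a0)
    raise ValueError("OD outside the standard curve; dilute and re-run")

for name, od in [("CD18/HPAF/Scr", 0.31), ("CD18/HPAF/siMUC4", 0.92)]:
    print(f"{name}: {concentration(od):.1f} ng/ml cytochrome c")
```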
Further, DNA fragmentation, which is a hallmark of apoptosis, was checked in these cell lines. For this, genomic DNA was isolated from CD18/HPAF/Scr and CD18/HPAF/siMUC4 cells before and after treatment with gemcitabine and resolved on a 1% agarose gel. Our results showed that MUC4-expressing CD18/HPAF/Scr cells exhibited reduced DNA fragmentation compared with MUC4-silenced CD18/HPAF/siMUC4 cells (Figure 2). These observations corroborated the finding that MUC4 protects CD18/HPAF pancreatic cancer cells from gemcitabine-induced apoptosis. MUC4 blocks activation of the intrinsic apoptotic pathway The balance among the pro- and anti-apoptotic members of the Bcl-2 family proteins has a central function in the regulation of the intrinsic apoptotic pathway by controlling mitochondrial cytochrome c release into the cytosol. The mitochondria-associated anti-apoptotic proteins Bcl-2 and Bcl-XL suppress the intrinsic mitochondrial apoptotic pathway, whereas pro-apoptotic proteins, such as Bad, translocate to mitochondria in response to apoptotic signals and interact with and deactivate Bcl-2 and Bcl-XL (Yang et al, 1995; Thomadaki and Scorilas, 2006). The pro-apoptotic activity of Bad is suppressed by its phosphorylation on serine residues in response to survival signalling cascades. (Figure 3: (A) CD18/HPAF/Scr and CD18/HPAF/siMUC4 cells were seeded in 10 cm Petri dishes and treated with 1 mM gemcitabine for 48 h as described in the methodology. A total of 50 μg protein from each cell extract was resolved by SDS-PAGE (15%), followed by immunoblotting with anti-pBad, anti-Bad and anti-β-actin (internal control) antibodies. The pBad protein level was higher in CD18/HPAF/Scr cells compared with CD18/HPAF/siMUC4 cells. (B) Cytosolic fractions were prepared from 1 mM gemcitabine-treated CD18/HPAF/Scr and CD18/HPAF/siMUC4 cells. The amount of cytochrome c protein in each fraction was then measured with a commercially available cytochrome c ELISA kit. The level of cytochrome c in the cytosol of CD18/HPAF/siMUC4 cells was higher compared with CD18/HPAF/Scr cells. (C) A total of 20 μg protein from each cell line treated with 1 mM gemcitabine for 48 h was resolved by SDS-PAGE (10%), followed by immunoblotting with antibodies against cleaved caspase-9 and β-actin (internal control). CD18/HPAF/siMUC4 cells showed more cleaved caspase-9 compared with CD18/HPAF/Scr cells. These findings indicate up-regulation of the intrinsic apoptotic pathway in CD18/HPAF/siMUC4 cells.) As an increase in phospho-Bad and Bcl-XL protects against apoptosis, to assess the effect of MUC4 we examined the expression of these proteins in our cell models CD18/HPAF/Scr and CD18/HPAF/siMUC4 after treatment with gemcitabine. We found that MUC4 markedly increased the level of phosphorylated Bad in CD18/HPAF/Scr cells (Figure 3A). No difference was observed in Bad protein levels between CD18/HPAF/Scr and CD18/HPAF/siMUC4 cells (Figure 3A). MUC1 and rat Muc4 have been shown to decrease apoptosis in response to various insults also by enhancing the expression of Bcl-XL (Raina et al, 2006; Thomadaki and Scorilas, 2006; Workman et al, 2009). In contrast, we did not observe any difference in Bcl-XL protein levels after gemcitabine treatment of CD18/HPAF/Scr and CD18/HPAF/siMUC4 cells (data not shown). Further, to determine the downstream effect of phosphorylated Bad on activation of the intrinsic mitochondrial apoptotic pathway, we examined the release of mitochondrial cytochrome c into the cytosol and the activation of caspase-9.
In response to gemcitabine treatment, mitochondrial cytochrome c release was significantly increased in MUC4-silenced CD18/HPAF/siMUC4 cells compared with CD18/HPAF/Scr cells (Figure 3B). In concert with this, the level of cleaved caspase-9 protein was also enhanced in CD18/HPAF/siMUC4 cells (Figure 3C). These observations suggest that MUC4 blocks activation of the intrinsic mitochondrial apoptotic pathway in CD18/HPAF pancreatic cancer cells in response to gemcitabine treatment. MUC4 facilitates sequestration of Bad in the cytosol Phosphorylation of Bad promotes its interaction with the scaffolding protein 14-3-3 and prevents its interaction with the anti-apoptotic Bcl-XL protein, leading to its sequestration in the cytosol and inhibition of its pro-apoptotic activity (Thomadaki and Scorilas, 2006). We found that MUC4 increases phosphorylation of Bad in CD18/HPAF/Scr cells in response to gemcitabine treatment. Here, we determined whether the increased phosphorylation of Bad was associated with increased binding to 14-3-3 proteins. For this, we performed a co-immunoprecipitation experiment for pBad and 14-3-3 proteins. Our data showed that pBad was pulled down in 14-3-3 immunoprecipitates from 1 µM gemcitabine-treated CD18/HPAF/Scr cells (Figure 4). Pull-down of pBad in 14-3-3 immunoprecipitates was decreased in CD18/HPAF/siMUC4 cells after treatment with 1 µM gemcitabine (Figure 4). This suggests that the expression of MUC4 promotes binding of Bad with 14-3-3 proteins and thereby helps in its sequestration in the cytosol. MUC4 activates the HER2 downstream signalling pathway Our recent studies have revealed that MUC4 interacts with HER2, a member of the EGF receptor family, and regulates its expression by post-translational mechanisms (Chaturvedi et al, 2008). To determine whether MUC4 exerts its anti-apoptotic function in pancreatic cancer cells through HER2, we examined the expression and activation of HER2 and its downstream signalling proteins. Our results showed increased expression and activation of HER2 in CD18/HPAF/Scr cells compared with CD18/HPAF/siMUC4 cells in response to gemcitabine treatment (Figure 5). Enhanced activation of HER2 was also associated with enhanced activation of ERK (Figure 5). This indicates that MUC4 contributes to resistance to the chemotherapeutic agent gemcitabine in CD18/HPAF pancreatic cancer cells through activation of the MUC4-HER2-mediated anti-apoptotic pathway. (Figure 4 legend: Analysis of the interaction between pBad and 14-3-3 proteins in CD18/HPAF/Scr and CD18/HPAF/siMUC4 cells in response to gemcitabine treatment by co-immunoprecipitation assay. Lysates from 1 µM gemcitabine-treated and untreated CD18/HPAF/Scr and CD18/HPAF/siMUC4 cells were used for immunoprecipitation with mouse anti-14-3-3 antibody. The immunoprecipitates were electrophoretically resolved on a 15% polyacrylamide gel and immunoblotted with anti-14-3-3 and anti-pBad antibodies. Mouse IgG was used as the isotype control for the co-immunoprecipitation study. CD18/HPAF/Scr cells showed enhanced precipitation of pBad with 14-3-3 proteins in response to gemcitabine treatment compared with CD18/HPAF/siMUC4 cells.) DISCUSSION Our earlier studies have shown the specific and differential expression of MUC4 in pancreatic adenocarcinoma as compared with the normal pancreas or chronic pancreatitis (Andrianifahanana et al, 2001).
Using MUC4-knockdown and MUC4-overexpressing pancreatic cancer cell models, we have shown that MUC4 potentiates pancreatic tumour cell growth and metastasis by altering the behavioural properties of the tumour cells (Singh et al, 2004; Chaturvedi et al, 2007; Moniaux et al, 2007). Recently, an anti-apoptotic function of MUC4 in pancreatic cancer cells has been observed in our laboratory. Further, another membrane-bound mucin, MUC1, and rat Muc4 have also been shown to inhibit apoptosis induced by multiple insults in rat 3Y1 fibroblast cells and in human melanoma and breast cancer cells, respectively (Raina et al, 2004; Workman et al, 2009). In addition, Muc4 has been shown to impart resistance to the chemotherapeutic agent trastuzumab in breast cancer cells by causing steric interference with the drug (Price-Schiavi et al, 2002; Nagy et al, 2005). Here, in this study, we explored the function of MUC4 in the development of resistance to gemcitabine in pancreatic cancer cells. We have shown that overexpression of MUC4 in pancreatic cancer cells contributes to resistance to gemcitabine by activation of an anti-apoptotic pathway. This makes MUC4 an ideal candidate to consider as an important marker for the prediction of patient response to therapy. The membrane-bound mucin MUC1 has been shown to impart resistance to gemcitabine in rat fibroblast cells by inhibition of the intrinsic apoptotic pathway (Raina et al, 2004). Rat Muc4 also provided resistance to the chemotherapeutic agent cisplatin in melanoma and breast cancer cells (Workman et al, 2009). Consistent with these findings, our data also showed that MUC4 suppresses the intrinsic mitochondrial apoptotic pathway to impart resistance to gemcitabine in CD18/HPAF pancreatic cancer cells. We observed increased phosphorylation of the pro-apoptotic protein Bad in MUC4-expressing CD18/HPAF cells in response to gemcitabine treatment. No change was observed in the expression of the anti-apoptotic protein Bcl-XL, which has been shown in earlier studies to be a major player in the inhibition of the intrinsic apoptotic pathway in response to various insults. The mitochondria-associated anti-apoptotic proteins Bcl-2 and Bcl-XL suppress the intrinsic mitochondrial apoptotic pathway, whereas pro-apoptotic proteins, such as Bad, translocate to mitochondria in response to apoptotic signals, and interact with and deactivate Bcl-2 and Bcl-XL (Yang et al, 1995; Thomadaki and Scorilas, 2006). The pro-apoptotic activity of Bad is suppressed by its phosphorylation on serine residues in response to survival signalling cascades. Indeed, phosphorylation of Bad at serine residues is sufficient for binding with the scaffolding protein 14-3-3 and thus inhibits the pro-apoptotic function of Bad. MUC4 causes increased phosphorylation of Bad in response to gemcitabine treatment of pancreatic cancer cells, and thereby facilitates its increased binding with 14-3-3 proteins. Therefore, Bad will not translocate to mitochondria to deactivate the anti-apoptotic protein Bcl-XL. As expected, the anti-apoptotic effects of Bad phosphorylation were also associated with decreased release of mitochondrial cytochrome c into the cytosol for the induction of intrinsic apoptosis. These findings indicate that MUC4-mediated increased phosphorylation of Bad is sufficient to protect pancreatic cancer cells from gemcitabine-induced apoptosis. Our recent studies have revealed that MUC4 interacts with HER2, a member of the EGF receptor family, and regulates its expression by post-translational mechanisms (Chaturvedi et al, 2008).
(Figure 6 legend: Proposed model of a possible mechanism of MUC4-mediated resistance to apoptosis. In a viable cell, the pro-apoptotic Bcl-2 family members (Bax, Bak) and BH3-only proteins, such as Bad, are antagonized by anti-apoptotic members, such as Bcl-XL and Bcl-2. In response to an apoptotic stimulus, Bad is activated and prevents anti-apoptotic Bcl-2 members from inhibiting pro-apoptotic members. Pro-apoptotic members then form pores in the mitochondrial membrane and release pro-apoptotic factors, such as cytochrome c, into the cytosol, which subsequently activates the caspase cascade leading to apoptosis. In response to gemcitabine treatment of MUC4-expressing CD18/HPAF/Scr pancreatic cancer cells, MUC4 promotes phosphorylation of the pro-apoptotic protein Bad through a MUC4-HER2-ERK-mediated pathway. Phosphorylation of Bad facilitates its binding with the scaffolding protein 14-3-3 and thereby inhibits translocation of Bad to the mitochondria, where it would deactivate the anti-apoptotic protein Bcl-XL. These findings suggest that MUC4-mediated increased phosphorylation of Bad through the HER2/ERK pathway might be responsible for protecting pancreatic cancer cells from gemcitabine-induced apoptosis.) In addition, a recent study has shown that the anti-apoptotic effect of rat Muc4 is independent of ErbB2/HER2 in A375 melanoma and MCF-7 breast cancer cells, whereas it is dependent on ErbB2 in JIMT-1 breast cancer cells (Workman et al, 2009). Our data showed increased expression and activation of HER2 in CD18/HPAF/Scr pancreatic cancer cells in response to gemcitabine treatment. Earlier studies have shown that the MUC4-HER2 interaction subsequently leads to activation of ERK (Chaturvedi et al, 2008). We have also shown increased activation of ERK in response to gemcitabine treatment of CD18/HPAF/Scr cells. Further, ERK activation has been shown to protect prostate cancer cells from apoptosis by phosphorylating the pro-apoptotic protein Bad (Sastry et al, 2006). These observations suggest that activation of an anti-apoptotic programme through the MUC4-HER2-ERK-mediated pathway might be responsible for the resistance to the chemotherapeutic agent gemcitabine. A schematic model (Figure 6) has been proposed to depict the possible mechanism of MUC4-mediated inhibition of apoptosis in pancreatic cancer cells in response to gemcitabine treatment. In conclusion, our data provide the first evidence that MUC4 imparts resistance to gemcitabine in pancreatic cancer cells. We showed that MUC4 protects CD18/HPAF pancreatic cancer cells from gemcitabine-induced apoptosis. Furthermore, the inhibition of apoptosis was associated with increased phosphorylation of HER2 and ERK. Activated ERK then deactivates the pro-apoptotic protein Bad by phosphorylation and thereby protects cells from apoptosis. These findings indicate that the overexpression of MUC4 confers resistance to the anti-cancer agent gemcitabine. In the future, it will be of interest to examine the effect of gemcitabine treatment on MUC4-expressing and non-expressing pancreatic cancer cell lines in vivo to support the pathogenic relevance of MUC4 in the acquisition of resistance to chemotherapeutics.
4,790.8
2009-09-08T00:00:00.000
[ "Biology", "Medicine", "Chemistry" ]
Riemannian Lie Subalgebroid This paper talks about Riemannian Lie subalgebroid. We investigate the induced Levi-civita connection on Riemannian Lie subalgebroid, and give a construction of the second fondamental form like in case of Riemannian submanifold. We also give the Gauss formula in the case of Riemannian Lie subalgebroid. In the case of the Lie subalgebroid induced by a Leaf of a characteristic foliation, we obtain that the leaf carries more curvature than the manifold as shown by Boucetta (2011). Introduction The notion of Lie groupoids and Lie algebroid are nowadays central resarch subjects in differential geometry.Lie algebroid was first introduced by Pradines and appeared as an infinitesimal counterpart of Lie groupoid.It is a generalization of the notion of Lie algebra and fiber bundle.A. Wenstein shows the crucial role of Lie algebroid in the study of Poisson manifold and Lagrangian mechanics.This motivated many studies on Lie algebroid, among others integrability by M. Crainic and R. L. Fernandes (2003), covariantes derivatives by Fernandes (2002) and Riemannian metric by M. Boucitta (2011), • • • The authors introduced in [1] the notion of geodesically complete Lie algebroid, and the notion of Riemannian distance.They also give a Hopf Rinow type theorem, caracterise the base connected manifold and characteristic leaf of this Lie algebroid. This paper deals with Riemannian Lie subalgebroid.We will introduce the notion of Riemannian Lie algebroid as a generalisation of Riemannian submanifold.Hence we will give the second fondamental form of Lie subalgebroid and rewrite the Gauss's formulas type.We will investigate the special case of Riemannian Lie subalgebroid induced by a leaf of a characteristic foliation of a Riemannian manifold. The paper is organised as follow.After this introduction, the second section gives some preliminary.In the third section, we introduce the notion of Lie subalgebroid of a Lie algebroid, induced by a submanifold of the base manifold.The fourth section deals with Riemannian Lie subalgebroid, and the second fondamental form in the case of Riemannian Lie subalgebroid will be introduced.We also give the Gauss's type formulas.In the last section we investigate a special case of Riemannian Lie subalgebroid induced by a leaf of a characteristic foliation. Some Basic Facts on Lie Algebroids Most of notions introduced in this section come from Boucetta (2003) and from J.-P.Dufour and N. T. Zung' s book (2005). Lie Algebroid A Lie algebroid is a vector bundle p : A → M such that : • the sectons space Γ(A) carry a Lie structure [, ]; • there is a bundle map ♯ : A → T M named anchor; Note that a Lie algebroid is said to be transitive if the anchor is surjective. The anchor ♯ also satisfies: where a, b ∈ Γ(A) and the bracket in the right is the natural Lie bracket of vector bundle.We also have: and for any a, b ∈ Γ(A) and f, g ∈ C ∞ (M).We also have a local splitting of a lie algebroid.given by R. Fernandes in (2002). Theorem 2.1(2002)(local splitting) Let x 0 ∈ M be a point where ♯ x 0 has rank q.There exists a system of cordinates where b i j ∈ C ∞ (U) are smooth functions depending only on the y ′ s and vanishing at x 0 : b Linear A-connection The notion of connection on Lie algebroids was first introduced in the contexte of Poisson geometry by Vaisman (1994) and R. Fernandes (2002).It appears as natural extension of the usual connection on fiber bundle (covariant derivative). 
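Two displayed formulas were lost from the passage above (the compatibility of the anchor with the brackets, and the Leibniz rule). As a hedged reconstruction, the standard Lie algebroid identities they most plausibly stated are:

```latex
% Anchor is a morphism of brackets, and the Leibniz rule holds:
\sharp([a,b]) = [\sharp(a), \sharp(b)], \qquad
[a, f\,b] = f\,[a,b] + \big(\sharp(a)\cdot f\big)\, b,
\qquad a, b \in \Gamma(A),\ f \in C^{\infty}(M).
```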
Remark 2.1 The notion of A-connection is a generalization of the notion of the usual linear connection on a vector bundle.Lot of classic notions associate with covariant derivative can be written in the case of Lie algebroid. To introduce the notion of parallel transport, Boucetta sets the following definition and then we can introduce the notion of linear A-connexion. Definiton 2.1 Let p : A → M be a Lie algebroid. A linear A-connection D is an A-connection on the vector bundle , then the Christoffel's symbols of the linear connection D can be defined by: The most interesting fact about this notion is that one can ask about his relationship with the natural covariant derivative. The answer given by Fernandes in ( 2002) is relative to the notion of compatibility with Lie algebroid structure. 2. A linear A-connection D is weakly compatible with the Lie algebroid structure if and only if, for any leaf L and any sections α ∈ Γ(Ker♯ L ) and β ∈ Γ(Ker♯ L ), D α β ∈ Γ(Ker♯ L ). Riemannian Metric on Lie Algebroid Let p : A → M be a Lie algebroid. Definition 2.3 A Riemannian metric on a Lie algebroid p : A → M is the data, for any x ∈ M, of a scalar product g x on the fiber A x such that, for any local sections a, b ∈ Γ(A), the function g(a, b) is smooth. Like in the classic case of Riemannian manifold, one of the most interesting fact is the existence of a linear A-connection which has the same characteristics with the Levi-civita connection.The A-connection is Also, the Linear A-connection D is characterized by the Koszul type formula: for all sections a, b, c ∈ Γ(A) The Christoffel's symbols of the Levi-civita A-connexion are defined, in a local coordinates system (x 1 , • • • , x n ) over a trivializing neighborhood U of M where Γ(A) admits a local basis of sections {a 1 , • • • , a r }, by: where the structures functions b g i j =< a i , a j > and (g i j ) denote the inverse matrix of (g i j ). The author of [?] define the sectional curvature of two linearly independant vectors a, b ∈ A x by where R is Riemannian curvature. Notion of Lie Subalgebroid The notion of Lie subalgebroid is studied by many authors.K. Mackenzie (1987) gives the following definition Definition 2.4 Let p : A → M be a Lie algebroid.A Lie subalgebroid of A is a Lie algebroid A ′ on M together with an injective morphism A ′ → A of Lie algebroid over M. Here we give a construction of this notion which generalised the classic notion of a submanifold. Let p : A → M be a Lie algebroid with anchor map ♯ and let N be a submanifold of M. Let's set and If there is no risk of confusion, let's denote by ♯ : A N → T N the restriction of ♯ to A N .Then we can defined a Lie bracket [, ] N on Γ(A N ), which is like a restriction of the Lie bracket of Γ(A), by setting for all a, b ∈ Γ(A N ) and x ∈ N: This bracket is well defined and induced a Lie algebroid structure on p N : A N → N. Remark 2.3 One of the best example of this Lie subalgebroid structure, it's the one induced by a leaf of a characteristic foliation L. This structure Lie subalgebroid is transitive.We will use this structure to study some particular Riemannian Lie subalgebroid. Riemannian Lie Subalgebroid Let p : A → M be a Lie algebroid with anchor map ♯ and let g be a Riemannian metric on A. Let N be submanifold of M and p N : A N → N be an induced Lie subalgebroid.As in the Riemannian case, if f : A N → A is an isometric immersion, then g = f * g is an induced metric on A N . Definition 3.1 (A N , g) is a Riemannian Lie algebroid called Riemannian Lie subalgebroid. 
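The defining properties and the Koszul-type formula of the Levi-Civita A-connection are garbled in the passage above. A sketch of the standard statements, following the usual conventions for Riemannian Lie algebroids (the source's signs may differ), reads:

```latex
% D is metric and torsion-free:
\sharp(a)\cdot g(b,c) = g(D_a b, c) + g(b, D_a c), \qquad
D_a b - D_b a = [a,b].

% Koszul-type formula, for all sections a, b, c of A:
2\,g(D_a b, c) = \sharp(a)\cdot g(b,c) + \sharp(b)\cdot g(a,c) - \sharp(c)\cdot g(a,b)
               + g([a,b], c) - g([b,c], a) + g([c,a], b).
```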
Like in the classic case of Riemannian submanifold, one can ask about the induced Levi-Civita A N -connection and it's relationship with the Levi-civita A-connection. As an answer, we will give a similar connection construction with the classic one of Riemannian submanifold.For this construction for all x ∈ N we denote by (A N ) ⊥ x the orthogonal complementary of (A N ) x in respect with gx .then we have : x can be defined by: Thus for all a ∈ Γ(A) one has a = a ⊤ + a ⊥ with a ⊤ ∈ Γ(A N ) and a ⊥ ∈ Γ(A N ) ⊥ . Moreover if D is the Levi-civita A-connection, then for all α, β ∈ Γ(A N ) one has: Let's set D N α β = (D α β) ⊤ .Then we have the following proposition. Proposition 3.1 D N is the Levi-civita A N -connection associate to the Riemannian metric g. proof Indeed D N is free torsion: for all α, β ∈ Γ(A N ) one has Second Fondamental Form This notion is a generalization of the classical case of Riemannian submanifold.we will show that the A -second fondamental form satisfies all properties of the second fondamental form of a Riemannian submanifold. Definition 3.2 The operator h This operator h is C ∞ (N)-bilinear and symmetric. Gauss and Kodazzi's Equations Here we rewrite the Gauss and Kodazzi's formulas in the case of Riemannian Lie subalgebroid. For all α ∈ Γ(A N ) and ξ ∈ Γ(A N ) ⊥ the relation (5) becomes : Now if we set B ξ α = −(D α ξ) ⊤ and L(α, ξ) = (D α ξ) ⊥ then we have the Gauss type formula: One of the important fact in the study of Riemannian submanifold is the relationship between the Riemannian curvature of the manifold and the one on the submanifold.Similarly to this case we can investigate this relationship between Riemannian curvature associate to g and the induced one associate to g. The following proposition gives the Gauss equation in the case of Riemannian Lie subalgebroid. Proposition 3.3 If R and R N are respectively the curvature associate to g and g, then we have: respectively the sectional curvature on A N and A, then the Gauss equation ( 8) become by sitting Moreover if α and β are orthornormal, then the equation ( 9) becomes : Totally Geodesic Lie Subalgebroid Definition 3.3 Let (A N , g) be Riemannian Lie subalgebroid of a Riemannian Lie algebroid (A, g).A N is said to be totally geodesic if any A N -geodesic is an A-geodesic. This class of Riemannian Lie subalgebroid is characterised by the following proposition. Proposition 3.4 Let p N : A N → N be a Riemannian Lie subalgebroid of the Riemannian Lie algebroid p : A → M.Then, the following assertions are equivalent: 1.A N is totally geodesic; 2. the second fondamental A-form is identically nul. Proof 1) ⇒ 2. Supposed that A N is a totally Riemannian Lie subalgebroid.Let s be an A N -geodesic, then D N s s = 0 and s is an A-geodesic (D s s = 0).Since D s s = D s s + ds dt , one has: 2 ⇒ 1).Supposed B ξ ≡ 0. Let s be an A N -geodesic, then : With the Gauss's formula ( 7), h(s, s) = 0 and s is an A-geodesic. Corollary 3.1 If (A N , g) is a totally geodesic Riemannian Lie subalgebroid, then the Gauss type formula becomes: for all linearly independent vectors α, β Minimal, Parallel and Totally Umbilical Lie Subalgebroid From the splitting theorem, consider {a 1 , • • • , a r } a local basis of the sections space Γ(A).Supposed that the sections space As in the case of Riemannian submanifold, the mean curvature H associated to the second fondamental A-form is defined by: then with the Gauss formula (7) the mean curvature become: we have ξ k ∈ Γ(A N ) ⊥ and B ξ k = 0. Then H ≡ 0 and A N is minimal. 
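The displayed equations of this passage (the orthogonal splitting, the Gauss and Weingarten formulas, and the Gauss equation) are likewise garbled. A hedged reconstruction along classical submanifold lines, with one consistent choice of curvature sign conventions (not necessarily the source's), is:

```latex
% Orthogonal splitting along N and the Gauss-type decomposition:
A_x = (A_N)_x \oplus (A_N)^{\perp}_x, \qquad a = a^{\top} + a^{\perp},
\qquad
D_{\alpha}\beta = \underbrace{(D_{\alpha}\beta)^{\top}}_{D^{N}_{\alpha}\beta}
                + \underbrace{(D_{\alpha}\beta)^{\perp}}_{h(\alpha,\beta)} .

% Weingarten-type formula, for \alpha \in \Gamma(A_N), \xi \in \Gamma(A_N)^{\perp}:
D_{\alpha}\xi = -B_{\xi}\alpha + L(\alpha,\xi), \quad
B_{\xi}\alpha = -(D_{\alpha}\xi)^{\top}, \quad L(\alpha,\xi) = (D_{\alpha}\xi)^{\perp}.

% Gauss equation and, for orthonormal \alpha, \beta, the sectional curvatures:
g(R(\alpha,\beta)\gamma,\delta) = g(R^{N}(\alpha,\beta)\gamma,\delta)
  - g(h(\alpha,\gamma), h(\beta,\delta)) + g(h(\alpha,\delta), h(\beta,\gamma)),
\qquad
K(\alpha,\beta) = K^{N}(\alpha,\beta) - g(h(\alpha,\alpha), h(\beta,\beta))
  + \|h(\alpha,\beta)\|^{2}.
```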
Definition 3.5 Let (A, g) be a Riemannian Lie algebroid and (A N , g) a Riemannian Lie subalgebroid of (A, g). 1.A normal section ξ ∈ Γ(A N ) ⊥ is called parallel, if for any section α ∈ Γ(A N ), one has: ) is said to be totally umbilic if, for all normal section ξ ∈ Γ(A N ) ⊥ , there exist a scalar λ such that: Proof Let α be a unit A N -section and {a 1 , • • • , a d } be a local orthonormal basis of the space of sections Γ(A N ).Then we have: with the Gauss equation ( 8), one has: Theorem 3.2 Let (A, g) be a Riemannian Lie algebroid with constant sectional curvature c.The scalar curvature of the Riemannian Lie subalgebroid (A N , g), S is given by : Proof Let {a 1 , • • • , a d } be a local orthogonal base of the space of A N -section Γ(A N ); then one has the scalar curvature: Characteristic Riemannian Lie Subalgebroid One of the interesting case is the situation of N with a leaf of characteristic foliation.Here we investigate on the following case : If N = L: In this case one has locally : Hence with the notion of compatibility of the A-connection D with the Lie algebroid structure (in the sens of Fernandes( 2002)), one has the following theorem. Theorem 4.1 Let (A, g) be a Riemannian Lie algebroid.If L is a Leaf of the characteristic foliation, then: 1.For all ξ ∈ Γ(A L ) ⊥ , one has B ξ = 0. Hence (A L ) is totally geodesic. 2. The Gauss's equation become: from the definition of B ξ one has B ξ α ∈ Γ(A L ), and with the fact that g is not degenerate, one has: B ξ = 0.And A L is totally geodesic. 2. Since B ξ = 0, then with the gauss formula we have : If L ⊂ N In this case, one has the following decomposition: where (A L ) ⊥ N is the complementary g N -orthogonale of A L in A N .Thus for all α, β ∈ Γ(A L ) one has: Moreover : In other words, since L ⊂ N than A L ⊂ A N and (A N ) ⊥ ⊂ (A L ) ⊥ .Thus Proof This is a consequence of the fact that if L is a leaf of characteristic foliation, then A L is totally geodesic. Corollary 4.2 If the Lie algebroid p : A → M is transitive, then N is a totally geodesic Riemannian submanifold of the Riemannian manifold M. Proof Let γ be a geodesic on M and s an A-path with base path γ.Then α is an A-geodesic (see [?]).From the above theorem, α is an A N -geodesic and we can conclude. h N (α, β) = h L (α, β) + h 1 (α, β)Hence we have the following proposition Theorem 4.2 If the submanifold N is entierelly contained in a leaf L of a characteristic foliation, then the Riemannian A N Lie subalgebroid is totally geodesic.Moreover A N have a reduction of codimension. Boucetta (2011)r notion of compatibility between linear A-connection and Lie algebroid structure introduced byBoucetta (2011)which is less stronger than the above one.A linear A-connection D is strongly compatible with the Lie algebroid structure if, for any A-path α, the parallel transport τ α preserves Ker♯.A linear A-connection D is weakly compatible with the Lie algebroid structure if, for any vertical A-path α, the parallel transport τ α preserves Ker♯.-connection is strongly compatible with Lie algebroid structure if and only if, for any leaf L and any sections Moreover the equality hold if and only if A N is totally geodesic.Where Ric is a Ricci curvature. Let (A, g) be a Riemannian Lie algebroid with constant sectional curvature c.If the Riemannian Lie subalgebroid (A N , g) has constant sectional curvature c, then: 1. c ≤ c, 2. c = c if and only if A N is totally geodesic.
3,729.6
2018-11-27T00:00:00.000
[ "Mathematics" ]
Whole-Gene Positive Selection, Elevated Synonymous Substitution Rates, Duplication, and Indel Evolution of the Chloroplast clpP1 Gene Background Synonymous DNA substitution rates in the plant chloroplast genome are generally relatively slow and lineage dependent. Non-synonymous rates are usually even slower due to purifying selection acting on the genes. Positive selection is expected to speed up non-synonymous substitution rates, whereas synonymous rates are expected to be unaffected. Until recently, positive selection has seldom been observed in chloroplast genes, and large-scale structural rearrangements leading to gene duplications are hitherto supposed to be rare. Methodology/Principle Findings We found high substitution rates in the exons of the plastid clpP1 gene in Oenothera (the Evening Primrose family) and three separate lineages in the tribe Sileneae (Caryophyllaceae, the Carnation family). Introns have been lost in some of the lineages, but where present, the intron sequences have substitution rates similar to those found in other introns of their genomes. The elevated substitution rates of clpP1 are associated with statistically significant whole-gene positive selection in three branches of the phylogeny. In two of the lineages we found multiple copies of the gene. Neighboring genes present in the duplicated fragments do not show signs of elevated substitution rates or positive selection. Although non-synonymous substitutions account for most of the increase in substitution rates, synonymous rates are also markedly elevated in some lineages. Whereas plant clpP1 genes experiencing negative (purifying) selection are characterized by having very conserved lengths, genes under positive selection often have large insertions of more or less repetitive amino acid sequence motifs. Conclusions/Significance We found positive selection of the clpP1 gene in various plant lineages to correlated with repeated duplication of the clpP1 gene and surrounding regions, repetitive amino acid sequences, and increase in synonymous substitution rates. The present study sheds light on the controversial issue of whether negative or positive selection is to be expected after gene duplications by providing evidence for the latter alternative. The observed increase in synonymous substitution rates in some of the lineages indicates that the detection of positive selection may be obscured under such circumstances. Future studies are required to explore the functional significance of the large inserted repeated amino acid motifs, as well as the possibility that synonymous substitution rates may be affected by positive selection. INTRODUCTION The circular chloroplast genome is in general expected to be a non-recombining unit where large within-genome duplications are rare. Most of its genes are single-copy, occurring in large and small single-copy regions, that are intervened by inverted repeat regions [1]. Substitution rates of chloroplast DNA (cpDNA) are held to be relatively slow and not very variable, although not constant, among lineages [2][3][4][5]. The gene content is likewise thought to be well conserved [6]. Most reports of positive selection are from the human genome or other model organisms [7,8], and documented significant positive bias of non-synonymous (dN) over synonymous (dS) substitutions from non-model organisms and/or entire genes are rare (but see [9] and [10]). From an evolutionary perspective it is to be expected that some genes (e.g. 
those involved in the immune system) can have specific sites that are under positive selection [11], but genes that exhibit positive selection as a whole and with non-synonymous substitutions more or less evenly distributed over the entire length of the gene are clearly more enigmatic and of greater general evolutionary interest. The chloroplast-encoded clpP1 (caseinolytic peptidase, ATPdependent, proteolytic subunit) is part of a gene family encoding clpP proteases with six members in Arabidopsis of the mustard family Brassicaceae [12]. The other five members are encoded in the nucleus (clpP2-clpP6) [12]. The main function of the protein is to degrade polypeptides, but the clpP proteases are involved in a variety of processes, ranging from developmental changes to stress tolerance [13]. It has been suggested that the clpP1 gene is essential for plant cell viability [14,15]. The gene is found in the chloroplast genome of all higher plants and most eukaryotic algae [12]. The structure of the gene and the amino acid composition are highly conserved with 196 amino acid residues distributed over three exons and with two intervening introns, but Oenothera, grasses, and the conifer genus Pinus lack the two introns. Most green algae also lack introns, but they have roughly the same number of amino acid residues as land plants. However, Chlamydomonas and Scenedesmus (both Chlorophyceae) have large insertions resulting in a total of 525 and 538 amino acids, respectively. It has been shown that the insertion sequence is not removed during transcription [16]. The present study makes a strong case for the existence of whole-gene positive selection on the chloroplast clpP1 gene in at least three separate lineages of flowering plants. The positive selection appears to be linked to multiple gene-region duplications, elevation of synonymous substitution rates, and insertion of repetitive amino acid sequence motifs. RESULTS In all 25 flowering plant species sequenced for this study (21 Sileneae,4 Oenothera), there is one copy of the clpP1 gene located between the rps12 and psbB genes. None of these had frame shifts or premature stop codons, and thus appeared functional. This is in accordance with all published land plant chloroplast genomes. Additional clpP1 copies were found in two of the species (Fig. 1). Several of the clpP1 sequences lack introns ( Table 1). Six of 21 investigated species of Sileneae showed signs of elevated branch lengths in the clpP1 exons (Fig. 2B). These taxa are distributed in three phylogenetically distinct lineages [17][18][19][20]: two closely related species in Silene subgenus Behen (S. conica and S. conoidea), one species in Silene subgenus Silene (S. fruticosa), and three of four investigated Lychnis species (L. chalcedonica, L. flos-cuculi, and L. abyssinica). Three of the six species (S. conica, S. fruticosa, and L. chalcedonica) have been extensively sequenced (.25 kb) also for other, both coding and non-coding chloroplast regions, but do not display such extreme branch lengths elsewhere in the plastid genome [18]. Surprisingly, the synonymous substitution rates in two of these three species are clearly elevated (up to five times) in clpP1 when compared with other cpDNA genes (Fig. 3). All four investigated species of Oenothera had elevated substitution rates. Only O. flava possess introns of the four, and it is also the one with the shortest branch length (Fig. 2B). Synonymous substitution rates in the clpP1 gene of O. 
elata are 2.5-3.2 times higher than other chloroplast genes (Fig. 3). The position and length of the clpP1 exons are generally conserved in Sileneae, like in most other angiosperms (Table 1), but exceptions were found in the six Sileneae species with long branches and the Oenothera sequences. Silene conica and S. conoidea share seven indels, of which the two largest insertions differ in length between the two species. Several of the species had long insertions partially consisting of amino acid repeats in their exons ( Table 1). Analysis of all available seed plant clpP1 sequences together with the five nuclear encoded gene family members (clpP2-clpP6) from Arabidopsis and Oryza strongly indicated (posterior probability 1.00) that the divergent sequences found in this study are of chloroplast origin (Fig. S1). The relationships within Sileneae in the gene family analysis, where all the long branches group together, are at odds with all previous cpDNA analyses (e.g. [18]). The analysis of only third positions from the clpP1 exons did, however, result in a topology compatible with other analyses of sequences from other cpDNA regions (Fig. 2B). For example, despite its very long branch, the Silene conica and S. conoidea clade grouped together with other members of Silene subgenus Behen (Fig. 2B). The Sileneae phylogenies based on clpP1 exon (Fig. 4A) and intron (Fig. 4B) sequences were substantially different. Intron sequences from Lychnis chalcedonica and Silene fruticosa did not show signs of elevated substitution rates relative to other cpDNA regions, and the phylogeny (Fig. 4B) is compatible with other Sileneae phylogenies based on cpDNA [18]. We used the method of Yang [21] to investigate the ratio of non-synonymous to synonymous substitutions (dN/dS, v) and found that this ratio varies significantly among lineages (P,0.0001), both under a generally accepted eudicot phylogeny ( Fig. 2A) and under the tree topology from Bayesian analysis of only third positions from the clpP1 exons (Fig. 2B). Several branches in both Sileneae and Oenothera resulted in v.1 (Fig. 2). In addition, the branches in Solanaceae have high v-values, despite shorter branch lengths. Three branches have v that are significantly higher than one after Bonferroni correction (table 2, Fig. 2A). The second topology (Fig. 2B) gives very similar v-values, but the values of the three branches with significant positive selection are higher still, and two internal branches not resolved in the first topology ( Fig. 2A) receive v-values.2.0 (Fig. 2B). We reject the idea that the observed open reading frames in the clpP1 sequences of Silene conica and S. conoidea have persisted by chance alone under a neutral model of evolution, because only 0.69% of simulated sequences with a branch length of 0.25 from the S. latifolia sequence appear to have remained functional (98.66% had premature stop-codons and an additional 0.65% lacked a proper start-codon). The average number of premature stop codons per simulated sequence was 4.3. DISCUSSION Our results indicate highly elevated substitution rates in the chloroplast clpP1 exons in Sileneae and in Oenothera. Although long branch lengths also could indicate high ages, we argue that this is unlikely because the comparisons with other genes (Fig. 3) which, unless acquired by horizontal transfer, must be of the same age. Even if the topological relationships sometimes are ambiguous, any resolution of the trees in Fig. 2 will show that the sister group has a significantly shorter branch. 
As sister groups by definition has the same age, either the short branch has undergone a substitution rate decrease or vice versa. This supports the explanation for the long clpP1 branches as likely to have been the result of a substitution rate increase. In Sileneae, there are at least three independent increases in substitution rates. Loss of introns in clpP1 appears correlated with Table 1. Length variation a , PCR primers, and amino acid repeats in the clpP1 region. Even if the rate of synonymous substitutions in the clpP1 gene for the Sileneae species with generally elevated substitution rates appears higher than that of other chloroplast genes for the same (Fig. 3), most of the total substitution rate increase can be ascribed to non-synonymous substitutions. A generally elevated substitution rate for a whole genome such as the mitochondrial genome of some species of Plantago could be explained by, e.g. an increased amount of oxygen free-radicals [22], but elevated substitution rates in, and exclusive to, a specific gene are harder to explain. The very long branch leading to Silene conica and S. conoidea is puzzling in this context, because the rates of both synonymous and non-synonymous substitutions are very high, and there is no significant positive selection. The pattern is similar, but less extreme, in L. chalcedonica (Fig. 3). The length of the branches leading to Solanum/Lycopersicon (Solanaceae), within the Fabaceae, to Vitis, and to Cucumis might also indicate elevated substitution rates, but this is much less striking than in the long Sileneae branches and in Oenothera (Fig. S1). By comparing synonymous and non-synonymous substitutions, we were able to detect statistically significant positive selection at the gene level on three branches. However, we anticipate that the actual duration of the episodes of positive selection in the tree might be larger. The power of the tests employed here is relatively low, i.e. positive selection is difficult to detect even if it exists at many sites [7]. Recently, methods have been developed to detect positive selection on individual codons for specific branches (e.g. the branch-site likelihood method [23]). These methods are generally more powerful and their utilization has resulted in a marked increase in the number of published reports of positive selection [24]. However, simulations by Zhang [24] showed that the power of these methods comes at a cost in the form of high levels of false positives. Under the assumption that substitutions in non-coding sequences are selection-wise neutral, elevated substitution rates in exon sequences compared to introns provide an indication of positive selection [25]. By comparing the branch lengths of the exon tree (Fig. 4A) and the intron tree (Fig. 4B) it is apparent that this is the case in S. fruticosa and probably also in Lychnis chalcedonica. For example, the uncorrected pairwise base distance between S. fruticosa (Sf1) and S. schafta is 0.23 in exons, but 0.05 in the introns, and for L. chalcedonica (exon: Lc1, intron: Lc3/Lc4 combined) and L. flos-jovis these figures are 0.30 and 0.04, respectively. Finally, Oenothera flava (the only Oenothera species in this study to have introns) shows more variability in exons than in introns compared to Eucalyptus (0.26 and 0.18, respectively), although the difference is less striking for this taxon. The clpP1 exon sequences that show the most extreme substitution rates are those of Silene conica and S. 
conoidea, but because they lack introns in the gene, the exon/intron comparison cannot be made. The variability in the gene is approximately an order of magnitude higher (synonymous and non-synonymous substitutions contributing roughly equally to the increase) than that of spacer-regions in the cpDNA of S. conica (see below). The pairwise base difference between Silene conica and Silene latifolia in intergenic spacers is on average 0.03 (rbcL/accD: 0.037, petA/psbJ: 0.035, psbE/petL: 0.028, rpl20/rps12: 0.022, data from [18]), but the difference in the clpP1 gene is 0.31. Despite the very divergent sequences, the dN/dS ratios did not indicate positive selection acting within this group. Because ratios around 1 implicate absence of both positive and purifying selection, this indicates that the S. conica and S. conoidea sequences may have lost their function. However, the simulation experiment strongly rejected the hypothesis that the absence of stop codons can be explained by chance alone. Further support for this is given by the fact that Silene conica and S. conoidea have seven indels in the clpP1 sequence, all of lengths that are multiples of three. The existence of these indels that do not distort the reading frame is in itself strong evidence for maintained gene function. Finally, even if S. conica appears to have a somewhat elevated cpDNA substitution rate in general [18], it seems unlikely that the very high substitution rates in clpP1 would be the effect of lost function. Xing and Lee [26] found that alternative splicing could greatly relax selection pressure (measured as dN/dS). This effect was accompanied by a strong decrease in synonymous substitutions. Because we observe a strong increase in synonymous substitutions in our data (Fig. 3), this explanation too, seems unlikely in this particular case. Whether there is a causal relationship between the increase in synonymous and non-synonymous rates in Silene conica/conoidea remains unclear. However, there are other indications that positive selection of clpP1 is correlated with increased synonymous substitution rates; Lychnis chalcedonica and Oenothera elata also have elevated synonymous substitution rates (Fig. 3). Some of the other branches in the eudicot clpP1 tree (Fig. 2) have combinations of branch lengths and dN/dS ratios that indicate a more widespread occurrence of positive selection of the gene. In particular the branch leading to Solanum/Lycopersicon (node 14 in Fig. 2A) that has a dN/dS ratio significantly higher than 1 before Bonferronicorrection ( Table 2), but also the branches leading to node 6 ( Fig. 2A), Medicago, Vitis, and Cucumis seem interesting targets for further investigations. In a systematic search for positive selection in higher plants based on almost 140,000 embryophyte gene sequences from GenBank, very few cases of v values above one were found when averaging over whole genes [27]. Only in two cases were v.2, and both of these were sequence pairs of nuclear encoded genes [27]. This illustrates how unusual our findings are. The recent report on positive selection in the chloroplast gene rbcL [10] clearly shows that specific sites in that gene are under positive selection in a wide range of land plants. Our study gives indications that positive selection in the clpP1 gene might be widespread in flowering plants. In rbcL it is only a small proportion of the sites that appear to be under positive selection [10], whereas in clpP1 a very large proportion of the sites are affected. 
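The uncorrected pairwise base differences quoted above (e.g. 0.03 in intergenic spacers versus 0.31 in clpP1) are simple proportions of mismatched aligned sites. A minimal sketch of that computation follows; the sequences shown are toy placeholders, not clpP1 data.

```python
# Sketch: uncorrected p-distance between two aligned DNA sequences.
def p_distance(seq1, seq2):
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    # Ignore alignment columns containing gaps or ambiguity codes.
    valid = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    diffs = sum(1 for a, b in valid if a != b)
    return diffs / len(valid)

# Toy example (placeholder sequences, not data from the study):
print(p_distance("ATGGCTAAAC-GT", "ATGACTTAACAGT"))  # -> 0.1667
```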
In the present study, the rates of synonymous substitutions are rather conserved with respect to different taxa and genes, with the important exception of the species undergoing rapid evolution in the clpP1 gene (Fig. 3). None of the ''normal'' taxa or genes shows as high dS rates as the clpP1 gene from those three species. The degree of elevated dS rates also indicates an interesting pattern: the species with the strongest estimated positive selection has the least elevated dS rate and vice versa, i.e. the rate of non-synonymous substitutions are roughly the same among the three species. Elevated evolutionary rates due to positive selection or relaxed selective constraints are often preceded by gene duplication [28]. We detected extra clpP1 gene copies only in Lychnis chalcedonica and Silene fruticosa. Indeed, the completely sequenced chloroplast genome of Oenothera elata (NC_002693) contains only one copy of clpP1. Only one of the four clpP1 copies in L. chalcedonica is potentially functional (Lc1), i.e. the others contain stop codons or are incomplete. The intron-containing Lc4 fragment shows obvious signs of elevated substitution rates in clpP1, although less so than Lc1. The Lc3 copy, apparently a pseudogene, is less divergent than the other copies in Lychnis chalcedonica. This does not seem to be an artifact due to missing data, because in the region where sequence information for Lc1, Lc2, and Lc3 overlap the uncorrected distance between Lc1/Lc2 and L. flos-jovis (the ''normal'' Lychnis species in this study) is 30.3%/34.6%, whereas between Lc3 and L. flos-jovis it is 3.9%. Thus, in absence of a formal phylogenetic analysis of the clpP1 copies in L. chalcedonica, we may speculate that at least the duplication leading to Lc3 preceded the onset of positive selection. In S. fruticosa, Sf2 is markedly less divergent than the two other copies, and thus also probably the result of an ancient duplication preceding the nonsynonymous rates increase. In the region where sequence information for Sf1 and Sf2 overlaps the uncorrected distance between Sf1 and S. schafta is 22.6%, whereas between Sf2 and S. schafta it is 5.2%. Both these cases may thus agree with one of few documented cases where gene duplication precedes the onset of positive selection [29]. It may be that positive selection, under some circumstances, can be triggered by duplication rather than being an expected outcome of it. The very large insertions (174 to 197 amino acids) found in the clpP1 exon 1 of Silene fruticosa (Sf1), Lychnis flos-cuculi, and L. abyssinica do not cause frame shifts or stop codons. To our knowledge, the effect of indel evolution has not been studied in relation to positive selection. It is possible that repetitive insertions are beneficial, because given that the repeats are in multiples of three nucleotides, they reduce the probability of stop codons, while possibly fostering new phenotypic variants. -Conclusions In our study, four distantly related taxa or groups of taxa (Oenothera, Silene fruticosa, Silene conica/conoidea, and Lychnis chalcedonica/flos-cuculi/abyssinica) exhibit substitution rates in clpP1 exon sequences that are hitherto unprecedented in the chloroplast genome. We conclude that these high evolutionary rates are correlated with positive selection of clpP1 in the evolutionary histories of at least three of these four groups. 
In the case of Lychnis, this was probably preceded by a duplication of a segment including clpP1, psbB, psbT, psbN, and psbH, but only the clpP1 gene shows signs of positive selection. We cannot rule out the possibility of gene duplications as a causal agent in the other three cases, because duplicates may either be ambiguous, extinct, or undetected. One of the major aspects of the present results is that they indicate that positive selection may be accompanied by elevated synonymous substitution rates in some cases. If this indeed proves to be the case it will have far-reaching consequences for our ability to detect positive selection. Also, the fact that positive selection appears to have originated in at least three closely related lineages of Sileneae calls for caution when interpreting plastid data at population and phylogenetic levels (cf. [9]). The relationship between cpDNA duplications, increased substitution rates, positive selection, and indel evolution in the chloroplast genome needs further scrutiny, and plant clpP1 appears to constitute an excellent model system for such studies. MATERIALS AND METHODS -Taxon sampling All seed plant clpP1 sequences on GenBank [30] as of May 15th 2006, as well as the nuclear clpP2-6 from Oryza and Arabidopsis, were downloaded; in addition, 21 species of Sileneae and four species of Oenothera were sequenced for the gene (Table S1). The sampling of Sileneae species follows that of Erixon and Oxelman [18], but Silene conoidea and Lychnis abyssinica were added after the observation that closely related taxa exhibited extremely high substitution rates. The inclusion of Oenothera in the study was deemed important because a survey of complete chloroplast genomes on GenBank revealed that Oenothera elata (NC_002693) lacks introns and shows signs of elevated substitution rates. The four species of Oenothera were chosen to represent different sections in the genus [31]. -DNA Amplification and Sequencing All 21 Sileneae species (except Silene conoidea, Lychnis flos-cuculi, Lychnis flos-jovis, and Lychnis abyssinica) were amplified and sequenced in a continuous region from the end of rbcL to the beginning of petB corresponding to c. 18 kb in Spinacia. DNA preparation, amplification, sequencing, and general primers follow Erixon and Oxelman [18]. The clpP1 region in Oenothera was amplified with the PCR primers rps12-F and clpP/psbH-R6 (Fig. 5). For sequencing of this product, eight primers were constructed (Table S2). In addition, some specific primers were made for Silene fruticosa and Lychnis chalcedonica, either to amplify a specific sequence copy or to sequence through large insertions (Table S2). -Alignment The amino acid sequences from Arabidopsis and Oryza for all five nuclear members of the clpP gene family were aligned using ClustalW version 1.83 [32], with default settings, together with chloroplast clpP1 amino acid sequences. The nuclear sequences corresponding to the 21 first amino acids in the chloroplast clpP1 gene of Spinacia were excluded prior the analysis, because the alignment responded strongly even to small changes in the parameters. All sequences corresponding to the six last amino acids of Spinacia clpP1 were also excluded, due to extreme length variation both within clpP1 and among the other gene family members. The amino acids were only used in the alignment process; all analyses were done on nucleotide sequences. All non-clpP1 exon DNA sequences were manually aligned, using the principles of Oxelman et al. 
[33], in the sequence alignment editor Se-Al version 2.0a11 (http://evolve.zoo.ox.ac.uk/). -Phylogenetic analysis Bayesian phylogenetic analyses were performed with MrBayes version 3.1.2 [34]. For both the gene family data set (72 terminals) and a restricted eudicot clpP1 data set (38 terminals, see below), substitutions were modeled with the GTR+C model, which received highest AIC scores according to MrAIC.pl version 1.4 (http://www. abc.se/,nylander/) together with PHYML version 2.4.4 [35]. The large data set ran for 50 million generations with six MCMC chains (temperature (t) = 0.2) and four independent runs with trees sampled every 1000th generation. The eudicot data set ran for 20 million generations with four MCMC chains (t = 0.2) and two independent runs with trees sampled every 100th generation. The first 50% of the sampled trees were, in both analyses, discarded as burn-in. Bayesian analyses, with the same settings as above except that number of generations were 10 million, were performed on: a) a matrix of only third positions from the Eudicot data set, b) a matrix of Sileneae clpP1 intron sequences, and c) a matrix of Sileneae clpP1 exon sequences. -Detection of positive selection PAML version 3.14 [21] was used to calculate the non-synonymous (dN) and synonymous (dS) substitutions rates, and the ratio (v) between them. To test for variation in v among the branches, the likelihood for the data under a model with fixed v (estimated from data) for the entire tree (m0) was compared to a model allowing for the branches to have different v (m1). To test the null hypothesis that addition of a v parameter for each branch does not increase likelihood of the data, a likelihood-ratio test was performed, where the test parameter is assumed to follow the chi-square distribution with the degrees of freedom equal the number of tree branches 21. The null hypothesis of absence of selection on individual branches was tested by comparing the maximum likelihoods from the m1 analysis to maximum likelihoods for models with free v for all branches except that the v of the branch under consideration was set to 1 (m2). This test has one degree of freedom. Only branches with v.1.0 were tested, but because these were detected a posteriori, the probabilities were Bonferroni corrected [7]. -Topologies used for estimates of dN/dS Two different tree topologies were used for calculations of dN and dS values. The first topology ( Fig. 2A) was based on the classification of the Angiosperm Phylogeny Group II [36] at the family level and on strongly supported within-family relationships published elsewhere [17][18][19][20]37,38] . The sistergroup relationship between Silene conica and S. conoidea has not been published before, but is based on their very similar morphology, i.e. they belong to the section Conoimorpha Otth, which is characterized by a calyx morphology and a basic chromosome number, both of which are unique in Sileneae. Five Silene sequences were excluded from this analysis; these were identical or almost identical (#4 bases different) to at least one of the nonexcluded sequences. Only sequences with the entire clpP1 coding region were included in the analyses detecting for positive selection. To explore the effect of the tree topology on the outcome a second substitution rate analysis was performed using the 50% majority-rule consensus phylogeny from the Bayesian analysis of third positions (Fig. 2B). 
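The likelihood-ratio tests described above compare nested codon models by referring twice the log-likelihood difference to a chi-square distribution, followed by a Bonferroni correction for branches selected a posteriori. A minimal sketch of that comparison, with made-up log-likelihood values in place of real PAML output, is:

```python
# Sketch: likelihood-ratio test between nested codon models (e.g. PAML's m0 vs m1).
# The log-likelihoods below are made-up placeholders, not output from this study.
from scipy.stats import chi2

def lrt(lnL_null, lnL_alt, df):
    """2*(lnL_alt - lnL_null) is referred to chi-square with df degrees of freedom."""
    stat = 2.0 * (lnL_alt - lnL_null)
    return stat, chi2.sf(stat, df)

# One-omega model vs free-omega-per-branch model on a tree with 20 branches
# (df = number of branches - 1, as in the test described above):
stat, p = lrt(lnL_null=-10250.3, lnL_alt=-10190.7, df=20 - 1)
print(stat, p)

# Bonferroni correction when k branches with omega > 1 are tested a posteriori:
k = 3
print(min(1.0, p * k))
```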
This topology is incompatible with the first topology with respect to the relations within Fabaceae. Another substantial difference is in the level of resolution. Eight more branches are unresolved in the second phylogeny compared to the first and two relationships are resolved only in the second (in Oenothera and in Lychnis, see Fig. 2). -Synonymous substitution rate comparison In order to obtain an estimate of the synonymous substitution rates in the clpP1 exons in Sileneae we made pairwise comparisons between eleven Silene/Lychnis species and Heliosperma (outgroup). These estimates were compared with data from three other chloroplast genes (psbB, 1527 bp; cemA, 639 bp; petA, 963 bp). The reason for choosing these particular genes was that they are the only large genes present in the 18 kb fragment sequenced for all the twelve taxa [18], and when Goremykin et al [39] compared the synonymous substitution rates for these genes of ten angiosperms relative to Pinus, the average dS values were similar and clpP1 had the lowest value (clpP1, 0.86; cemA, 0.89; psbB, 1.00; petA, 1.53). We also made a pairwise comparison of dS values for the same genes from the complete chloroplast genomes of Oenothera elata and Eucalyptus globulus. PAML version 3.14 [21] was used for all estimates. -Indirect test for loss of function In the absence of expression studies, we conducted a simple test to evaluate if the observed open reading frames of sequences with highly elevated rates could have persisted by chance. Ten thousand sequences were generated with SeqGen version 1.3.2 [40] under the JC69 model of evolution using the Silene latifolia sequence as ancestral and a branch length of 0.25. The branch length was chosen to be considerably smaller than the longest branch observed in the substitution rate analysis, namely the one leading to S. conica/S. conoidea, i.e. 0.47 (Figures 2A). A simple substitution model was chosen to simulate neutral evolution. Since the chloroplast DNA sequences have an AT-bias, the JC69 model will result in a conservative test, because an excess of A and T will increase the number of simulated stop codons simply because these are AT-rich (TAA, TAG, and TGA). ACKNOWLEDGMENTS We thank Katarina Andreasen, Adrian Clarke, Martin Lascoux, Henrik Nilsson, Mats Thulin, Mikael Thollesson, Niklas Wikström, and an anonymous reviewer for valuable comments on an earlier draft of this manuscript. Nahid Heidari is gratefully acknowledged for laboratory support. Author Contributions Conceived and designed the experiments: PE BO. Performed the experiments: PE. Analyzed the data: PE. Contributed reagents/materials/analysis tools: PE BO. Wrote the paper: PE BO.
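The loss-of-function test described in the Materials and Methods above asks how often an open reading frame would survive neutral evolution over a branch length of 0.25. A minimal re-implementation of that idea is sketched below; it substitutes a random stop-free placeholder ORF for the actual Silene latifolia clpP1 sequence and uses the JC69 site-difference probability p = (3/4)(1 - exp(-4d/3)).

```python
# Sketch: fraction of ORFs that stay open after JC69 evolution at branch length d.
# The ancestor is a random stop-free placeholder, not the S. latifolia clpP1 gene.
import math
import random

STOPS = {"TAA", "TAG", "TGA"}
BASES = "ACGT"

def evolve_jc69(seq, d, rng):
    # Under JC69 a site differs from the ancestor with probability
    # (3/4)*(1 - exp(-4d/3)); given a change, the 3 other bases are equiprobable.
    p_change = 0.75 * (1.0 - math.exp(-4.0 * d / 3.0))
    return "".join(
        rng.choice(BASES.replace(b, "")) if rng.random() < p_change else b
        for b in seq
    )

def orf_intact(seq):
    # Requires a proper start codon and no premature (internal) stop codon.
    internal = [seq[i:i + 3] for i in range(0, len(seq) - 3, 3)]
    return seq.startswith("ATG") and not any(c in STOPS for c in internal)

rng = random.Random(1)
codon_pool = [a + b + c for a in BASES for b in BASES for c in BASES
              if a + b + c not in STOPS]
ancestor = "ATG" + "".join(rng.choice(codon_pool) for _ in range(194)) + "TAA"

n_sim, n_intact = 10_000, 0
for _ in range(n_sim):
    n_intact += orf_intact(evolve_jc69(ancestor, 0.25, rng))
print(f"ORFs intact at d = 0.25: {100.0 * n_intact / n_sim:.2f}%")
```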
6,641.6
2008-01-02T00:00:00.000
[ "Biology" ]
Thermodynamic cycles in Josephson junctions A superconductor/normal metal/superconductor Josephson junction is a coherent electron system where the thermodynamic entropy depends on temperature and difference of phase across the weak-link. Here, exploiting the phase-temperature thermodynamic diagram of a thermally isolated system, we argue that a cooling effect can be achieved when the phase drop across the junction is brought from 0 to π in a iso-entropic process. We show that iso-entropic cooling can be enhanced with proper choice of geometrical and electrical parameters of the junction, i.e. by increasing the ratio between supercurrent and total junction volume. We present extensive numerical calculations using quasi-classical Green function methods for a short junction and we compare them with analytical results. Interestingly, we demonstrate that phase-coherent thermodynamic cycles can be implemented by combining iso-entropic and iso-phasic processes acting on the weak-link, thereby engineering the coherent version of thermal machines such as engines and cooling systems. We therefore evaluate their performances and the minimum temperature achievable in a cooling cycle. Model and thermodynamic quantities We consider a superconductor/normal metal/superconductor (SNS) Josephson weak-link phase-biased by a superconducting ring pierced by an external magnetic flux, as schematically depicted in Fig. 1. The volume of the system is = + V A L A L N N S S (where A S/N and L S/N represent the cross sectional area and the length of the superconducting/normal metal regions, respectively). In the following, we also denote with σ S/N the associated conductivities and with Δ 0 the superconducting energy gap at = T 0. We assume that this hybrid system is thermally isolated. Its electronic degrees of freedom can be connected to two reservoirs residing at temperature T L , T R via two ideal thermal valves v j with = j L R , , see Fig. 1. Ideal thermal valve means that they are assumed to be instantaneous, non dissipative, without thermal losses and with negligible thermal resistance in conductive state. We notice that a great effort in the research in the field of mesoscopic caloritronics 12,[49][50][51][52][53] is currently devoted to reach these conditions, developing novel schemes for improved thermal isolation. In the following, we assume the valves closed (no heat exchange), except for Section 4 where the valves are exploited to realise thermodynamic cycles. Quasi-classical theory. In general, an hybrid system consisting of a superconductor and a normal metal in electric contact shows a different behaviour of both the thermodynamic and transport properties with respect to their bulk nature in the disconnected case. This modification has been dubbed in literature under the generic name of proximity effect and is due to the propagation of the electronic correlations from the superconductor to the normal metal. The proximity effect for dirty metals can be described within the quasi-classical theory of superconductivity 44,45 . In this framework, transport and statistical properties in thermodynamic equilibrium can be obtained by the momentum-averaged retarded Green function ε g x ( , ) R , a matrix defined in the electron-hole (Nambu) space, dependent on position x and energy ε 45 . The Green function ε g x ( , ) R can be determined by solving the so-called Usadel equations. We treat the SNS junction within the quasi 1-D approximation 45,54 , neglecting the edge-effects at the SN interfaces. 
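Equation (1) and the gap-matrix definition in the following paragraph arrive garbled from extraction. A standard diffusive-limit form of the Usadel equation for the retarded Green function is sketched below as a plausible reconstruction; D is the diffusion constant, ξ = (ħD/Δ₀)^{1/2}, and sign conventions for the order-parameter matrix vary between references.

```latex
% Usadel equation for the momentum-averaged retarded Green function (diffusive limit):
\hbar D\, \partial_x\!\big( \hat g^{R}\, \partial_x \hat g^{R} \big)
  + i\,\big[\, \varepsilon\, \hat\tau_3 + \hat\Delta(x),\; \hat g^{R}(x,\varepsilon) \,\big] = 0,
\qquad \big(\hat g^{R}\big)^{2} = \hat 1,

% with the order-parameter matrix built from the Nambu-space Pauli matrices:
\hat\Delta(x) = \Delta(x)\,\tfrac{1}{2}\big(\hat\tau_1 + i \hat\tau_2\big)
              + \Delta^{*}(x)\,\tfrac{1}{2}\big(\hat\tau_1 - i \hat\tau_2\big).
```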
This approximation is valid when $A_S \lesssim \xi^2$, or as long as the junction resistance is concentrated in the normal region 55. In this approximation, the Usadel equations read 44,56

$$\xi^2\,\partial_x\!\left(\hat g^R(x,\varepsilon)\,\partial_x \hat g^R(x,\varepsilon)\right) + \frac{i}{\Delta_0}\left[\varepsilon\hat\tau_3 + \hat\Delta(x),\,\hat g^R(x,\varepsilon)\right] = 0, \qquad (1)$$

where ξ is the superconducting coherence length 45, $\hat\tau_j$ is the j-th Pauli matrix, $\hat\tau_\pm = (\hat\tau_1 \pm i\hat\tau_2)/2$, and $\hat\Delta(x) = \Delta(x)\hat\tau_+ - \Delta^*(x)\hat\tau_-$, with Δ(x) the complex order parameter calculated self-consistently 44 from the anomalous component of the Green function 44; $\hat g^A$ is the advanced Green function, which at thermal equilibrium can be obtained from $\hat g^R$ 45. λ and $E_c$ define respectively the coupling constant and the energy band of the electron-electron interaction; in practice, λ and $E_c$ are eliminated by using the standard prescription for the cut-off regularization of BCS theory 57.

Figure 1. Thermodynamic SNS system. A superconductor/normal metal/superconductor Josephson weak-link is phase-biased at phase φ by a superconducting ring pierced by a magnetic flux. The electronic system can be connected via two thermal valves $v_L$ and $v_R$ to two external reservoirs residing at temperatures $T_L$ and $T_R$, respectively. $L_N$ and $L_S$ are respectively the junction and ring lengths.

Eq. (1) is complemented by the following boundary conditions. One is the pseudo-normalisation $(\hat g^R)^2 = \hat 1$. Moreover, matching conditions 45,58 hold at the S/N interfaces: at the position $x_{SN}$ of the left interface (and in an analogous way for the right interface), we impose continuity of the Green function and of the matrix current. At a distance $L_S/2$ from the interfaces we set the BCS bulk boundary conditions, given by the homogeneous case of Eq. (1). These boundary conditions are physically valid when $L_S/2 \gg \xi$, since the superconductor is assumed to be long enough that the inhomogeneity effects near the junction are negligible and a standard bulk form is recovered. In the following we fix $L_S = 10\,\xi$. Moreover, we focus on an SNS junction in the short regime, i.e. $L_N \ll \xi$. In this regime, proximity effects are enhanced 59 and the numerical results can be compared with analytical ones from the literature. We therefore fix $L_N = 0.1\,\xi$ in the following. In order to calculate the entropy of the system, we extract from the Green function the quasi-particle normalised local DoS 44,54,

$$N(x,\varepsilon) = \tfrac{1}{2}\,\mathrm{Re}\,\mathrm{Tr}\!\left[\hat\tau_3\,\hat g^R(x,\varepsilon)\right].$$

The proximity effect 41,44 alters the DoS of both the weak-link and the superconductor. Qualitatively, an induced minigap $\tilde\Delta(\varphi)$ appears in the normal metal, whose energy width can be tuned 42-45 by the phase difference φ. Since the DoS is phase-dependent, the quasiparticle entropy S(T, φ) of the junction acquires a dependence on both the temperature T and the phase drop φ. The total entropy can be expressed as

$$S(T,\varphi) = \sum_{j=S,N} A_j \int \mathrm{d}x_j\;\mathcal S(x_j, T, \varphi), \qquad (7)$$

where $x_j$ denotes the curvilinear coordinate along the superconducting ring ($x_S$) and the normal region ($x_N$), and $\mathcal S$ is the quasiparticle entropy density,

$$\mathcal S(x,T,\varphi) = -4 k_B N_0 \int_0^\infty \mathrm{d}\varepsilon\; N(x,\varepsilon)\left[f\ln f + (1-f)\ln(1-f)\right],$$

with f = f(ε, T) the Fermi distribution, N(x, ε) the normalised local DoS that quantifies local variations due to the proximity effect, and $N_0$ the normal DoS at the Fermi level. Since the entropy is given by the quasi-particle occupation of the available states, it increases from φ = 0 (gapped state) to φ = π (gapless state), as depicted in Fig. 2(a). An analysis of the phase-modulation of the entropy of proximized SNS Josephson junctions can be found in refs [59][60][61], where the relations between supercurrent, entropy and inverse proximity effect are taken into account.
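To make the entropy integral of Eq. (7) concrete, the minimal sketch below evaluates the quasiparticle entropy density for a toy gapped DoS with a phase-tunable minigap, using the illustrative law Δ̃(φ) = Δ_0|cos(φ/2)| (an assumption for illustration only, not the self-consistent DoS of the paper); the prefactor is set to 1, so the function returns entropy per unit $4k_B N_0$.

```python
import numpy as np

def entropy_density(T, phi, Delta0=1.0):
    """Quasiparticle entropy density per 4*kB*N0 (cf. Eq. (7) integrand),
    for a toy BCS-like DoS with illustrative minigap
    Dtil(phi) = Delta0*|cos(phi/2)| (assumption, not self-consistent)."""
    Dtil = Delta0 * abs(np.cos(phi / 2.0))
    # integrate from just above the minigap edge; kB = 1 units
    eps = np.linspace(Dtil + 1e-6, 20.0 * Delta0, 40000)
    dos = eps / np.sqrt(eps**2 - Dtil**2)           # normalised gapped DoS
    f = 1.0 / (np.exp(eps / T) + 1.0)               # Fermi function
    integrand = -dos * (f * np.log(f) + (1 - f) * np.log1p(-f))
    return np.trapz(integrand, eps)

T = 0.1   # in units of Delta0/kB
print(entropy_density(T, 0.0))      # gapped phase (phi = 0): exponentially small
print(entropy_density(T, np.pi))    # gapless phase (phi = pi): much larger
```

Running it reproduces the qualitative statement above: at low T the entropy at φ = π exceeds the φ = 0 value by orders of magnitude.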
Analytical Kulik-Omel'yanchuk limit. The phase dependence of the entropy and the supercurrent properties of the SNS junction are linked by a Maxwell thermodynamic relation 59. Since both the entropy S(T, φ) and the supercurrent I(T, φ) can be obtained as two different derivatives of the free energy of the system 59,62, the following identity holds:

$$\left(\frac{\partial S}{\partial \varphi}\right)_T = -\frac{\hbar}{2e}\left(\frac{\partial I}{\partial T}\right)_\varphi, \qquad (8)$$

where e denotes the electron charge. This identity follows directly from the equilibrium nature of the Josephson current, and it establishes an exact thermodynamic relation between the excited states (quasi-particles), responsible for the entropy, and the ground-state properties characterized by the Cooper pairs, responsible for the supercurrent flow. Equation (8) implies that the entropy can be written as

$$S(T,\varphi) = S_0(T) + \delta S(T,\varphi), \qquad \delta S(T,\varphi) = -\frac{\hbar}{2e}\int_0^\varphi \frac{\partial I(T,\varphi')}{\partial T}\,\mathrm{d}\varphi', \qquad (9,10)$$

with $S_0(T)$ the entropy at φ = 0. We consider now a particular analytical limit, given by the Kulik-Omel'yanchuk (KO) theory 55,63. Assuming a diffusive SNS junction with a short weak-link ($L_N/\xi \ll 1$) and the resistance concentrated in the weak-link (a → ∞), the quasi-1D Usadel equations can be solved analytically. From the resulting Green function one can extract the current-phase relation $I_{KO}(T,\varphi)$ 55,63,64, with the critical supercurrent at zero temperature proportional to $\Delta_0/(e R_N)$; the second equality of Eq. (11) in the original follows from the diffusive-transport relation $\sigma_N = 2 e^2 N_0 \Delta_0 \xi^2/\hbar$ 45,54. In the KO limit the entropy at φ = 0 is $S_0(T) = V\,\mathcal S_{BCS}(T)$, where $\mathcal S_{BCS}$ is the homogeneous BCS entropy density, obtained by substituting into Eq. (7) the normalised BCS DoS

$$N_{BCS}(\varepsilon) = \frac{|\varepsilon|}{\sqrt{\varepsilon^2 - \Delta_0^2}}\,\theta(|\varepsilon| - \Delta_0).$$

For $T \lesssim 0.1\,T_c$, the entropy $S_0$ has an exponentially suppressed asymptotic form 65,66, in whose prefactor we introduce the dimensionless parameter α, proportional to $I_c/V$. A linear-in-temperature behaviour is found for δS(T, φ = π) of Eq. (14): at φ = π one sees a linear-in-temperature entropy reminiscent of the normal-metal nature of the gapless state. At low temperature the entropy variation δS is proportional to the critical supercurrent $I_c$ of the junction. The numerical approach (Eqs (1) to (7)) allows one to calculate the entropy variation in the general case, for any value of a and $L_N$. The analytic KO limit (Eqs (8) to (15)) is reproduced by the numerics when a → ∞ and $L_N \ll \xi$. In order to compare the numerical with the analytical KO results, the parameter α must be expressed in terms of the parameters $L_N$, $L_S$, a of the Usadel equations; considering the second equality in Eq. (11), α can be written in terms of the junction geometry (Eq. (17) of the original). We remark that this holds only in the KO theory, i.e. assuming $L_N \ll \xi$ and a → ∞, which justifies the approximation on the right-hand side. If the KO assumptions do not hold, Eq. (14) is not valid and the parameter α cannot be used. Entropy and heat capacity variation. To better appreciate the role of the parameter a it is convenient to introduce a quantity p that estimates the relative variation of the entropy induced by the phase, compared with the phase-independent part:

$$p(T) = \frac{\delta S(T,\pi)}{S_0(T)}. \qquad (18)$$

The quantity p(T) is reported in Fig. 2(b). From Eqs (13) and (18) it is straightforward to verify that the relative entropy variation p(T) scales like α. Therefore, to increase the relative entropy variation, and thus enhance the effect of an iso-entropic process, one should increase $\alpha \propto I_c/V$, by increasing the value of the critical current or by lowering the volume V of the system. The adiabatic effects that we study in the next sections depend on the relative entropy variation p and hence on the parameter a (and ultimately on α).
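As a concrete illustration of the KO current-phase relation invoked above, the sketch below evaluates the standard zero-temperature KO result for a diffusive short junction, $I(\varphi) = (\pi\Delta_0/eR_N)\cos(\varphi/2)\,\mathrm{artanh}[\sin(\varphi/2)]$ (quoted from the KO literature 55,63, not derived here), and locates the critical current numerically.

```python
import numpy as np

# Zero-temperature KO current-phase relation for a diffusive short
# SNS junction, in units of Delta0/(e*R_N) (standard KO result).
def I_KO(phi):
    s = np.sin(phi / 2.0)
    return np.pi * np.cos(phi / 2.0) * np.arctanh(s)

phi = np.linspace(0.0, np.pi, 100001)[:-1]  # avoid arctanh(1) at phi = pi
I = I_KO(phi)
k = np.argmax(I)
print(f"I_c = {I[k]:.3f} Delta0/(e R_N) at phi = {phi[k]:.3f} rad")
# -> e R_N I_c ≈ 2.08 Delta0, attained near phi ≈ 0.63*pi
```

The maximum is reached away from φ = π/2, a hallmark of the non-sinusoidal current-phase relation of diffusive weak-links.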
From Eq. (15), obtained within KO theory, we can predict the expected scaling behaviour. In particular, the magnitude of the entropy variation scales like $1/L_N$, since the critical supercurrent scales like $1/R_N$ for short junctions. Increasing $L_S$, instead, increases the total volume of the device without substantially affecting the supercurrent, with the consequence that the relative entropy variation behaves like $1/L_S$. We note that this argument does not hold for $L_S \lesssim \xi$, where the BCS rigid boundary conditions are no longer valid and hence the supercurrent magnitude can depend on $L_S$; as already stated, we avoid this situation by considering $L_S = 10\,\xi$. It is important to notice, however, that the critical-current-to-volume ratio cannot be increased at will: the two quantities are not independent, since for a small volume a local self-consistent reduction 67,68 of the pair potential $\Delta_0(x)$ appears, decreasing the supercurrent of the weak-link. It is important to stress that Eqs (9) and (10), and their link with the current-phase relationship, are general and rely on a basic thermodynamic consistency relation, i.e. a Maxwell relation. Hence, they hold true independently of the nature of the weak-link (insulating barrier, metallic weak-link, ferromagnetic layer, etc.). For a SIS junction, where the current-phase relationship is given by the Ambegaokar-Baratoff formula 55,57, the low-temperature entropy variation is exponentially suppressed with respect to that of an SNS junction, with a completely different temperature dependence from Eq. (18). In order to enhance the effects of an iso-entropic transformation, it is therefore convenient to consider an SNS junction rather than a SIS one. This is due to the particular feature of the SNS junction that, thanks to the DoS of the N region, allows the phase-modulation of correlations and transport properties over a volume of magnitude $\xi^3$, in contrast to a SIS junction, which concerns a zero-length insulating layer. This is why in this paper we mainly concentrate on proximized SNS systems. The above discussion shows the complete generality of the presented mechanism, which can be realized with different kinds of junctions or with different external thermodynamic variables, opening the road to further developments in the thermodynamic characterisation of novel hybrid systems. A comment on the heat capacity C of the system is now in order. This is a measurable quantity, encoding the temperature variation of the system after a given heat pulse, produced for example by Joule heating. Here, we can expect the heat capacity to be phase-dependent:

$$C(T,\varphi) = T\left(\frac{\partial S(T,\varphi)}{\partial T}\right)_\varphi.$$

Hence, the two different temperature behaviours of the entropy at φ = 0 and φ = π are reflected in different behaviours of the heat capacity. Notice that here we consider the heat capacity at constant phase; this quantity can differ from the heat capacity at constant flowing supercurrent, in analogy with the different heat capacities of an ideal gas at constant volume or constant pressure. From Eq. (13) we obtain that the heat capacity at φ = 0 is exponentially suppressed at low temperature. The heat capacity thus behaves very differently in $T/\Delta_0$ depending on whether the phase is φ = 0 or φ = π: an exponential decrease or a linear behaviour, respectively, with decreasing temperature.
Note that in the latter case one has the same linear-in-temperature behaviour expected for a normal metal. Hence, the value of α can be estimated from heat capacity measurements at φ = 0 and φ = π. It is important to note that at low temperature the gapped nature of the superconducting leads exponentially suppresses their heat capacity (see Eq. (23)), producing a limited contribution with respect to the proximized region, even though the leads are larger in volume. This is one of the reasons behind the effectiveness of the proposed iso-entropic transformation in changing the electron temperature of the total system. Iso-entropic process In the previous section we discussed the phase and temperature dependence of the thermodynamic entropy of an SNS junction. Exploiting these features, we now study the properties of an iso-entropic process in which the entropy remains constant while the phase φ of the weak-link is varied externally. In order to keep the entropy constant, this process results in a temperature variation, in particular a decrease of the electronic temperature of the junction, as we demonstrate below. To implement such a process we assume that the system is thermally isolated and does not exchange heat with the environment or with phonons (see Section 5 for a detailed discussion of this issue under realistic experimental conditions). For this reason, during a single iso-entropic process the two thermal valves $v_L$ and $v_R$ sketched in Fig. 1 are closed. Exploiting a physical analogy with classical thermodynamics, this iso-entropic process is similar to the adiabatic expansion/compression of an ideal gas. In both situations there is no heat exchange with an external reservoir and the number of available states is modified by the variation of a thermodynamical variable, typically an external parameter: in the former case, tuning the phase φ modifies the value of the minigap (and consequently the DoS), while in the latter case, varying the volume changes the available states. Let the system be in an initial thermodynamic state $(T_i, \varphi_i = 0)$. In an iso-entropic, quasi-static process that brings the phase from $\varphi_i = 0$ to $\varphi_f$, the final temperature is determined by the entropy equation

$$S(T_f, \varphi_f) = S(T_i, \varphi_i),$$

which implicitly establishes the relation between temperature and phase in the final state. In particular, since the entropy increases from $\varphi_i = 0$ to $0 < \varphi_f \le \pi$, the isolated system decreases its temperature from the initial value $T_i$. This is shown in Fig. 3(a), where we plot the relative temperature decrease $T_f(\varphi_f)/T_i$. Notice that a greater temperature decrease is achieved for lower initial temperatures $T_i$ (see e.g. the lowest curve in Fig. 3(a)). Recalling the symmetry properties of the supercurrent 39 and Eq. (8), one can argue that $T_f(\varphi_f, T_i)$ is a 2π-periodic, even function of $\varphi_f$. From now on we concentrate on a process that brings the junction from $\varphi_i = 0$ to $\varphi_f = \pi$, as sketched by the black arrow in Fig. 2(a), which maximizes the temperature decrease. For these two values the supercurrent flowing through the junction is zero, which also allows us to neglect the contribution of the ring inductance to the system energy (this point is clarified further in Section 4). We define $\zeta(T_i) = T_f(\varphi_f = \pi)/T_i$; at low initial temperature, ζ is characterised by an exponential decrease as a consequence of the energy mismatch between the thermal occupation factor $f(\varepsilon)\ln f(\varepsilon)$ and the proximized DoS with induced minigap Δ̃.
When φ approaches π, the induced minigap Δ̃ closes and the energy window associated with $f(\varepsilon)\ln f(\varepsilon)$ becomes greater than the minigap. As a consequence, the phase-dependence of the entropy integral in Eq. (7) is stronger at low temperatures. This property is reflected in the different behaviour of the entropy at φ = 0 and φ = π in Fig. 2(a). In Fig. 3(b) we report ζ(T_i) as a function of T_i. The various curves refer to different geometries, i.e. different values of the dimensionless proximity parameter a. In this figure and the following ones, the solid curves represent the numerical solution obtained by solving the Usadel equations 10,59 for the specified geometry (see Section 2), while the dashed lines are obtained within KO theory, calculating S(T, φ) by means of Eqs (7), (12) and (14). In all cases, there is good agreement between the full numerical results and KO theory for a = 10², 10³, while deviations appear at lower values of a, where KO theory overestimates the temperature decrease. This shows that the numerical solution of the proximized system is necessary when a becomes small, as would be desirable in order to decrease the final temperature. From Fig. 3(b) one sees that ζ(T_i) grows with a. This is shown in detail in Fig. 4, where ζ(T_i) is plotted as a function of a for three different values of the initial temperature T_i. When a ≫ 1, the current-phase relationship of the junction tends to the KO asymptotic limit (a → ∞), where the supercurrent magnitude is determined only by the junction geometry. In this limit, the entropy variation has the form of Eq. (14), with a scale set by the critical supercurrent magnitude I_c. Furthermore, increasing the volume V of the system increases a and consequently increases the phase-independent contribution $S_0(T)$ of the entropy. The relative entropy variation p then decreases, scaling with α, spoiling the iso-entropic effects. In other words, the contribution $S_0$, which increases with V, acts as a heat capacity that mitigates the iso-entropic temperature decrease. On the contrary, when a → 1, KO theory no longer holds. In particular, the current-phase relationship is no longer determined by the weak-link characteristics alone, but depends also on the geometrical parameters of the superconductor. This is due to the fact that the normal metal weakens the correlations in the superconducting banks, with the final result that the superconducting region near the SN interface behaves like a normal metal. As a consequence, the SNS junction behaves like a weak-link with an effective length longer than the geometrical length 39,67,69 and with a reduced supercurrent magnitude that decreases with a. This effect reduces the iso-entropic cooling, as shown by the fact that the numerical calculations (solid lines) in Figs 3 and 4 return a weaker iso-entropic temperature decrease. The temperature decrease ζ(T_i) can be obtained by numerically solving the transcendental equation $S(T_f, \pi) = S(T_i, 0)$. However, for T → 0 the phase-independent contribution in $S_0(T_f)$ (exponential in $T_f$, see Eq. (13)) can be neglected with respect to the linear term in Eq. (18). In this case, $T_f$ follows by equating $S_0(T_i)$ to the linear-in-temperature entropy at φ = π, confirming that the iso-entropic temperature decrease is enhanced by increasing the supercurrent and decreasing the volume. This is an important quantity: ζ(T_i)T_i represents the minimum temperature achievable in this iso-entropic transformation.
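A minimal numerical sketch of this transcendental solve is given below, using illustrative low-temperature model forms $S(T,0) = A\,(\Delta_0/k_BT)^{3/2}e^{-\Delta_0/k_BT}$ for the gapped entropy and $S(T,\pi) = S(T,0) + \gamma T$ for the gapless branch; the prefactors A and γ are placeholders (in the paper γ would be set by $I_c$ and α), chosen only to demonstrate the procedure.

```python
import numpy as np
from scipy.optimize import brentq

Delta0 = 1.0           # gap; kB = 1 units
A, gamma = 1.0, 1.0    # illustrative prefactors (assumptions, not fitted)

def S(T, phi):
    """Toy entropy: exponentially suppressed at phi=0, extra linear
    normal-metal-like term at phi=pi (cf. the forms of Eqs (13), (18))."""
    S0 = A * (Delta0 / T) ** 1.5 * np.exp(-Delta0 / T)
    return S0 + (gamma * T if phi == np.pi else 0.0)

Ti = 0.15 * Delta0
target = S(Ti, 0.0)
# iso-entropic condition S(Tf, pi) = S(Ti, 0); bracket and solve for Tf
Tf = brentq(lambda T: S(T, np.pi) - target, 1e-6, Ti)
print(f"zeta = Tf/Ti = {Tf/Ti:.3e}")   # strong cooling at low Ti
```

Lowering Ti in this sketch makes ζ shrink rapidly, mirroring the exponential behaviour described above.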
Thermodynamic cycles In this section we exploit the above results to implement thermodynamic cycles with a phase-biased Josephson weak-link. For the sake of simplicity, we limit the discussion here to one particular cycle, although different thermodynamic cycles can be implemented. We combine two iso-entropic processes with two iso-phasic processes, in which φ is kept constant; in the following, we restrict to iso-phasic curves at φ = 0 and φ = π. We investigate the properties and performance of this Josephson cycle as an engine and as a cooling system. These two configurations differ in the orientation in which the processes are performed and in which temperatures of the cycle are fixed by the reservoirs $T_L$ and $T_R$ depicted in Fig. 1. It is important to understand the form of the work and the heat associated with the weak-link during a given process. We therefore fix the following sign convention: (I) heat Q absorbed by the system from the environment is positive; (II) work W released by the system to the environment is positive; (III) the supercurrent is positive when flowing in the direction of the phase gradient. The work is done by an external magnetic field that induces a dissipationless current I in the system. This work increases the energy of the system through two components. One is the reversible energy stored in the ring inductance, which we can neglect since we treat the two states at I = 0. The second component is the Josephson energy stored in the junction 57. The work done on a junction from φ = 0 to φ = π at constant temperature T (i.e. in an iso-thermal process) is given by $(\hbar/2e)\int_0^\pi I(T,\varphi)\,\mathrm{d}\varphi$, where I(T, φ) is the iso-thermal current-phase relationship. In an iso-entropic process, we must account for the fact that the temperature is not constant: considering the phase dependence of the temperature $T(\varphi, T_i)$ (see Fig. 3(a)), the work done by the system on the environment in an iso-entropic process φ = 0 → φ = π reads

$$W_S = -\frac{\hbar}{2e}\int_0^\pi I\big(T(\varphi, T_i), \varphi\big)\,\mathrm{d}\varphi.$$

In the following, we use the notation $W_{jl}$ to indicate the work done by the junction in a process from the thermodynamic state j to the state l, where j, l represent two states in Fig. 5(a). In the process 1 → 2 the work done by the system is negative: the environment must spend an amount of energy to charge the Josephson inductance of the junction, increasing the free energy of the latter 57,70,71. On the contrary, the system releases work when discharged from φ = π to φ = 0 (process 3 → 4). The work in an iso-phasic process is zero, the phase variation being zero: $W_{23} = W_{41} = 0$. At the same time, the heat absorbed by the junction in a process from state j to state l is

$$Q_{jl} = \int_j^l T\,\mathrm{d}S, \qquad Q_{jl} = -Q_{lj}.$$

In the entropy/temperature plane of Fig. 5(a), the absolute value of the heat $Q_{23}$ is represented by the red area, while the absolute value of the heat $Q_{41}$ is the sum of the red and blue areas. In order to calculate the net work $W = W_{12} + W_{34}$ done in one cycle, we use conservation of energy, which states that over a cycle W equals the net heat absorbed by the system, $Q = Q_{23} + Q_{41}$, i.e. W = Q. Considering that $Q_{23}$ and $Q_{41}$ have opposite signs, the blue area in Fig. 5(a) represents the net work W per cycle.
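The cycle bookkeeping above can be checked with a short numerical sketch. It reuses the illustrative toy entropy from the previous snippet (all prefactors are placeholders, not the paper's self-consistent values), builds the four corner states of Fig. 5(a), evaluates $Q_{23}$ and $Q_{41}$ as $\int T\,\mathrm{d}S$ along the two iso-phasic branches, and verifies that the net work and the efficiency behave as described.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

Delta0, A, gamma = 1.0, 1.0, 1.0   # toy model parameters (assumptions)

S0 = lambda T: A * (Delta0 / T) ** 1.5 * np.exp(-Delta0 / T)
S = lambda T, phi: S0(T) + (gamma * T if phi == np.pi else 0.0)
dSdT = lambda T, phi: (S(T + 1e-7, phi) - S(T - 1e-7, phi)) / 2e-7

def iso_entropic(Ti, phi_i, phi_f):
    """Final temperature after an iso-entropic 0 <-> pi phase sweep."""
    return brentq(lambda T: S(T, phi_f) - S(Ti, phi_i), 1e-9, 1.0)

T1 = 0.25 * Delta0                 # hot reservoir temperature T_R
T2 = iso_entropic(T1, 0.0, np.pi)  # 1 -> 2 iso-entropic
T3 = 0.05 * Delta0                 # cold reservoir temperature T_L
T4 = iso_entropic(T3, np.pi, 0.0)  # 3 -> 4 iso-entropic

Q23 = quad(lambda T: T * dSdT(T, np.pi), T2, T3)[0]  # heat to cold bath (<0)
Q41 = quad(lambda T: T * dSdT(T, 0.0), T4, T1)[0]    # heat from hot bath (>0)
W = Q23 + Q41                                        # net work per cycle
print(f"T2={T2:.4f} T4={T4:.4f}  Q23={Q23:.3e} Q41={Q41:.3e}")
print(f"W={W:.3e}, efficiency {W/Q41:.3f} vs Carnot {1 - T3/T1:.3f}")
```

With these placeholder numbers the cycle produces positive net work at an efficiency below the Carnot bound, as the sign conventions above require.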
Josephson engine. An engine is a thermodynamic machine that converts the temperature difference between a hot and a cold reservoir into useful work. Referring to Fig. 1, we set the cold and hot reservoirs at temperatures $T_L$ and $T_R$, respectively. The cold reservoir can be thought of as an environment (e.g. a large electric pad well-thermalised with the substrate at the base temperature of a cryostat) and the hot reservoir as a Joule-heated pad with a continuous supply of power. The Josephson engine consists of the following cycle, as sketched in Fig. 5(a): • Iso-entropic 1 → 2. The junction is initially at the hot-reservoir temperature $T_1 = T_R$ and the valves $v_L$, $v_R$ are closed (thermally isolated junction). An iso-entropic transformation brings the system from $(T_1, \varphi = 0)$ to $(T_2, \varphi = \pi)$. The junction absorbs an amount of work $|W_{12}|$ with no heat exchange. • Iso-phasic 2 → 3. The junction is put in contact with the cold reservoir by opening valve $v_L$, keeping the phase difference φ = π. The iso-phasic transformation brings the system from $(T_2, \pi)$ to $(T_3 = T_L, \pi)$. The junction releases heat $|Q_{23}|$ to the cold reservoir, without doing any work. • Iso-entropic 3 → 4. The valve $v_L$ is closed and the junction is again thermally isolated. An iso-entropic transformation brings the system from $(T_3, \pi)$ to $(T_4, 0)$. The junction returns the work $W_{34}$ and no heat is exchanged. • Iso-phasic 4 → 1. The junction is put in contact with the hot reservoir by opening valve $v_R$, keeping the phase φ = 0. The iso-phasic transformation brings the system from $(T_4, 0)$ to $(T_1 = T_R, 0)$. The junction absorbs heat $Q_{41}$ from the hot reservoir, without doing any work. We fix the cold temperature at $T_L = 10^{-3}\,T_c$. Below an activation temperature $T_{act}$ of the hot reservoir, the energy stored in the junction in the process 1 → 2 tends to the energy returned in the process 3 → 4, and W → 0. The activation temperature can be understood as follows: starting from the cycle in Fig. 5(a) and decreasing $T_R$ towards $T_{act}$, the blue area collapses to a line, indicating that the work goes to zero. Figure 5(b) shows the net work W released by the junction as a function of $T_R$ for a fixed $T_L = 10^{-3}\,T_c$. The curves are defined for $T_R \ge T_{act}$; when $T_R \to T_{act}$, the W curves go to zero. As one would expect, at fixed a the net work is an increasing function of $T_R$, since the blue area in Fig. 5(a) increases as $T_1$ increases. A similar argument applies to the a-dependence of the net work, which tends to saturate for large values of a, approaching the KO limit. On the other hand, the work decreases with decreasing a. The reason is that for small a, the effectiveness of the iso-entropic transformation in changing the device temperature is progressively spoiled by the reduction of the relative entropy variation p (see the discussion in Section 2): the blue area of Fig. 5 shrinks because the transformations 2 → 3 and 4 → 1 become arbitrarily close, i.e. the net work is reduced. Figure 5(c) reports the efficiency of the Josephson engine, $\eta = W/Q_{41} = 1 - |Q_{23}|/Q_{41}$. Here too, the various curves are defined for $T_R \ge T_{act}$. The dash-dotted black curve is the Carnot efficiency limit, $\eta_C = 1 - T_L/T_R$; when $T_R \to T_{act}$ the efficiency tends to the Carnot limit. Considering Fig. 5(a), the efficiency can be visualized as the ratio between the blue area and the total (red + blue) area: when the areas collapse to a line, the efficiency tends to $\eta_C$. This shows that the Josephson engine shares a property common to other engines: when the thermodynamic efficiency is maximal, the work produced tends to zero.
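For reference, the textbook bounds invoked here and in the cooler subsection below can be stated compactly; these are standard thermodynamic limits, not results specific to this device:

```latex
% Carnot bounds for the two operating modes of the cycle
\eta \;=\; \frac{W}{Q_{41}} \;\le\; \eta_C \;=\; 1-\frac{T_L}{T_R}
\qquad\text{(engine)},
\qquad
\mathrm{COP} \;=\; \frac{Q_{32}}{|W|} \;\le\; \mathrm{COP}_C
\;=\; \frac{T_L}{T_R-T_L}
\qquad\text{(cooler)}.
```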
Analytical results can be obtained within KO theory in the limit where all temperatures are much smaller than $\Delta_0$. The value of $T_{act}$ can be found by equating the entropies of states 1 and 3, using Eq. (18), which governs the entropy at φ = π at small T. From the low-temperature expressions one obtains $|Q_{23}| \propto V^2 T_R^2/I_c$. As expected, the released heat increases with the temperature of the hot reservoir. The quadratic behaviour in V is due to the fact that both the heat capacity and the temperature difference increase with the volume V. The $1/I_c$ behaviour is determined by the product of the heat-capacity prefactor at φ = π, which grows with $I_c$ (see Eq. (24)), with the temperature squared. The heat $Q_{41}$ involves an integral that can be evaluated from the BCS free-energy expression 65,66; for $T_R \ll \Delta_0$ this integral may be safely neglected owing to the exponential suppression of the BCS entropy. The net work is then $W = Q_{23} + Q_{41}$ in the low-temperature limit. Since the second term in the square brackets of the resulting expression is proportional to 1/α, the work increases slightly with supercurrent, roughly as $V(1 - \kappa/I_c)$, where κ is a factor exponentially suppressed at low temperatures; the work follows roughly a quadratic temperature scaling, and the efficiency follows from $\eta = W/Q_{41}$. Josephson cooler. We now discuss the thermodynamic cycle in the opposite configuration, i.e. acting as a cooler. Cooling can be obtained by reversing the cycle described in the previous section, with the roles of the reservoirs changed accordingly. We consider the junction to be connected to a reservoir at temperature $T_R$ via the thermal valve $v_R$, and to an external system to be cooled, initially at temperature $T_L$, named the cooling-fin in the following. The latter system has to be thermally isolated from other spurious heat sources; this may be realised in nanoscale suspended systems, such as membranes 48, circuits 47 and low-dimensional electronic systems. The Josephson cooling cycle consists of the following four processes. 1. 4 → 3. The junction is in equilibrium at the environment temperature $T_4 = T_R$ and the thermal valves $v_L$, $v_R$ are closed. An iso-entropic transformation brings the system from $(T_4, \varphi = 0)$ to $(T_3, \varphi = \pi)$, doing work on the junction. 2. 3 → 2. The thermal valve $v_L$ is opened, putting the junction in contact with the cooling-fin, whose temperature $T_L$ it reaches. In this process, the junction absorbs heat $Q_{32}$ from the cooling-fin. 3. 2 → 1. The thermal valve $v_L$ is closed. An iso-entropic transformation brings the system from $(T_2, \pi)$ to $(T_1, 0)$. In this process, work is released from the junction to the external circuit and no heat is exchanged. The temperature $T_1$ is the highest temperature of the cycle and is the analogue of the heat-exchanger temperature in a refrigerator. 4. 1 → 4. The thermal valve $v_R$ is opened, putting the junction in contact with the environment. The junction temperature is lowered from $T_1$ to $T_R$, releasing heat to the environment. The temperature $T_3$ is determined by the iso-entropic cooling from the temperature of the environment, $T_3 = \zeta(T_R)\,T_R$. It is the minimum possible temperature of the cooling-fin, i.e. the minimum achievable temperature $T_{MAT}(T_R)$ once $T_R$ is given as the temperature of the hot reservoir. $T_{MAT}$ is reached by an iso-entropic transformation starting from the state $(T_R, \varphi = 0)$; an analytic KO result for $T_{MAT}$ holds for $T_R \ll \Delta_0$. If the cooling-fin temperature is below $T_{MAT}$, no heat can be absorbed by the junction from the cooling-fin.
To characterize the performance of the Josephson cooler, we discuss the cooling power per cycle $Q_{32}$ and the coefficient of performance (COP) of the SNS junction when the cooling-fin temperature equals the temperature of the environment, i.e. $T_L = T_R = T$. In this state the cooling power is maximum; it decreases to zero as the cooling-fin temperature approaches $T_{MAT}$. The cooling power per cycle $Q_{32}$ represents the amount of heat removed from the cooling-fin and is plotted in Fig. 6(a). Indeed, the iso-entropic process cools the device below the cooling-fin temperature, and upon thermalisation heat is absorbed by the cooler from the cooling-fin. The cooling power thus depends on the heat capacity of the system during the thermalisation process (3 → 2 in Fig. 5). When the system passes from state 4 to state 2 at the same temperature T, with the phase difference tuned from 0 to π, a certain amount of heat must be absorbed by the system, since the heat capacity is increased by $\delta C = T\,\partial_T\,\delta S$ (see Section 3). This quantity scales with the supercurrent magnitude and for a → ∞ converges to a function set by the KO current-phase relationship; it can be estimated analytically in the KO limit (Eq. (37) of the original). Intriguingly, this limit does not depend on a but only on the junction critical supercurrent. Looking at Fig. 6(a), one sees from the full numerical solution (solid lines) that $Q_{32}$ increases with a and tends to saturate to a particular limit for large values of a. This trend is a consequence of the limiting value of the critical current in the KO limit (a → ∞): the cooling power increases with a because the junction effective length decreases when approaching the KO limit with maximal critical current 39,67,69. The coefficient of performance is defined as the ratio of the heat pumped per cycle to the work spent per cycle, $\mathrm{COP} = Q_{32}/|W|$. We note that, in analogy with the efficiency η of an engine, this quantity also has a maximal limit $\mathrm{COP}_C$ that depends on the temperatures of the two reservoirs. In the specific case considered here, with $T_R = T_L$, $\mathrm{COP}_C$ diverges, so any finite value of the COP may be regarded as a kind of inefficiency of the cooler with respect to an ideal one. In Fig. 6(b) we show the COP as a function of the working temperature. One immediately sees that the maximal performance of the cooler is obtained for a → ∞. At the same time, the net work absorbed per cycle decreases as a → ∞, for the following reason: increasing a, $T_1 \to T_4$ and $T_3 \to T_2$; hence the iso-entropic electric work $-W_{43}$ done on the system tends to $W_{21}$, with the consequence that the net work per cycle $W = W_{43} + W_{21}$ tends to zero. For this reason, the COP increases with a, as reported in Fig. 6(b). It is possible to evaluate an asymptotic expression for the COP as a function of the temperature $T_1$ by solving the implicit equation $\zeta_{KO}(T_1)\,T_1 = T$. In the following calculation we take for $Q_{32}$ the estimate of Eq. (37), and for $Q_{14}$ we apply the same approach used to derive Eq. (32), neglecting the contribution of T since in the limit T → 0 one has $T \ll T_1$. The resulting heat exchanged by the system with the hot reservoir carries a minus sign, indicating that the heat is released by the system to the reservoir.
This rough estimate allows us to inspect the scaling of the cooling-cycle COP with $T_1$ and α; the COP can be written in terms of $T_1$ and α using Eq. (15). All these behaviours are indeed seen in Fig. 6(b), where the solid lines represent the full numerical results and the dashed lines the results obtained within KO theory. Interestingly, increasing the supercurrent, the COP decreases as $1/I_c$. The reason is that, as the supercurrent increases, the work scales with a power greater than that of the cooling power per cycle, owing to the enhanced adiabatic temperature decrease. Improving the cooling power therefore results in a lower coefficient of performance, a feature common to refrigerators. Possible experimental implementations Here, we clarify the main physical requirements that must be satisfied in order to realise the proposed thermal machines, and we give some estimates of the expected performance. The first important issue concerns the possibility of realising an iso-entropic transformation, i.e. of thermally isolating the electron system of the SNS junction from the thermal bath for the time necessary to perform the transformation. In these metallic systems, electrons relax thermally mainly through electron-electron and electron-phonon interactions, with characteristic time scales $\tau_{e\text{-}e}$ and $\tau_{e\text{-}ph}$, respectively. An efficient iso-entropic process should be faster than the electron-phonon relaxation time and slower than the electron-electron relaxation time, keeping the electron system in thermal quasi-equilibrium. This condition can be achieved in typical superconductors, where the two time scales are well separated at low temperature, as demonstrated in several experiments 19,20,46,72. Moreover, at temperatures below $T/T_c \approx 0.175$, $\tau_{e\text{-}e}$ tends to saturate 73, while $\tau_{e\text{-}ph}$ is expected to be exponentially suppressed 74,75. The general reason for $\tau_{e\text{-}e} < \tau_{e\text{-}ph}$ is that electron relaxation can be mediated by many channels, among them the phononic channel 76. In particular, a superconductor with a high ratio $\tau_{e\text{-}ph}/\tau_{e\text{-}e}$ is preferable, suggesting that such materials can be used at higher frequencies. Depending on the specific material, iso-entropic processes are thus possible with operating frequencies varying from 1-10 kHz (aluminium or tantalum) to 0.01-10 GHz (NbN or TiN). We notice that equilibration times increase at low temperatures, so experimentally one needs to find a good trade-off between the validity of the iso-entropic hypothesis and the equilibration times of these systems. Investigating how the maximal operating frequency is affected by material selection, specific system design, operating temperature and non-idealities of the thermal valves is beyond the scope of this paper.
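The timescale hierarchy $\tau_{e\text{-}e} \ll 1/\nu \ll \tau_{e\text{-}ph}$ can be checked with a trivial sketch; the relaxation times below are order-of-magnitude placeholders for an Al-like and an NbN-like superconductor (assumptions for illustration, not measured values).

```python
# Check that a drive frequency nu satisfies tau_ee << 1/nu << tau_eph,
# i.e. the electrons equilibrate among themselves within each cycle
# but do not leak heat to phonons. Times are illustrative guesses.
materials = {
    "Al-like":  {"tau_ee": 1e-8,  "tau_eph": 1e-3},  # assumed values
    "NbN-like": {"tau_ee": 1e-12, "tau_eph": 1e-8},  # assumed values
}

for name, t in materials.items():
    nu_max = 1.0 / (10.0 * t["tau_ee"])   # stay 10x slower than tau_ee
    nu_min = 10.0 / t["tau_eph"]          # stay 10x faster than tau_eph
    ok = nu_min < nu_max
    print(f"{name}: iso-entropic window {nu_min:.1e} Hz .. {nu_max:.1e} Hz"
          f" ({'usable' if ok else 'empty'})")
```

With these placeholder times the windows come out in the kHz-MHz range for the Al-like case and in the GHz range for the NbN-like case, consistent with the frequency ranges quoted above.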
In view of possible experimental realisations, we can give some estimates of the expected performance of the proposed thermal machines, based on state-of-the-art materials and experimental parameters. The cooling power per cycle is given by Eq. (37); for $T \approx 0.2\,T_c$ the work per cycle is of order $W \approx \xi A_N N_0 \Delta_0^2$, which can be expressed in terms of $I_c$ through Eqs (15) and (19). At $I_c = 1$ mA and ν = 100 MHz we obtain an average power $\dot W = \nu W \approx 2$ pW. A possible implementation of the thermal valves is a quantum point contact realised on top of a two-dimensional electron gas, offering the high tunability and the degree of thermal isolation required to test our predictions [78][79][80][81]. In this case, good thermal contact between the two-dimensional electron gas and the superconducting ring can be achieved using an InAs-based quantum well with Nb or Al as the superconductor 78-82, providing very high interface transparencies. A rough estimate of the expected temperature reduction can be made for a realistic setup in which the cooling-fin is realised by a two-dimensional electron gas. In III-V semiconductor crystals at sub-Kelvin temperatures, electron-phonon piezoelectric coupling is the dominant process for heat exchange between the electrons and the environment 83,84. For a cooling-fin made of a quantum well, the heat transferred by the hot phonons to the electrons can then be balanced against the cycle cooling power $\nu Q_{32}$, with $Q_{32}$ taken from Eq. (37) and ν the operating frequency of the cooling cycle. Using the previous values and assuming for the cooling-fin an area $A \approx 100$ μm² with a phonon temperature of $T_{ph} \approx 100$ mK, one finds an equilibrium electron temperature $T \approx 1$ mK. This estimate does not take into account many non-idealities, such as a non-ideal point-contact thermal valve or limits due to the intrinsic diffusive time; these limitations need to be carefully addressed at the design stage of the proposed device, which goes beyond the scope of the present work. This discussion shows the potential of this cooling cycle and demonstrates that the proposed thermodynamic cycles could operate efficiently in the sub-Kelvin regime, playing an important role for many different quantum technology platforms. Summary and Conclusions In this work, we have considered the thermodynamic properties of a proximized SNS Josephson junction in the diffusive regime. We have shown that the phase- and temperature-dependent entropy can be exploited to achieve a significant temperature decrease of the electronic degrees of freedom of the system. In particular, one can implement iso-entropic processes by externally tuning the phase drop across the weak-link, obtaining temperature variations consistent with thermodynamic constraints. Elaborating on this concept, we have demonstrated the possibility of building thermodynamic cycles based on the combination of iso-entropic and iso-phasic processes. By coupling the SNS junction to two thermal baths via two thermal valves, we have shown that it is possible to engineer a Josephson engine and a Josephson cooler by coherently driving the phase across the weak-link. We have studied these thermal machines in detail, investigating performance figures such as the efficiency and the cooling power as functions of the geometrical and electrical parameters. Full numerical calculations have been supported by asymptotic calculations valid in the short-junction regime within KO theory, and several limiting behaviours have been discussed. We have also proposed a possible experimental setup to implement the discussed device as a powerful cooler in the sub-Kelvin regime.
10,158.4
2018-06-05T00:00:00.000
[ "Physics" ]
GSM Based Automatic Energy Meter Reading and Billing System Abstract—An existence without electricity can hardly be imagined, since electricity has become an integral part of human life. In developing countries, people commonly use postpaid electricity. However, they do not know how much electricity they have consumed, or at what cost, until they receive the bill at the end of the month. With prepaid meters, likewise, consumers have to stand in front of the meter to see their consumption details. In this research, a system based on GSM technology has been designed to solve this problem. The prepaid meter must be recharged before clients can use electricity. The system alerts the client in any kind of emergency; besides, when the client is away from the house, he can easily switch off the electricity supply by sending an SMS. This project benefits both society and country, because it helps reduce wastage of electricity and allows electricity consumption and billing to be checked from a remote distance. Index Terms—GSM, Automatic Meter Reading, Electricity Meter. I. INTRODUCTION In the modern era of technology, various remote-control systems have been developed for our appliances and machines. A GSM-based automatic energy meter reading system automatically collects consumption, diagnostic, and status data from energy metering devices and transfers that data to a central database for billing, querying, and analysis. This technology generally saves utility providers the cost of periodic trips to each physical location to read a meter. Another advantage is that billing can be based on near-real-time consumption rather than on estimates based on past or predicted usage. This timely data, combined with analysis, can help both utility providers and customers better control the use and delivery of electric energy, gas, or water.
The automatic meter reading system is also able to provide a set of additional services that are important to utility companies in their operation, planning, and support: load management, outage and fault reporting, customer services, power-quality monitoring, network management, theft detection, billing, balance settling, energy settling, asset management, energy usage information, interruption reporting, and so on. The Global System for Mobile Communications (GSM) is a standard developed by the European Telecommunications Standards Institute to describe the protocols for second-generation digital cellular networks used by devices such as mobile phones and tablets. It was first deployed in Finland in December 1991. As of 2014, it had become the global standard for mobile communications, with over 90% market share, operating in more than 193 countries and territories. The electricity meter is a device that measures the amount of electric energy consumed by a residence, a business, or an electrically powered device. Electric utilities install electric meters at customers' premises for billing purposes. They are typically calibrated in billing units, the most common being the kilowatt-hour (kWh), and are usually read once per billing period. When energy savings during particular periods are desired, some meters may also measure demand, the maximum use of power in some interval. Automatic meter reading (AMR) is the technology of automatically collecting consumption, diagnostic, and status information from water or energy metering devices and transferring that information to a central database for billing, querying, and analysis. A further advantage is that billing can be based on near-real-time consumption rather than on estimates based on past or predicted usage. This useful data, combined with analysis, can help both utility providers and customers better control the use and delivery of electric energy, gas, or water. AMR technologies include handheld, mobile, and network-based approaches built on communication platforms (wired and wireless), radio frequency (RF), or power-line transmission. II. RELATED WORK V. Preethi and G. Harish note that the energy meter is a device that measures the amount of electric energy consumed by a home, business, or electrically powered device [1]. Noor-E-Jannat, M. O. Islam, and M. S. Salakin note that a GSM network digitizes and compresses the data, then sends it down a channel with two other streams of user data, each in its own time slot [2]. Other work provides a cost-effective design by exploiting the self-organising, self-healing capabilities of mesh networking, using semiconductor chips and radio transceivers compatible with the IEEE 802.15.4 standard. Thousif Ahamed and A. Sreedevi describe a design in which analog signals are sensed from two input channels and converted into digital signals by an ADC independently; with the two sampled input signals transmitted to the microcontroller via the SPI protocol, a dsPIC33F computes the power, and the energy consumed is accumulated over a predetermined period [8]. Sarwar Shahidi, Md. Abdul Gaffar, Khosru, and M.
Salim note that AMR is a framework whereby the energy meter sends the recorded power usage of a household over a particular time period to a remotely connected personal computer or central server of the power distribution company [9]. Priyanka Dighe, Tushar Dhanani, and Kumar Gangwani note that GSM provides a way to remotely monitor and control energy meter readings; its advantage is reading energy meters without visiting each house or business [10]. This system contains a microcontroller, which takes readings at regular intervals and records them in its memory. III. SYSTEM DESCRIPTION The outline of the GSM-based automatic energy meter reading system is given below. In the block diagram, an electric energy meter is interfaced with an Arduino board. The Arduino board is further interfaced with a 5 V relay module, a 20x4 LCD display, and a SIM900A GSM module. A 12 V, 3 A power supply is used, as the GSM module needs a high current to be activated. The pulse pin of the energy meter is connected to the Arduino UNO development board. This meter gives 3200 pulses per unit (kWh) of energy consumed; the total energy consumed is calculated from this information. The GSM module intercepts SMS messages and forwards the data to the Arduino; the Arduino responds to those SMS using AT commands. The outputs are shown on the onboard display, and clients can also see the data through SMS. In this project, shown in Fig. 3, the wattage, total pulse count, and total cost are measured. The measured values, including the balance, are shown on a 20x4 liquid crystal display (LCD). These measured data are sent to the distributor's cell phone via SMS by the SIM900A GSM module. The line voltage is connected to the Arduino board and calibrated through a 12x2 V, 3 A transformer. The voltage, current, and energy ratings are shown on the LCD display, and the measured energy (units) and unit cost are sent to the distributor's mobile. There is also a manual switch; when it is pressed, an SMS is sent to the distributor's mobile. When power is supplied from the line voltage, the LCD display initializes with the heading ''Automatic Energy Meter by Ashif & Jaki''. The microcontroller of the Arduino then tries to detect the GSM module, showing the text "Finding module" on the LCD. After successful detection, the text "Module connected" appears on the LCD. The module then tries to find a network (showing "Finding network" on the LCD) over which it will send SMS messages reporting wattage, total cost, balance, and total pulses. When the network is found, "Network Found" is shown on the display, followed by "System Ready", indicating that the whole system is ready to report; the registered consumer mobile then receives an SMS reading "System Ready". The consumer gets a low-balance alert when the balance falls below a certain level (here 15 tk), warning him to recharge his energy meter soon. The consumer gets a further SMS indicating that the light is cut off due to a very low balance (here below 5 tk) and suggesting that he recharge very soon. The consumer can recharge by sending an SMS of the form *A.<balance amount># (such as *A.20#), and can request information at any time by sending an SMS of the form *A.total#. A. Hardware Module Description In this paper, different instruments and technologies have been used to achieve the desired output within the chosen architecture. The sensors and modules are discussed below, after a short sketch of the metering and billing logic.
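The following minimal Python sketch models the metering and billing logic described in Section III: 3200 pulses per kWh, a per-unit tariff, low-balance thresholds at 15 tk and 5 tk, and the *A.<amount># / *A.total# SMS commands. The firmware in the actual project runs on an Arduino in C/C++; this is only an illustrative model, and the tariff value is an assumption.

```python
PULSES_PER_KWH = 3200      # from the meter specification quoted in the text
TARIFF = 6.0               # tk per kWh -- assumed value for illustration

class MeterModel:
    def __init__(self, balance=50.0):
        self.pulses = 0
        self.balance = balance

    def on_pulse(self):
        """Called once per meter LED pulse (1/3200 kWh consumed)."""
        self.pulses += 1
        self.balance -= TARIFF / PULSES_PER_KWH
        if self.balance < 5:
            return "ALERT: supply cut off, balance below 5tk, recharge now"
        if self.balance < 15:
            return "ALERT: low balance (below 15tk), please recharge"
        return None

    def on_sms(self, text):
        """Handle the *A.<amount># recharge and *A.total# query commands."""
        if text == "*A.total#":
            kwh = self.pulses / PULSES_PER_KWH
            return (f"units={kwh:.3f} kWh, pulses={self.pulses}, "
                    f"cost={kwh * TARIFF:.2f} tk, balance={self.balance:.2f} tk")
        if text.startswith("*A.") and text.endswith("#"):
            self.balance += float(text[3:-1])
            return f"recharged, balance={self.balance:.2f} tk"
        return "unknown command"

m = MeterModel(balance=16.0)
for _ in range(600):            # simulate some consumption
    alert = m.on_pulse()
print(alert, "|", m.on_sms("*A.20#"), "|", m.on_sms("*A.total#"))
```

Running the sketch shows the 15 tk alert firing as the balance drains, then a recharge and a consumption query, mirroring the SMS exchanges described above.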
B. Arduino Nano The Arduino Nano is an exceedingly small development board built around the ATmega328 microcontroller. It has extensive technical features: 14 digital I/O pins, 6 of which provide PWM output (Fig. 4 shows the pinout of the ATmega328 [11]), 8 analog input pins, 32 KB of flash memory, 2 KB of SRAM, and 1 KB of EEPROM. The Nano's clock speed is 16 MHz; its operating voltage is 5 V at logic level, and the recommended input voltage is 7-12 V [12]. C. Liquid Crystal Display (LCD) LCD displays are seen everywhere: computers, calculators, television sets, mobile phones, and digital watches all use displays. An LCD is an electronic display module that uses liquid crystal to produce a visible image. The 20x4 LCD display is a basic module commonly used in DIY projects and circuits; it displays 20 characters per line on four such lines. D. AC 1-Phase 2-Wire Static Energy Meter The single-phase static electronic energy meter is designed to meter residential and small commercial energy consumers in distribution networks. The meter is designed to offer reliable energy measurement in single-phase circuits and is highly suitable for metering and remote communication purposes. E. SIM900A GSM/GPRS Module The SIM900A is a complete dual-band GSM/GPRS solution in an SMT module that can be embedded in customer applications [15]. Featuring an industry-standard interface, the SIM900A delivers GSM/GPRS 900/1800 MHz performance for voice, SMS, data, and fax in a small form factor and with low power consumption. With a tiny footprint of 24 mm x 24 mm x 3 mm, the SIM900A can fit almost all space requirements in user applications, especially slim and compact designs [15]. F. Relay Module The 5 V relay module is a relay interface board that can be controlled directly by a wide range of microcontrollers such as Arduino, AVR, PIC, and ARM. The relay module is a convenient board for controlling high-voltage, high-current loads such as motors, solenoid valves, lamps, and AC loads. It uses a low-level-triggered control signal (3.3-5 V DC) to drive the relay; triggering the relay operates the normally open or normally closed contacts. It is frequently used in automatic control circuits (Fig. 8 shows the relay module [16]). To put it simply, it is an automatic switch that controls a high-current circuit with a low-current signal [16].
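The SMS traffic described in Section III uses standard GSM text-mode AT commands (AT+CMGF=1 to select text mode, AT+CMGS to send). A minimal host-side sketch using pyserial is shown below; the serial port name and phone number are placeholders, and in the actual project the equivalent commands are issued by the Arduino rather than a PC.

```python
import serial  # pyserial; placeholder port and number below
import time

def send_sms(port, number, message):
    """Send one SMS through a SIM900A using standard text-mode AT commands."""
    with serial.Serial(port, 9600, timeout=2) as gsm:
        gsm.write(b"AT\r")                      # basic handshake
        time.sleep(0.5)
        gsm.write(b"AT+CMGF=1\r")               # select SMS text mode
        time.sleep(0.5)
        gsm.write(f'AT+CMGS="{number}"\r'.encode())
        time.sleep(0.5)
        gsm.write(message.encode() + b"\x1a")   # Ctrl+Z terminates the SMS
        time.sleep(3)
        return gsm.read_all().decode(errors="replace")

# hypothetical usage: port name and number are examples only
print(send_sms("/dev/ttyUSB0", "+8801XXXXXXXXX", "System Ready"))
```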
V. HARDWARE IMPLEMENTATION In this system, shown in Fig. 4, power is supplied to the meter. A GSM unit interfaces with the microcontroller, and the usage details are transmitted to the office modem via the user modem. Every consumer has a unique number provided by the corresponding authority. The use of an embedded system improves the stability of wireless data transmission, and for long-distance transmission, GSM telecommunication has shown excellent performance under all conditions. VI. PROJECT RESULT The project results and their analysis are shown below. Fig. 10 shows the total wattage, cost, balance, and total pulses for a 100 W load; the result was taken with a 100-watt load, and the GSM module successfully sent SMS messages. The process of obtaining the outputs is as follows: at first the system takes some time to initialize, then it measures voltage, current, energy (units), total pulses, and cost; later, this information is sent to the distributor's mobile by SMS. Figure 10 shows the output information sent by the GSM module to the consumer's mobile, and Figure 11 shows the message/SMS on the consumer's mobile sent via the GSM module. Figure 12 shows the low-balance alert on the consumer's mobile when the balance is below 15 tk, and Figure 13 shows the supply cut-off alert when the balance is below 5 tk. VII. CONCLUSION This project was completed with great care in circuit design and assembly. Saving electricity and distributing it efficiently is a prime concern of modern technologies. In developing countries like Bangladesh, ensuring efficient electricity consumption and distribution is a must. People often have to leave their houses for different purposes and want remote access to their energy meter to turn it on or off. Sometimes a consumer wants to plan how much electricity to consume in a month and schedule accordingly, so they need to know how much has been consumed and how much of the plan remains; if the target is reached before the end of the term, the target should be modified. Existing models do not support this feature; our project serves these purposes. It lessens the risk of energy theft and unnecessary consumption of energy. The principal vision of this work is to reduce manual control and theft. It saves a considerable amount of time, and the system is fast and highly dependable, as it provides real-time data. It is easily integrated with data acquisition and data transfer. This project can be a contribution to the effort of the Government of Bangladesh to build a Digital Bangladesh.
3,410.4
2020-06-23T00:00:00.000
[ "Engineering", "Computer Science" ]
Corrigendum: The Inhibition of the Rayleigh-Taylor Instability by Rotation Scientific Reports 5: Article number: 11706; published online: 01 July 2015; updated: 31 May 2016 The authors of this Article would like to clarify a point regarding the experimental technique described in the Results section. This clarification does not affect the findings and results of this study. Rotation about an axis parallel to the direction of acceleration slows the growth of the instability and changes its character in a rigidly rotating three-dimensional liquid system (Fig. 1). In studying the RTI experimentally, a difficulty arises in maintaining the stability of the heavy fluid above the lighter fluid during preparation of the system. One method for overcoming this is to use rocketry 21 or other means of acceleration (e.g. compressed gas 2,3,22, linear electric motors 23) to induce RTI in a gravitationally-stable two-fluid system by suddenly inverting the direction of acceleration. This method fails when the two-fluid system is rotated, for the following reasons. The interface forms a paraboloid as a result of the centripetal acceleration required to keep the liquid in solid-body rotation; for example, a vigorously stirred drink has a concave parabolic free interface with its lowest point on the axis of rotation. The isobaric surfaces form identically-shaped 'concave' parabolas, independent of density and density gradient. If the direction of acceleration is suddenly inverted, the shape of the isobaric surfaces is inverted at that moment, becoming 'convex'. However, it takes a finite time for the shape of the interface to follow. Since the interface does not coincide with an isobar at the moment of inversion, the RTI does not develop from hydrostatic conditions. Another method for creating unstable initial conditions is to employ a barrier between the two fluids during preparation, but the same curvature of the interface presents significant technical difficulties for this technique. These barrier-removal methods also perturb the fluid interface at the instant the barrier is withdrawn, due to viscous drag and liquid displacement 24, and the wake induced by the removal of the barrier may dominate the initial growth of the instability 25. Recently, magnetic fields have been used to induce RTI constrained to two dimensions 26,27, and rotating magnetic fields have been used to stabilise a cylindrical ferrofluid system 28. The approach we adopt here is to use the magnetic field of a superconducting magnet (Cryogenic Ltd. London) to manipulate the effective density of the two liquids in order to induce RTI in a fully three-dimensional system. (Figure 1 caption: each column of images shows the growth of the instability for a particular rotation rate as the tank is lowered into the magnetic field. A video, which is the source of the images shown in the figure, is included in the Supplementary Information on-line.) Previously, this technique has been applied to float dense objects in less dense fluids (see e.g. [29][30][31]), exploited as a method for separating granular materials 32, and used to influence thermal convection 33. Here, a light, paramagnetic liquid was floated upon a denser diamagnetic liquid layer in a transparent cylindrical acrylic tank approximately 10 cm in diameter. The top and bottom layers consisted of an aqueous solution of manganese chloride and an aqueous solution of sodium chloride, respectively. The Methods section contains further details.
The tank was placed on a rotating platform above the magnet and rotated until the liquids were rotating as a rigid body, before being allowed to descend at a constant vertical speed into the magnetic field, with no change in angular velocity. Figure 2 shows a schematic diagram of the experimental set-up. Results As the tank descended into the magnetic field, the downward magnetic force on the upper liquid and the upward magnetic force on the lower liquid increased. This magnetic force is a body force: the force per unit volume is given by $F = \chi B \nabla B / \mu_0$, where B is the magnitude of the magnetic field, $\mu_0$ is the magnetic constant, and χ is the magnetic susceptibility, which is positive for the paramagnetic liquid (upper layer) and negative for the diamagnetic liquid (lower layer). Directly above the magnet bore, the direction of ∇B, and therefore of the magnetic force, lies close to vertical, so that the gravitational and magnetic body forces may be subsumed into a single, vertically-acting 'effective gravitational' force. Equivalently, we can consider that the force results from a change in the effective density ρ* of the liquid in the magnetic field. In this picture the net vertical body force per unit volume is $\rho^* g$, with $\rho^* = \rho - \chi B\,\partial_z B/(\mu_0 g)$, and the effective Atwood number is then simply defined as $A^* = (\rho_1^* - \rho_2^*)/(\rho_1^* + \rho_2^*)$, where $\rho_1^*$ is the effective density of the upper fluid layer and $\rho_2^*$ is the effective density of the lower fluid layer. In manipulating the relative magnitudes of the effective densities, the equilibrium profile is unaffected, and the RTI is initiated from hydrostatic conditions. In Fig. 3, contours (blue lines) show the effective Atwood number A* versus vertical and radial distance from the centre of the magnet. Note that A* depends on position, since ρ* depends on the strength of the magnetic field and its vertical gradient, both of which depend on position. The red dashed lines show the curvature of the hydrostatic parabolic interface for rotation rates of Ω = π, 2π, 3π and 4π rad s−1, where the interface crosses A* = 0. The grey dotted lines denote magnetic field lines. Rate of growth. The descent of the rotating platform was halted when A* at the interface reached a small positive value, A* ≈ 0.002, whereupon the interface became unstable, developing undulations that grew into characteristic finger-like structures. Figure 1 shows images of the interface viewed from the side for varying rotation rates and various times after the onset of instability. A video, which is the source of the images shown in the figure, is included in the Supplementary Information on-line. In the non-rotating experiment, the onset of RTI is apparent prior to the tank coming to a halt at t = 0 s. The images show that the early growth of the instability is slowed significantly with increasing rotation rate. In order to obtain a quantitative measure of this suppression, the vertical extent of the growing 'fingers' of fluid was measured as a function of time. We created a 'time series image' in which each video frame is represented by a single column of pixels, such that the horizontal axis of the resultant image represents elapsed time; a typical example is shown in the inset to Fig. 4. The growth of the fingers is visible in such images as a pink column of pixels that descends from the position of the interface, with time increasing from left to right. The Supplementary Information contains details of the algorithm used to produce these images.
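A one-line implementation of this effective-density bookkeeping is sketched below; the susceptibilities, densities, field magnitude and gradient are illustrative placeholder values, not the measured ones.

```python
MU0 = 4e-7 * 3.141592653589793   # magnetic constant (T m/A)
G = 9.81                          # gravitational acceleration (m/s^2)

def rho_eff(rho, chi, B, dBdz):
    """Effective density: gravity plus vertical magnetic body force,
    rho* = rho - chi*B*(dB/dz)/(mu0*g)."""
    return rho - chi * B * dBdz / (MU0 * G)

# Placeholder values: MnCl2 solution (paramagnetic, top) over NaCl
# solution (diamagnetic, bottom); B and dB/dz chosen for illustration,
# with dB/dz < 0 since the field weakens with height above the magnet.
rho1 = rho_eff(1050.0, +3e-4, 1.2, -8.0)   # upper layer
rho2 = rho_eff(1100.0, -9e-6, 1.2, -8.0)   # lower layer
A_star = (rho1 - rho2) / (rho1 + rho2)
print(f"A* = {A_star:+.4f}")   # positive A* -> interface is RT-unstable
```

With these placeholders the paramagnetic top layer becomes effectively heavier than the bottom layer, driving A* positive and triggering the instability, exactly the manipulation described above.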
We identified a time t = T at which the scale of the fingers exceeded an arbitrary threshold, in this case 2 mm, i.e. T is defined by h(T) = −2 mm. The error in the measurements reflects noise in the images. Figure 4 shows that T increases up to a rotation rate of Ω ≈ 6 rad s⁻¹, beyond which the threshold time does not increase, within experimental uncertainty.

Wavelength of the most rapidly growing mode. We now consider how rotation affects the character of the instability, for various Ω and times after onset of RTI. In these experiments, the interface was filmed from above the tank, and the platform was allowed to descend during the entire course of RTI mixing. Figure 5a shows still images from the video, at rotation rates Ω between 0 and 13.2 rad s⁻¹ and for times up to 2 s after the onset of the instability. An example video is included in the Supplementary Information on-line. At the onset of RTI, undulations appeared at the interface, which grew in amplitude with time. The character of the undulations changed with increasing Ω: for small Ω, the disturbance grew as a cellular structure, but with increasing Ω this was replaced by more meandering, snake-like concentric structures. These 'snakes' had a dominant radial wavelength and a dominant azimuthal wavelength. The dominant radial wavelength was relatively insensitive to time, whereas the azimuthal wavelength decreased with time after onset of RTI, approaching that of the radial wavelength, whereupon the concentric patterns broke up into finger-like structures. Most noticeably, the radial wavelength decreased with increasing Ω. To characterize the dominant radial wavelength of the instability, λ, we computed the two-dimensional autocorrelation function of the image, G(x, y) = F⁻¹(|F(I(x, y))|²), where I(x, y) is the grey-scale image in the interrogation window and F is the two-dimensional discrete Fast Fourier Transform. From G(x, y) we derived a one-dimensional autocorrelation function, G̃(r), by averaging over the azimuthal angle θ. A representative set of 1D autocorrelation traces for a given experiment (Ω = 5.15 rad s⁻¹) is shown in Fig. 6. This example shows that the first peak appears at approximately 6 to 7 mm. We restricted our analysis to times less than 2 seconds after onset of RTI. The total time between onset of the RTI and the fingers reaching the lid and base of the tank was approximately 3 s. Figure 7 shows a plot of λ as a function of Ω. The bars on each data point indicate the minimum and maximum value of λ measured over the first 2 seconds after onset of RTI. The plot shows that rotation caused a decrease in λ from 16 ± 2 mm at 1 rad s⁻¹ to approximately 6 mm at 4 rad s⁻¹, which is readily observable in the video images (see Fig. 5a). Below 1 rad s⁻¹, the length scale was too large to resolve with the autocorrelation function. Above 4 rad s⁻¹, increasing rotation rate had little effect on λ.

Effect of viscosity. It is known that fluid viscosity suppresses small-scale structures in the non-rotating RTI (e.g., [9,34]), suggesting that the observed lower limit (≈ 6 mm in Fig. 7) on the wavelength of the dominant mode may depend on the viscosity of the liquid. To verify this experimentally, equal quantities of glycerol were added to each layer. Figure 5b shows images of the interface for fixed rotation rate (Ω = 7.8 ± 0.1 rad s⁻¹) and increasing viscosity, from top left to bottom right, at 1.6 s after onset.
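A compact sketch of the radial-wavelength measurement described above is given below (Python/NumPy). It uses the standard FFT-based (Wiener-Khinchin) autocorrelation followed by an azimuthal average; it is an illustration of the technique, not the authors' exact code:

```python
import numpy as np
from scipy.signal import find_peaks

def radial_autocorrelation(img, n_bins=100):
    """Azimuthally averaged autocorrelation G(r) of a grey-scale image.

    G(x, y) = F^-1(|F(I)|^2) (circular FFT autocorrelation; fine for a
    sketch), normalised to 1 at zero lag, then averaged over angle.
    """
    img = img - img.mean()                    # remove the mean intensity
    power = np.abs(np.fft.fft2(img)) ** 2     # |F(I)|^2
    G = np.fft.fftshift(np.real(np.fft.ifft2(power)))
    G = G / G.max()                           # unit zero-lag peak

    ny, nx = G.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2)    # lag radius in pixels
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=G.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return bins[:-1], sums / np.maximum(counts, 1)

# Synthetic test pattern with a 20-pixel pattern scale: the first
# off-centre peak of G(r) estimates the dominant radial wavelength
# (in pixels; multiply by the image scale to convert to mm).
yy, xx = np.indices((200, 200))
frame = np.sin(2 * np.pi * np.hypot(xx - 100, yy - 100) / 20.0)
r_px, G_r = radial_autocorrelation(frame)
peaks, _ = find_peaks(G_r)
print(r_px[peaks[0]])  # roughly the 20 px pattern scale
```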
Discussion

In general, applying a magnetic field to a conducting liquid introduces magnetohydrodynamic effects. Here, however, magnetohydrodynamic forces are weak compared with the other body forces imposed on the system. The magnetic field affects the liquid primarily through its interaction with the spins of the electrons of the Mn²⁺ ions, and with the orbital motion of the electrons in the water molecules, rather than with macroscopic electric currents: the magnetic body force that destabilises the system is due to the magnetization of the liquid by the alignment of the spins of Mn²⁺ and by the orbital angular momentum imparted to the electrons by the imposition of the magnetic field [35]. Although the liquid becomes magnetised, the magnetic field generated by its magnetization is insignificant compared to the imposed magnetic field, since the susceptibility of the liquids is of order χ ~ 10⁻⁶-10⁻⁵ (unlike a ferrofluid, for example). The Lorentz force J × B (per unit volume), responsible for magnetohydrodynamic effects, acts on ions carrying a current J in the liquid. Here, the current is given by J = σ(u × B), where u is the velocity of the liquid and σ is its conductivity. Note that, under rigid-body rotation at a constant angular velocity, the Lorentz force does not influence the motion, since there cannot be a purely radial electric current; the Lorentz force in this case is balanced by a radial electric field. Hence we need only consider the component of the Lorentz force that results from deviations of the velocity field from rigid-body rotation. We can also assume that the magnetic field produced by any electric currents generated in the liquid is insignificant compared to the externally imposed field, since the conductivity of the liquid is relatively low, σ ≈ 4 S m⁻¹; the magnetic Reynolds number Re_m = μ₀ULσ ~ 10⁻⁸-10⁻⁷ for the largest length (L) and velocity (U) scales considered here.

To understand the effect that this force may have on the experiment, we consider the relative magnitudes of the Lorentz force, the Coriolis force and viscous forces, all of which are proportional to the velocity of liquid motion deviating from rigid-body rotation. The Elsasser number El = σB²/(ρΩ) is a measure of the ratio of the Lorentz force to the Coriolis force; in these experiments, El was less than 2 × 10⁻² for Ω ≥ 0.5 rad s⁻¹, the smallest non-zero rotation rate applied experimentally, and less than 1 × 10⁻² for Ω > 1 rad s⁻¹. This implies that the effect of the Lorentz force is weak compared to the Coriolis force in these experiments. To verify this experimentally, we repeated the experiments, replacing the sodium chloride solution in the lower layer with a density-matched zinc sulphate solution. We measured the conductivity of the zinc sulphate solution to be approximately 35% less than that of the sodium chloride solution of the same density. We found that the change in the growth rate with increasing rotation was the same in both cases, within experimental uncertainty, thus showing that the stabilising effect of rotation is unrelated to the motion of the liquid through a magnetic field (see Supplementary Information). We now consider the relative magnitude of the Lorentz and viscous forces, which is quantified by the Hartmann number, Ha² = B²L²σ/(ρν). In experiments in which we did not add glycerol to the liquids, Ha² exceeds 1 for length scales L ≳ 10 mm, indicating that, at these length scales, the Lorentz force has more influence on the motion of the liquid than viscosity.
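These order-of-magnitude estimates are straightforward to reproduce. The sketch below (Python) uses representative values from the text; the velocity scale U and the exact field strength are illustrative assumptions:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic constant (T m A^-1)

def elsasser(sigma, B, rho, omega):
    """El = sigma B^2 / (rho Omega): Lorentz force vs Coriolis force."""
    return sigma * B**2 / (rho * omega)

def hartmann_sq(sigma, B, L, rho, nu):
    """Ha^2 = B^2 L^2 sigma / (rho nu): Lorentz force vs viscous force."""
    return B**2 * L**2 * sigma / (rho * nu)

def magnetic_reynolds(U, L, sigma):
    """Re_m = mu0 U L sigma: induced field vs imposed field."""
    return MU0 * U * L * sigma

print(elsasser(sigma=4.0, B=1.5, rho=1000.0, omega=1.0))           # ~9e-3
print(hartmann_sq(sigma=4.0, B=1.5, L=0.01, rho=1000.0, nu=1e-6))  # ~0.9
print(magnetic_reynolds(U=0.1, L=0.1, sigma=4.0))                  # ~5e-8
```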
(Note that, even in this case, the magnitude of the Coriolis force in the rotating experiments is more than 50 times larger than that of the Lorentz force.) Consistent with this, our experiments to determine the influence of viscosity on the observed lower limit of the dominant wavelength at a fixed rotation rate (inset of Fig. 7) suggest a contribution from the Lorentz force to the effective viscosity of the liquid: extrapolation suggests that λ ≈ 5 mm even at η = 0. This viscous-like damping of a conducting fluid by the Lorentz force is well understood (see, for example, [36]). Here, the Lorentz force appears to act to weakly dampen motion deviating from rigid-body rotation, although the damping effect is much weaker than that observed in a liquid metal, for example, which has a much larger conductivity. The Supplementary Information contains plots of the Elsasser number, the Hartmann number and the Ekman number Ek = El/Ha² as a function of Ω.

Owing to the spatial variation of B, the interface tends to become unstable at the centre of the tank slightly before the edges of the tank. This effect is enhanced at higher rotation rates owing to the parabolic curvature of the interface. For this reason we limited experiments to Ω ≲ 13 rad s⁻¹. Ideally, the magnetic field should be imposed on the liquid instantaneously; however, due to the large inductance of the superconducting magnet, the rate at which we can increase B is too slow to induce RTI. Our approach of lowering the tank into the magnetic field achieves the objective of imposing the field quickly enough to induce RTI in the bulk of the system.

We now propose a mechanism through which rotation can act to stabilise the interface. With increasing rotation rate, a homogeneous liquid system tends to arrange itself into Taylor columns, parallel to the axis of rotation [37]. In a two-layer system, the coherence of the columns is affected by the strength of the (effective) density difference at the interface. That is to say, the propensity for liquid to remain as a column competes with the RTI, in which liquid parcels in the upper and lower layers must move laterally relative to one another in order to overturn and rearrange themselves into a more stable configuration. The consequence of this competition is that relative lateral movement of fluid parcels on either side of the interface is inhibited by virtue of the Taylor-Proudman theorem [38-40]. As a result, with increasing rotation rate, liquid parcels switch position by forming smaller structures that require less lateral displacement to move past one another into a more stable arrangement. This results in the stabilisation of the large wave modes that otherwise develop in the non-rotating case, and hence we observe the development of shorter-wavelength modes, decreasing in scale with increasing rotation rate. Since rotation appears to suppress wavelengths above a certain critical length scale, it might be anticipated that, for an inviscid system, the observed length scale of the perturbation should tend toward zero as the rate of rotation increases. However, in our system, which has viscosity, we observe that the length scale asymptotes to a finite value that can be controlled by viscosity (≈ 6 mm in our set-up, see Fig. 7). We can formalize the above argument by considering a generalization of the approach of Taylor [2] to include rotation.
Miles [41] noted the importance of including the effects of the parabolic free surface of a rotating body of liquid when calculating the properties of free-surface waves in rotating systems. Therefore, we combine the approaches of both Taylor [2] and Miles [41] and consider a two-layer liquid stratification in a cylindrical tank that is rotating about its axis with angular frequency Ω. The position of the initial interface is given by z₀(r) = Ω²(r² − a²/2)/(2g), relative to cylindrical coordinates whose origin coincides with the centre of the tank, where a is the radius of the tank. Following the method outlined in Miles [41], it can be shown that for low rotation rates, Ω²a/g ≪ 1, an axisymmetric instability at the interface develops as e^{iωt}, with ω given by equation (1), where t is time, A is the Atwood number, δ is the aspect ratio of the cylinder (layer depth to radius), and k is a zero of the Bessel function J₁, such that k/a is approximately the wavenumber associated with the mode of instability (cf. Miles [41], (4.10)). For ω² < 0 the mode is Rayleigh-Taylor unstable and grows. Equation (1) shows that, at least for low rotation rates, the growth rate of an unstable mode may be reduced by rotating the system: the second term on the right-hand side of (1) may increase the value of ω² and hence slow the growth rate. It can be further shown, without recourse to small-rotation-rate asymptotics, that an axisymmetric mode may be completely stabilised provided the critical rotation rate, Ω_c, given by equation (2), can be achieved without the interface intersecting the lid or base of the tank. Considering the largest, mode 1, unstable mode, we may substitute our experimental parameters into (2) (g = 9.81 m s⁻², k = 3.83, δ = 0.87, a = 4.50 × 10⁻² m, A = 2 × 10⁻³) to find a critical rotation rate Ω_c = 1.18 rad s⁻¹. This is consistent with the images in Fig. 1, where the dominant mode 1 instability, which spans the breadth of the tank in column 1 (Ω = 0 rad s⁻¹ < Ω_c), has apparently been stabilised in column 2 (Ω = 1.5 rad s⁻¹ > Ω_c).

Tao et al. [20] recently speculated on the use of rotation to suppress RTI in the spherical pellets used in inertial confinement fusion. They concluded that rotating a pellet might retard the RTI during the acceleration phase at the equator of the pellet. Our experiments suggest that the RTI at the poles of a rotating spherical fluid may also be suppressed. The present experiments are limited to A ~ O(10⁻³), but our theory shows that this result is applicable to any Atwood number. This implies that rotation could inhibit the growth of the instability over the entire surface of a rotating sphere. Chandrasekhar considered the special case of infinite bodies of inviscid fluid separated by a horizontal interface, and concluded that an unstable configuration could not be stabilised indefinitely by rotation [9]. This analysis was further developed by Hide [42], who included the effects of viscosity and considered the wavenumber and growth rate of the most unstable modes. Our results agree qualitatively with Hide's conclusion that rotation slows the growth of the instability. However, quantitative comparison of our experimental results with Hide's is problematic, since Hide's analysis considers the growth of the most unstable modes at very short times after the onset of the instability.
In comparison, the length scales we observe in our experiments are not necessarily those of the fastest-growing initial mode of disturbance, as larger-scale wave modes may overtake smaller-scale wave modes before their presence is experimentally observable.

In conclusion, we have developed a method for magnetically inducing the Rayleigh-Taylor instability in a fully three-dimensional system, and have analysed the effect rotation has on the growth rate and the length scale of the instability. We found that rotation acts to suppress the growth of large-wavelength undulations, with the wavelength of the fastest-growing mode decreasing with increasing rotation rate and asymptoting to a fixed value that depends on the viscosity of the liquid. These data suggest that rotation acts to stabilise long-wavelength instabilities, while viscosity stabilises short-wavelength instabilities, producing an RTI whose character and growth are determined by the combined effects of both.

Methods

Two liquid layers, consisting of an aqueous paramagnetic solution lying on top of an aqueous diamagnetic solution, were prepared in a cylindrical acrylic tank 10.7 cm in diameter. The paramagnetic solution was floated slowly onto the diamagnetic one to leave a well-defined interface between the two. The top layer of liquid was a solution of manganese chloride (MnCl₂(aq), 0.06 mol l⁻¹), and the bottom layer was a solution of sodium chloride (NaCl(aq), 0.43 mol l⁻¹). A transparent lid was submerged into the top layer such that there was no interface with the atmosphere and the two layer depths were equal. The volume magnetic susceptibilities and densities of the liquids were χ₁ = 3.37 × 10⁻⁶ (SI units) and ρ₁ = 998.2 ± 0.5 kg m⁻³ for the top layer, and χ₂ = −9.03 × 10⁻⁶ (SI units) and ρ₂ = 1012.9 ± 1.2 kg m⁻³ for the bottom layer. The magnetic susceptibility was determined by means of a Gouy balance. The conductivity was σ ≈ 3-4 S m⁻¹ in the lower layer, and approximately half that in the upper layer. The magnetic field strength was B ~ 1-2 T. The fluid viscosity was varied for some experimental runs and lay in the range ν = η/ρ = 1 × 10⁻⁶ to 30 × 10⁻⁶ m² s⁻¹, depending on the concentration of glycerol used. The temperature of the tank and its contents was T = 22 ± 2 °C.

The rotation of the platform was achieved by means of a drive shaft connected to an electric motor. The rate of rotation was increased slowly, at approximately 0.002 rad s⁻², up to the desired rate. The tank was left to rotate at this rate until the liquids were rotating as a rigid body; this was verified in separate experiments using neutrally buoyant tracer particles. A catch was then released, allowing the platform to descend into the magnetic field. The tank was driven to rotate at the same speed during descent by means of a slip bearing. No longer than 2 hours elapsed between filling the tank and releasing the catch. Magnetic damping caused by eddy currents generated in the copper support cylinder kept the rate of descent constant over the duration of each experiment. In experiments to study the growth rate of the instability, red water-tracing dye (Cole-Parmer 00295-18) was added to the lighter paramagnetic solution to aid visibility of the growing wave-like interface. Each layer had a volume of 250 ml and a radius of 45 mm.
We placed a PTFE platform below the descending drive shaft to halt the descent at a position below that at which the development of RTI plumes became prominent in a non-rotating tank, where the effective Atwood number at the centre of the interface was A* ≈ 0.002. The descent speed was controlled between 8 and 11 mm s⁻¹ by the addition of brass weights to the top of the cylindrical tank. We observed no correlation between descent speed and the appearance of RTI fingers. We imaged the interface using a high-speed video camera (Sony NEX-FS100) running at 60 frames per second. To avoid lensing effects caused by the curved cylindrical walls, the cylindrical tank was submerged in a water-filled, flat-sided transparent box, as shown in Fig. 2, and filmed through the sides of the box. In experiments to determine the wavelength of the most rapidly growing mode, red and blue water-tracing dyes (Cole-Parmer 00295-18 & -16) were added to the denser saline solution to make this lower layer opaque. Small quantities of fluorescein sodium salt (~10⁻⁴ kg l⁻¹) were added to this solution to enhance the visual contrast between the two layers [25]. Each layer had a volume of 300 ml and a radius of 53.5 mm. We imaged the interface from above the liquid tank, with the same high-speed camera running at 240 frames per second. Prior to analysis, the high-speed video images of the interface were processed by mapping each image into the rotating reference frame of the tank. In the resulting video images, the tank appears stationary.
The technique of robotic anatomic pulmonary segmentectomy II: left sided segments

Anatomic pulmonary segmentectomy and mediastinal nodal dissection has been advocated in patients with smaller tumors or patients with limited pulmonary reserve. The overall 5-year survival and the lung cancer-specific 5-year survival following anatomic segmentectomy have been shown to be equivalent to those of lobectomy. Robotic surgical systems have the advantage of magnified, high-definition three-dimensional visualization and greater instrument maneuverability in a minimally invasive platform. These robotic systems can facilitate the dissection of the bronchovascular structures and replicate the technique of segmentectomy by thoracotomy. Greater experience with the robotic platform has resulted in a reproducible anatomic segmentectomy technique. This is a companion paper to The Technique of Robotic Anatomic Segmentectomy I: Right Sided Segments. This paper outlines the technique of anatomic pulmonary segmentectomy for the left lung: Left Upper Lobe (LUL) Anterior Segment (S3), LUL Apicoposterior Segment (S1 + S2), LUL Lingulectomy (S4, S5), Left Lower Lobe (LLL) Superior Segmentectomy (S6), and LLL Basal Segmentectomy (S7-S10).

INTRODUCTION

Historically, anatomic pulmonary segmentectomy was used for the surgical treatment of lung abscesses and other lung infections. Chevalier Jackson and John Huber first proposed a system of nomenclature for the bronchopulmonary segments [1]. In 1939, Churchill and Belsey [2] reported the first anatomic segmentectomy, a lingulectomy. Edward Boyden described the vascular and bronchial anatomy of the pulmonary segments [3]. In the latter half of the twentieth century, the advent of antibiotic therapy led to a decrease in segmentectomies performed for infectious lung processes and an increase in their use for primary malignancies of the lung. In the 1960s and 1970s, Rasmussen and Clagett published reports of segmentectomy for lung cancer with low mortality [4]. With the introduction of stapling devices in the late 1960s, wedge resections, which were technically much easier, became widely used. Thereafter, and unfortunately, wedge resection, a nonanatomic pulmonary resection, and individual-ligation anatomic segmentectomy became grouped together as "sublobar resections". Subsequent studies showed that anatomic segmentectomy was associated with significantly better cancer-related survival than wedge resection [5]. However, as anatomic segmentectomy is a technically more demanding procedure than lobectomy, lobectomy became the procedure of choice for early stage lung cancer. Recently, anatomic pulmonary segmentectomy has been shown to be a viable oncologic procedure for early lung cancer, including in patients who are elderly or have limited pulmonary reserve [6-14]. As a result of high-definition three-dimensional visualization and increased maneuverability of the surgical instruments in a small space, the surgical robot has the distinct advantage of replicating the technique of anatomic segmentectomy by thoracotomy using a minimally invasive platform [15]. Although there has been skepticism about the cost and the lack of evidence of a survival advantage of robotic lobectomy, the robotic platform seems especially suited to a minimally invasive approach to anatomic segmentectomy [15,16]. Greater experience with the robotic platform has resulted in a reproducible anatomic segmentectomy technique.
This is a companion paper to The Technique of Robotic Anatomic Segmentectomy I: Right Sided Segments. This paper outlines the technique of anatomic pulmonary segmentectomy for the left lung: Left Upper Lobe (LUL) Anterior Segment (S3), LUL Apicoposterior Segment (S1 + S2), LUL Lingulectomy (S4, S5), Left Lower Lobe (LLL) Superior Segmentectomy (S6), and LLL Basal Segmentectomy (S7-S10).

ANATOMIC SEGMENTECTOMY IN THE LEFT LUNG

The bronchopulmonary segments of the left lower lobe are similar to those of the right lower lobe. Although there are only two lobes in the left lung, there is some symmetry among the bronchopulmonary segments bilaterally. However, some segments of the left lung merge, resulting in fewer bronchopulmonary segments on the left than on the right [Figure 1]. The apicoposterior segment (S1 + S2) of the left upper lobe represents the fusion of the apical and posterior segments. Although the Lingula is divided into two bronchopulmonary segments, the superior (S4) and inferior (S5) Lingular segments, from a practical standpoint an S4 + S5 segmentectomy, or lingulectomy, is typically performed. In the left lower lobe there are four segments, unlike the right lower lobe, which has five. The anteromedial basal segment (S7 + S8) represents the fusion of the anterior basal and medial basal segments. The other segments (superior S6, posterior basal S10, and lateral basal S9) maintain the same relative positions as observed in the right lung.

Port placement

The operating room table is reversed such that the pedestal does not interfere with the docking of the robot over the head of the patient. A double-lumen endotracheal tube is placed, and the patient is positioned in a full lateral decubitus position. The left arm is placed over pillows and positioned high enough that access to the 4th intercostal space in the anterior axillary line is readily attained. The table is flexed in order to move the hip down and to open the intercostal spaces. The lung is deflated and placed on suction. The position of the double-lumen tube is rechecked after the patient is prepped and draped. We prefer the use of a double-lumen tube as opposed to a bronchial blocker: during robotic dissection, manipulation of the hilum and the bronchus can result in dislodgement of the blocker and loss of lung isolation. Every effort should be made to ensure lung isolation for the entire procedure. The position of the robot over the head of the patient makes manipulation of the endotracheal tube difficult, and untimely inflation of the lung can result in loss of exposure and its associated complications. Proper port positioning is crucial and a fundamental prerequisite to the conduct of the procedure. Figures 2 and 3 show the port placements. A line is drawn from the tip of the scapula to the costal arch; this delineates the highest point in the chest and the midscapular line (posterior axillary line). Pleural entry is with a Hassan needle. Saline is infused, and care is taken to look for easy egress of the saline from the needle. If there is concern about pleural adhesions, we use a Visiport instrument (Medtronic Inc., Norwalk, CT) for entry into the pleural space under direct vision. If the Visiport is used, a purse string is placed in the muscle layer and tied around the robot camera port in order to prevent CO2 leakage. Port #1 is the camera port. Warm, humidified CO2 is insufflated through this port at a flow rate of 6 L/min to a pressure of 6-8 mmHg in order to push the lung and diaphragm away.
The other ports are placed under direct vision. Port #2 is placed in the 7th intercostal space in the posterior scapular line; this port is 9 cm posterior to Port #1. Prior to the placement of Port #3, a 21-gauge needle is inserted into the 7th intercostal space at the costovertebral junction from the patient's back, and a 10 mL subpleural bubble of 0.25% bupivacaine with epinephrine is injected near the intercostal nerve. Next, Port #3 is placed 9 cm posterior to Port #2 in the 7th intercostal space, just medial to the spine. Port #4 is placed 9 cm anterior to Port #1 in the 7th intercostal space at the anterior axillary line.

For the da Vinci Si robot, the bed is angled posteriorly away from the anesthesia machine and the robot is brought in over the head of the patient. For the Xi system, the robot is brought in from the back, perpendicular to the patient, and the boom is rotated to the proper position. One of the advantages of the Xi robot is that the surgeon can control the stapling device. We prefer a 30 mm stapler with a white load for the vascular structures, and a blue or green load for the bronchus and the lung tissue, as judged by the size and thickness of the structure.

For all segmentectomies, begin by dividing the inferior pulmonary ligament and removing stations #9 and #8 [Figures 4 and 5]. The lung is retracted medially and anteriorly in order to remove lymph nodes from station #7. We find that pulling the nasogastric tube back above the area of subcarinal dissection opens the mediastinal space and facilitates the subcarinal and mediastinal dissection. After the mediastinal dissection, the nasogastric tube is advanced back into the stomach, placed on suction, and used to decompress the stomach, preventing gastric distension and the resultant elevation of the left hemidiaphragm. Next, open the pleura anterior to the vagus nerve. Identify the left mainstem bronchus and stay inferior to the edge of the cartilage. The station #7 nodal bundle is accessed between the inferior pulmonary vein and the left mainstem bronchus; the nodal bundle is traced to the carina and is then removed [Figure 6]. Next, the lung is retracted inferiorly, and the pleura overlying the station #5 nodal bundle is opened. Station #5 nodes are removed [Figure 7]. The left main pulmonary artery is identified above the left main bronchus. The space between the pulmonary artery and the bronchus is opened, and the station #10L nodal bundle is identified overlying the superior border of the bronchus [Figure 8]. The space between the pulmonary artery and the aorta is cleared in order to visualize the nodal bundle which encases the apicoposterior trunk of the artery [Figure 9]. Care is taken to identify and preserve the vagus nerve and the recurrent laryngeal branch. After exposing the apicoposterior trunk, the station #10 nodal bundle is swept in an inferomedial direction, thereby exposing the underside of the truncus branch and its takeoff from the main pulmonary artery. Next, the upper and lower lobes are retracted in opposite directions and the fissure is identified. Dissection of the nodal bundle in station #11 allows for the identification of the pulmonary artery in the fissure [Figure 10]. The artery is most superficial at the junction of the Lingula, the upper lobe, and the lower lobe. The subadventitial plane is entered, and dissection is carried posteriorly, under the pulmonary parenchyma in the posterior aspect of the fissure, toward the main pulmonary artery.
The Cadiere forceps is used to pass a vessel loop under the pulmonary parenchyma in the posterior aspect of the fissure, and a stapler with a blue cartridge is used to divide the tissue in the posterior aspect of the fissure [Figure 11].

Left upper lobe anterior anatomic segmentectomy (S3)

Following the dissection of the mediastinal nodes, the lung is retracted posteriorly and the anterior hilum is approached. The nodes in station #5 are removed and the proximal left pulmonary artery is exposed just posterior to the left phrenic nerve [Figure 12]. The nodes between the superior pulmonary vein and the pulmonary artery are dissected and removed. The superior pulmonary vein is separated from the underlying pulmonary artery [Figure 13]. Figure 14 shows the anatomic relationship among the vein, artery, and bronchus of segment S3 (V3, A3 and B3). V3 is encircled, elevated with a vessel loop, and divided with a stapler with a white cartridge. Care is taken to preserve the V1 branch to the S1 segment of the upper lobe. The B3 bronchus is encircled, elevated off the pulmonary artery, and divided with a stapler using a purple cartridge. Division of B3 facilitates division of the A3 PA branch(es). The A3 PA branch is encircled with a vessel loop and divided with a stapling device. The A3 PA branches can be divided before dividing B3; however, this usually requires suture ligation and division of A3. Next, the intersegmental fissures between S1 + S2 and S3 and between S4 + S5 and S3 are delineated, either using indocyanine green (with the Xi robot) or the inflation technique, and divided using a stapler carrying a green cartridge [Figure 15].

Left upper lobe apical and posterior anatomic segmentectomy (S1 + S2)

The approach to these left sided segments is similar. Although an individual posterior (S2) segmentectomy is possible, an apicoposterior (S1 + S2) segmentectomy is often performed on the left side instead of an individual apical segmentectomy. As with all segmentectomies, the procedure begins with the mediastinal nodal dissection described previously. For a posterior S2 or apicoposterior S1 + S2 segmentectomy, the pulmonary artery branches to the respective segments are identified as in Figure 16. The branches are encircled, elevated with a vessel loop, and divided with a vascular stapler carrying a white load. Following the division of the pulmonary artery branches, the bronchus is approached from the back. The segmental bronchus is isolated, the N1 nodes are excised, and the bronchus is encircled and divided with a stapler with a purple or blue cartridge [Figure 17]. For these segments, the segmental veins are usually taken with the division of the fissure. The intersegmental fissure is identified as outlined previously and divided in a stepwise, progressive manner using a stapling device with a green cartridge [Figure 18].

Left upper lobe lingulectomy and anatomic segmentectomy (S4 + S5)

Lingulectomy can be performed with either a vein-first or an artery-first technique. The advantage of the artery-first technique is that the fissure is approached first and the station #11 nodes are removed first; if they are positive, a left upper lobectomy is performed. After a complete mediastinal nodal dissection, as with the other left sided segmentectomies, the oblique fissure is opened and the subadventitial plane above the descending pulmonary artery is entered [Figure 19]. The "V"-shaped space between the lower lobe pulmonary artery and the Lingular artery is dissected and all N1 nodes are removed.
[Figure 18 caption: LUL S1 + S2 segmentectomy: the intersegmental fissure between the S1 + S2 and S4 + S5 segments is divided in a stepwise, progressive manner using a stapling device with a green load.]

Next, the lung is retracted posteriorly, and the anterior hilum is approached. The space between the superior and inferior pulmonary veins is developed and the nodes are removed. The superior pulmonary vein is dissected away from the underlying pulmonary artery, encircled with a vessel loop, and elevated. After the entire superior pulmonary vein is dissected, the Lingular vein(s) are identified, encircled, elevated with a vessel loop, and divided with a vascular stapler. Then, the anterior aspect of the oblique fissure is divided by passing a stapler with a blue cartridge in an anterior-to-posterior direction, heading toward the space between the Lingular artery and the inferior pulmonary artery. This enables easy access to the Lingular pulmonary artery, which is encircled, elevated with a vessel loop, and divided with a stapler carrying a white cartridge [Figure 20]. Division of the fissure also enables access to the Lingular bronchus. The Lingular bronchus is encircled and elevated with a vessel loop; the anesthesiologist removes any indwelling suction catheters, and the bronchus is divided with a stapler using a green cartridge [Figure 21]. Finally, using the techniques outlined earlier, the intersegmental fissures between S1 + S2, S3, and the Lingula are identified [Figure 22]. The lung parenchyma is then divided with multiple firings of a stapling device with a blue or green cartridge.

Robotic left lower lobe anatomic superior segmentectomy (S6)

Port placement and instruments are similar to those for the left upper lobe segmentectomy procedures. Following the complete mediastinal dissection outlined previously, the pulmonary artery is identified in the oblique fissure and the subadventitial plane overlying the pulmonary artery is entered [Figure 23]. A pair of Cadiere forceps is used to pass a vessel loop under the pulmonary parenchyma in the posterior aspect of the fissure, and a stapler with a blue cartridge is then used to divide the tissue in the posterior aspect of the fissure. The subadventitial plane is then developed anteriorly in order to identify the descending branch of the pulmonary artery, and the anterior aspect of the oblique fissure is divided. The superior segmental pulmonary artery is identified. The Cadiere forceps is passed under the superior segmental pulmonary artery, a vessel loop is passed underneath and used to encircle and elevate the vessel, and the vessel is divided with a stapler with a white vascular cartridge introduced in a medial-to-lateral direction [Figure 24]. The lung is elevated and retracted medially. The Cadiere forceps is passed in a medial-to-lateral direction under the inferior pulmonary vein, and a vessel loop is used to encircle and elevate the vein [Figure 25]. The superior segmental vein is identified, encircled, and divided using a stapler with a white vascular cartridge introduced in an inferior-to-superior direction [Figure 26]. The nodes overlying the left lower lobe bronchus are swept toward the specimen. The B6 bronchus is identified, encircled, and divided [Figure 27]. The intersegmental fissure between S6 and the basal segments of the lower lobe is identified as outlined previously and divided using a stapling device.

Robotic left lower lobe anatomic basal segmentectomy (S7-S10)

The approach to this segmentectomy is similar to that for the superior segmentectomy (S6).
Following the complete mediastinal nodal dissection, the inferior pulmonary vein is encircled with a vessel loop and elevated. Then the superior segmental vein is identified, thereby allowing for identification of the basal branch of the inferior pulmonary vein. The basal vein (V7-10) is then divided with a stapling device with a white cartridge. Next, the pulmonary artery is isolated in the fissure as described previously, and the left lower lobe pulmonary artery is identified [Figure 28]. The basal branch of the left pulmonary artery is encircled, elevated with a vessel loop, and divided with a vascular stapler. Following the division of A7-10, the bronchus to the basal segments (B7-10) is encircled and divided with a stapler carrying a blue cartridge [Figure 29]. Finally, the intersegmental fissure is identified and divided using a stapler with a green cartridge [Figure 30].

CONCLUSION

Anatomic pulmonary segmentectomy in patients with early stage lung cancer is an oncologically efficacious procedure. The surgical robot allows for precise dissection of the segmental bronchopulmonary structures while minimizing trauma to surrounding tissues, and it allows for thorough and complete dissection of the mediastinal nodes. Robotic segmentectomy should be considered when planning a lung-sparing operation in patients with small tumors, in elderly patients, or in patients with borderline lung function.

Authors' contributions

Contributed equally to the performance of the surgeries, collection of data, and writing of the manuscript: Gharagozloo F, Meyer M
Deepening Well-Being Evaluation with Different Data Sources: A Bayesian Networks Approach

In this paper, we focus on a Bayesian networks approach to combining traditional survey data, social network data, and official statistics to evaluate well-being. Bayesian networks permit the use of data with different geographical levels (provincial and regional) and time frequencies (daily, quarterly, and annual). The aim of this study was twofold: to describe the relationship between survey and social network data, and to investigate the link between social network data and official statistics. In particular, we focused on whether the big data anticipate the information provided by the official statistics. The applications, referring to Italy from 2012 to 2017, were performed using ISTAT's survey data, some variables related to the considered time period or geographical levels, a composite index of well-being obtained from Twitter data, and official statistics that summarize the labor market.

Introduction and Background

In recent decades, since the Stiglitz Commission's suggestion to build a complementary system focused on social well-being that is suitable for measuring sustainability and that considers subjective assessment [1], several new measures of well-being have been proposed. In these new indices, the subjective dimensions are traditionally investigated using ad hoc surveys, but these data are not free of issues. Some issues are related to the survey structure or plan, which, despite all efforts [2], still creates some methodological drawbacks [3,4]; other issues are related to the poor geographical disaggregation or low time frequency of the data. An objective evaluation of the currently proposed indices demonstrates a limited and undersized presence of data on the subjective and perceived dimension. Since 2012, with the aim of finding complementary information to fill this gap, a new subjective and perceived Italian well-being index built from social network data has been proposed (the Subjective Well-Being Index, SWBI [5]). This is a composite index that uses the same framework adopted by the New Economics Foundation for its Happy Planet Index [6], but it is the result of a human-supervised sentiment analysis (integrated sentiment analysis, iSA [7]) of Twitter data. Recently, it has been suggested that the SWBI be used to provide information on subjective well-being at Italian sub-national levels and at different moments in time [8]. Although social network data have been described as the largest available focus group in the world [9,10], since they cover several topics, are continuously updated, and are free or cheap, these data also have disadvantages. Even though the number of social media users keeps increasing (from http://wearesocial.com, 27 January 2021: from January 2019 to January 2020, world growth was +7% in Internet access, with 59% of people in the world having Internet access, and +9.2% in active social media accounts, with a penetration of 49%), not all people are users; hence, one of the main issues with this kind of information is sampling bias. To enable the use of such a rich source of information, scholars are still working on solutions to these issues. Notably, a new Italian well-being measure has been proposed that combines official statistics with Twitter data using a weighting procedure combined with a small area estimation (SAE) model to properly account for sampling bias [11]. For a detailed description of all the well-being indices in Italy, please refer to the work of [12].
This paper focuses on the Italian scenario due to the central role given to this topic by the Italian Parliament, which has introduced equitable and sustainable well-being among the objectives of the government's economic and social policy. The authors provide a detailed outline of the well-being indices useful for different scholars and practitioners, with the awareness that, for a good analysis, a complete and conscious description of the available data is the starting point from which to further improve their usefulness, maximize their advantages, and reduce their limitations. In this study, using a Bayesian network (BN) approach, we combined social network data, which are characterized by high frequency and fine geographical disaggregation, with traditional survey data and official statistics to evaluate well-being. Adopting this approach, both categorical and continuous data can be used, with different geographical levels and time frequencies. The aims of this study were twofold: (1) to describe the relationship between survey and social network data; (2) to investigate the link between social network data and official statistics, focusing on the forecasting power of the social media information. All analyses were performed using R version 4.0.4 (R Core Team (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL: https://www.R-project.org/, downloaded 30 June 2021). Section 2.1 reports a brief presentation of BNs. Section 2.2 describes the data. Section 3 provides a discussion of the results.

Bayesian Networks: A Short Refresher

A Bayesian network (BN) is a probabilistic graphical model that represents a set of stochastic variables with their conditional dependencies through the use of a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships existing between symptoms and diseases: given the symptoms, the network can be used to calculate the probability of the presence of different diseases. Formally, Bayesian networks are directed acyclic graphs whose nodes represent random variables in the Bayesian sense: they can be observable quantities, latent variables, unknown parameters, or hypotheses. The arcs represent conditional dependencies; nodes that are not connected represent variables that are conditionally independent of each other. Each node is associated with a probability function which takes as input a particular set of values of the node's parent variables and returns the probability distribution of the variable represented by the node. For example, if the parents of a node are Boolean variables, then the probability function can be represented by a table in which each entry corresponds to a possible combination of true or false values that its parents can assume. There are efficient algorithms that perform inference and learning on Bayesian networks. BNs are both mathematically rigorous and intuitively understandable data-analytic tools. They implement a graphical model structure that is popular in statistics, machine learning, and artificial intelligence, and they enable the effective representation and computation of a joint probability distribution (JPD) over a set of random variables. The DAG's structure is defined by a set of nodes, representing random variables and plotted as labeled circles, and a set of arcs, representing direct dependencies among the variables and plotted as arrows.
Thus, an arrow from X_i to X_j indicates that the value taken by variable X_j depends on the value taken by variable X_i. Node X_i is referred to as a "parent" of X_j; similarly, X_j is referred to as a "child" of X_i. An extension of these genealogical terms is often used to define the set of "descendants" of a node, i.e., the set of nodes that can be reached from it along a directed path. The DAG guarantees that no node can be its own ancestor or its own descendant. This condition is of vital importance to the factorization of the joint probability of a collection of nodes. Although the arrows represent direct causal connections between the variables, under the causal Markov condition the reasoning process can operate on a BN by propagating information in any direction. A BN reflects a simple conditional independence statement, namely that each variable, given the state of its parents, is independent of its non-descendants in the graph. This property is used to reduce, sometimes significantly, the number of parameters that are required to characterize the JPD. This reduction provides an efficient method to compute the posterior probabilities given the evidence in the data [13,14]. In addition to the DAG structure, which is often considered the qualitative part of the model, the quantitative parameters are estimated by applying the Markov property, whereby the conditional probability distribution at each node depends only on its parents. More formally, BNs are defined by a network structure, a DAG G = (V, A), in which each node v_i ∈ V corresponds to a random variable X_i, and by a global probability distribution over X, which can be factorized into smaller local probability distributions according to the arcs a_ij ∈ A in the graph. The main role of the network structure is to express the conditional independence relationships among the variables in the model through graphical separation, thus specifying the factorization of the global distribution:

P(X) = ∏ᵢ P(X_i | Π_{X_i}),

where Π_{X_i} denotes the set of parents of X_i. The probability distribution P(X) should be chosen such that the BN can be learned efficiently from the data. This distribution should be flexible, so that the assumptions are not too strict, and such that inference queries are easy to perform [15]. The three most common choices in the literature are:

• Discrete BNs (DBNs): X and the X_i | Π_{X_i} are multinomial.
• Gaussian BNs (GBNs): X is multivariate normal and the X_i | Π_{X_i} are normal.
• Conditional linear Gaussian BNs (CLGBNs): X is a mixture of multivariate normals and the X_i | Π_{X_i} are multinomial, normal, or mixtures of normals.

It has been proved that exact inference is possible in these three cases, hence their popularity. In this study, we considered both CLGBNs and GBNs: for the first aim of this paper (Section 3.1), some variables are categorical and others numerical; for the second aim (Section 3.2), all the variables are numerical. To compare the strength of the links between variables, we evaluated the strength of an arc as the score gain/loss that would be caused by the arc's removal [16]; specifically, we adopted the BIC criterion. The strength is the difference between the score of the network in which the arc is not present and the score of the network in which the arc is present. BNs have been used in the analysis of multi-dimensional well-being [17], considering the correlation among dimensions. The same authors also applied multivariate statistical techniques to ISTAT's BES index [18].
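As a concrete illustration of this arc-strength measure, the sketch below (Python/NumPy) computes the BIC score of a single linear-Gaussian local model with and without a candidate parent; since removing an arc changes only the child's local score, this difference equals the network-score difference. It is a from-scratch sketch following the convention above, not bnlearn's implementation:

```python
import numpy as np

def gaussian_node_bic(y, parents):
    """BIC of a linear-Gaussian local model y ~ 1 + parents.

    Uses the convention score = loglik - (k/2) * log(n), so larger
    scores are better; k counts the regression coefficients plus the
    residual variance.
    """
    n = len(y)
    X = np.column_stack([np.ones(n)] + list(parents))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n                 # ML variance estimate
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * (X.shape[1] + 1) * np.log(n)

def arc_strength(y, other_parents, candidate):
    """Score(without arc) - score(with arc): negative = arc supported."""
    with_arc = gaussian_node_bic(y, other_parents + [candidate])
    without = gaussian_node_bic(y, other_parents)
    return without - with_arc

# Toy example: y depends on x, so removing x -> y lowers the score.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(size=500)
print(arc_strength(y, [], x))   # large negative value
```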
Moreover, BNs have been employed in the context of subjective well-being, focusing on the ability to predict it from material living conditions and deprivation, using the 2011 European Quality of Life Survey data for four Central European countries [19]. Our proposed method combines, in a BN approach, well-being data from social networks [5] and from surveys.

The Data

In this study, using a Bayesian networks (BNs) approach, we combined traditional survey data, social network data, and official statistics to evaluate well-being. In the first step, we described the relationship between survey and social network data; in the second step, we investigated the link between social network data and official statistics. In this section, we introduce the different kinds of data used. As survey data, we considered some variables from the Aspects of Daily Life survey. This ISTAT sample survey collects fundamental details on individual and household daily life in Italy, focusing on several thematic areas and social aspects useful for studying well-being. These data are also adopted for the subjective dimension in ISTAT's well-being index Benessere Equo e Sostenibile (Equitable and Sustainable Well-Being, BES). This is an annual survey of people aged 14 years and over, and the data, with regional aggregation, are available free of charge (http://dati.istat.it/; downloaded 30 June 2021). The considered variables were:

• I_sat is the average rating of satisfaction with life as a whole (on a 1 to 10 scale).
• I_eco_sat is the percentage of people very or fairly satisfied with their economic situation.
• I_health_sat is the percentage of people very or fairly satisfied with their health.
• I_family_sat is the percentage of people very or fairly satisfied with family relationships.
• I_friends_sat is the percentage of people very or fairly satisfied with their friendships.
• I_freetime_sat is the percentage of people very or fairly satisfied with their free time.

In the first step, we also considered some variables related to the considered time period and geographical area:

• w_day is the day of the week.
• month is the month.
• year is the year.
• prov is the Italian province.

Focusing on the social network data, we used the SWBI index [8], which is defined by eight domains, all measured on a 0-100 scale, organized according to three well-being dimensions: personal well-being, social well-being, and well-being at work. In Figure 1, the eight domains are characterized by three different shades of blue according to the dimension of interest: personal well-being summarizes the person's self-perception and is defined across five different domains; social well-being concerns perceptions of the individual's relations with other people and is composed of two domains; well-being at work is represented by the subjective assessment of one's own work situation and refers to a single domain. This framework was inspired by the Happy Planet Index (HPI) of the New Economics Foundation (NEF) [6], and, during the coding definition of the human-supervised step of the iSA (integrated sentiment analysis [7]) process, the sentences were coded according to this structure. Sentiment analysis is a methodology used for the systematic extraction of web users' emotional states from the texts that they post on various platforms, such as blogs, social networks, etc. The literature in social psychology highlights an association between the well-being of individuals and their use of words.
This implies that it is possible to extract words from the messages posted on social networks to reconstruct the emotional content, infer psychological traits, and measure the subjective well-being of individuals [20]. In the iSA algorithm, the human-supervised part is essential because information can be retrieved from texts without relying on dictionaries or special semantic rules. Humans simply read a text and associate a topic (e.g., life satisfaction) with it. The computer then learns the association between the set of words used to express that particular opinion and extends the same rule to the rest of the texts. Human coders classify a small percentage of the texts (the training set) in order to train the computer program to associate specific words with the dimensions of well-being described above; all remaining data are then automatically classified (the test set). Each tweet is classified on the scale −1, 0, 1, where −1 is a negative, 0 a neutral, and 1 a positive feeling. The data used for the empirical estimation of the SWBI are tweets written in Italian and posted in Italy. A percentage of tweets contains geo-reference information, which makes it possible to build indicators with a high geographical disaggregation. In the considered dataset, more than 200 million tweets were analysed; they were collected with daily frequency and for all Italian provinces. For an up-to-date and complete description of the SWBI index and, in general, of social media data, we suggest the work of [21]. In this paper, we only consider the following domains:

• sat: Life satisfaction, having a positive assessment of the overall life situation.
• vit: Vitality, having energy, feeling well-rested and healthy, and being active.
• rel: Relationships, the degree and quality of interactions in close relationships with family, friends, and others who provide support.
• wor: Job quality, feeling satisfied with employment and work-life balance, and evaluating the emotional experiences and conditions of work.

In the second step of our analysis, we used one official statistic that is traditionally employed to describe the labor market: the quarterly regional unemployment rate (t_unemployment). Since the data sources have different time frequencies, and the SWBI has the highest frequency (daily), all data were integrated at the daily level by repeating the values of the annual and quarterly series.

Social Media Data vs. Survey Data

The first step of our analysis aimed to understand whether coherence exists among subjective measures of well-being: the high-frequency social network data, composed of a structured and an emotional component [7], and the annual survey data from the Aspects of Daily Life survey (ISTAT). The analysis was performed considering six survey variables (I_sat, I_eco_sat, I_health_sat, I_family_sat, I_friends_sat, and I_freetime_sat), four SWBI variables (sat, vit, wor, and rel), and four covariates (year, month, day of week, and province). The analysis was performed in the R statistical language, using the bnlearn [15] and bnviewer [22] libraries. The network was obtained with the hill-climbing algorithm with the BIC-CG score function. The resulting network is shown in Figures 2-4. In these dynamical plots, the highlighted nodes are in purple, while the children or parents of the highlighted nodes are in white.
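The structure learning itself was done in R with bnlearn's hill-climbing. For readers working in Python, a rough analogue can be sketched with pgmpy; note that pgmpy's BIC score is defined for discrete variables only, so the continuous columns are binned here, unlike the conditional-Gaussian BIC-CG score used in the paper, and the file name below is hypothetical:

```python
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

# Hypothetical merged daily dataset (survey variables, SWBI domains,
# covariates), one row per province and day.
df = pd.read_csv("wellbeing.csv")

# pgmpy's BicScore handles discrete variables only, so continuous
# columns are discretised into quintiles; bnlearn's BIC-CG score
# works on the mixed data directly.
for col in df.select_dtypes("number").columns:
    df[col] = pd.qcut(df[col], q=5, labels=False, duplicates="drop")

hc = HillClimbSearch(df)
dag = hc.estimate(scoring_method=BicScore(df))
print(sorted(dag.edges()))
```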
Figure 2 highlights the parent nodes of ISTAT's overall satisfaction in white; the ancestors that are not parents are in grey. The structure of the obtained network allows us to answer our initial research question in the affirmative: ISTAT's overall satisfaction is linked both to the individual dimensions of the survey and to the variables that comprise the social SWBI. Figure 3 highlights the descendants of the province node; the children are denoted by white nodes. It is evident that the geographical dimension has an impact on the measurements obtained through the surveys, both directly on the single dimensions and mediated through the global index of satisfaction (I_sat), but its effect on the social data, characterized by wider variability due to the emotional component, is negligible. This result is in line with those reported in [8]. Figure 4 highlights the descendants of the day-of-week node. As expected, the day of the week has a direct impact on the indices obtained from Twitter data (sat, vit, wor, and rel), but it has no effect on ISTAT's satisfaction indices, which are annual. The same was observed for the month, as can be seen by examining the arrows starting at the month node.

Social Media Data vs. Official Statistics

In the second step, we investigated the link between social network data and official statistics, focusing on the forecasting power of the social media information. We used an official statistic that is traditionally employed to describe the labor market: the quarterly regional unemployment rate (t_unemployment). We aimed to determine whether the social variables, in particular the job-quality variable (wor), are able to anticipate the information in the official measure of the unemployment rate, which refers to previous months. For this purpose, we evaluated the effect of wor using the strength of its link to the unemployment rate, with lags of 90, 180, 270, and 360 days. Figure 5 reports the Bayesian network obtained with the hill-climbing algorithm with the BIC score function, with a 90-day lag. We thus confirmed that all the social domains are related to the official statistic. For completeness, we stress that, when we considered the other lags (180, 270, and 360 days), similar networks were obtained: all the links were confirmed, and the networks differed only in the strengths associated with the different arcs. The strength is measured by the score gain/loss that would be caused by the arc's removal [16]; in other words, it is the difference between the score of the network in which the arc is not present and the score of the network in which the arc is present. Negative values, represented on the vertical axis in Figure 6, correspond to decreases in the network score; positive values correspond to increases (the stronger the relationship, the more negative the difference). Considering the lags of 90, 180, and 270 days, we found an increasingly strong relationship between the social information (wor) and the official statistic (t_unemployment). When considering the 360-day lag, corresponding to an annual delay, the forecasting power of the SWBI decreases, but the difference is still negative.

Conclusions

Evaluating well-being has engaged many scholars, and new methods have recently characterized international scientific research in this field. In the European context, the work of the Stiglitz Commission [1] represents an epochal turning point, among others.
Nowadays, well-being measures must involve both objective and subjective components, provide comparable indicators, have good territorial granularity within the nation, and be up to date. Another aspect highlighted by [23] is the need to integrate different data: by source, by representativeness, by time frequency, and by territorial granularity. In this work, we incorporated all these aspects: applying Bayesian networks, we defined two research questions and arrived at an affirmative answer for both. In Section 3.1, we describe the relationship between the survey and social network data. Because the relationships between survey and social network data are complex, the BN approach was used to evaluate the dependence structure. The network that we obtained allowed us to answer "yes" to our first research question: the ISTAT's survey data are linked to the variables that comprise the social SWBI. The roles of the different covariates were confirmed. In Section 3.2, we investigate social network data and official statistics, with a particular focus on the forecasting power of social media information. We confirmed that the social variables, in particular job quality (wor), predict the information in the official measure of the unemployment rate. With a 270-day delay, i.e., nine months, we identified a stronger relationship. In the future, we will focus on developing the analysis to evaluate whether social network data, focusing on other domains, are able to anticipate other official statistics. This study will be important because the availability of official statistics with a reasonable time frequency and a realistic geographical granularity is limited.
4,937.8
2021-07-30T00:00:00.000
[ "Computer Science" ]
From classroom tutor to hypertext adviser: an evaluation This paper describes a three-year experiment to investigate the possibility of making economies by replacing practical laboratory sessions with courseware while attempting to ensure that the quality of the student learning experience did not suffer. Pathology labs are a central component of the first-year medical undergraduate curriculum at Southampton. Activities in these labs had been carefully designed and they were supervised by lab demonstrators who were subject domain experts. The labs were successful in the eyes of both staff and students but were expensive to conduct, in terms of equipment and staffing. Year-by-year evaluation of the introduction of courseware revealed that there was no measurable difference in student performance as a result of introducing the courseware, but that students were unhappy about the loss of interaction with the demonstrators. The final outcome of this experiment was a courseware replacement for six labs which included a software online hypertext adviser. The contribution of this work is that it adds to the body of empirical evidence in support of the importance of maintaining dialogue with students when introducing courseware, and it presents an example of how this interaction might be achieved in software. Introduction In response to an initiative to improve student attendance and appreciation of pathology practicals, case-based teaching was introduced to a first-year undergraduate Pathology course (McCullagh and Roche, 1992). The practicals were designed as part of a total experience, to build upon material recently presented in lectures, and they typically included notes on a case study, slides and a microscope. Students were required to answer a number of questions related to the case study and the practicals were followed by tutorial discussion groups in which the issues raised could be explored and reflected upon. The practicals were self-paced and the students could ask for assistance from demonstrators whenever they wished, although the majority of problems were sorted out by discussion between students. These labs were informal; there were generally around eighty students present in each lab session, and there was a background noise consistent with many students engaging in conversation. Students arranged themselves informally into groups of between two and eight. These labs were perceived to be popular and effective. The problems were the difficulty in recruiting sufficient trained demonstrators and the potential for some students to take a back seat. In some cases perhaps only one student would do the microscope work in spite of the possibility of using monitors. However, there was a need to economize and it was decided to conduct a two-part experiment: • to introduce some courseware to replace the practicals; • to investigate the possibility of doing without demonstrators for the courseware practicals. At the same time the pathology staff hoped that there might be educational gains from the introduction of courseware, since courseware may be used many times, allowing students the opportunity, after their tutorial discussion groups for example, to revisit the work and reconceptualize the material (Mayes, 1993; Mayes, 1995). In this paper we start by briefly describing the research context within which this work was conducted.
We continue by describing the nature of the courseware that was designed to replace the practicals and by detailing the results of the first evaluation of this courseware, which demonstrated that student learning did not suffer from the use of the software while demonstrators were still present. Encouraged by this result, a second trial was conducted, this time without demonstrators present when the students used the courseware. Again the results were encouraging, but the evaluation demonstrated that students had not always succeeded in getting answers to some of their questions. The final section describes a third trial in which a hypertext software adviser was used to provide students with additional help. The evaluation of this trial demonstrates that students found the adviser helpful and were more likely to accept the software and to find answers to their questions. Context Many evaluations of the adoption of courseware have a positive result in that they tend to demonstrate that student learning is either unaffected or improved by the intervention; for example, see the 'no significant difference phenomenon' (Russell, 1999; Johnson, Aragon, Shaik and Palma-Rivas, 2000). However, there have been some more recent ethnographic studies (such as Hara and Kling, 1999) pointing out that students are not always pleased with the learning experience, and may be frustrated by the environment and their inability to talk with someone to solve their problems. In the more extreme cases they see such methods as attempts by universities to absolve themselves of their teaching duties (Noble, 1998). As a result, recent research has examined ways in which communication with and between students can be maintained within an online environment (Wegerif, 1998; Arvan, Ory, Bullock, Burnaska and Hanson, 1998). This work adds further evidence to the debate, and introduces a software adviser agent as part of the solution to the problem of maintaining dialogue. A particular feature of courseware that this work addresses is that of replacing practical laboratories with virtual practicals. There is a body of work in this area which aims at producing virtual or remote practicals (for example, Colwell and Scanlon, 2001) in order to make the experience available to distance students, to share expensive experiments more widely or to make dangerous experiments safer. The focus of this work was on making economies in terms of laboratory equipment, laboratory space and demonstrator time. The SCALPEL courseware The general principle in building the SCALPEL (Southampton Computer Assisted Learning Pathology Education Laboratory) courseware was to design an environment to replicate the same six case studies previously delivered in the traditional laboratory. The design consists of a main window containing these case studies. Hyperlinks were authored into the text to provide access to the same supplementary material previously provided in the lab, that is, pictures, videos and images of microscope work. Additional online background material could easily be referenced and searched when required. A Multiple Choice Question (MCQ) engine was also integrated with the courseware in order to allow the students immediate feedback on whether they were getting the correct answers to the questions in the case study. The courseware was implemented using Microcosm (Hall, Davis and Hutchings, 1996).
The rationale for this decision was mainly Microcosm's facilities for integrating with third-party applications; in this experiment two-way integration with the MCQ engine (provided by the STOMP TLTP project) was an important feature, and in the case where a student asked for help from background materials the engine had to be able to follow links back to content. Integration with the Media Viewer was also important. Two other features of Microcosm that were useful were generic links (Fountain, Hall, Heath and Davis, 1990), which allowed the rapid authoring of links from all occurrences of keywords and phrases to appropriate materials, and the computed-link facility, which used advanced text search features dynamically to locate suitable materials for the user. The initial design of SCALPEL was in 1996; at this stage the Web was still in its infancy and browsers had support for little more than the rendering of HTML. The Web was seen as encouraging a didactic view of learning, rather than the student-controlled exploratory style (Crook and Webster, 1997) that was needed for these case studies. Figure 1 shows a screenshot of the SCALPEL interface. The case study notes are on the right. The user has followed a link to see two histology slides, and the MCQ engine is ready for the student's answer to the question in the case study. It would be possible to create a fairly similar learning experience using any of the now widely available virtual learning environments (VLEs), but the choice of environment was fortuitous in making the courseware simple to deliver. Evaluation methodology The evaluations were undertaken in three consecutive years. In each year there were around 160 first-year undergraduate medical students. The unchanged selection procedures and the collection of demographic data gave us confidence that from year to year these students formed a very similar selection of the population, and that it was therefore possible to generalize results from year to year. It is probably reasonable to generalize the results further to the whole body of undergraduate medical students, but wider generalization to the population as a whole should only be done with caution. Quantitative data were collected using questionnaires. These questionnaires collected both factual information and attitudinal responses. The latter were collected on a five-point Likert scale (Strongly agree, Agree, Neutral, Disagree, Strongly disagree). The sense of the wording of some indicators was reversed to prevent the results being affected by, for example, students agreeing with everything. Complex attitudes were measured using a number of facets of that attitude, and then those facets were averaged. Questionnaires were pre-trialled with colleagues to highlight any problems before they were presented to students. All questionnaires were anonymous. In the first trial the practical sessions were timetabled, and questionnaires were distributed at the halfway point and at the end of the final practical, so that return rates were 100 per cent (N=157). In the second trial students completed the practicals in their own time, and were requested to return questionnaires when they had finished. This led to a much lower response rate (N=97/160). In the final trial, in spite of organizing a sweepstake offering two £50 prizes (students could cut a corner off their questionnaire, write their name on it and add it to the sweepstake, thus maintaining anonymity), the return rate fell even further (N=74/165).
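The reverse-wording and facet-averaging procedure described above is straightforward to make concrete. Below is a minimal Python sketch with entirely hypothetical item names and responses (the paper does not list its actual questionnaire items); on a five-point scale, reverse-coding maps a response r to 6 − r.

```python
import pandas as pd

# Hypothetical 5-point Likert items, coded 1 (strongly disagree) .. 5 (strongly agree).
responses = pd.DataFrame({
    "q1_scalpel_helpful":     [4, 2, 5],
    "q2_scalpel_frustrating": [2, 4, 1],  # reverse-worded item
    "q3_would_recommend":     [5, 1, 4],
})

REVERSED = ["q2_scalpel_frustrating"]
responses[REVERSED] = 6 - responses[REVERSED]  # map 1<->5, 2<->4, 3 stays

# Composite attitude = mean of the facet items, as the methodology describes.
responses["acceptance"] = responses.mean(axis=1)
print(responses)
```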
In addition to the quantitative data collection, focus groups were held and students were observed using the system. No attempt was made to quantify the data collected, but rather the results were used to add substance to and to help interpret the results gained from analysis of the questionnaires. Those who wish to see greater depth of methodology are referred to Michael Kemp's Ph.D. thesis (Kemp, 2000) to see the hypotheses tested, the statistical tests applied and the full experimental results. Online labs: the first evaluation The main purpose of the first evaluation was to discover whether there was any measurable change in the effectiveness of learning when lab practicals were replaced with SCALPEL courseware. Clearly an important facet of the introduction of any new teaching method is to convince staff that students would not fare worse as the result of the change. As a subagenda we chose to investigate whether there were any groups of students who fared better or worse as the result of the change, where groups were defined by such concepts as demographics, preferred learning style and previous exposure to computers. For this part of the study twenty-three hypotheses were tested: the fundamental three were about pedagogic value. 1. The educational attainment in pathology on average differs between medical students who work through practicals in the traditional laboratory and those who use SCALPEL. 2. Medical students find SCALPEL an acceptable pedagogic environment. 3. Medical students' acceptance of SCALPEL influences educational attainment. Hypotheses 4-9 looked at what aspects of students (demographics, learning styles, attitudes, and so on) influenced the pedagogic value of SCALPEL. Hypotheses 10-13 looked at what student characteristics affected their need for demonstrators. Hypotheses 14-18 looked at what characteristics might affect their attitude to computers and hypotheses 19-23 looked at how student characteristics affected learning style. For the purpose of this study the students were split into two groups. Group A did the first three practicals using the traditional lab and the next three practicals using SCALPEL, whereas Group B did the first three practicals using SCALPEL and the next three in the traditional lab. Demonstrators were present in the lab and in the computer rooms where the SCALPEL practicals were held in the normal timetabled slots. Educational attainment (ability to recall and apply subject domain knowledge) was measured by examination using MCQs (one mark for a right answer, minus one for a wrong answer) and short-answer tests. These examinations were taken by all 159 students after the first three practicals and after all practicals were completed. Questionnaires were administered before the start of the first practical, at the end of their final practical using SCALPEL and after all practicals were completed. The important outcomes of this evaluation were that there was no relationship between educational attainment as measured and the method used for study. Both groups of students scored similar marks at the crossover and at the end, and the marks at the end were an improvement on the marks at the crossover point. Acceptance of SCALPEL was distributed evenly around 'Neutral' and no significant relationship was found between educational attainment and acceptance of SCALPEL.
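The negative-marking scheme used in these MCQ examinations (one mark for a right answer, minus one for a wrong answer) amounts to a one-line scoring rule; the function below is an illustrative sketch, not the authors' marking code.

```python
def mcq_score(answers, key):
    """Negative marking: +1 for each correct answer, -1 for each wrong one."""
    return sum(1 if a == k else -1 for a, k in zip(answers, key))

# A student answering 8 of 10 questions correctly scores 8 - 2 = 6.
print(mcq_score("ABCDABCDAB", "ABCDABCDCD"))  # -> 6
```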
Other important results were that the students expressed a strong preference for maintaining the presence of demonstrators in the labs when using SCALPEL, and that there was a weak negative relationship between those that expressed a need for demonstrators and expressed acceptance of SCALPEL (that is, those who did not enjoy SCALPEL were more inclined to need demonstrators). These results were further emphasized by the focus groups. No significant relationships were discovered between students grouped by demographics, learning styles or previous exposure to computers, and their acceptance of SCALPEL or their educational attainment. Observations indicated that the dynamics of the lab sessions changed with the introduction of SCALPEL. The labs were quiet and the students worked mostly alone, although small ad-hoc groups of two or three students might form to discuss a problem occasionally. Demonstrators were observed moving around the lab to deal with individual queries rather than groups, and as a consequence often answered the same question many times. Few problems were observed with the software, although in their first session it was noticed that some students took a while to learn the importance of closing a window after finishing with its contents. Doing away with demonstrators: the second evaluation In spite of the strength of feeling expressed by the students in the first trial concerning the need for demonstrator assistance, the team were encouraged by the results showing no change in educational attainment, and decided to continue the experiment by using SCALPEL without demonstrators. For the second trial, all students were asked to complete the six case studies using SCALPEL. The work was self-paced, meaning they could do it when they liked as no demonstrators would be present in the computer rooms anyway, but tutorials would still be run at the set times and would cover the points that were raised from the last practical that they were expected to have completed. The important feature of this evaluation was the attempt to discover the methods that students used to solve problems they encountered during practicals. They were asked to note the number of times they would have liked to have consulted a demonstrator, why they wanted to consult a demonstrator, and how they eventually answered their question (whether they used the text book, lecture notes, resources within SCALPEL, asked a student, asked a member of staff, asked in tutorial, or whether they never did answer their question). Figure 4 below shows the variation of expressed need for demonstrators during SCALPEL sessions, and it is clear that year by year the students showed less need for demonstrators, in spite of the fact that, in Year 2 at least, nothing had been done to compensate for their absence. This effect is discussed more fully below. It was clear that the reasons for wishing to consult a demonstrator were nearly all concerned with the pathology content of the practicals and very rarely to do with procedural issues or operation of the software. Figure 2 shows the methods students used to solve their problems. Text books, tutorial dialogue and peer dialogue are clearly important. Staff were rarely consulted, and the use of the SCALPEL courseware materials is small, indicating that students did not think it likely that answers to their questions would be discovered online.
Students were also asked to suggest how SCALPEL might be improved by ranking the following and by making further suggestions: • glossary of terms; • online text books; • ability to ask questions online and get answers; • online notepad; • more links to background material; • optional extra practicals; • more animations; • sound clips to introduce sections, such as patient histories. Students were clear that their preferred improvements to the SCALPEL courseware would involve putting text books online, providing rich linking to the new online materials and the ability to ask questions online. These results were confirmed in focus groups; students showed little interest in the introduction of multimedia gizmos, in spite of the fact that a few high-quality examples had been created to demonstrate the sort of material that was envisaged. Introducing a hypertext adviser: the third evaluation The importance of dialogue with peers and teachers is well documented (see, for example, Laurillard, 2001; Schank and Cleary, 1995; Mayes, 1995). In the earlier versions of SCALPEL the only way that the system entered into any dialogue with the student was via the MCQ engine, which asked the student questions and gave them some feedback on their answers. Clearly this is limited by the fact that the subject of the dialogue is decided by the system and there is no opportunity for the students to ask their own questions. The fact that students were allowed to carry out the SCALPEL practicals at the time of their choice ruled out the use of synchronous communication methods for online dialogue, as there were unlikely to be any staff or many students online at a convenient moment. On the other hand, there has been much work recently on the part that asynchronous communication can play in providing tutorial support and dialogue (such as Masterton, 1998; Stratfold, 1998). Many of these approaches are based on work on 'Answer Gardens' (Ackerman and Malone, 1990). In this approach students pose questions, and the answers to these questions are collected in a kind of extended FAQ, organized by topic so that students can locate areas of similar questions to their own. As more questions are posed and answered, the answer garden grows. The final experiment set out to discover whether the introduction of a richer environment supported by an online 'demonstrator agent' would improve attitudes to SCALPEL. In this experiment all 165 students completed the same six practicals, but an experimental group of 40 volunteers were given additional access to an enhanced MCQ engine which provided access to the demonstrator agent. Of these, 28 students from the experimental group and 46 from the control group returned questionnaires. Figure 3 shows a typical screen shot of the demonstrator agent in use. The student wishes to answer question 2, but has a question. The demonstrator agent is aware of the context (the question the student is trying to answer) and is therefore able to list all other questions that have previously been asked in this context (as well as offering the student the opportunity to consult other contexts). In the example the student has selected to see the answer to one of these questions. Alternatively the student could have submitted a new query. This would have been answered in due course by a domain expert, and thus added to the list of previously asked questions. In this way the answer garden grows.
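The answer-garden mechanism described above is essentially a growing FAQ indexed by context (the case-study question the student was working on when they asked). The sketch below illustrates that data structure in Python; it is not the actual SCALPEL/Microcosm implementation, and all names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class QA:
    question: str
    answer: str | None = None  # None until a domain expert replies

@dataclass
class AnswerGarden:
    """Questions grouped by the context in which they were asked."""
    topics: dict[str, list[QA]] = field(default_factory=dict)

    def ask(self, context: str, question: str) -> QA:
        qa = QA(question)
        self.topics.setdefault(context, []).append(qa)
        return qa  # answered later by a domain expert; the garden grows

    def browse(self, context: str) -> list[QA]:
        """Previously asked-and-answered questions for this context."""
        return [qa for qa in self.topics.get(context, []) if qa.answer]

garden = AnswerGarden()
garden.ask("Case 3, Question 2", "Why is the tissue necrotic here?")
garden.topics["Case 3, Question 2"][0].answer = "Because ..."
print([qa.question for qa in garden.browse("Case 3, Question 2")])
```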
In the example the student has read the answer, but is perhaps not clear about the exact meaning of one of the terms that is used in the answer, so has pressed the Show Links button. This has sent a query to Microcosm asking it to show any links it has in its database which are anchored on terms which appear in the answer to the question. The screenshot shows Microcosm returning with two links to background materials (in this case a glossary of terms, but it might have been an online text book) that might be useful. The results of the evaluation were encouraging. Figure 4 shows the decreasing need that students as a whole show for consulting demonstrators. This was directly linked to a year-by-year improvement in the acceptance of SCALPEL. It is not clear why this year-on-year improvement occurred. One possible interpretation is that in the late 1990s computers were becoming increasingly widespread (this was confirmed in their responses to questions about previous exposure) and in particular the Web was becoming a standard source of reference. A generation of students brought up with expectations of learning from and referring to online materials is more likely to find courseware acceptable. There was little difference between the experimental group and the control group in the reasons expressed for wishing to consult a demonstrator. However, the control group expressed an average need of seven demonstrator consults per student compared with an average of five per student for the experimental group. The pattern for solving problems changed considerably, as shown in Figures 5 and 6. In particular we see that the SCALPEL courseware becomes much more used, questions to members of staff become used a little (by submitting questions via the agent) and the number of questions never solved is considerably reduced. There was a significant improvement in attitude to the use of SCALPEL recorded by those who had access to the demonstrator agent. This is attributed to a reduction in the average number of required demonstrator consults relative to the control group, and an increase in problems solved as a consequence of the demonstrator agent facilitating the resolution of misconceptions with the pathology material. Conclusions This series of evaluation experiments has demonstrated that there was no significant change in student learning as the result of replacing traditional labs with virtual labs, even with the earliest group who were most averse to using courseware. An unexpected result of this work was the fact that, over the three years, the first-year students were increasingly accepting of the use of courseware and found it less important to consult demonstrators. This was independent of any changes we made in the courseware. It is possible that this effect could be in some way linked to the reducing percentage of students that answered the surveys, but is more probably a reflection of the way society is changing. Over the period of this study in the late 1990s PCs, and in particular the Web and email, became standard tools for most educated citizens of the UK, and as a consequence the students were more accustomed to reading, learning from and interacting with computers. In spite of the general improvement in acceptance of courseware, there was a general preference for access to demonstrators.
This is not surprising; models of learning based on constructivist views (Brown, Metz and Campione, 1996) and conversational frameworks (Laurillard, 2001) would lead one to expect that removing demonstrator assistance would lead to a poorer learning experience, as reported by our students. The encouraging result was that the acceptance of the use of courseware was significantly improved by the reintroduction of a relatively simple software agent, which provided limited dialogue via an answer garden and dynamic access to suitable in-context links. It is testament to the success of this work that all pathology practicals at the University of Southampton were subsequently delivered to students through the SCALPEL environment.
5,429.2
2002-09-01T00:00:00.000
[ "Education", "Medicine", "Computer Science" ]
Student Interns’ Core Values: A Case Study of the Mathematics Education Program The research focuses on determining the student interns’ perceptions about the 4 core values according to Inprasitha (2013). The participants were 124 student teachers in the mathematics education program at KhonKaen University during the 2017 school year. Data were collected by questionnaires, interviews after their completion of the practicum, and participatory observation. Data were analyzed by descriptive statistics and content analysis. The results revealed that 100% of student interns perceived each value. The four values are: 1) Building Collaboration: they valued students’ working together and recognized the need to compromise opinions arising from mutual discussions while planning and reflecting together; 2) Open-Minded Attitudes: they valued listening to comments from their peers, and they valued waiting for students to understand the problem/situation by themselves and gave students the opportunity to think by themselves and explain their ideas; 3) Public Concern: they valued supporting student learning in the classroom and taking care of all students, and they could help with school activities; and 4) Emphasis on Product-Processes Approach: they valued thorough planning, anticipated students’ ideas and patiently waited for students to solve their own problems. Introduction Initial teacher education faces issues associated with transforming the identity of student teachers from that of student to that of teacher (Ponte & Chapman, 2008). Preservice preparation is a time to begin developing a basic repertoire for reform-minded teaching and is a time to form habits and skills necessary for the ongoing study of teaching in the company of colleagues (Feiman-Nemser, 2001). Initial teacher education is primarily concerned with developing proficiency in a number of different dimensions of teacher knowledge, from teachers' knowledge of mathematical content to teachers' knowledge of pedagogy and didactics (Liljedahl et al., 2009). So, the initial teacher preparation program is closely linked to and has a great impact on the quality of teachers (Inprasitha, 2013; Sowder, 2007; Office of the Education Council, 2015). In Thailand, the general initial teacher education program operates in accordance with the standards framework announced by the Ministry of Education, and the professional aspect is supervised by the Teachers Council of Thailand (Ministry of Education, 2011; The Teachers' Council of Thailand, 2013). During the last 3 decades, the initial teacher education program has undergone many changes. There are a variety of models of the program which encourage good students to enter the teaching profession. But the overall teacher productivity in the country still encounters many obstacles, both in quantity and in quality. This includes teachers who are unable to teach in accordance with education in the 21st century, and the most crucial problem among every model of the program is the spirit of the teaching profession (Inprasitha, 2013; Office of the Education Council, 2017). Inprasitha (2013) proposed that every institution lacks a practical approach that will mold students into the quality of teachers required by society (Inprasitha, 2013; Office of the Education Council, 2017). As mentioned above, the mathematics education program, Faculty of Education, KhonKaen University, Thailand, initiated a new model of teacher education based on Inprasitha's ideas (2013), which is driven by 4 core values.
This program aims to build the spirit of the teaching profession through participation in extracurricular activities related to the core values from year 1 to year 5. These 4 core values have been designed by weaving together the 3 steps of Lesson Study. So, student interns received training beginning in their first year through extracurricular activities, and they have been trained to be members of a professional learning community in a school project. The research question was: how do student teachers perceive the 4 core values throughout engagement in activities during their 5-year program? Mathematics Teacher Education Program According to Inprasitha's Ideas Thai traditional teacher education has a basic understanding that teachers have a duty to transfer knowledge from teacher to students. The initial teacher education program, therefore, conveys the value that teachers need to have mathematical content, teaching materials, and teaching methods to transmit content to students. But education for the 21st century gives importance to teaching and learning that emphasizes thinking skills, in which teachers are able to focus on the students (Inprasitha, 2016; Inprasitha, 2015; Inprasitha, 2007). As mentioned above, KhonKaen University, Thailand initiated a new model of teacher education in 2004. Inprasitha (2015) concludes that the features of the new teacher education program, according to Inprasitha's ideas, are as follows: 1) Building a community which is the immediate link between pre-service and in-service programs; 2) Designating what kinds of practices we want them to cultivate from the beginning (Thailand uses Open Approach and Lesson Study); 3) Since we enculturate a new classroom culture, the new teacher education program is what we call a "values-driven" program; and 4) A long-lasting, unresolved problem concerning the alignment of subject matter and pedagogy has been tackled by using the PCK idea. However, during the last 3 decades, the initial teacher education program in Thailand has undergone many changes, and the most crucial problem among every model of the program is the spirit of the teaching profession. As mentioned, KhonKaen University, Thailand initiated a new model of teacher education based on Inprasitha (2013), which is driven by 4 core values (Inprasitha, 2013). This 5-year Mathematics Education Program aims to build teachers with the spirit of the teaching profession through participation in 4 core value activities from year 1 to year 5. These 4 core value activities have been designed by weaving together the 3 steps of Lesson Study. In the final year, student teachers practice teaching at project schools that use the teaching practice of lesson study incorporating an open approach, consisting of a lesson study team, teaching practices in a weekly cycle, and school and university support. Inprasitha (2013) explained that during the last decade (2004-2013), the Faculty of Education, KhonKaen University was challenged to design a new type of teacher education program. Based on the idea of pedagogical content knowledge, they made a distinctive program by dividing the major courses into three categories, that is: collegiate or advanced mathematics, school mathematics, and mathematical learning processes.
Moreover, the program is intentionally planned based on the idea of educational values and educational theoretical frameworks: reflective thinking (Dewey, 1933, cited in Inprasitha, 2013) and community of practice as a learning community (Lave & Wenger, 1991, cited in Inprasitha, 2013). Based on the perspective of values education, four core values have been selected to drive the mathematics teacher education program (Inprasitha, 2016; Inprasitha, 2013). The details of the four core values are as follows (Inprasitha, 2013): 1) Building Collaboration emphasizes that members of the community are aware of the identity of each member and give value to all members, because everyone is an important person. While working together, they share ideas and also listen to the arguments and suggestions of each member in the community in order to ensure success. 2) Public Concern emphasizes that members of the community are aware of the public with regard to others rather than oneself. Having good intentions, students will gain a public mind through various activities, which will be a valuable experience that cultivates values based on Thai society and empowers students to be able to live in a society. 3) Open-Minded Attitudes emphasizes that members of the community are willing to listen to or consider new ideas from other members of the community, even criticism or suggestions. 4) Product-Processes Approach emphasizes students valuing their work in both the processes and the results. In particular, the emphasis on the process part helps students understand each procedure of the process and the history of those results. These 4 core value activities have been designed by weaving together the 3 steps of Lesson Study. Student interns have received training since their first year through a variety of social extracurricular activities, according to Table 1 and Figure 1, during which time the four core values were nurtured. The idea of a community of practice brought into the program a focus on individual participation in every activity (Inprasitha, 2007; Inprasitha, 2013). All extracurricular activities follow the process of lesson study based on Inprasitha (2010). The process has three steps, as shown in Figure 2: step 1, collaboratively planning; step 2, collaboratively doing; and step 3, collaboratively reflecting. They were trained to use the lesson study cycle as an approach for improving work on all activities, containing the four core values that drive the mathematics teacher education program. As Inprasitha (2013) put it: "The idea was practically implemented in the simplest fashion by allowing students the time to reflect upon whatever activity they had done." The central issue is "reflection" rather than what they had done (Inprasitha, 2016; Inprasitha, 2013). In the mathematics education program, Faculty of Education, KhonKaen University, student teachers have been trained to be members of a professional learning community in a school project including in-service teachers, principals, supervisors, graduate students, and teacher educators (Inprasitha, 2015). So, the structure of the mathematics education community comprises a group of people with a variety of statuses, including age, year and educational level. Throughout the program, from year 1 to year 5, student teachers participate in extracurricular activities in all 3 roles: the role of participant in activities as the younger students, the role of event organizer, and the role of senior.
The community structure in the program shows that students in the 5-year mathematics education program had the opportunity to participate in the main core value activities in 3 roles. In the final year of the Mathematics Education program, student interns use teaching practices with lesson study incorporating the open approach at project schools of the students' mathematical higher thinking development project in northeastern Thailand, consisting of a lesson study team working in a weekly cycle (Changsri, 2012). Lesson Study Lesson Study originated in Japan from the Japanese word "jugyokenkyuu". The major point of Lesson Study in every process is teachers working together (Stigler & Hiebert, 1999; Baba, 2007). It might be said that the simple steps of lesson study include collaboration in planning, observing and reflecting (Inprasitha, 2007). As Shimizu & Chino (2015) put it, in Japan there is growing interest in using Lesson Study to foster a practical grounding for prospective teachers as part of their pre-service education courses, and it plays an important role in teacher education in Japan, in both pre-service education for new teachers and in-service professional development for licensed teachers. In Thailand, Lesson Study was implemented originally by Associate Professor Maitree Inprasitha, whose open approach consists of four steps: 1) posing open-ended problems, 2) students' self-learning through problem solving, 3) whole-class discussion and comparison, and 4) summarization through association of students' mathematical ideas occurring in the whole class (Inprasitha, 2010). For Lesson Study in extracurricular activities, before student interns practiced at project schools, they attended activities both in courses and in extracurricular activities. For extracurricular activities, they were trained to work based on the lesson study cycle as an approach for improving work in all activities. The Mathematics Education program, Faculty of Education, KhonKaen University, promotes core values through extracurricular activities. The participants in each activity had a variety of statuses, ages, years and levels of education. Activities were designed for students to have the opportunity to take part in social activities throughout the 5 years and to take part in activities in 3 positions, including the younger students, event organizers and seniors. For all activities, the group of student teachers who were the event organizers conducted the 3 main steps of Lesson Study according to Inprasitha (2010) as follows: 1) Collaboratively planning: The group consists of undergraduate students, graduate students and lecturers. There is a division of responsibility by work department, with attention paid to detailed planning, taking a long time to plan over more than one cycle following these steps: collaboratively planning with peers, revising the plan, collaboratively planning with the seniors, revising the plan, collaboratively planning and getting advice from lecturers, and revising the plan with peers and seniors. 2) Collaboratively doing: Students have the opportunity to participate in all 3 positions, including the younger students, event organizers and seniors. All activities require cooperation in work, a common problem-solving situation, learning to accept other people's opinions and seeing the common interest rather than personal interest.
3) Collaboratively reflecting: this was divided into 3 phases: reflecting after joining all activities following the preparatory meeting, reflecting in subgroups according to each work department, and reflecting together with all work departments. There were two levels of reflecting after joint activities: level 1, reflection to improve the conduct of activities; and level 2, reflection for learning after engaging in social activity and building awareness of the core values. Ideas, ways of solving problems, etc., were recorded in order to bring them to the discussion in the third phase, as shown in Figure 5. 3) Collaboratively reflecting on teaching practice: At the end of the week, all teachers in the school have weekly reflections together. In this phase, the school principal conducted the meeting as moderator of this activity. Student interns act as instructors and present the results of their teaching to mentor teachers, other observers and school principals. They discussed students' ideas, students' responses to problems/situations and mathematical concepts, and recorded the results to improve teaching, as shown in Figure 6. The components of the extracurricular activities are consistent with the components of teaching practice in educational institutions, as shown in Table 2. The program has trained student interns to hold core values that support teaching practice in schools that emphasize students' thinking and learning together with others in professional learning communities. Research Objectives To analyze student interns' perceptions about the 4 core values: a case study of the mathematics education program. Participants This study was conducted in the context of the Mathematics Education Program, Faculty of Education, KhonKaen University, a 5-year initial teacher education program. Participants were 124 student interns of the Mathematics Education Program, Faculty of Education, KhonKaen University. They teach mathematics at either the primary or secondary level, choosing one for the whole academic year, but most of the participants taught at the elementary level. All participants were between 22 and 23 years of age and varied in gender. The participants studied in the 5-year mathematics education program from the academic years 2014 to 2018. (Figure 6 shows a student intern collaboratively reflecting with the school principal, mentor teacher and other teachers.) The practicum schools use lesson study incorporating the open approach, consisting of a lesson study team and teaching practices in weekly cycles. A small number of participants, 12 student interns, were then selected purposefully as a target group for in-depth study. Data Collection Data were collected in two phases. Firstly, we collected data from the 124 student interns after their completion of the practicum by using a questionnaire designed to determine the student teachers' perceptions about the 4 core values according to Inprasitha (2013), what they learned after participating in those activities, and the values/benefits of participating in extracurricular activities.
Secondly, after collecting data in phase 1, we collected data from 12 student interns for in-depth study by using a semi-structured interview, including clusters of questions about the practical way of conducting extracurricular activities according to the lesson study approach to enhance the core values following Inprasitha's ideas (2015), their reflections on what they learned after participating in those activities, and what they perceived about the values/benefits of participating in extracurricular activities for teaching practice. Methodological triangulation using questionnaires, interviews and participatory observation was applied. Data Analysis This research focused on student teachers' perceptions about the core values. For qualitative data from the questionnaire, we used the content analysis method. For the data from the interviews, we transcribed them and then analyzed the data by content analysis of student interns' perceptions about the 4 core values following Inprasitha's ideas (2015): building collaboration, open-minded attitudes, public concern and emphasis on product-processes approach. Research Results The research results focus on the core values that student interns perceived after they engaged in all activities of the Mathematics Education Program, Faculty of Education, KhonKaen University. Student interns have perceptions about the core values as follows. 1) Building collaboration: "The program focuses on working with others, whether the person who you work with is known or unknown, because I always know to accept the opinions of others and to work with others who have level/age differences, and learn to work with peers in the same year who have a variety of attitudes." (Student intern number 99, February 21, 2018) During the practicum, they used teaching practices with an emphasis on student collaboration. While planning together, student teachers valued the opinions generated by the discussion. While reflecting together, student teachers valued the reflection on students' ideas for improving teaching. 2) Open-minded attitudes: They valued listening to comments from their peers and learning to accept the others with whom they collaboratively plan and reflect. Student interns valued the acceptance of others' opinions and the acceptance of individual differences, with an open mind. The following are some examples of opinions as expressed by the student interns: "The experience gained from the program helped me to work with others and plan systematically. Reflections after working helped me learn about problems in the workplace and anticipate various problems. So I accepted other people's ideas and was a person who is ready to learn and participate in the face of events/situations on my own, rather than closing myself off from things that are useful for learning and self-improvement." (Student intern number 26, February 21, 2018) During the practicum they valued patiently waiting for students to understand the problem/situation by themselves, providing opportunities for students to solve problems on their own and giving students the opportunity to show their reasons. From participatory observation, during the phase of posing the problem/situation, student interns valued creating an atmosphere for students to participate in answering questions and allowing all students to express their opinions, whether right or wrong.
They focused on participation according to student experience and gave the opportunity for students to express their opinions on open-ended problem situations, listening to every student's answer. (Of the 124 student interns, 100% perceived each of the core values, including open-minded attitudes, public concern, and emphasis on product-processes approach.) From Item46 to Item56, a student intern presented a picture as the material on the board and asked, "How many blocks are there?" One student replied, "There are ten", which corresponded to the number of blocks in the picture. The student intern repeated the students' responses without judging whether they were right or wrong, then asked the students, "What about the other students?" The following are some examples as expressed in the classroom from Item46 to Item56: Item46 Student intern: Let's guess what activities we will have. (Student intern sticks picture on the board) Item47 students: I saw the blocks. Item48 Student intern: Saw the blocks. Item49 student 1: There were ten blocks. Item50 Student intern: Ten blocks, what do other people see? Item51 student 2: Square block. Item52 Student intern: Next, what are the next predictions? How much is this? Item53 students: Ten/Nine. Item54 Student intern: Why is it nine? Item55 student 3: It intersects. Item56 Student intern: It contrasts with another one. What about other students? 3) Public concern: Every activity must be done together and there is mutual help between members, so student interns valued seeing the common interest rather than personal interest. The following are some examples of opinions as expressed by the student interns: "Activities in the program must be done together, planned together and reflected on together. Everyone must sacrifice personal time to help friends organize various activities. We have to help each other. When the younger generation are the event organizers, we must join in their activities and encourage them." (Student intern number 91, February 21, 2018) Valuing public concern promotes an emphasis on facilitating student thinking in the classroom, valuing caring for every student, and valuing assistance with the activities of the school. From participatory observation, student interns valued the role of facilitating the management of students' ideas. The student interns prepared materials to expand the students' ideas. Student interns facilitated student thinking and took care of all students by attaching learning materials and writing on the board according to the words that students used to describe their reasons/ideas, as shown in Figure 7. 4) Emphasis on product-processes approach: "An activity in the program that is useful for teaching practice is the study tour activity, which students in year 3 organize, because organizing this activity requires systematic planning and problem solving, including having to be careful and working step by step according to the process to prevent errors from occurring." (Student intern number 64, February 21, 2018) Valuing the emphasis on the product-processes approach prompted them to recognize the importance of systematic work planning, with emphasis on anticipating the students' ideas in order to prepare teaching. They patiently waited for students to solve problems by themselves and did not disturb them. As mentioned, student teachers were trained from their first year through a variety of extracurricular activities, and they were also trained to be members of a professional learning community in a school project.
The results of this research in phase 2 focus on the 4 core values that student interns perceived during the practicum, as shown in Tables 4-6. While collaboratively designing the research lesson, the perceived values across the core value dimensions included: - Express opinions in a creative way that is useful for improving teaching. - Valuing compromise opinions arising from mutual discussions while planning together. - Listen and accept the opinions of others. - Listen to suggestions to improve the teaching. - Act as facilitator during collaborative planning. - Produce the learning materials for students intentionally. - Valuing planning the lesson plans thoroughly at every step. - Valuing anticipating students' ideas to prepare for effective teaching. While collaboratively reflecting on teaching practice, they included: - Reflecting on the issues that arise in the classroom for improving teaching and developing students' thinking. - Listening and accepting suggestions to improve teaching practices. - Improve operations according to suggestions. - Acting as facilitator during collaborative reflection. - Reflect on the results according to the set point. - Emphasis on reflecting on the results of the students' ideas in detail. - Emphasis on listening to suggestions from others. Conclusion The Mathematics Teacher Education Program (5-year course), Faculty of Education, KhonKaen University has implemented a values-driven teacher education program according to Inprasitha (2015, 2013, 2010). It was built upon major core concepts: problem solving as a driving force for mathematical thinking, teachers learning together, and reflection, using two innovations: the open approach as a teaching method and lesson study as a way to improve teaching (Inprasitha, 2015). Lesson study and the open approach are integrated into activities throughout the program, both in courses and in extracurricular activities. This program aims to build the spirit of the teaching profession through participation in extracurricular activities as core-value activities from year 1 to year 5. These core-value activities have been designed by weaving together the 3 steps of lesson study. Student interns are aware of the value of cultivating their values through participation in the teacher production program as organizers and participants. Progressing from year 1 to year 5 is a great way to learn the core values through practices consistent with the concept of learning in sociocultural theories (Lerman, 2013) and the situative perspective that describes learning as a change of participation in social activities (Lave & Wenger, 1991). Learning takes place in the real world of "practice" within the community of practice (Wenger, 1998). As a result, student teachers have the opportunity to absorb the values they need to cultivate, and student teachers' values are one of the criteria which affect their performance in their teaching practices (Bishop, Seah, & Chin, 2003; Inprasitha, 2013) and indicate good quality learning and teaching (Lovat, 2005; Brady, 2011). Student interns of the Mathematics Education Program, Faculty of Education, KhonKaen University have the following core values: 1) Building collaboration: Every activity was organized according to the lesson study cycle, so student teachers recognized working together through planning, participation and reflection.
Participation in the courses and extracurricular activities helps student interns learn to work with others and follow teaching practices with an emphasis on student collaboration. While planning together, student teachers value the compromise of opinions generated by the discussion. While reflecting together, student teachers value the reflection on students' ideas for improving teaching. 2) Open-minded attitudes: Student interns value the acceptance of others' opinions and the acceptance of individual differences, being open-minded in accepting new ideas, and patiently waiting for students to understand the problem/situation by themselves while providing opportunities for students to solve problems on their own and to show their reasons. 3) Public concern: Every activity in the program and school involves spending time together with mutual help between members, so everyone values seeing the common interest rather than personal interest. This value promotes an emphasis on facilitating student thinking in the classroom, valuing caring for every student, and valuing assisting in the activities of the school. 4) Emphasis on product-processes approach: Every activity gives value to detailed planning and anticipating what will happen, with an emphasis on patiently waiting to hear the opinions of others, reflections for improving work, and passing information to those responsible for activities in the following year. This value prompts the importance of systematic work planning, an emphasis on anticipating students' ideas in preparing teaching, and patiently waiting for students to solve problems by themselves without disturbing them. Building community and participation in social practice in the courses and extracurricular activities with lesson study cycles according to Inprasitha (2013) provides a sound structure for professional learning and helps student teachers perceive and develop their core values. The research results support the view that one important element of an initial teacher education program is cultivating core values, and that this can occur through doing and reflecting on practice with others. Promotion of core values is a long-term process and results from accumulation through continuous participation in social activities. Finally, the essence of the activity should also reflect the core values.
6,066.2
2019-07-04T00:00:00.000
[ "Mathematics", "Education" ]
Inlet Effect Caused by Multichannel Structure for Molecular Electronic Transducer Based on a Turbulent-Laminar Flow Model. The actual fluid form of an electrolyte in a molecular electronic converter is an important factor that causes a decrease in the accuracy of a molecular electronic transducer (MET) liquid motion sensor. To study the actual fluid morphology of an inertial electrolyte in molecular electronic transducers, an inlet effect is defined according to the fluid morphology of turbulent-laminar flow, and a numerical simulation model of turbulent-laminar flow is proposed. Based on the turbulent-laminar flow model, this paper studies the variation of the inlet effect intensity when the thickness of the outermost insulating layer is 50 µm and 100 µm, respectively. Meanwhile, the changes in the inlet effect intensity and the error rate of the central axial velocity field are also analyzed for different input signal intensities. The numerical experiments verify that the thickness of the outermost insulating layer and the amplitude of the input signal are two important factors which affect the inlet effect intensity and thus the accuracy of the MET. Therefore, this study can provide a theoretical basis for the quantitative study of the performance optimization of a MET liquid sensor. Introduction Solid-state accelerometers are widely employed in many fields. Solid-state inertial sensors respond better at high frequencies than in the low-frequency domain [1]. The molecular electronic transducer (MET) liquid motion sensor has lower self-noise and better accuracy in the low-frequency range, making it a good choice for low-frequency applications [2,3]. In recent years, the method of tracing and analyzing characteristic parameters of electrochemical processes by numerical simulation has been used widely. The parameter characteristics of a MET can also be studied in this way. Zaitsev established a noise model and analyzed the influence of the self-noise of a sensor within the frequency range of 0.01-200 Hz [4]. Zhou studied the influence of a MET elastic film on its low-frequency performance through numerical simulations and designed elastic films suitable for different cavities to optimize MET characteristics [5]. Vadim studied the MET convective noise model, and the simulation results were basically consistent with the experimental data [6]. Ivan established a noise model of METs in the full frequency operating range to study the technical parameters and characteristics of METs [7]. Sun established two-dimensional and three-dimensional laminar flow models of the MET sensor, revealing the relationship between electrolyte velocity, concentration distribution of active ions, and current density [8]. Huang, Agafonov, and Yu reviewed METs as motion sensors and applied them for planetary exploration [9]. In previous studies, a laminar flow model was established to study the flow signal of a single-channel electrolyte, and an ideal output signal was used for simulation; however, the impact of the MET multichannel coupling effect in practical applications was not considered [10]. The MET multichannel interaction causes irregular movements in some electrolytes at the inlet and outlet of the channel, which forms a turbulent area with unstable flow field distribution [11]. Laminar flow refers to the layered flow of electrolyte fluid. There is only relative sliding between two adjacent layers of fluid, and there is no lateral mixing between the flow layers.
Turbulent flow means that the fluid no longer maintains a layered flow but may flow in all directions. Turbulence has radial velocities perpendicular to the axial direction, and mixing occurs between the flow layers. The essential difference between laminar and turbulent flow is that laminar flow has no radial velocity, while turbulent flow does. Because of the collision between the electrolyte in the reaction chamber and the outermost insulating layer of the sensitive chip during the movement, and because of the small-radius porous structure of the sensitive chip, the electrolyte flow in the channel changes to a more complex turbulent-laminar mode. In previous studies, other researchers neglected the coupling between the reaction chamber and the multi-channel structure of the sensitive chip: a single channel of the sensitive chip is usually simulated, and the electrolyte flow in the channel is treated as laminar. Under this assumption the output signal and other characteristics of the sensor can be represented reasonably well, but the model cannot fully express the multi-channel coupling effect in the turbulent-laminar flow mode. To solve this problem, a multi-channel model of the MET is proposed; this model describes turbulent and laminar flow patterns better than the single-channel model. In addition, this paper qualitatively analyzes the two main factors that determine the intensity of the inlet effect, and their impact on the accuracy of the sensor. Geometry Model The physical structure of a MET reaction chamber is shown in Figure 1. The reaction chamber is surrounded by an insulating layer and filled with an electrolyte solution. The schematic of a single-channel model is shown in Figure 2. The sensitive chip has four platinum electrode layers (thickness: 40 µm) arranged in an anode-cathode-cathode-anode (ACCA) configuration. They are separated by three internal insulation layers (40 µm) and bounded by two outermost insulation layers (100 µm). The nine layers are compacted into a multichannel array of columnar channels. Each channel is circular with a radius of 0.05 mm, and the channel centers are spaced 0.25 mm apart. As the electrolyte solution flows through the multichannel array, the solid parts of the sensor act as a barrier everywhere except the channels. This solid barrier produces an inlet effect on each channel, and therefore the electrolyte flow in the multiple channels is no longer purely laminar, but turbulent-laminar. Numerical Model We use the incompressible Navier-Stokes equations to study the electrolyte flow field in the reaction chamber [8]:

$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla P + \frac{\mu}{\rho}\nabla^2 \mathbf{u} + \mathbf{a}, \qquad \nabla\cdot\mathbf{u} = 0,$$

where $\mathbf{u} = (u, v, w)$ contains the velocities in the X, Y, and Z directions; $\mathbf{a} = (a_x, a_y, a_z)$ is the acceleration in the three directions; $P$ is the pressure; and $t$ is the time. The density and dynamic viscosity of the electrolyte are given by empirical correlations of the form

$$\rho = \rho_w + \sum_{i,j} A_{ij}\, c^i T^j, \qquad \mu = \mu_w \Big(1 + \sum_{i,j} B_{ij}\, c^i T^j\Big),$$

where $\rho_w$, $\mu_w$ are the density and viscosity of water, $c$ is the mass fraction of salt in the electrolyte, $A_{ij}$, $B_{ij}$ are coefficients for the particular salt, and $T$ is the temperature of the electrolyte. In this study, we used the Poisson equation to describe the electric potential:

$$\nabla^2 \varphi = -\frac{\rho_e}{\varepsilon},$$

where $\varepsilon$, $\varphi$, and $\rho_e$ are the permittivity (F m⁻¹), potential (V), and charge density (C m⁻³), respectively.
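To make the potential equation concrete, the following Python fragment is a minimal finite-difference sketch of the Poisson equation above (Jacobi iteration on a uniform 2-D grid with grounded boundaries). It is an illustrative toy, not the COMSOL Multiphysics discretization used in this work; the grid, charge distribution, and iteration count are hypothetical.

```python
import numpy as np

def solve_poisson_2d(rho_e, eps, h, n_iter=5000):
    """Jacobi iteration for laplacian(phi) = -rho_e/eps on a uniform grid
    of spacing h, with phi = 0 (grounded) on all boundaries."""
    phi = np.zeros_like(rho_e)
    for _ in range(n_iter):
        phi[1:-1, 1:-1] = 0.25 * (
            phi[2:, 1:-1] + phi[:-2, 1:-1]
            + phi[1:-1, 2:] + phi[1:-1, :-2]
            + h**2 * rho_e[1:-1, 1:-1] / eps
        )
    return phi

# Example: a hypothetical point charge density in a channel cross-section
rho_e = np.zeros((101, 101)); rho_e[50, 50] = 1e-3    # C/m^3
phi = solve_poisson_2d(rho_e, eps=7.1e-10, h=1e-6)    # eps ~ 80 * eps0
```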
The mass transport, subject to mass continuity, is described by the Nernst-Planck equation [12]:

$$\mathbf{N}_i = -D_i \nabla c_i - z_i u_{m,i} F c_i \nabla \varphi + c_i \mathbf{u}, \qquad \frac{\partial c_i}{\partial t} + \nabla\cdot\mathbf{N}_i = R_i,$$

where $D_i$ is the diffusion coefficient, $z_i$ the charge number, $u_{m,i}$ the mobility and $c_i$ the concentration of species $i$; $\mathbf{u}$ is the velocity vector (m s⁻¹); and $R_i$ is the mass source of species $i$ (mol m⁻³ s⁻¹). Because the model is highly nonlinear, it must be resolved over multiple length and time scales. It is assumed that the electrolyte is electrically neutral throughout the process and that the concentration of active ions is small compared to the concentration of the background electrolyte [13,14]. Since the whole electrolyte is electrically neutral, the current density in the electrolyte obeys Ohm's law, which under constant conductivity reads

$$\mathbf{i} = -\sigma \nabla \varphi, \qquad \nabla\cdot\mathbf{i} = Q.$$

Meanwhile, the concentration and velocity of the electrolyte in incompressible flow are constrained by Fick's second law:

$$\frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c = D \nabla^2 c,$$

where $\sigma$ is the conductivity (S m⁻¹), $Q$ is the charge source (A m⁻³), and $F$ stands for the Faraday constant. Under the assumption of electroneutrality, the above partial differential equations and the Nernst-Planck equation are combined with Butler-Volmer-type kinetics to describe the activity of ions on the electrode surface, where $n$ is the number of electrons transferred per mole, $k_a = k_c = 4 \times 10^{-9}$ m²/s are the reaction constants of the anode and cathode, $\alpha$ is the charge-transfer coefficient of the cathode reaction, set to 0.5, $U = 0.8$ V is the potential difference between the two pairs of electrodes, and $E_0 = 0.54$ V is the equilibrium potential [15]. Boundary Conditions and Parameter Settings The electrolyte of the model is an iodine-potassium iodide solution. The essence of the reaction is the mutual transformation of iodide and triiodide ions, i.e., $\mathrm{I_2 + I^- \rightleftharpoons I_3^-}$. Potassium ions do not participate in the reaction, and therefore a zero-flux boundary condition is used for potassium ions on the electrodes. To help the simulation converge, the no-slip boundary condition is applied to all solid surfaces. On the surface of the insulation layer, the electrical insulation condition is applied for the electric field and the zero-ion-flux boundary condition is applied for ion transport. At the outlet, given its distance from the electrodes, the effect of the electric field is ignored. Initial parameter settings are listed in Table 1 [8]. COMSOL Multiphysics, which is well suited to electrochemical simulation, is selected for the multi-physics modeling and simulation analysis. The multi-channel simulation is set up as follows. First, the Nernst-Planck equation and the laminar fluid flow interface are added in COMSOL Multiphysics, and the reaction cavity and sensitive-element model are defined according to the geometric dimensions. Second, the MET parameters are defined according to the initial parameters and boundary conditions. Third, the electrolyte composition is defined as iodine-potassium iodide, the electrode material as platinum, and the insulating material as ceramic. Finally, the mesh is generated and refined near the electrodes. Evidence of Turbulent-Laminar Flow Phenomena and Multichannel Inlet/Outlet Effects The above mathematical model was implemented in COMSOL Multiphysics, a multi-physics finite element analysis package.
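As an illustration of the electrode kinetics referenced above, the sketch below evaluates a generic Butler-Volmer flux using the constants stated in the text (k_a = k_c = 4 × 10⁻⁹, α = 0.5). The functional form is one common convention, and the overpotential and concentrations are placeholder values; the exact boundary expression used in the model is not reproduced in the extracted text.

```python
import numpy as np

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(K*mol)
T = 298.15    # electrolyte temperature, K (assumed)

def butler_volmer_flux(c_red, c_ox, eta, k=4e-9, alpha=0.5, n=1):
    """Generic Butler-Volmer electrode flux for an n-electron transfer
    with equal anodic and cathodic rate constants k; eta is the
    overpotential E - E0 in volts."""
    f = n * F / (R * T)
    return k * (c_red * np.exp((1 - alpha) * f * eta)
                - c_ox * np.exp(-alpha * f * eta))

# Placeholder concentrations (mol/m^3) and an overpotential derived from
# the stated U = 0.8 V and E0 = 0.54 V, assuming the bias splits evenly
# over the two electrode pairs:
print(butler_volmer_flux(c_red=300.0, c_ox=4.0, eta=0.8 / 2 - 0.54))
```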
By applying a sinusoidal signal with an acceleration of a = 0.01 sin(πt) m/s² to the electrolyte in the reaction chamber, the distribution of the electrolyte flow field for the model in Figure 1 is obtained, as shown in Figure 3. Figure 3 shows that at the inlet and outlet of the channel structure, the kinetic energy of the electrolyte is partially accumulated and consumed. Because the electrolyte passes between two regions of very different spatial scale, from the reaction chamber to the porous channel, its kinetic energy is concentrated at the inlet and outlet positions of the sensitive chip. Consequently, particles in the electrolyte move in irregular directions. This random movement of the electrolyte is concentrated at the inlet/outlet of the multi-channel structure of the sensitive chip, which indicates that an inlet effect exists at the inlet/outlet positions of the sensitive chip in the reaction chamber. The flow mode of the electrolyte was simulated with the new model (Figure 4a) and compared with the simulation of Sun's laminar flow model (Figure 4b) [8-11]. As shown in Figure 4a, the radial velocity distribution gradually decreases from the inlet position, tends to zero near 90 µm, and stays near zero from 90 µm to 370 µm. At 370 µm, the radial velocity increases again from zero. As shown in Figure 4b, the radial velocity distribution decreases gradually from the inlet position, tends to zero near 40 µm, and stays near zero from 40 µm to 400 µm. At 400 µm, the radial velocity increases again from zero. In Figure 4a,b, the region in which the radial velocity field tends to zero is the region in which the fluid mode of the electrolyte in the channel has become fully laminar. The region where the radial velocity field fluctuates at the inlet and outlet of the channel is the turbulent region. We define the change in the flow state of the electrolyte that occurs at the inlet and outlet of the channel as the inlet effect. In Figure 4b, there is also a turbulent region at the inlet and outlet, but because the laminar flow model ignores the channel-to-channel interactions and the barrier formed by the chip structure, this turbulent region is very narrow. The velocity field distribution in the laminar flow model is thus also affected by the inlet effect, but only weakly. In Figure 4a, the transition of the electrolyte flow state from turbulent to laminar and back requires a longer distance, and the variation of the radial velocity field is smoother than in Figure 4b; the fluid changes described at the inlet and outlet locations are more realistic. Based on the above, the turbulent-laminar flow model describes the inlet and outlet effect better. Analysis of the Inlet/Outlet Effect As shown in Figure 5a, when the thickness of the outermost insulation layers is 50 µm, the intensity of the inlet effect is large, and the change in the axial velocity distribution caused by the inlet effect reaches the electrodes; the inlet effect induces a disturbed velocity field in the electrode region where the electrochemical reaction occurs. This degrades the accuracy of the chip, so the output current of the electrodes cannot faithfully track the external excitation signal.
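The 90 µm and 40 µm transition points quoted above can be read off automatically from a simulated radial-velocity profile; the helper below (name, tolerance, and input arrays are hypothetical) returns the axial position at which the radial velocity has effectively decayed to zero, i.e. where the flow becomes fully laminar.

```python
import numpy as np

def laminar_onset(z_um, v_radial, tol=1e-3):
    """First axial position (micrometres) at which |v_radial| drops below
    tol times its inlet magnitude; returns None if it never decays."""
    v_norm = np.abs(v_radial) / np.abs(v_radial[0])
    below = np.nonzero(v_norm < tol)[0]
    return z_um[below[0]] if below.size else None
```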
As shown in Figure 5b, when the thickness of the outermost insulation layers is 100 µm, the intensity of the inlet effect is small, the change in the axial velocity field distribution does not spread to the vicinity of the electrodes, and the effect on the accuracy of the sensor is small. This shows that the thicker the outermost insulation layer, the weaker the inlet effect and the smaller its influence on the accuracy of the sensor. We take the velocity field distribution in the quarter cycle under the sinusoidal excitation signal, as shown in Figures 6 and 7. As shown in Figure 6, the amplitude of the axial velocity field increases gradually as the intensity of the excitation signal increases. As shown in Figure 7, the area affected by the inlet effect in the axial velocity field grows with the intensity of the excitation signal, i.e., the inlet effect intensity increases gradually. This change destabilizes the velocity field near the electrodes, disordering the velocity field in the electrode region and thereby affecting the accuracy of the sensor. To study this influencing factor further, we define an error rate

$$e_x = \frac{v_x - v_{\mathrm{steady}}}{v_{\mathrm{steady}}} \times 100\%$$

to describe the influence of the inlet effect on sensor accuracy; here $v_x$ is the axial velocity on the central axis of the channel, and $v_{\mathrm{steady}}$ is the ideal stable value of the axial velocity on the central axis, unaffected by the turbulent-laminar noise. Under Sun's laminar flow condition, the error rate is essentially constant and shows no clear relationship with the amplitude of the input signal: because there is no inlet effect in the simple laminar flow model, the error rate in the sensor channel hardly changes. It is always low and can be considered an ideal state. The acceleration of the excitation signal increases with the intensity of the sinusoidal excitation signal, as shown in Figure 8. When the electrolyte in the channel of the sensitive chip moves with a small acceleration (from a = 10⁻⁷ m/s² to a = 10⁻⁴ m/s²), the error rate is about 0.1%, i.e., in the lower region, and the axial velocity is only weakly affected by the inlet effect; the inlet effect then has a small influence on sensor accuracy. When the electrolyte moves with a large acceleration (a = 10⁻³ m/s² or a = 10⁻² m/s²), the error rate is about 1%-10%, i.e., in the higher region, and the axial velocity is strongly affected by the inlet effect; the inlet effect then has a large influence on sensor accuracy. Figure 6 shows that the greater the excitation signal strength, the greater the influence of the inlet effect on the axial velocity. In conclusion, the higher the excitation signal strength, the higher the error rate; and the greater the intensity of the inlet effect, the larger its impact on sensor accuracy. The amplitude of the input signal is therefore an important factor affecting the strength of the inlet effect and the accuracy of the sensor. The inlet effect is influenced by many factors, such as the composition of the electrolyte and the number, size, location and shape of the holes in the sensor.
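The error rate defined above is straightforward to evaluate from simulated velocity traces; a minimal Python version follows, with the input arrays standing in for the central-axis velocities extracted from the model.

```python
import numpy as np

def error_rate(v_x, v_steady):
    """e_x = (v_x - v_steady) / v_steady * 100%, where v_x is the axial
    velocity on the channel's central axis and v_steady is its ideal
    steady value, unaffected by the turbulent-laminar noise."""
    return (np.asarray(v_x) - v_steady) / v_steady * 100.0

# Example: a 1% overshoot of the axial velocity gives an error rate of 1.0
print(error_rate([1.01e-4], v_steady=1.0e-4))
```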
This paper covers only two qualitative factors influencing the inlet effect: the thickness of the outermost insulation layer and the amplitude of the input signal. Studying the remaining factors, and all of these factors quantitatively, will be the direction of our future research. Conclusions In previous studies, a laminar flow model was used to study the morphology of electrolyte fluids, ignoring the coupled inlet effect between multichannel structures that arises in practice. The turbulent-laminar flow model proposed in this paper describes this inlet effect better. Compared with the single-channel laminar flow model, the real advantage of this model lies in the coupled design of the reaction chamber and sensitive-element models: the simulated electrolyte flow pattern is closer to the actual situation. This model can be configured with different MET structural parameters and boundary conditions and applied to MET noise research, MET elastic membrane structure research, studies of the electrolyte flow field in the reaction chamber, and so on. It provides not only a more realistic simulation model for studying MET performance, but also helps to optimize MET performance and to find configurations that achieve optimal performance of MET sensors for different fields and application environments. The model in this paper can therefore broaden the application field of MET electrochemical sensors. Through numerical simulation, the actual fluid morphology of an electrolyte was studied qualitatively, and the existence of the inlet effect was demonstrated. It was also found that the thickness of the outermost insulation layer and the amplitude of the input signal are the two factors that affect the intensity of the inlet effect and the accuracy of the sensor. The greater the thickness of the outermost insulation layer, the smaller the influence of the inlet effect and the smaller the influence on sensor accuracy. The larger the amplitude of the input signal, the greater the impact of the inlet effect and the greater the impact on sensor accuracy. In this paper, the two factors that affect the intensity of the inlet effect and the accuracy of the sensor were studied qualitatively; no method was yet proposed to exploit this effect to improve the accuracy of the MET in practical applications. In future studies, we will optimize the structural layout of the sensor elements in the reaction chamber through quantitative research on the thickness of the outermost insulation layer. We will also study the amplitude of the input signal using vibration-level calibration corresponding to different application scenarios and vibration signals. Sensors with different vibration-level calibrations can then be used to maximally suppress the fluctuations caused by changes in the electrolyte velocity field. We expect that these studies will further optimize the effectiveness of MET liquid sensors for application in more fields.
Activation leads to a significant shift in the intracellular redox homeostasis of neutrophil-like cells Neutrophils produce a cocktail of oxidative species during the so-called oxidative burst to attack phagocytized bacteria. However, little is known about the neutrophils' redox homeostasis during the oxidative burst, and there is currently no consensus about the interplay between oxidative species and cellular signaling, e.g. during the initiation of the production of neutrophil extracellular traps (NETs). Using the genetically encoded redox sensor roGFP2, expressed in the cytoplasm of the neutrophil-like cell line PLB-985, we saw that stimulation by both PMA and E. coli resulted in oxidation of the thiol residues in this probe. In contrast to the redox state of phagocytized bacteria, which completely breaks down, the neutrophils' cytoplasmic redox state switched from its initial −318 ± 6 mV to a new, albeit more oxidized, steady state of −264 ± 5 mV in the presence of bacteria. This highly significant oxidation of the cytosol (p = 7 × 10⁻⁵) is dependent on NOX2 activity, but independent of the most effective thiol oxidant produced in neutrophils, MPO-derived HOCl. While the shift in the intracellular redox potential is correlated with effective NETosis, it is by itself not sufficient: inhibition of MPO, while not affecting the cytosolic oxidation, significantly decreased NETosis. Furthermore, inhibition of PI3K, which abrogates cytosolic oxidation, did not fully prevent NETosis induced by phagocytosis of bacteria. Thus, we conclude that NET formation is regulated in a multifactorial way, in part by changes of the cytosolic thiol redox homeostasis in neutrophils, depending on the circumstances under which the generation of NETs was initiated. Introduction Neutrophils are the most abundant circulating granulocytes in the human body. As the first defenders of our immune system, neutrophils attack pathogens by several means. Upon encounter, pathogens such as bacteria are engulfed and internalized into compartments in neutrophils, a process called phagocytosis. As the phagosome matures into the phagolysosome by fusion with different intracellular granules, encapsulated bacteria are attacked by a mixture of toxic molecules including antimicrobial proteins and potent oxidants [1]. The production of reactive oxidants within the phagolysosome is initiated by assembly and activation of the membrane complex NADPH oxidase 2 (NOX2) [2,3]. Activated NOX2 transfers electrons from NADPH to phagosomal oxygen, which generates the superoxide anion (O2•−). Oxidants derived from this radical include hydrogen peroxide (H2O2) and the hydroxyl radical (•OH). H2O2 reacts further with chloride to form HOCl, a highly reactive oxidant, in a reaction catalyzed by myeloperoxidase (MPO) [4,5]. The activity of NOX2 is known to be essential for the killing of microbes. Individuals suffering from chronic granulomatous disease (CGD), a hereditary disease in which NOX2 is inactive, are highly susceptible to microbial infections [6]. Oxidants produced downstream of NOX2 can directly react with and thus oxidatively damage cellular components of trapped microbes [7-9]. A growing body of evidence highlights NOX2-related oxidants also as important signaling molecules that regulate cellular functions [10-13].
As such, NOX2 as well as MPO activity was shown to be involved in activating the formation of neutrophil extracellular traps (NETs), another crucial antimicrobial mechanism in neutrophils [14-17]. Due to the transient nature of the phagosomal environment, quantitative redox measurements have proven to be difficult [18]. Conventional methods include HPLC quantification of redox pairs after cell disruption and the use of redox-active fluorogenic dyes such as the widely used 2′,7′-dichlorodihydrofluorescein (H2DCF) [19-22]. However, those approaches often lack specificity, are prone to photobleaching, or simply cannot be used for dynamic subcellular measurements in living cells [23-25]. Many of those limitations were overcome by genetically encoded redox sensors. roGFP2, a variant of the enhanced green fluorescent protein (EGFP), has been widely used to study redox dynamics in various cell compartments across different organisms [26-30]. As in EGFP, the chromophore of roGFP2 is formed by the cyclization of residues 65-67 (Thr-Tyr-Gly). In close proximity to the chromophore are two engineered cysteine residues (C147 and C204). When they form a disulfide bond, a reversible conformational change in roGFP2 promotes the protonation of Tyr66. roGFP2 emits light at 510 nm and has two excitation maxima, at 488 nm and 405 nm [28,31]. Oxidation of C147 and C204 increases the excitation peak at 405 nm at the expense of the excitation peak at 488 nm. The redox state of roGFP2 can thus be measured by ratiometric determination of its emission intensity at 510 nm at the excitation wavelengths 405 and 488 nm [28,32]. In our study, we developed a neutrophil-like cell line (based on PLB-985) that expresses the genetically encoded redox sensor roGFP2 in the cytoplasm. This gave us a tool to analyze the redox dynamics in neutrophil-like cells upon activation by external stimuli such as PMA and during physiological events such as the phagocytosis of bacteria. Both PMA and phagocytosis of bacteria led to substantial roGFP2 oxidation, showing that, upon stimulation, the cytoplasmic redox homeostasis of neutrophils shifts to a more oxidizing environment. It also allowed us to study the involvement of oxidation events in the induction of NET formation through both PMA exposure and bacterial phagocytosis. Our data suggest that the observed cytoplasmic redox shift by itself is not sufficient to induce NET formation, but that additional components dependent on MPO activity and PKC signaling are required. Generation of genetically encoded roGFP2 for expression in PLB-985 For the construction of PLJM1-roGFP2, which was used for the expression of roGFP2 in PLB-985, the gene region encoding roGFP2 was amplified from the template pCC_roGFP2 using the primers listed in Supplementary Table 1. The PCR products were cloned into PLJM1-EGFP using the restriction enzymes BamHI and NsiI. E. coli Stbl3 served as the cloning host and was subsequently used to amplify plasmid DNA. Endotoxin-free plasmids were obtained using the plasmid isolation kit NucleoBond Xtra Midi Plus EF according to the manufacturer's instructions (Macherey-Nagel, Düren, DE). The integrity of the roGFP2 sequence was confirmed by restriction analysis and Sanger sequencing (Microsynth Seqlab, Göttingen, DE).
Transduction of PLB-985 The roGFP2-expressing PLB-985 cell line was established using a second-generation lentiviral transduction system with a single packaging plasmid encoding the gag, pol, rev, and tat genes [35]. Viral particles were produced by transfecting the roGFP2-containing plasmid PLJM1-roGFP2 along with the packaging plasmids pCMV-VSV-G and pCMVR8.2 into HEK-293T cells (Supplementary Table 1). 2.5 × 10⁶ cells were seeded on a 10 cm cell culture dish and cultured in Dulbecco's modified Eagle's medium (DMEM) (Life Technologies, Darmstadt, DE) supplemented with 10% FBS (Life Technologies, Darmstadt, DE) and 1% penicillin/streptomycin (PenStrep, Life Technologies, Darmstadt, DE) at 37°C and 5% CO2. After 24 h, for each dish, 6 μg of pCMV-VSV-G, 12 μg of pCMVR8.2 and 12 μg of PLJM1-roGFP2 were mixed with H2O to a final volume of 438 μl and supplemented with 62 μl of 2 M CaCl2. 500 μl of 2x HBS phosphate buffer was added to the mixture dropwise. Afterwards, the solution was incubated for 10 min at RT and added to the HEK-293T cells. After 16 h of incubation at 37°C and 5% CO2, the DMEM was exchanged for RPMI supplemented with 10% FCS and 1% GlutaMAX. These cells were incubated at 37°C and 5% CO2 for 16 h to allow production of lentiviral particles. Viral particles were collected by harvesting the supernatant and passing it through a 0.45 μm syringe filter (Filtropur S 0.45, Sarstedt, Nürnbrecht, DE). To increase the virus titer, viral particles were concentrated to one quarter of the initial volume using an ultrafiltration unit (VIVASPIN 20, 100,000 MWCO PES, Sartorius, Göttingen, DE) at 3,000 g and 4°C. For transduction of PLB-985 cells, 5 ml of concentrated viral particles containing 4 μg/ml Polybrene (Sigma, Darmstadt, DE) was used to resuspend 2 × 10⁶ PLB-985 cells, which had been seeded 24 h prior to transduction. Cells were incubated for 16 h at 37°C and 5% CO2 to allow transduction. Then the culture medium was exchanged for RPMI and the cells were incubated for another 72 h to allow protein expression. roGFP2 expression was analyzed by fluorescence microscopy (IX50, Olympus, DE). Pictures were taken with an SLR camera (Olympus, DE) and the CellP software (Olympus, DE). These cells were further used for the generation of monoclonal cultures. Generation of monoclonal culture by FACS Monoclonal cultures of roGFP2-expressing PLB-985 cells were generated using a fluorescence-activated cell sorter (FACS, BD FACSAria III, BD, Franklin Lakes, USA) and the accompanying software BD FACSDiva (version 8.0.1, BD, Franklin Lakes, USA). For this purpose, cells were washed once with PBS (pH 7.4) and resuspended in PBS (pH 7.4) at approximately 10⁶ cells/ml. The cells were then analyzed to determine the gating parameters for positive clones. In total, 96 GFP-positive clones were sorted into a 96-well plate (Sarstedt, Nürnbrecht, DE) in single-cell mode using the excitation wavelength 488 nm with a 530/30 emission filter. Single cells generated this way were kept at 37°C and 5% CO2 in RPMI supplemented with 1% GlutaMAX (Life Technologies, Darmstadt, DE) and 30% FBS. The development of single viable cells into colonies was monitored microscopically, and colonies were subcultured in RPMI supplemented with 1% GlutaMAX and 10% FBS. Real-time analysis of roGFP2 oxidation state in PLB-985 cells The redox state of roGFP2 in PLB-985 was measured in a 96-well format as described by Degrossoli et al. with minor modifications [37].
In short, 50 μl of roGFP2-expressing PLB-985 cells at a concentration of 10⁷ cells/ml were incubated with the respective inhibitors as described (100 nM Wortmannin; 10 μM diphenyleneiodonium chloride (DPI); 500 μM 4-aminobenzoic acid hydrazide (ABAH); 1 μM Gö 6983 (Gö)) (Sigma, Darmstadt, DE) or, in the case of the control, with PBS for 1 h at 37°C in a 96-well plate (Nunc black, clear-bottom, Rochester, NY). Afterwards, 50 μl of E. coli at an OD600 of 1.0 as well as the respective stimulants were added. The fluorescence intensity was recorded every minute for 2 h at the excitation wavelengths 405 nm and 488 nm, unless described otherwise. The emission wavelength was set to 510 nm. The calculation of the 405/488 nm ratio was done using Microsoft Excel 2016 (Microsoft, USA). Visualization of the respective graphs was done using GraphPad Prism (version 5.00, USA). All plate reader assays were performed in at least three independent experiments. The end point ratio, as depicted in the bar graphs, was calculated as the final point of a linear regression over the last 10 min of the measurement. Determination of the redox potential of roGFP2 in PLB-985 The redox potential of roGFP2 expressed in the cytoplasm of PLB-985 neutrophil-like cells was calculated as described in previous studies [29,38,39]. Fluorescence intensities were measured in PBS, pH 7.4. Oxidized roGFP2 was generated using 2 mM Aldrithiol-2 (AT-2), and roGFP2 was reduced with 50 mM dithiothreitol (DTT). The degree of roGFP2 oxidation (OxD_roGFP2) was calculated using the following formula:

$$\mathrm{OxD_{roGFP2}} = \frac{R - R_{\mathrm{red}}}{\dfrac{I_{488,\mathrm{ox}}}{I_{488,\mathrm{red}}}\,(R_{\mathrm{ox}} - R) + (R - R_{\mathrm{red}})},$$

where $R$ is the measured 405/488 nm ratio, $R_{\mathrm{red}}$ and $R_{\mathrm{ox}}$ are the ratios of the fully reduced and fully oxidized probe, and $I_{488,\mathrm{ox}}/I_{488,\mathrm{red}}$ is the ratio of the fluorescence intensities at 488 nm excitation of the fully oxidized and fully reduced probe. The redox potential then follows from the Nernst equation:

$$E_{\mathrm{roGFP2}} = E^{0\prime}_{\mathrm{roGFP2}} - \frac{RT}{nF}\,\ln\!\left(\frac{1 - \mathrm{OxD_{roGFP2}}}{\mathrm{OxD_{roGFP2}}}\right),$$

where the midpoint potential $E^{0\prime}$ of roGFP2 is −280 mV [28], R is the gas constant (8.314 J K⁻¹ mol⁻¹), T is the temperature (310.15 K), n is the number of transferred electrons (2) and F is Faraday's constant (96,485 C mol⁻¹). Phagocytosis of bacteria by PLB-985 cells Cultures of E. coli harbouring pASK-IBA3 containing mCherry or alternatively pCC LV (Supplementary Table 1) were grown to an OD600 of 0.5 at 37°C with 100 μg/ml ampicillin. For mCherry expression, 100 μM isopropyl-β-D-thiogalactopyranoside (IPTG) was added and cultures were grown overnight at 20°C. Bacteria were washed twice in PBS (pH 7.4) and opsonized with 5 mg/ml human immunoglobulin G (hIgG, Sigma, Darmstadt, DE) for 30 min at 37°C. Then, bacteria were washed twice with PBS and resuspended in PBS supplemented with 0.5% FBS to an OD600 of 1 (~10⁹ cells/ml), unless described otherwise. Differentiated PLB-985 cells, which stably expressed roGFP2 in the cytoplasm, were washed once with PBS, resuspended in PBS supplemented with 0.5% FBS to a concentration of 10⁷ cells/ml and mixed with an equal volume of opsonized E. coli (multiplicity of infection, MOI = 100) to start phagocytosis. Fluorescence live-cell imaging with subsequent ratiometric image analysis The fluorescence live-cell imaging of roGFP2 oxidation in PLB-985 cells was performed as described previously [37]. Differentiated PLB-985 cells with roGFP2 stably expressed in the cytoplasm were washed once with PBS and diluted in PBS with 0.5% FBS to a final concentration of 10⁷ cells/ml. 1 ml of the PLB-985 cell suspension was mixed with opsonized E. coli cells at a ratio of one PLB-985 cell to 100 E. coli in an imaging dish (μ-Dish 35 mm, high, Ibidi, DE). Fluorescence images were acquired using an LSM 880 ELYRA PS.1 microscope (Carl Zeiss Microscopy GmbH, Jena, DE).
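For readers who want to reproduce this calculation, the sketch below implements both formulas in Python. The physical constants follow the text; the example oxidation level (~5.5%) is illustrative, chosen to show that the Nernst equation recovers the reported resting potential of about −318 mV.

```python
import numpy as np

R_GAS = 8.314        # gas constant, J K^-1 mol^-1
T = 310.15           # temperature, K
N_ELECTRONS = 2      # electrons transferred
F = 96485.0          # Faraday constant, C mol^-1
E0_ROGFP2 = -0.280   # midpoint potential of roGFP2, V

def oxd_rogfp2(ratio, r_red, r_ox, i488_ox_over_red):
    """Degree of oxidation from the measured 405/488 nm ratio, given the
    fully reduced (DTT) and fully oxidized (AT-2) calibration ratios and
    the 488 nm intensity ratio of oxidized over reduced probe."""
    return (ratio - r_red) / (
        i488_ox_over_red * (r_ox - ratio) + (ratio - r_red))

def redox_potential(oxd):
    """Nernst equation: E = E0' - (RT/nF) ln((1 - OxD)/OxD), in volts."""
    return E0_ROGFP2 - (R_GAS * T) / (N_ELECTRONS * F) * np.log((1 - oxd) / oxd)

# A ~5.5% oxidized probe corresponds to roughly -318 mV (resting cytosol)
print(redox_potential(0.055) * 1000)   # -> approx. -318 mV
```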
Images were acquired in different channels according to the fluorophore: Ex 405 nm/Em 513 nm and Ex 488 nm/Em 513 nm to detect roGFP2; Ex 561 nm/Em 610 nm to detect mCherry; Ex 633 nm/Em 645 nm to detect Alexa Fluor 647; and Ex 405 nm/Em 442 nm to detect Hoechst 33342. Individual single-channel images were exported using ZEN 2.1 (Zeiss, DE). Individual cells were detected using the function "MeanROI view" in the ZEN software, and the 405/488 nm ratio was calculated from the "mean intensity measurement" over the surface of the respective cell. For the assembly of ratiometric time series, images were smoothed and background-subtracted. The normalized 405/488 nm ratio image series were exported directly as pictures or assembled into a movie using software kindly provided by Mark Fricker [40]. Visualization and quantification of NET formation Microscopic visualization of NET formation was performed as described by Brinkmann et al. with minor modifications [41]. Briefly, 250 μl of differentiated PLB-985 in PBS (pH 7.4) with 0.5% FCS were seeded on 12 mm non-coated coverslips (high precision, thickness 170 μm, Paul Marienfeld, Lauda-Königshofen, DE) placed in a 24-well cell culture plate (Sarstedt, Nürnbrecht, DE). These cells were incubated with or without inhibitors for 1 h at 37°C prior to stimulation with 250 μl of 250 nM PMA or E. coli (MOI = 100). After 4 h of incubation at 37°C, cells were fixed with 4% paraformaldehyde for 15 min at RT, permeabilized with 0.25% Triton X-100 (Sigma, Darmstadt, DE) for 10 min, washed twice with PBS and blocked with 5% bovine serum albumin (Sigma, Darmstadt, DE) overnight at 4°C. Immunostaining was performed with a mouse anti-DNA/Histone1 antibody (1:1000, MAB 3864, Merck, Darmstadt, DE) for 1 h at RT followed by an Alexa Fluor 647-conjugated goat anti-mouse antibody (1:1000, A-21235, Thermo Fisher Scientific, Waltham, USA) for 1 h at RT. Coverslips were stained with Hoechst 33342 (0.6 μg/ml, Thermo Fisher Scientific, Waltham, USA) for 10 min under low-light conditions. Then, coverslips were washed twice with PBS (pH 7.4) and mounted in ProLong Diamond antifade mountant (Thermo Fisher Scientific, Waltham, USA). Samples were visualized with a Zeiss LSM880 ELYRA PS.1 microscope (Carl Zeiss Microscopy GmbH, Jena, DE). Fluorescence images were exported using ZEN 2.1 (Carl Zeiss Microscopy GmbH, Jena, DE). Quantification of the NETosis rate was performed as described by Brinkmann et al. [42]. Briefly, using ImageJ 1.51e (National Institutes of Health, USA), fluorescence images were converted to an 8-bit format, thresholded, and converted to a binary mask. Cell numbers were counted automatically using the function "analyze particles". Subsequently, the NETosis rate (%) was calculated as the number of neutrophils heavily stained by the anti-DNA/Histone1 antibody, i.e. cells that underwent NETosis, divided by the total cell number, as visualized by the merged channel of GFP and anti-DNA/Histone1 (see Figure supplement 1). These experiments were performed in biological triplicate. Expression of roGFP2 in PLB-985 cells Professional phagocytic immune cells such as neutrophils activate NOX2 when they encounter invading pathogens. Activation of NOX2 is accompanied by a broad range of different cellular responses, chief amongst them the production of different reactive oxidants.
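The counting step of this quantification can be approximated outside ImageJ as well; the scikit-image sketch below mirrors the threshold-and-count workflow described above. The Otsu threshold and the function interface are assumptions for illustration, not a re-implementation of the exact ImageJ macro.

```python
import numpy as np
from skimage import filters, measure

def netosis_rate(chromatin_img, merged_img):
    """Approximate NETosis rate (%): objects in the anti-DNA/Histone1
    channel divided by the total cell count in the merged channel,
    after Otsu thresholding and connected-component labeling."""
    def count_objects(img):
        mask = img > filters.threshold_otsu(img)
        return int(measure.label(mask).max())
    total = count_objects(merged_img)
    return 100.0 * count_objects(chromatin_img) / total if total else 0.0
```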
In previous studies, we observed that this oxidant production, particularly the production of HOCl, leads to a total breakdown of the thiol redox state of bacteria phagocytized by a neutrophil-like cell line [37,43]. We were interested in whether a similar disturbance of the thiol redox state can also be observed in the host cell; after all, the only barrier between the oxidants produced downstream of NOX2 and the host cell's cytoplasm is the membrane of the phagolysosome. For our experiments, we used the myeloid PLB-985 cell line, which can undergo granulocytic differentiation and then shows properties highly similar to PMNs [34,44]. Using a lentiviral transduction system, we stably expressed the genetically encoded redox sensor roGFP2 in the cytoplasm of PLB-985. Since transduction results in variable numbers of lentiviral integrations [45], single clones expressing roGFP2 were sorted into 96-well plates using fluorescence-activated cell sorting (FACS). One monoclonal cell population generated this way was used for our experiments. The expression of roGFP2 was confirmed by fluorescence microscopy and Western blot analysis (Fig. 1A and B). Next, we determined the ratiometric response range of roGFP2 towards the oxidant AT-2, which should fully oxidize all roGFP2 present in the cell, and the reductant DTT, which should fully reduce roGFP2. Using fluorescence spectroscopy, emission intensities at 510 nm with excitation at 405 and 488 nm were quantified. Upon addition of 1 mM AT-2 to differentiated roGFP2-expressing PLB-985 cells, roGFP2 showed a 405/488 nm ratio of approximately 0.8. In contrast, cells treated with 50 mM DTT showed a 405/488 nm ratio of approximately 0.25. This demonstrates that roGFP2 expressed in the cytoplasm of neutrophil-like PLB-985 cells is indeed redox-sensitive. Untreated control cells showed a 405/488 nm ratio comparable to that of DTT-treated roGFP2, demonstrating an overall reduced state of roGFP2 in resting PLB-985 cells, in agreement with our expectations (Fig. 1C) [32,46]. Based on the 405/488 nm ratio under control conditions and the known standard redox potential E⁰′roGFP2 = −280 mV of roGFP2, we were able to calculate an EroGFP2 of −318 ± 6 mV. Assuming that roGFP2 is in equilibrium with cytoplasmic glutathione [23], this should reflect the steady-state redox potential of a resting neutrophil-like cell. NOX2 activation leads to roGFP2 oxidation To test whether intrinsic generation of oxidants leads to a change in the cytosolic redox state of neutrophils, we used phorbol 12-myristate 13-acetate (PMA). PMA is a synthetic activator of protein kinase C (PKC). Activated PKC then leads to phosphorylation of NOX2 and thus its activation [47]. To measure the roGFP2 response upon PMA stimulation, we stimulated differentiated PLB-985 cells with 250 nM PMA and recorded the 405/488 nm ratio using a fluorescence plate reader. An immediate increase of the 405/488 nm ratio indicated a substantial oxidation of roGFP2, reaching a maximum at 20 min. The change in 405/488 nm ratio per min over this time frame was calculated to be 2.26 × 10⁻². This was followed by a minor decrease in the probe's oxidation between 20 min and 30 min, after which the signal reached a plateau at 30 min that stayed at the same level until the end of the measurement (Fig. 2A). This suggests that the cytosol reached a new steady-state redox equilibrium.
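The quoted rate is simply the slope of the ratio trace over the first 20 min; a least-squares helper along the following lines (array names hypothetical) extracts it from plate-reader data.

```python
import numpy as np

def ratio_slope(t_min, ratio, t_start=0.0, t_end=20.0):
    """Slope of the 405/488 nm ratio (per minute) over a time window,
    obtained by least-squares linear regression."""
    t = np.asarray(t_min); r = np.asarray(ratio)
    sel = (t >= t_start) & (t <= t_end)
    return np.polyfit(t[sel], r[sel], 1)[0]
```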
The redox potential of roGFP2 at that steady state was determined to be −267 ± 7 mV, substantially higher than in resting cells (p-value PMA vs. medium = 1.6 × 10⁻⁴). Phagocytosis of bacteria leads to oxidation of the cytosolic probe. roGFP2, when expressed in E. coli, is oxidized within seconds upon phagocytosis of the bacteria by neutrophil-like cells [37]. Here, we monitored the redox state of roGFP2 in PLB-985 neutrophil-like cells during phagocytosis of bacteria. For this, we co-incubated neutrophil-like PLB-985 cells with E. coli. The 405/488 nm excitation ratio was then used to determine the oxidation state of the roGFP2 expressed in PLB-985. Upon co-incubation with E. coli, the 405/488 nm ratio of roGFP2 in the cytosol of the neutrophil-like cells increased gradually, reaching a plateau at 70 min that remained at the same level until the end of the measurement. Compared to cells stimulated with PMA, the increase in the 405/488 nm ratio per min was measurably slower in neutrophils co-incubated with E. coli and showed a slope of 4.86 × 10⁻³ during the first 70 min. The redox potential of roGFP2 in this oxidized state was calculated to be −264 ± 5 mV (p-value E. coli vs. medium = 7 × 10⁻⁵), comparable to the oxidation state of roGFP2 in neutrophil-like cells exposed to PMA. When PLB-985 cells were incubated with medium alone, the 405/488 nm ratio did not change significantly (Fig. 2B). roGFP2 oxidation in individual neutrophil-like cells occurs within minutes upon phagocytosis of bacteria In order to verify the changes in the 405/488 nm ratio during co-incubation with E. coli and to gain further insight into the dynamics of the probe's oxidation, we monitored the change of roGFP2 oxidation during phagocytosis using quantitative fluorescence microscopy. Our data showed that the probe was indeed in an overall reduced state in cells that had not taken up E. coli. As PLB-985 cells started to phagocytize E. coli cells, the redox state of roGFP2 changed gradually into an overall oxidized state (Fig. 3, Video 1). Single-cell analysis showed a non-synchronized onset of probe oxidation. Once individual neutrophil-like PLB-985 cells had phagocytized bacteria, the probe changed its oxidation state with up to 6.3× faster kinetics (Fig. 4, Videos 2-9). Thus, the gradual increase in probe oxidation measured in the fluorescence plate reader reflects the accumulation of roGFP2 oxidation from single PLB-985 cells that have phagocytized bacteria over time. roGFP2 oxidation during neutrophil activation is dependent on NOX2, but not on myeloperoxidase activity When incubated with E. coli, the oxidation of roGFP2 in PLB-985 cells was not as fast as the probe's response in cells that were stimulated with PMA. This is probably due to the asynchronous phagocytosis of individual bacteria, as compared to the instantaneous exposure to the molecular stimulant PMA. It could, however, also be caused by a different underlying mechanism of roGFP2 oxidation. Diphenyleneiodonium (DPI), a widely used inhibitor of NOX2 [48], strongly abrogated roGFP2 oxidation during PMA activation (Fig. 5A). This suggests that PMA indeed leads to roGFP2 oxidation in a NOX2-dependent way. We then examined the role of NOX2 in E. coli-induced probe oxidation. When pre-treated with the NOX2 inhibitor DPI, the oxidation of roGFP2 in PLB-985 cells during co-incubation with E. coli was strongly diminished as well (Fig. 5C). This indicates that roGFP2 oxidation in PLB-985 cells during phagocytosis relies on NOX2 activity, too.
Recently, we showed that HOCl generated by myeloperoxidase is the main factor in the oxidation of roGFP2 expressed in bacteria during phagocytosis [37]. As HOCl is known to be a highly effective thiol oxidant [49] and has been shown to react promptly with roGFP2 in vitro [50], we assessed the role of myeloperoxidase in roGFP2 oxidation in PLB-985. Surprisingly, pre-incubation of neutrophils with the myeloperoxidase inhibitor 4-aminobenzoic acid hydrazide (ABAH) had almost no effect on probe oxidation upon PMA stimulation (Fig. 5B) or E. coli phagocytosis (Fig. 5D). This suggests that, in contrast to phagocytized bacteria, HOCl is not the reactive species that leads to roGFP2 oxidation in the cytoplasm of PLB-985 cells. Different pathways lead to roGFP2 oxidation in neutrophil-like cells treated with PMA and E. coli Intruding bacteria are typically marked by opsonins such as IgG antibodies. Those opsonized bacteria are then recognized by neutrophils via Fcγ-receptors (Fcγ-Rs). Subsequent phosphorylation of Fcγ-Rs leads to the activation of several downstream signaling molecules. Amongst the enzymes recruited are phosphoinositide 3-kinases (PI3K) and protein kinase C (PKC). Both were shown to be involved in the activation of NOX2 [47,51-57]. To evaluate whether PI3K is required to induce oxidation of roGFP2, roGFP2-expressing PLB-985 cells were incubated with 100 nM of the PI3K inhibitor Wortmannin for 1 h before stimulation [58,59]. Inhibition of PI3K by Wortmannin resulted in a visible attenuation of the roGFP2 response upon stimulation with E. coli (Fig. 6C). However, the probe's oxidation was not affected when PLB-985 cells were activated by PMA (Fig. 6A). Conversely, pre-inhibition of PKC by the inhibitor Gö 6983 prevented PMA-induced roGFP2 oxidation, as expected, but did not affect probe oxidation during E. coli phagocytosis (Fig. 6B, D) [60]. These observations are in line with the fact that PMA leads to NOX2 activation via a PKC-dependent pathway, whereas NOX2 activation caused by E. coli typically involves the activation of PI3K [61]. PLB-985 neutrophil-like cells, when activated by PMA or E. coli, generate neutrophil extracellular traps Our data with roGFP2 demonstrated that E. coli phagocytosis leads to NOX2 activation in PLB-985 neutrophil-like cells, which results in the probe's oxidation. Generation of oxidants is used by neutrophils to attack intruding bacteria. However, oxidants downstream of NOX2 are also thought to serve as signaling molecules. As such, the formation of neutrophil extracellular traps (NETs) to facilitate phagocytosis and killing of bacteria was shown to be dependent on NOX2 activity [62,63]. To test whether the activation of NOX2 indeed leads to NET formation in PLB-985 cells, we seeded differentiated PLB-985 cells on coverslips and stimulated the cells with PMA or E. coli for 4 h. Production of NETs was then visualized by immunofluorescence microscopy using an anti-chromatin antibody. Both PMA and E. coli phagocytosis resulted in the release of fibers containing decondensed chromatin, corresponding to NETs (Fig. 7A and B). To quantify NET formation, we calculated the NETosis rate based on the percentage of cells heavily stained by the anti-chromatin antibody relative to the total cell number, as described by Brinkmann et al. [42]. Approximately 25% of the neutrophil-like cells that were co-incubated with E. coli underwent NETosis.
In contrast, stimulation with PMA was more effective, resulting in a NETosis rate of nearly 90% (Fig. 7C). NET formation is dependent on the activity of both NOX2 and myeloperoxidase (MPO) As shown with roGFP2, inhibition of NOX2 by DPI completely abolished the probe's oxidation after stimulation with both PMA and E. coli, whereas the probe's oxidation was not affected by the MPO inhibitor ABAH. To test the effect of NOX2-related oxidants on the ability of PLB-985 cells to form NETs, we pre-treated the cells with DPI or ABAH for 1 h and determined the NETosis rate after 4 h of incubation with PMA or E. coli. Both NOX2 and MPO have been reported to be essential for NET formation, and we wanted to test whether the same holds in our model system. Upon activation by PMA, both DPI and ABAH prevented cells from forming NETs (Fig. 7D). When activated with E. coli, inhibition of NOX2 and MPO reduced the NETosis rate by 20% and 30%, respectively (Fig. 7E). This suggests that the presence of HOCl, produced downstream of NOX2, is an important factor in the mediation of NET formation, although it has no direct influence on the redox state of the cytosol, as measured by roGFP2. Signaling pathways involved in NET formation Having determined that the overall change in the cytosolic redox potential is not the determining factor in NET formation, we took a closer look at the signaling pathways upstream of NOX2. Using the specific inhibitors Gö 6983 and Wortmannin, we again blocked PKC and PI3K activity, respectively. NET formation induced by PMA was unaffected by Wortmannin but was reduced by approximately 70% when PKC was blocked (Fig. 7D). This suggests that PMA-induced NET formation depends on the activity of PKC, comparable to the oxidation of roGFP2. This would still be in line with a redox signal directly downstream of NOX2. We then analyzed the ability of neutrophil-like cells to form NETs after co-incubation with bacteria when pre-treated with Gö 6983 or Wortmannin. Inhibition of PI3K reduced NET formation by 50%. Interestingly, pre-incubation with Gö 6983 also decreased NET formation by 30%, indicating that redox-independent PKC signaling is involved in NET formation as well (Fig. 7E). Discussion In neutrophils, the enzyme NOX2 plays a key role in the clearance of invading microbes. Oxidants produced downstream of NOX2 are used to attack and kill bacteria either directly or, indirectly, as signaling molecules activating further immune functions. However, due to the short life span of neutrophils and the transient and complex nature of NOX2-derived oxidants, the acquisition of spatial, temporal and quantitative information about redox processes underlying immune responses has proven to be difficult [23,64,65]. Here, we generated a stable PLB-985 neutrophil-like cell line expressing roGFP2 and used this biosensor to study real-time redox dynamics in the host cell upon stimulation by PMA and during phagocytosis of bacteria. When expressed in neutrophil-like cells, roGFP2 showed a redox potential of −318 mV. Using the response towards AT-2 and DTT as a measure of the dynamic range of roGFP2 in this cell line, we could show that roGFP2 was initially ~95% reduced in the cytosol of PLB-985 cells, corresponding to the above-mentioned EroGFP2 of −318 ± 6 mV. This is in agreement with other reports, in which the basal redox potential of roGFP2-expressing HeLa cells was reported to be −325 mV [32].
Given that the main cellular redox buffer glutathione is found in the millimolar range in cells, the oxidation state of roGFP2 has been suggested to reflect the redox state of glutathione (i.e. the ratio and concentration of GSH and GSSG). As such, roGFP2 has been used extensively to measure EGSH in plants, yeast and mammalian cells, which was determined to be between −300 and −320 mV [23,38,46,66,67]. Overall, we can conclude that neutrophil-like cells, when not activated, have a cytosolic redox potential similar to other cell types. This, however, changes once the neutrophil-like cell line is activated. When we measured the response of roGFP2 upon PMA activation of the neutrophil-like cells, the probe was oxidized rapidly and peaked after 20 min at an EroGFP2 of about −267 ± 7 mV. These kinetics were in line with observations monitoring reactive oxygen species using luminol in PMA-induced PMNs [68]. The probe then showed a minor, but reproducible, reduction before it reached a plateau at 30 min. In the response to reactive oxygen species, both enzymatic and non-enzymatic antioxidants are needed to maintain cellular redox homeostasis. Oxidants can be directly reduced by GSH, which is thereby oxidized to GSSG [69]. However, oxidants such as peroxides can be detoxified much more efficiently by enzymes such as glutathione peroxidase, which uses GSH as the reduction equivalent [70,71]. The GSSG generated is then reduced in an NADPH-dependent manner by glutathione reductase [72,73]. Additionally, NADPH also contributes to the functional integrity of other antioxidant enzymes such as catalase and thioredoxins (Trxs) [74,75]. For instance, a CXXC motif in thioredoxin (Trx) serves as an electron donor to reduce thiol-oxidized substrate proteins. Oxidized Trxs are subsequently reduced by TrxR in an NADPH-dependent manner [75-77]. Consumed NADPH is mainly replenished in the oxidative branch of the pentose phosphate pathway (PPP), in which glucose-6-phosphate dehydrogenase (G6PDH) and 6-phosphogluconate dehydrogenase (6PGDH) generate NADPH [78,79]. Reduction of NADP+ is further potentiated under oxidative stress by inhibition of GAPDH, which leads to the accumulation of glycolytic intermediates and an increased flux into the PPP [80,81]. Taken together, it is conceivable that PMA-induced generation of oxidants initially results in an increased GSSG/GSH and NADP+/NADPH ratio through activation of NOX2, which also consumes NADPH, and in increased oxidation of protein thiols, which are then resolved by glutaredoxin and thioredoxin in a GSH- and/or NADPH-dependent manner. PMA is a pharmacological analogue of the physiological secondary messenger diacylglycerol. It was reported to activate NOX2 through a PKC-dependent pathway [82]. We assessed the response of roGFP2 after blocking NOX2 or PKC. As expected, the probe's oxidation in the cytoplasm of PLB-985 cells was abrogated. In comparison, inhibition of PI3Ks had no effect on the probe's response, congruent with previously published reports showing that PMA-induced release of O2•− occurs independently of PI3K [58,83]. The activation of NOX2 leads to the generation of the superoxide anion, from which a mixture of different oxidants is generated, including HOCl, a highly thiol-reactive oxidant [9,49]. Inhibition of MPO, the enzyme responsible for HOCl production in neutrophils, had no effect on PMA-induced oxidation of roGFP2 in PLB-985 cells.
This was somewhat unexpected, as we had shown that HOCl is the major oxidative species responsible for the probe's oxidation when expressed in phagocytized bacteria. However, PMA seems to activate NOX2 predominantly at the cell surface and not in intracellular vesicles such as the azurophilic granules, in which MPO is mainly located [49,84-86], which could explain the lack of involvement of HOCl. We then tested the response of roGFP2 during the phagocytosis of E. coli. The probe was gradually oxidized, reaching a plateau within the first 70 min at −264 ± 5 mV, in the same range as the final oxidation state achieved by PMA stimulation. Similar kinetics were observed in a previous study performed in our lab, in which roGFP2 was expressed in the cytosol of E. coli during phagocytosis [37]. In those phagocytized bacteria, however, the probe was oxidized to its fullest extent, unlike in the cytosol of the phagocytic cell. Quantitative thiol redox proteomics demonstrated a parallel breakdown of protein thiols in those phagocytized E. coli cells [43]. The lesser extent of roGFP2 oxidation in the neutrophil-like cells' cytosol suggests that these cells are able to maintain their thiol redox homeostasis. Nevertheless, stimulation by E. coli effectively and significantly changed the cytosolic redox potential to a more oxidized state. Quantitative fluorescence microscopy revealed that roGFP2 reaches its new, more oxidized state with substantially faster kinetics at the level of individual cells than the average of the overall population suggests. This change in the redox state presumably happens once the neutrophils have phagocytized bacteria (Video 1 and Videos 2-9, Figs. 3 and 4). The apparently slower overall change in fluorescence observed in the plate reader reflects the individual cells' contributions over time to the overall more oxidized pool of roGFP2 in the sample. Phagocytosis is thus most likely the rate-limiting step for probe oxidation in PLB-985 cells, similar to the roGFP2 oxidation observed in phagocytized E. coli [37]. Interestingly, inhibition of MPO activity by ABAH did not influence the probe's oxidation behavior in PLB-985 cells co-incubated with bacteria. As with our observation in PMA-induced neutrophil-like cells, this is unexpected, as HOCl was the major oxidant responsible for probe oxidation in phagocytized bacteria. Especially so since HOCl-derived oxidants such as chloramines are membrane-permeable [87]: thiols in the cytoplasm of neutrophils should be readily oxidized once such chlorine-containing oxidants are produced in the phagolysosome. This is in contrast to proteins and other harmful molecules that cannot permeate the membrane and, if directly released into the cytosol, would cause severe damage to the host cell [1,18,88]. Thus, neutrophils must have a highly effective defense that prevents HOCl and other reactive chlorine species from permeating the phagolysosomal membrane during phagocytosis of microbes. We then determined the signaling pathways involved in the changes of the cytoplasmic redox potential. Here, we focused on Fcγ-R-mediated NOX2 activation via the tyrosine kinase Syk [89] and phospholipase C (PLC) [90] pathways. The former leads to the recruitment of class I PI3Ks. Phosphoinositides produced by PI3K were shown to regulate several
very distinct steps of NOX2 activation [52,91]. We found that inhibition of PI3K by the specific inhibitor Wortmannin abrogated roGFP2 oxidation in neutrophil-like cells co-incubated with opsonized bacteria. This is expected, as Wortmannin effectively inhibited ROS production in response to opsonized E. coli in both human and mouse neutrophils [61]. Conversely, PMA-induced oxidation of roGFP2 was abrogated by inhibition of PKC but not of PI3K. PLC generates diacylglycerol (DAG) and inositol trisphosphate (IP3) [92], which leads to the activation of protein kinase C (PKC) [54,93]. As such, PKC-δ was shown to induce oxidant production via activation of NOX2 when stimulated with IgG particles [94]. Also, when stimulated with IgG-opsonized Aspergillus, Gö 6983-treated human PMNs were impaired in oxidant production [95]. Similarly, opsonized Candida albicans induces NOX2 activation in human PMNs in a PKC-dependent way [96]. Strikingly, in our model, inhibition of PKC by Gö 6983 had almost no effect on roGFP2 oxidation in neutrophil-like cells co-incubated with E. coli, while it had a major effect in PMA-treated cells. This suggests that responses in neutrophils are highly tailored to the type of pathogen and the circumstances of the interaction. Our results also highlight that oxidant production induced using PMA as a model for neutrophil activation is mechanistically very distinct from activation by microbes, although the outcomes are similar. This needs to be taken into account in the interpretation of PMA-derived data. The formation of NETs is thought to immobilize extracellular pathogens and expose them to a high dose of lethal compounds, contributing to the antimicrobial capacity of neutrophils [97]. However, a growing body of evidence suggests that different mechanisms result in NET formation depending on the stimulus [68,98]. Several studies have shown that PMA-induced NETosis depends on oxidants produced downstream of NOX2; for example, no NETs were released by CGD neutrophils upon PMA stimulation [15,99]. We showed that PLB-985 neutrophil-like cells form NETs when activated with PMA. Similar to previous studies, we could show that PMA-induced NET formation is mainly dependent on the activity of PKC, NOX2 and MPO, but not PI3K
[Figure legend: The formation of NETs is expressed by the NETosis rate, calculated as the ratio of cells heavily stained by the chromatin antibody to the total cell number, as described in section 2.1 (C). The effect of different inhibitors on NET formation was assessed by pre-treating PLB-985 cells with the respective inhibitors for 1 h before stimulation. Afterwards, the cells were stimulated for 4 h with 250 nM PMA (D) or with E. coli (E) at a ratio of one hundred E. coli per PLB-985 cell. NET-producing cells were visualized using an antibody directed against chromatin. The calculation of the NETosis rate was done as described in the materials and methods section. The relative NETosis rate represents values normalized to PMA- and E. coli-induced NET formation, respectively. NET formation induced by PMA is abrogated upon inhibition of NOX2, MPO and PKC. The formation of NETs generated upon E. coli phagocytosis is significantly inhibited by DPI, ABAH, Wortmannin and Gö 6983. The quantitative results represent mean values ± SD of at least 3 independent experiments. Significance was calculated using Student's t-test (*: p < 0.05, **: p < 0.01, ***: p < 0.001). For raw image data associated with C-E, please see the linked dataset: figure supplement 1. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)]

However, in our model, NET formation induced by phagocytosis of bacteria was not exclusively dependent on the generation of NOX2-derived oxidants. There are currently considerable discrepancies in the literature concerning the correlation of oxidant production and NET formation induced by physiological stimuli. CGD neutrophils with compromised NOX2, or neutrophils treated with DPI, were shown to exhibit attenuated NET formation upon stimulation with S. aureus [15,104], while others observed opposite effects [101,105,106]. Our results showed that DPI treatment of neutrophils only partially inhibited bacteria-induced NET formation, despite the complete inhibition of roGFP2 oxidation. Similar to studies conducted with PMNs, inhibition of MPO by ABAH significantly decreased NETosis [16,68,107], although roGFP2 oxidation in PLB-985 cells was not dependent on MPO. The most potent inhibitor of phagocytosis-induced NET formation was the PI3K inhibitor Wortmannin, consistent with previous studies [101,108,109]. Taken together, this suggests that the observed significant change in the redox homeostasis of the cytosol of activated neutrophils is not sufficient to induce NET formation. Other (presumably also oxidative) events that do not manifest themselves in a change of the overall redox homeostasis seem to play a crucial role as well, as demonstrated by the inhibition of myeloperoxidase, which does not affect the shift in redox homeostasis but significantly inhibits NET formation. Our observations were made in an in vitro system, using a well-established myeloid cell line in a co-incubation assay. While we cannot say for certain whether our findings apply directly under clinical or physiological conditions, our data suggest that oxidative signaling in immune cells has multiple layers, which lead to a shift in the cytosolic redox homeostasis but also include highly specific signaling events required for effective microbial killing. Declaration of competing interest None.
9,601.2
2019-10-13T00:00:00.000
[ "Biology" ]
Snapshot Mueller spectropolarimeter imager We introduce an imaging system that can simultaneously record complete Mueller polarization responses for a set of wavelength channels in a single image capture. The division-of-focal-plane concept combines a multiplexed illumination scheme based on Fourier optics together with an integrated telescopic light-field imaging system. Polarization-resolved imaging is achieved using broadband nanostructured plasmonic polarizers as functional pinhole apertures. The recording of polarization and wavelength information on the image sensor is highly interpretable. We also develop a calibration approach based on a customized neural network architecture that can produce calibrated measurements in real time. As a proof-of-concept demonstration, we use our calibrated system to accurately reconstruct a thin film thickness map from a four-inch wafer. We anticipate that our concept will have utility in metrology, machine vision, computational imaging, and optical computing platforms. The following tables summarize parameters pertaining to the optical system setup and design shown in Figure S1; Table S3 lists the performance parameters. S2 Optical system architecture In this section, we present a more detailed discussion of the operating principles behind our optical system. S2.1 4f illumination system Imaging Mueller polarimetry requires sixteen distinct images to be recorded at each object position, each corresponding to a distinct combination of illumination and analyzer polarization states. With a 4f illumination system, a point source at the front focal plane of the first lens (i.e., the illumination plane) is collimated by the first lens, illuminates and interacts with the sample placed between the lenses, and then refocuses to a point at the back focal plane of the second lens (i.e., the analyzer plane). In this manner, the sample placed at the Fourier plane is illuminated by a plane wave spanning the sample diameter. This concept generalizes to an array of sixteen point sources at the illumination plane that produce multiple incident beams onto the object, refocus to an array of sixteen points at the analyzer plane, and are independently imaged by our multi-pinhole imaging system. Polarimetric imaging is readily achieved by placing unique combinations of polarization filters at the illumination and analyzer apertures. The full beam paths from two illumination apertures are shown in Figure S2. Experimentally, L1 and L2 are Celestron achromatic refractor telescopes with a focal length of 400 mm and a diameter of 80 mm. We note that obscuration of the lens due to mounting reduces the effective aperture to a diameter of about 65 mm. With this Fourier-based illumination scheme, light from the different pinhole apertures in the illumination plane becomes a set of plane waves incident onto the sample at different angles. The full angular spread from all illumination pinhole sources is determined by the distance between the two most spatially separated apertures (7.8 mm) and f1 (400 mm) and is approximately 0.02 radians (1.2°). The angular variation between plane waves incident onto the sample from two adjacent illumination apertures is 0.06°. These narrow angular ranges place tight constraints on the angular alignment of the system and on the maximum curvature or wedge angle of the sample under test. Our optical configuration is thus best suited for measuring samples that are known to be flat, such as wafers.
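As a sanity check, the quoted angular spread follows directly from the aperture span and f1 via a small-angle estimate (no parameters beyond those quoted above are assumed):

```python
import numpy as np

f1_mm = 400.0     # focal length of L1
span_mm = 7.8     # separation between the two most distant illumination apertures
spread_rad = span_mm / f1_mm                # small-angle approximation
print(spread_rad, np.rad2deg(spread_rad))   # ~0.0195 rad (~1.1 deg), i.e. ~0.02 rad (1.2 deg)
```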
We note that in practice, the pinhole apertures in the illumination plane are consolidated into four relatively large-area apertures (Figure S3), though they functionally behave as pinholes due to the spatial filtering properties of the 4f system and the pinhole analyzer apertures. Our use of relatively large apertures for illumination enables more robust system alignment. The blue box and arrow in the figure represent the mapping of fields from the dashed circular region at the illumination plane onto a diagonally-polarized analyzer aperture in the analyzer plane, mediated by the 4f system. Fig. S3: Images of the polarization filter-functionalized apertures at the illumination and analyzer planes, together with red symbols representing the polarization filtering function. The 4f system inverts and maps fields from the illumination aperture to the analyzer aperture. S2.2 Light field camera In this subsection, we further elucidate the operating principles of the hyperspectral and polarization-enabled imaging sensor. We note that this sensor system can be used independently as an imaging spectropolarimetry pinhole camera in which the analyzing apertures serve as imaging pinhole apertures. We first consider an imaging system with a single analyzer aperture. In conjunction with our implementation of a telescopic imaging system, L3 is placed one focal distance away from the aperture plane and collimates light from the aperture (Figure S4a, top left). In the case where the pinhole aperture is axially aligned with the lens, the collimated light propagates with normal incidence onto the sensor array. By collimating light in this way, the image demagnification onto the sensor is fixed to f2/f3. For our experimental system configuration, a 20:1 reduction from sample to image sensor is achieved. Fig. S4: Conceptual framework for the light field system architecture. 1) The lens (L3) sets the magnification of the imaging system and collimates light from the pinhole aperture. 2) Addition of a microlens array at the image sensor improves signal-to-noise by performing light concentration. 3) Multiple pinhole apertures coupled to the microlens array enable the simultaneous imaging of multiple fields. 4) The incorporation of polarization filters at the pinhole apertures and a diffraction grating above the microlenses enables polarization- and wavelength-resolved imaging.
We next place a microlens array one microlens focal length away from the image sensor (Figure S4, top right). The incorporation of the microlens array presents a tradeoff between imaging resolution and signal-to-noise: since the light incident onto the microlenses is collimated, the microlenses focus and concentrate the light to a single pixel, provided that the diffraction-limited microlens spot size is smaller than an individual sensor pixel. In the case where there are N sensor pixels under each microlens (N = 4900 for our system), light throughput to the illuminated pixel is enhanced by a factor of N, thereby enhancing the measured signal-to-noise by a factor of N. As the microlenses concentrate light to a single pixel, the resolution of the imaging system is set by the size of the microlenses and is reduced by a factor of N. As a point of comparison, for a system without microlenses, and assuming that the measured signals from a grouping of N pixels are spatially uniform, signal averaging over all pixels improves signal-to-noise by only a factor of √N. The added signal-to-noise enhancement from the microlenses is particularly essential in our scheme, as light throughput is limited by our use of pinhole apertures. With the microlenses, the signal-to-noise ratio is sufficiently high even for millisecond exposure times. The combination of microlenses and L3 enables light from different pinhole apertures to be imaged onto distinct sets of pixels on the sensor. The concept is illustrated in the bottom left panel of Figure S4 for two pinhole apertures, one axially aligned with the collimating lens and one in a slightly offset position. As shown before, light from the axially aligned aperture collimates into a normally incident beam, leading to the focusing of light onto sensor pixels that are axially aligned with the microlenses. Light from the slightly offset aperture instead collimates to a slightly off-normal direction onto the microlens array, which focuses the light onto sensor pixels at slightly off-axis positions under all microlenses. In this manner, the light field camera captures two distinct and independent images. Finally, polarization analysis is enabled by specifying polarization filters at the apertures (bottom right, Figure S4). In the schematic, broadband vertical and horizontal polarizers are depicted, leading to vertically and horizontally polarization-analyzed images being recorded on the image sensor. Sets of devices with distinct polarization responses are nanofabricated in parallel on a single chip, and the total fabricated area is small, dictated by the pinhole aperture dimensions. To obtain hyperspectral information for each polarized image, a diffraction grating oriented along the y-axis (i.e., oriented ninety degrees relative to the pinhole aperture array) is placed just above the microlens plane such that collimated light from each aperture disperses into the +1 diffraction order. The resulting super-pixels on the image sensor comprise rows of pixels, each containing wavelength information for a particular illumination and analyzer polarization state.
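The condition above, a diffraction-limited microlens spot smaller than one sensor pixel, can be checked with a quick Airy-disk estimate. The example microlens parameters come from S2.3 below; the pixel pitch is an assumed illustrative value, not a quoted system parameter:

```python
wavelength_um = 0.55          # design wavelength
f_number = 250.0 / 190.0      # example microlens from S2.3 (f = 250 um, width = 190 um)
airy_diameter_um = 2.44 * wavelength_um * f_number  # standard Airy-disk diameter
pixel_pitch_um = 2.4          # assumed sensor pixel pitch (illustrative)
print(airy_diameter_um)                    # ~1.77 um
print(airy_diameter_um < pixel_pitch_um)   # True: light concentrates into one pixel
```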
S2.3 Microlens design and simulation To identify suitable microlens designs that support diffraction-limited performance, we perform a systematic array of on-axis and off-axis ray tracing simulations for a wide range of lens parameters at a fixed wavelength of 550 nm. These simulations assume that the microlenses have a spherical surface. Figure S5a shows a representative example of a ray tracing analysis for a microlens with a width of 190 µm and an effective focal length of 250 µm. The short focal ratio causes focusing aberrations: as shown in Figures S5b and S5c, the geometric spot size exceeds the red circle delineating the diffraction limit. Figure S5d shows the focused spot size as a function of microlens f/# for various microlens sizes. The points of intersection between the line delineating the diffraction limit and the lines delineating the RMS spot size from ray tracing represent the ideal condition in which the diffraction-limited and aberration-limited spot sizes are equal. These points of intersection are plotted as microlens f/# versus microlens diameter (Figure S5e) and as wavelength resolution versus microlens size (Figure S5f) to cast these results as design rules for the microlenses. S2.4 Nanoridge polarizer design To design the linear polarizers, we use Reticolo rigorous coupled-wave analysis (RCWA) to simulate devices [1] and perform a parameter sweep over a wide range of device geometries and incidence angles. We specifically consider a device that is made of aluminum nanoridges, contains air between the lines, and is clad in silicon dioxide at the top and bottom device interfaces. Compared to devices that are fully encapsulated in silicon dioxide or have no top cladding layer, we find that this configuration supports superior transmission due to impedance matching between the silicon dioxide and metal-air layers. We first consider a parameter sweep over nanoridge period and thickness, with g equal to w, for a normally incident 550 nm plane wave. The corresponding maps of transmittance and extinction ratio are shown in Figures S8b and S8c. Based on these maps and fabricability considerations, we choose as our linear polarizer geometry a device featuring a width of 75 nm and a gap of 75 nm, corresponding to a period of 150 nm (marked as stars in the figures). We next simulate the wavelength and angular bandwidth of these polarizers to ensure their operation within our system requirements. RCWA simulations of the transmission and selectivity of our polarizer for different incidence angles at λ = 550 nm are summarized in Figures S8d and S8e, respectively, and indicate high transmission and selectivity for a very wide range of α and δ angles. Figure S8f shows the transmittance and extinction ratio of this design for a normally incident beam over a wavelength range of 400 to 700 nm, with an extinction ratio above 50 dB and a transmittance above 0.6 across the entire visible spectrum. These broadband and broad-angle characteristics are due in part to the non-resonant nature of our device.
The experimentally measured transmittance and selectivity of the linear polarizer are shown in Figure S8g for normally incident visible light. The measurement is performed by collimating a supercontinuum laser source coupled to a monochromator onto the sample and detecting the transmitted light using a standard broadband linear polarizer as the analyzer and a silicon detector. These measurements indicate that the fabricated linear polarizer operates with broad bandwidth and high selectivity. To design the quarter-wave plate (QWP), we follow an approach similar to our design of the linear polarizers and use RCWA simulations to perform a parametric analysis of aluminum nanoridge structures with varying thicknesses, metal ridge widths, and periods. The results of a representative set of simulations for devices with a thickness of 240 nm, normal incidence, and an operating wavelength of 550 nm are shown in Figure S9a. The extinction ratio shown is that of a QWP implemented within an RCP filter (Figure S6b) and is computed by considering an RCP wave incident onto the QWP followed by polarization filtering with an ideal linear polarizer. Based on our parametric sweeps, we select devices with a period of 325 nm and a width of 45 nm for the quarter-wave plate design (marked as stars in the figures). The simulated transmission and retardance of the QWP, and its extinction within an ideal RCP filter, over a range of wavelengths are shown in Figure S9b and indicate that the non-resonant device supports broadband functionality. Simulation results for the QWP implemented within an ideal RCP filter at different incidence angles are shown in Figure S9c and indicate that the extinction ratio remains high up to azimuth angles of 15°. Reductions in the extinction ratio at higher angles are due to diffraction. Finally, we simulate the combined physical QWP and linear polarizer stack to evaluate the performance of our RCP polarization filter (Figure S10). The performance of these structures is sensitive to the precise spacing between the individual devices because of parasitic electromagnetic coupling between the structures. Figure S10b shows the simulated extinction ratio of the RCP filter as a function of the separation between the QWP and the linear polarizer. We find that for a cavity length of 625 nm, the overall extinction ratio of the RCP filter is relatively high across the broad band of simulated wavelengths. S2.5 Signal-to-noise To quantify the signal-to-noise of our system, we measure the detected intensity levels at super-pixels with the halogen source turned on and off, averaged over one hundred randomly selected measurement pixels. In arbitrary units, the average detected intensity with the light source on is 0.15 and with the light source off is 0.00051, giving a signal-to-noise ratio of about 300. This indicates that, in spite of our use of pinhole apertures, light intensification from the microlenses still enables high signal-to-noise measurements to be made at millisecond exposure times and modest light source powers. It additionally indicates that the measured 'noise' in the system is not due to low light levels but rather to a lack of proper calibration.
S3 Systems assembly and device fabrication S3.1 Polarization filter nanofabrication and characterization The polarization filter arrays comprise linear and RCP filters and are fabricated in parallel using standard nanofabrication methods at the Stanford Nanofabrication Facility (SNF) and the Stanford Nano Shared Facilities (SNSF). Fabrication is performed on four-inch fused silica wafers for the devices and on four-inch oxidized silicon wafers for cross-sectional imaging. First, the quarter-wave plate devices are fabricated. A 240 nm thick aluminum film is deposited on the wafers using electron beam evaporation; this film is later patterned into the quarter-wave plates for the RCP filters. A 55 nm thick silicon dioxide layer is then deposited using plasma-enhanced chemical vapor deposition (PECVD) at 350 °C, which serves as a hard mask for thin film patterning. Microscopic alignment markers, used to align the top and bottom devices in the multi-layer RCP filter, are defined on the wafers using photolithography, followed by hard mask etching in an Oxford capacitively coupled plasma (CCP) etcher and aluminum etching in a Plasma-Therm Versaline LL ICP Metal (PT-MTL) etcher. The quarter-wave plate devices are patterned using electron beam lithography (100 keV JEOL 6300 system and positive CSAR-62 e-beam resist), with the patterns registered to the alignment markers, followed by etch steps that follow the CCP and PT-MTL etching procedures from before. The quarter-wave plate devices are planarized by spin coating two layers of hydrogen silsesquioxane (HSQ) spin-on-glass (Corning, FOX-16) on the devices at 2000 rpm, followed by baking at 260 °C. Next, the linear polarizers are fabricated. A second aluminum thin film and silicon dioxide hard mask layer are deposited as before. The linear polarizers are fabricated using aligned electron beam lithography and etching as before. To encapsulate the linear polarizers with silicon dioxide while maintaining air spacers between the nanoridges, a 300-nm-thick layer of PECVD silicon dioxide is grown. The wafer is diced into chips using a DISCO wafer saw. To characterize the linear filters, RCP filters, and quarter-wave plates, the devices are mounted on a set of rotation and translation stages. A supercontinuum source (NKT Photonics) coupled to a monochromator is used as the light source and is collimated onto the sample at the desired incidence angle. A polarization generator and analyzer comprising commercial linear polarizers and quarter-wave plates are used to specify the incident and analyzed polarization states. Light is detected using a silicon photodetector.
S3.2 Optical system assembly To fabricate the light field imaging system, we modify a commercial ASI178MM monochrome CMOS camera (ZWO ASI). First, the cover glass is removed by using a 50 W Epilog laser cutter to decompose the epoxy bonding the cover glass to the image sensor package, followed by careful cover glass removal with a razor blade. Second, the microlens array (385-µm-thick Microlux Fly's-Eye lens sheets with 133 lenses per inch) is created by dicing pieces to precise dimensions using a DISCO wafer saw, followed by manual deburring with a razor blade. Third, the microlens array is bonded to the image sensor using a Karl Suss contact mask aligner for alignment and Norland Optical Adhesive 75 as the adhesive. Fourth, 8 mm x 6 mm diffraction gratings (Dynasil) are bonded to the microlens array using a custom jig, optical adhesive, and the Karl Suss contact mask aligner. Bonding is performed at overlapping grating and microlens array regions that are away from the image sensor. To assemble and align the polarization filters with the halogen source and the light field image sensor, custom holders were made using additive manufacturing. S4 Measurement matrix to Mueller matrix conversion The Mueller matrix represents the full polarization response of an object at a particular wavelength in a Stokes vector basis: an input Stokes vector S_in maps to the output S_out = M S_in, where M is the 4 x 4 Mueller matrix. Given a set of four linearly independent input beams onto the object and the associated output responses of each beam, the Mueller matrix can be recovered. For our device, the illumination basis P comprises the horizontal, vertical, diagonal, and right-circular polarization states, whose Stokes vectors are s_H = (1, 1, 0, 0)^T, s_V = (1, -1, 0, 0)^T, s_D = (1, 0, 1, 0)^T, and s_R = (1, 0, 0, 1)^T. Conversion between our illumination basis and the Stokes basis is the linear operation given by the matrix A whose columns are these four Stokes vectors. For analysis, we use polarization filters with the same P basis. The Mueller matrix M of the sample in our illumination and analyzer polarization bases can therefore be expressed in terms of the calibrated measurement matrix C, where C_ij denotes the measured output intensity of analyzer polarization j with respect to input polarization i.
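A minimal sketch of this conversion, assuming ideal polarizing elements for which an analyzer in state s_j passes the intensity (1/2) s_j^T M s_i (the calibrated system may use a different normalization):

```python
import numpy as np

# Columns: Stokes vectors of the H, V, D (45 deg), and RCP basis states.
A = np.array([[1,  1, 1, 1],
              [1, -1, 0, 0],
              [0,  0, 1, 0],
              [0,  0, 0, 1]], dtype=float)

def mueller_from_measurements(C):
    """Recover M from the measurement matrix C, where C[i, j] is the detected
    intensity for illumination state i and analyzer state j. Assumes ideal
    elements: detected intensity = 0.5 * s_j^T @ M @ s_i."""
    Ainv = np.linalg.inv(A)
    return 2.0 * Ainv.T @ C.T @ Ainv

# Round-trip check with a test Mueller matrix (ideal horizontal polarizer)
M_true = 0.5 * np.array([[1, 1, 0, 0], [1, 1, 0, 0],
                         [0, 0, 0, 0], [0, 0, 0, 0.]])
C = 0.5 * (A.T @ M_true @ A).T        # forward model: C[i, j] = 0.5 s_j^T M s_i
print(np.allclose(mueller_from_measurements(C), M_true))  # True
```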
S5 Calibration algorithm S5.1 Neural network architecture The neural network layers are specially designed to meet the unique requirements of our physical system and noise sources. In typical neural networks, strided convolution layers and/or pooling layers are used to downsample the feature map during the forward pass. However, such layers break the translation symmetry of the input data, since a 2x downsampling causes the feature map to shift by only half the amount of any shift in the input. We therefore do not use any such layers; instead, the network shrinks the feature map using large unstrided convolution kernels: a convolution kernel of size n with no padding decreases the feature map size by n - 1, and by stacking multiple such layers, the feature maps can be adjusted to suitable sizes. To mitigate overfitting and speed up training, we design the expanded convolution block shown in Figure S11a. The convolution operation of the block follows a four-step pattern: a 1 x 1 pointwise convolution that increases the channel dimension by 4x, two depthwise convolutions with kernels of 1 x 69 and 69 x 1, and a final pointwise convolution. This design significantly reduces the parameter count of the model, resulting in faster and more robust training. A stack of 7 expanded convolution blocks reduces the feature map size from the input size of 512 to 36 (Figure S11b), which corresponds to 4.5 pixels in the original image, the estimated upper bound for the point spread and translation caused by noise. Apart from the new convolution design, Hardswish activation functions [2] are used in the network to encourage sparse learning of features. After processing with the 7 expanded convolution blocks, the output comprises a set of channels, each containing a feature map of learned features of the input image that are used for generating the sampling kernels K. Ideally, this correction vector would be a sum of various kernels representing different point spread functions, so a deconvolution (transposed convolution) layer is needed to generate the correction vector (Figure S11b). However, this point spread function should not depend on the input intensity levels, which influence the average value of the final feature map. To deal with this constraint, a softmax layer is added prior to the deconvolution layer to normalize the feature maps (Figure S11b); this limits the sum of all elements to 1 and removes the effect of overall brightness. For the deconvolution layer, we use a kernel size of 37 and a padding of 16, which slightly increases the output size to 40. After this layer, the output is downsampled with average pooling to 5, which produces K with the correct dimension. S5.2 Neural network training and evaluation We implemented the neural network architecture using Apache MXNet [3], built with CUDA 11.3 support, which enables a dramatic speedup in training on graphics cards.
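For concreteness, a minimal sketch of the expanded convolution block described in S5.1 is given below. It is written in PyTorch purely for illustration (the actual implementation uses Apache MXNet), and the channel width and activation placement are assumptions consistent with Figure S11a. Each block shrinks the spatial size by 68, so seven blocks take a 512-pixel input down to 512 - 7*68 = 36, as stated above.

```python
import torch
import torch.nn as nn

class ExpandedConvBlock(nn.Module):
    """Four convolutions, two Hardswish activations: 1x1 pointwise expansion
    (4x channels), 1x69 and 69x1 depthwise convolutions (no padding, so the
    feature map shrinks by 68 per spatial dimension), then a 1x1 projection."""
    def __init__(self, channels):
        super().__init__()
        hidden = 4 * channels
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.Hardswish(),
            nn.Conv2d(hidden, hidden, kernel_size=(1, 69), groups=hidden),
            nn.Conv2d(hidden, hidden, kernel_size=(69, 1), groups=hidden),
            nn.Hardswish(),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, x):
        return self.block(x)

x = torch.randn(1, 8, 512, 512)                 # assumed channel count of 8
print(ExpandedConvBlock(8)(x).shape)            # torch.Size([1, 8, 444, 444])
```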
To obtain training data, we use an aluminum reflection mirror as our sample and place different combinations of spectral filters at the light source and polarization filters in the incident or reflected beam path at the 4f system Fourier plane. These filters are in addition to the pinhole polarization filters at the illumination and analyzer planes. We used horizontal, vertical, 45°, 135° and right-circular polarizers with 5 band-pass filters. We consider a combination of wavelengths, polarization filter types, and polarization filter positions that yields 42 full-wafer calibration images for training; the details of each filter combination are given in Table S5. The training target is computed based on an ideal setup in which the Mueller matrix of a perfectly aligned reflective sample is that of an ideal non-depolarizing mirror. To make better use of the limited training data and to reduce over-fitting, we augment the data set by creating linear superpositions of the original 42 calibration images with random weights, weighting the training target images accordingly. During training, we use the mean squared difference between the output and target image pixel values as the loss function. We use an Nvidia RTX 3090 GPU for neural network training and an Intel Core i7-8700K CPU for evaluation. The network is trained using the Adam optimizer [5] with a starting learning rate of 0.001 and a batch size of 16. With our hardware, it takes 50 minutes to train 1000 batches. The training loss curve is shown in Figure S11e. This curve shows that the network training converges after 80 batches, and an extended training run of 5000 batches shows no visible improvement. The average inference/calibration time for 1200 super-pixels using the neural network on a CPU is 15 minutes. With our pre-generated calibration kernels, the calibration process takes only 200 ms on the same computer. For measurement of the reference film thickness, we used a Woollam RC2 ellipsometer on the wafer with a grid size of 0.45 cm. The measured thickness map is shown in Figure S12. S6 Thin film measurement To perform thin film analysis, the experimental measurement matrices are first calibrated with our neural network and then converted to Mueller matrices using the methods above. To use these calibrated measurements to fit the silicon dioxide thickness at each super-pixel position, we minimize the absolute error between the measured Mueller matrices and those computed using a thin film model. To model our air-silicon dioxide-silicon system, we use the transfer matrix method to compute the reflected polarization and spectral response as a function of input polarization. The simulated incident beam is oriented at a 45 degree angle relative to the wafer. The index of refraction of air is specified to be 1.0, and the indices of refraction of the other two dispersive materials are given as a function of wavelength [6] in Table S6 (Table S6: Index of refraction used for fitting).
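The thin-film forward model can be illustrated for a single layer, where the transfer matrix reduces to the Airy summation. The refractive indices below are fixed illustrative values rather than the dispersive data of Table S6, and the sign convention for r_p varies between references:

```python
import numpy as np

def fresnel(n1, n2, th1):
    """Fresnel amplitude reflection coefficients (s and p) at one interface."""
    th2 = np.arcsin(n1 * np.sin(th1) / n2)        # Snell's law (complex-safe)
    rs = (n1*np.cos(th1) - n2*np.cos(th2)) / (n1*np.cos(th1) + n2*np.cos(th2))
    rp = (n2*np.cos(th1) - n1*np.cos(th2)) / (n2*np.cos(th1) + n1*np.cos(th2))
    return rs, rp, th2

def thin_film_r(n_air, n_film, n_sub, d_nm, wavelength_nm, th0=np.deg2rad(45)):
    """Airy summation (equivalent to the transfer matrix for one film) for the
    s- and p-polarized amplitude reflection of an air/film/substrate stack."""
    rs12, rp12, th1 = fresnel(n_air, n_film, th0)
    rs23, rp23, _ = fresnel(n_film, n_sub, th1)
    beta = 2.0 * np.pi * n_film * d_nm * np.cos(th1) / wavelength_nm
    phase = np.exp(2j * beta)                     # round-trip film phase
    rs = (rs12 + rs23 * phase) / (1 + rs12 * rs23 * phase)
    rp = (rp12 + rp23 * phase) / (1 + rp12 * rp23 * phase)
    return rs, rp

# e.g. 300 nm SiO2 (n ~ 1.46) on Si at 550 nm; indices are illustrative only
print(thin_film_r(1.0, 1.46, 4.1 - 0.05j, 300.0, 550.0))
```

Sweeping d_nm and comparing the resulting polarized reflection against the measured Mueller matrices is, in essence, the per-super-pixel thickness fit described above.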
To demonstrate the need for proper calibration, we show a fitting of the thin film thicknesses without calibration. The results are shown in Figure S13 and display thicknesses that are highly erroneous compared to the ground truth values. We also consider a calibration method based on fitting a single global point spread function for all points in all super-pixels. The results for a 5 x 5 point spread function, fitted using the same training data, are also shown in Figure S13 and indicate that such basic calibration methods are likewise insufficient to reconstruct an accurate thickness map. To demonstrate the utility of our multi-generation kernel approach, we perform calibration using first-generation kernels generated directly by the neural network (Figure S14). The results indicate that without the final optimization, some of the super-pixels are not functional and produce outputs too noisy for fitting. Figure S15 shows absolute error maps for all elements of the fully calibrated Mueller matrices for all wavelengths and super-pixels, as measured for the silicon dioxide-on-silicon wafer analyzed in the main text. Fig. S1: Diagram of the full optical system with labeled design parameters. Fig. S2: Diagram of the full sample illumination with beam paths from two illumination apertures drawn. These beams correspond to diagonal polarization illumination with horizontal polarization analysis (dashed lines) and diagonal polarization illumination with right circular polarization analysis (solid lines). Fig. S5: Microlens optical characterization. a) Representative on- and off-axis ray tracing simulations of a microlens. b,c) Spot diagrams at the microlens focal plane for (b) normal incidence and (c) 16° incidence. d) RMS spot size as a function of microlens f/# for microlenses with different diameters. The microlens f/#'s corresponding to the minimum RMS spot size for a given lens diameter are delineated by circles. e) Optimized microlens f/# versus microlens size based on the circles from (d). f) Wavelength resolution as a function of microlens diameter for a system with minimized RMS spot size. Fig. S6: Diagrams of the (a) linear polarizer and (b) RCP filter with labeled geometric design parameters. Figure S6 schematically shows the layout and geometric parameters pertaining to the linear polarization filter and RCP filter; the specific geometric parameter values used in our study are summarized in Table S4, and our notation convention for the incidence angle onto the filters is visualized in Figure S7. Fig. S8: Linear polarizer design and optimization. a) Schematic of the polarizer layout and material composition. b,c) Simulated (b) transmittance and (c) extinction ratio of periodic aluminum nanoridges with different periods and thicknesses at λ = 550 nm. d,e) Simulated (d) transmittance and (e) extinction ratio of a linear polarizer (star in (b,c)) at λ = 550 nm for varying incidence angles. The device has width = 75 nm, period = 150 nm, and thickness = 240 nm. f) Simulated transmittance and extinction ratio of the starred device in (b,c) at normal incidence as a function of wavelength. g) Experimental measurement results for the transmission and extinction ratio of the linear polarizer at normal incidence.
Fig. S9: Quarter-wave plate (QWP) design and simulation. a) Extinction ratio for QWP-based RCP filters with different QWP nanoridge widths and periods (thickness fixed at 240 nm) at λ = 450, 550, 650 nm. The filter assumes an incident RCP wave and polarization analysis with an ideal linear polarization filter aligned as shown in Figure S6b. b) Simulated transmittance, retardance, and extinction ratio of the quarter-wave plate with period = 325 nm, width = 45 nm and thickness = 240 nm as a function of wavelength (star in (a)); the extinction plot assumes implementation as an RCP filter as described in (a). c) Simulated extinction ratio of an RCP filter as a function of incidence angle using the starred QWP in (a). Fig. S10: Circular polarizer filter design. a) Schematic of the RCP filter showing the separation length. b) Simulated extinction ratio of the RCP filter as a function of the separation between the QWP and linear polarizer. Fig. S11: Neural network architecture and training. a) Structure of an expanded convolution layer used within the network; this expanded layer contains four convolution layers and two Hardswish activation layers. b) Architecture of the neural network: the input is upsampled and then downsampled, and one softmax and one deconvolution layer are specified after the expanded convolution layer blocks. c) Super-pixel positions used from the training dataset comprising images of an aluminum wafer, highlighted in white. d) Structure of the neural network visualized with an example input. e) Training loss curve of the neural network over 200 batches and 5000 batches. Fig. S13: Thin film fitting without calibration and using a point spread function filtering technique. a) Directly obtained measurement matrix data from the 4" wafer comprising silicon dioxide thin films on a silicon wafer. b) Uncalibrated silicon dioxide thickness map based on the best-fit thin film model. c) Calibrated measurement matrix data from the silicon dioxide-silicon sample using a globally fitted point spread function calibration method. d) Calibrated silicon dioxide thickness map based on the best-fit thin film model from (c). Fig. S14: Thin film fitting using first- and second-generation calibration kernels. a) Calibrated measurement matrix data from the 4" wafer comprising silicon dioxide thin films on a silicon wafer using first-generation correction kernels. b) Corresponding calibrated silicon dioxide thickness map based on the best-fit thin film model. c) Calibrated measurement matrix data from the same wafer using second-generation correction kernels. d) Corresponding calibrated silicon dioxide thickness map based on the best-fit thin film model. Table S4: Geometric parameters for the linear and circular polarizers.
6,910.4
2023-10-07T00:00:00.000
[ "Physics" ]
Design of Electronic Nose System Using Gas Chromatography Principle and Surface Acoustic Wave Sensor Most gases are odorless, colorless, and hazardous to sense with the human olfactory system. Hence, an electronic nose system is required for the gas classification process. This study presents the design of an electronic nose system using a combination of a gas chromatography column and a Surface Acoustic Wave (SAW) sensor. Gas chromatography is a technique based on compound partitioning at a certain temperature, whereas the SAW sensor works based on changes in its resonant frequency. In this study, gas samples of methanol, acetonitrile, and benzene are used to measure the system performance. Each gas sample generates specific acoustic signal data in the form of a frequency change recorded by the SAW sensor. The acoustic signal data are then analyzed to obtain the acoustic features, i.e., the peak amplitude, the negative slope, the positive slope, and the length. A Support Vector Machine (SVM) using these acoustic features as its input parameters is applied to classify the gas samples. A Radial Basis Function kernel is used to build the optimal hyperplane model, in two stages: the training process and the external validation process. The training process achieved an accuracy of 98.7% and the external validation process an accuracy of 93.3%. Our electronic nose system has an average sensitivity of 51.43 Hz/mL to the gas samples. Introduction Gas is a state of matter that has no independent shape and tends to expand indefinitely. Most gases are colorless and odorless, which makes them difficult to sense with the naked eye and the human olfactory system. In addition, gases with toxic odors must not be sensed using the human nose directly [1]. Therefore, an electronic device is required for gas recognition. Over the last decades, electronic nose devices have been used extensively in industry for quality monitoring, gas identification, chemical analysis, etc. Electronic nose technology mimics the capability of human olfaction using a sensor configuration and a pattern recognition algorithm [2,3].
In the electronic nose system, a sensor array is required to sense the odor. Metal-oxide-semiconductor (MOS) sensors such as the Taguchi Gas Sensor (TGS) are the type of sensor most widely used for gas sensing applications due to their simplicity [4,5]. However, they have low sensitivity, generally requiring high sample concentrations, i.e., in the range of parts per million (ppm) [6]. Another common gas sensor is the quartz crystal microbalance (QCM), which is able to sense odors at very low concentrations, i.e., single parts per million (ppm) or even parts per billion (ppb) [7,8]. To obtain sensitive gas sensing, an array of QCM sensors is used in the electronic nose [9,10]. However, such sensor arrays can lead to complexity and interference. Therefore, in this study, we constructed an electronic nose system with a simple configuration, high sensitivity, and good repeatability. A Surface Acoustic Wave (SAW) sensor was selected as the detector. In principle, both SAW and QCM sensors respond to mass loading through a shift of their acoustic resonance. In the analytical approximation, Sauerbrey's formula, presented in Equation 1, is widely used to determine the change of resonant frequency caused by mass absorbed on the crystal's surface [11]:

∆F = -2 F0² ∆m / (A √(ρμ))    (1)

where ∆F is the change of resonant frequency (Hz), F0 is the resonant frequency (Hz), ∆m is the mass change (g), A is the active crystal area (cm²), ρ is the crystal density (g/cm³), and μ is the shear modulus of the crystal (g/(cm·s²)). In 2014, Hari Agus Sujono et al. applied QCM sensor arrays with a resonant frequency of 20 MHz to a vapor identification system [9]. This type of sensor array induces a complex configuration and interference issues. Therefore, only a single SAW sensor, with a resonant frequency of 34 MHz, is used here to sense the odor. The SAW sensor used in this experiment operates at a higher resonant frequency, which increases the sensitivity, because the change of resonant frequency (∆F) in response to the mass absorbed on the crystal area depends on the resonant frequency (F0) for both sensors, as explained by Sauerbrey's formula. To achieve good selectivity in the electronic nose system, we applied a gas chromatography (GC) principle for the compound analysis. GC is a technique based on compound partitioning at a certain temperature, involving two phases, i.e., the stationary phase and the mobile gas phase. The stationary phase material is located in the chromatography column as the partition material, whereas the mobile gas phase consists of a sample carried by dry air into the partition column [12]. Each sample has a different elution strength because of the suitability of its polarity to the stationary phase material in the partition column [13]. In 2016, an electronic nose system integrating GC and a TGS sensor was built by Radi et al. [12]. However, the TGS sensor has a low sensitivity, which requires a high concentration for the measurement. Therefore, in this study, a combination of GC and a SAW sensor in the electronic nose system is expected to overcome these issues. For the recognition part of the electronic nose system, we used the Support Vector Machine (SVM) learning algorithm for the classification process. The SVM has been proposed as an effective technique for data classification. It is derived from the statistical learning theory introduced by Vladimir Vapnik et al. [14].
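As a quick numerical illustration of Equation 1, the sketch below compares the frequency shift of a 20 MHz and a 34 MHz device for the same adsorbed mass. The default material constants are the standard AT-cut quartz values; the actual constants of the SAW device are not given in the text, so these are assumptions:

```python
import math

def sauerbrey_df(f0_hz, dm_g, area_cm2, rho=2.648, mu=2.947e11):
    """Sauerbrey frequency shift (Hz) for a mass dm_g adsorbed on area_cm2.
    rho (g/cm^3) and mu (g/(cm*s^2)) default to AT-cut quartz values."""
    return -2.0 * f0_hz**2 * dm_g / (area_cm2 * math.sqrt(rho * mu))

# Same adsorbed mass seen by a 20 MHz QCM versus the 34 MHz device:
for f0 in (20e6, 34e6):
    print(f0, sauerbrey_df(f0, dm_g=1e-9, area_cm2=0.2))
# The 34 MHz device gives a (34/20)^2 ~ 2.9x larger shift for the same mass,
# which is the sensitivity argument made in the text.
```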
Another competitive learning algorithm is the Artificial Neural Network (ANN); both are supervised learning classifiers [15]. However, many researchers have reported that the SVM classifier often outperforms the ANN classifier [16]. The ANN classifier reaches a locally optimal solution, while the SVM classifier obtains a globally optimal solution. It is therefore not surprising that the solution of the ANN classifier differs between training runs, resulting in different optima, whereas the solution found by the SVM classifier is the same for every run, yielding the same optimal solution [17-19]. The contents of this paper are organized as follows: section 2 discusses the experimental design of the electronic nose system and feature extraction, and elaborates the SVM classifier technique; section 3 presents the results and verification analysis; finally, we present our conclusions in section 4. Research Method 2.1. The Experimental Design In this study, the experimental design of the electronic nose system includes four main parts, i.e., a gas sample, a GC column, a detector, and data analysis. Three types of gas samples were used in this study, i.e., methanol, acetonitrile, and benzene. The chromatography column contained Thermon-3000 and ShinCarbon as the stationary phase materials. A SAW device with a resonant frequency of 34 MHz was used as the detector to record the frequency change of the acoustic signal generated by each gas sample. The Experimental Procedure Figure 1 presents the design of our electronic nose system; the experimental setup is depicted in Figure 1a. The gas sample is transported by the carrier gas into the chromatography column, which is located in a chamber operated at a controlled temperature of 80 °C. Interactions between the stationary phase material and the gas sample compounds generate a series of fractions, which the SAW sensor converts into acoustic frequency change data. The acoustic signal data are then transmitted to the computer through a Frequency Counter (FC) device for data analysis. According to the measurements, the SAW sensor records a frequency of about 34 MHz in the initial condition, before the gas sample is injected. The frequency change is described as

∆f(t) = f(t) - f_ref    (2)

where ∆f is the frequency change, f_ref is the initial frequency of 34 MHz, and f(t) is the detected frequency after injecting the gas sample. Collecting the acoustic signal data produced by each gas sample takes 500 seconds. Figure 2 shows the sensor response to acetonitrile. Figure 2. The sensor response to the gas sample of acetonitrile. Feature Extraction of Acoustic Signal Processing In this study, the acoustic signal data are processed to obtain the acoustic features. Figure 3 describes the parameters used to determine the acoustic features, using a threshold of -100 Hz. The four acoustic features, i.e., the peak amplitude A_p, the negative slope S(-), the positive slope S(+), and the length L, are determined in Equations 3, 4, 5, and 6, respectively:

A_p = y_p    (3)
S(-) = (y_p - y_f) / (t_p - t_f)    (4)
S(+) = (y_r - y_p) / (t_r - t_p)    (5)
L = t_r - t_f    (6)

where t_f is the fall time, y_f is the fall amplitude, t_p is the peak time, y_p is the peak amplitude, t_r is the rise time, and y_r is the rise amplitude.
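A minimal sketch of this feature extraction, under the reading of Equations 3-6 given above (the -100 Hz threshold crossings define t_f and t_r, and the peak is taken as the most negative excursion of the trace; the paper's exact definitions may differ in detail):

```python
import numpy as np

def acoustic_features(t, df, threshold=-100.0):
    """Extract (A_p, S_minus, S_plus, L) from a frequency-change trace df(t).
    t_f/t_r are the first/last samples below the threshold; the peak is the
    minimum between them."""
    below = np.where(df < threshold)[0]
    i_f, i_r = below[0], below[-1]              # fall and rise crossings
    i_p = i_f + np.argmin(df[i_f:i_r + 1])      # peak (minimum) index
    t_f, t_p, t_r = t[i_f], t[i_p], t[i_r]
    y_f, y_p, y_r = df[i_f], df[i_p], df[i_r]
    A_p = y_p                                   # Eq. 3
    S_minus = (y_p - y_f) / (t_p - t_f)         # Eq. 4 (negative slope)
    S_plus = (y_r - y_p) / (t_r - t_p)          # Eq. 5 (positive slope)
    L = t_r - t_f                               # Eq. 6
    return A_p, S_minus, S_plus, L

# Synthetic dip loosely resembling the acetonitrile response at 5 mL
t = np.linspace(0.0, 500.0, 5001)
df = -1000.0 * np.exp(-((t - 150.0) / 40.0) ** 2)
print(acoustic_features(t, df))
```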
Support Vector Machine (SVM) Classifier In this study, we used the SVM classifier, with the four acoustic features as input parameters, to identify the gas type. The gas types are divided into three classes. To understand the basic principle of the SVM classifier, a simple linearly separable case is shown in Figure 4. In the original space, a linear hyperplane f(x) is used to separate the data points according to the support vector positions. The linear hyperplane f(x) groups the data points into two classes, i.e., class +1 and class -1, constrained by f(x) ≥ +1 and f(x) ≤ -1, respectively [20]. However, many real cases contain noise or outlier data points and are not linearly separable [21]. Thus, the main objective of the SVM classifier is to obtain the optimal hyperplane model that maximizes the margin (M) between the classes [22]. The SVM classifier uses kernels to optimize the hyperplane model for a nonlinearly separable case. The kernels transform the data points into a higher dimensional space, called the feature space, in which a linear hyperplane can be found, even though this may correspond to a nonlinear hyperplane in the original space. In this study, we used the Radial Basis Function (RBF) as the kernel function. The RBF kernel is given in Equation 7, and the hyperplane model f(x_d) is determined in Equation 8 [23,24]:

K(x_i, x_d) = exp(-γ ||x_i - x_d||²)    (7)

f(x_d) = Σ_{i=1..n} α_i y_i K(x_i, x_d) + b    (8)

where x_d is the data point, α_i is a Lagrange multiplier, y_i is the class membership of the gas sample, γ is gamma, x_i is a support vector, and b is the intercept. It is broadly reported that the SVM classifier with an RBF kernel requires the best combination of two hyperparameters, gamma (γ) and cost (c), to build the optimal hyperplane model. Gamma determines how far the influence of each data point in the training set reaches; for example, a higher value of gamma leads to over-fitting because the model tries to fit each training data point exactly. The cost controls the trade-off between a smooth decision boundary and classifying the training data points correctly [25,26]. In this study, gas identification using the SVM classifier consisted of two processes, i.e., the training process and the external validation process. The training process was used to build the hyperplane model and included the acoustic signal data from a total of 150 gas samples, while the external validation process was used to assess the SVM performance and used the acoustic signal data obtained from a total of 30 gas sample measurements. To describe the performance analysis of the SVM classifier, the 3x3 confusion matrix shown in Table 1 was applied [27]. In the confusion matrix, the actual result is the data based on observation (reality), consisting of three classes, i.e., classes A, B, and C, whereas the predicted result is the identification result given by the SVM classifier, which also consists of three classes. The cases are divided into nine values: TA, FA1, FA2, FB1, TB, FB2, FC1, FC2, and TC, where TA is the correctly classified class A, TB is the correctly classified class B, TC is the correctly classified class C, FA1 is class B classified into class A, FA2 is class C classified into class A, FB1 is class A classified into class B, FB2 is class C classified into class B, FC1 is class A classified into class C, and FC2 is class B classified into class C. Finally, the accuracy (AC) used to assess the SVM performance in classifying the gas samples is determined in Equation 9 [28]:

AC = T / N × 100%, with T = TA + TB + TC    (9)

where N is the total number of observations.
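A minimal scikit-learn analogue of this classification pipeline (the feature values, class labels, and train/validation split below are synthetic placeholders, not the paper's data):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for the four acoustic features of three gas classes
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(50, 4)) for m in (0, 3, 6)])
y = np.repeat(["methanol", "acetonitrile", "benzene"], 50)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", gamma=1.0, C=0.2).fit(X_tr, y_tr)  # paper's best gamma=1, c=0.2

pred = clf.predict(X_va)
print(confusion_matrix(y_va, pred, labels=["methanol", "acetonitrile", "benzene"]))
print(accuracy_score(y_va, pred))   # Equation 9: (TA + TB + TC) / N
```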
Results and Analysis In this study, we designed the electronic nose system with two concerns in mind, i.e., high sensitivity and good repeatability. Figure 5 shows the measured acoustic signal data recorded by the electronic nose system. Following Equation 2, each gas sample has a specific frequency change (∆f) curve determined by the mass of the gas sample absorbed on the crystal area of the SAW sensor. In terms of sensitivity, according to Figure 5a, using a 5 mL odor volume, the peak amplitudes determined by Equation 3 for gas samples A, B, and C are -2000 Hz, -1000 Hz, and -85 Hz, respectively. Applying 20 mL odor volumes, gas samples A, B, and C reach peak amplitudes of -2800 Hz, -1750 Hz, and -850 Hz, respectively. This means that the electronic nose system can sense the odors of methanol, acetonitrile, and benzene with sensitivities of 53.3 Hz/mL, 50 Hz/mL, and 51 Hz/mL, respectively; hence, the average sensitivity is 51.43 Hz/mL. Rivai et al. [29,30] designed an electronic nose using gas chromatography and a QCM sensor; according to its measured performance, it has an approximate sensitivity of 6.5 Hz/mL to the odor of ethanol. Another research paper shows frequency change curves for specific odors, fragrances, and gases only in the range of hundreds of hertz [31]. Our system offers a higher operating resonant frequency and is hence able to generate specific, distinctive acoustic signals from each gas sample in the range of thousands of hertz, as shown in Figure 5. In terms of repeatability, each gas sample compound produces specific acoustic signal data because it has particular interactions with the stationary phase material in the chromatography column. The main difference between the gas sample curves is their peak amplitude A_p. For example, in Figure 5a, the highest peak amplitude is reached by gas sample C and the lowest by gas sample A, and these trends are repeated in Figure 5b when using odor volumes of 20 mL. Furthermore, the acoustic signal data generated by each gas sample are processed to obtain the four acoustic features. Figure 6 presents the distribution of the four acoustic features, i.e., the peak amplitude A_p, the negative slope S(-), the positive slope S(+), and the length L, which refer to Equations 3, 4, 5, and 6, respectively. The distributions in Figure 6 include 50 measurements using 20 mL odor volumes; the distributions of the peak amplitude A_p, the negative slope S(-), the positive slope S(+), and the length L are shown in Figures 6a, 6b, 6c, and 6d, respectively. From Figures 6a-6d, we can conclude that the distributions of the acoustic features of gas samples A, B, and C constitute a nonlinearly separable case. The SVM recognition algorithm was used to solve this nonlinearly separable case. In this study, a total of 150 gas samples were used in the training process to build the optimal hyperplane model using the RBF kernel. The RBF kernel requires the best combination of the two hyperparameters, gamma and cost. In the hyperparameter tuning, we set the interval of gamma from 2^-15 to 2^2, whereas the cost has a lower limit of 2^-15 and an upper limit of 0.25.
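The hyperparameter tuning just described maps directly onto a grid search; below is a sketch using the stated gamma and cost ranges (the grid spacing within those ranges and the placeholder data are assumptions):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, size=(50, 4)) for m in (0, 3, 6)])  # placeholder features
y = np.repeat([0, 1, 2], 50)                                         # three gas classes

# Gamma swept from 2^-15 to 2^2; cost from 2^-15 up to 0.25 (spacing assumed)
param_grid = {"gamma": 2.0 ** np.arange(-15, 3),
              "C": np.append(2.0 ** np.arange(-15.0, -2.0), 0.25)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))  # paper: gamma=1, c=0.2, 98.7%
```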
Conclusion The design of an electronic nose system with good repeatability and high sensitivity, integrating the gas chromatography principle and a Surface Acoustic Wave (SAW) sensor, was successfully demonstrated. Three gas samples were used for the measurement process, i.e., methanol, acetonitrile, and benzene. In previous research, an electronic nose using gas chromatography and a QCM sensor had an approximate sensitivity of only 6.5 Hz/mL, and another study reported frequency change curves for specific odors, fragrances, and gases only in the range of hundreds of hertz. Based on our analysis, our electronic nose system has an average sensitivity of 51.43 Hz/mL and also offers a higher operating resonant frequency; hence, it is able to generate more specific and distinctive acoustic signals from each gas sample. The repeatability performance is shown by the distinctive acoustic signal curve of each gas sample, due to the specific interactions between the odor and the material located in the chromatography column. In this study, a Support Vector Machine with a Radial Basis Function kernel was applied to recognize the odors. The four acoustic features obtained from the acoustic signal data, i.e., the peak amplitude, the negative slope, the positive slope, and the length, were used as input parameters to the classifier. The classification using the Support Vector Machine was divided into two processes, i.e., the training process and the external validation process, which achieved high accuracies of 98.7% and 93.3%, respectively. These results indicate that the classifier can be applied to the electronic nose system. Finally, to achieve a comprehensive performance evaluation, future work will focus on a deeper investigation of the sensitivity and repeatability of the electronic nose system by varying the temperature of the chamber, the pressure of the air pump, and the type of kernel function in the Support Vector Machine algorithm. Figure 1. The design of the electronic nose system: (a) the experimental setup, (b) the schematic layout. Figure 4. The linearly separable case using the SVM method. Figure 5. The acoustic signal data for odor volumes of (a) 5 mL and (b) 20 mL. Figure 6. The acoustic features of the gas samples: (a) the peak amplitude, (b) the negative slope. Figure 7 presents the detailed distribution of the hyperparameter tuning performance. The dark blue color (left) has the lowest accuracy and the dark green color (right) the highest. According to the training process, the best combination of gamma and cost is 1 and 0.2, respectively, with an accuracy of 98.7%; of the total of 150 observations used for the training process, 148 are correctly classified and the others are incorrectly classified.
Table 2 shows the confusion matrix results from the external validation process. The results TA, FA1, FA2, FB1, TB, FB2, FC1, FC2, and TC are 9, 1, 0, 1, 9, 0, 0, 0, and 10, respectively. The SVM classifier has an accuracy of 93.3% in identifying the gas samples, which means that of the total of 30 observations used for the external validation process, 28 are correctly classified and 2 are incorrectly classified. These results indicate that the SVM classifier is a robust algorithm that can be integrated with the electronic nose system. Table 2. The 3x3 confusion matrix result.
4,373.6
2018-08-01T00:00:00.000
[ "Computer Science" ]
Automated Deception Detection Systems, a Review Humans use deception daily, since it can significantly affect their lives and provide an escape from any undesired situation. Deception is either low-stakes (e.g., innocuous) or high-stakes (e.g., involving harmful situations). The importance of deception investigation has increased, and it has become a critical issue over the years with the rise of security levels around the globe. Technology has made remarkable achievements in many fields of human life, including deception detection. Automated deception detection systems (DDSs) are widely used in different fields, especially for security purposes. A DDS comprises multiple stages, each of which should be built/trained to perform intelligently so that the whole system can correctly decide whether the person involved is telling the truth or not. Thus, different artificial intelligence (AI) algorithms have been utilized by researchers over the past years. In addition, different cues for DDS have been considered in previous work, related to either verbal or non-verbal features. This paper presents a review of the basic methods and deception detection techniques studied and applied in the field of DDS over the past 10 years, with a comparison of the deception detection accuracy reached and the number of participants used for system training. I. Introduction Deception is defined as concealing the truth from other individuals using face and body gestures [1]. People tend to use deception for many reasons. From a psychological perspective, there are two types of deception: low-stakes (face-saving) and high-stakes (malicious) deception. Low-stakes deception is related to human social life, and it is not necessary to detect it, while high-stakes deception must be detected because it is malicious; for example, interviewing is necessary to determine whether a suspect is guilty or innocent [2]. Much research has been conducted to detect the second type. Moreover, a person who tends to lie uses more cognitive load than an innocent person, because deception requires thinking and imagination before answering any question [3-8]. Recently, DDSs have been widely used in different applications, such as security, hiring new employees, criminal investigation, law enforcement, and terrorism detection [9]. The earliest implementation of a DDS was the polygraph test, commonly referred to as the lie detector, which screens suspected persons by measuring different physiological cues, such as blood pressure, pulse rate, brain activity, respiration, and skin color change [10-12]. The polygraph test has several drawbacks, such as requiring a high level of training and violating the participant's body (physical contact). It also suffers from high error rates: false positives for stressed innocent participants, or false negatives when emotions are controlled by guilty participants [13-17]. These problems prompted the use of other, more reliable and non-invasive techniques, such as visual feature extraction from the suspect's face and body. Deception features can be classified as either verbal or non-verbal, and each type contains specific categories.
The verbal cues are extracted from voice analysis, while non-verbal cues are extracted from various physical measures, including full body motion, head movement, facial expressions, eye gaze, pupil dilation, and eye blinking [18,19]. Figure-1 shows the classification of deception detection features. The next two sections discuss verbal and non-verbal features. II. Verbal Features The voice tone can directly reveal the internal intent of participants and determine whether the subject is deceptive or not. There are two states: the voice tone either rises or lowers. The tone rises when a person becomes angry or excited, while it lowers in sadness and shame. When a suspect talks, the voice tone differs depending on whether the person is under stress. Thus, voice can be used as a non-invasive technique for DDS. The voice tone is considered a verbal feature from which researchers can determine the deception state of participants [20]. A voice analysis-based DDS study [5] measured the mean fundamental frequency (F0) and formant frequencies (F1, F2). It concluded that when a person is under stress due to deception, the F0 value increases for all participants. The values of F1 and F2 also increase for some participants, but not all. Figure-2 shows the results of mean F0 at normal (baseline) and stressed states for 12 participants. Another study was designed to investigate deception using the human voice [21]. The database used was available online, with video clips collected from the real world. The designed algorithm consisted of several steps. First, the extracted speech segments are normalized, followed by applying a Hamming window to each speech signal. Then, the Discrete Wavelet Transform (DWT) is used to obtain time-frequency features of the selected speech signal. A reduction process is performed on the collected features, which includes the calculation of signal energy, entropy, skewness, kurtosis, and standard deviation. Finally, an Extreme Learning Machine (ELM) is used for classification. The detection accuracy was 91.66%, tested on only 24 speech examples. III. Non-verbal Features These are more commonly considered for DDS due to their efficiency and high detection accuracy. These features are listed in Figure-1; some of them, including eye blinking, head movements, and facial expressions, are discussed in detail below, since they are the most commonly used and have recently attracted increasing research attention. Psychological theories behind the non-verbal cues-based DDS technique were proposed in 1850 [22], 1851 [23], and 1872 [24]. However, Darwin's theory was not tested until recently [25], after several experiments were performed. The team declared that facial expressions produced while concealing emotions are completely different from those of persons not concealing anything [25]. Another study found that emotional leakage can happen anywhere on the human face. All the above-mentioned research works utilized the facial action coding system (FACS), which was developed in an earlier work [26]. FACS is a comprehensive system that distinguishes seven classes of emotion, namely anger, surprise, fear, sadness, happiness, disgust, and contempt. It categorizes all visual facial activities into 44 unique Action Units (AUs). Each AU is related to specific facial muscles.
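The pipeline of [21] (normalization, Hamming windowing, DWT, then statistical feature reduction) can be sketched as below; the wavelet family, decomposition level, and segment length are illustrative assumptions, and the ELM classification stage is not shown:

```python
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

def dwt_speech_features(segment, wavelet="db4", level=4):
    """Reduce one speech segment to the statistics used in [21]:
    energy, entropy, skewness, kurtosis, and standard deviation,
    computed on each DWT sub-band. Wavelet and level are assumptions."""
    # Normalize the segment and apply a Hamming window.
    seg = segment / (np.max(np.abs(segment)) + 1e-12)
    seg = seg * np.hamming(len(seg))

    features = []
    for coeffs in pywt.wavedec(seg, wavelet, level=level):
        energy = np.sum(coeffs ** 2)
        p = coeffs ** 2 / (energy + 1e-12)           # pseudo-probabilities
        entropy = -np.sum(p * np.log2(p + 1e-12))    # Shannon entropy
        features += [energy, entropy, skew(coeffs), kurtosis(coeffs), np.std(coeffs)]
    return np.array(features)

# Example: one hypothetical 1-second segment at 16 kHz.
rng = np.random.default_rng(0)
segment = rng.standard_normal(16000)
print(dwt_speech_features(segment).shape)  # 5 stats x (level+1) sub-bands = (25,)
```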
These AUs, used either singly or in combination, are also referred to as emotion-specified facial expressions. For example, representing the happy state requires activating both AU 6 and AU 12 [6,27,28,29]. The non-verbal features are: 1) Full Body Motion Full body motion means tracking the motion of all human body parts. Several techniques are used to detect and recognize full body motion. The first depends on silhouettes, without more detailed appearance information. The second depends on Histograms of Oriented Gradients (HOG), while the third uses deep learning [18]. 2) Eye Gaze This is another non-verbal technique, based on identifying eye gaze direction. The signal sent from the human eye is a rich source of information because it directly reveals the mental process. Eye gaze direction estimation is used to infer feelings, imagination, recollection of past events, lying, and internal dialogue. The gaze direction can thus give an indication of the mental process, which helps detect whether the person is innocent or guilty. Both eye motion and gaze direction estimation are non-verbal cues used in DDS [18,30,31]. 3) Head Movement Other cues for deception detection are based on analysing head movement and position. When suspected persons deceive, they tend to move their head in irregular patterns or in varying directions due to the use of more cognitive load, while innocent persons move their head in a regular pattern or a specific direction, or do not move their head during the interview, because they utilize less cognitive load. Many algorithms are used for determining head position; most depend on a holistic approach that can use either the displacement of a Region of Interest (ROI), i.e., a face part such as the eyes or mouth, or the whole head. The major advantage of this approach is that it provides a complete picture while drawing on local ROIs, which yields more comprehensive information [7]. A technique for head movement detection was proposed in [7]. The algorithm consists of several steps. The first step captures the first frame and transforms it from a coloured (RGB) image into a grayscale image, then performs face detection using the Viola-Jones algorithm. The second step selects a local ROI from the detected face image with no or little movement, to be used for optimal head motion estimation. The third step applies a convex hull function to determine the centroid, followed by determining reference points. When the next frame arrives, its centroid is computed to determine the output. Figure-3 shows the red point representing the centroid of the first frame, which is taken as the reference, while the centroid of the current frame is the yellow point, used to compute the current head position. Finally, the blue points are marked as reference points for computing the centroid of the current frame. This study was performed on ten participants, with a detection accuracy of 58.25%. Another study [32] focused on deception detection based on blob analysis. The technique analysed the movement of both head and hands, relying on skin-colour identification [32]. 4) Pupil Dilation When the eye pupil dilates, it becomes bigger than normal.
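A minimal sketch of the head-movement procedure described above (grayscale conversion, Viola-Jones face detection, ROI selection, convex-hull centroid, frame-to-frame displacement), using OpenCV's bundled Haar cascade; the detector parameters and the choice of ROI corner points are illustrative, not those of [7]:

```python
import cv2
import numpy as np

# Viola-Jones face detector shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_centroid(frame_bgr):
    """Return the convex-hull centroid of the detected face ROI, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Reference points: the four corners of the ROI; the centroid of their
    # convex hull serves as the head-position estimate for this frame.
    pts = np.array([[x, y], [x + w, y], [x + w, y + h], [x, y + h]],
                   dtype=np.int32)
    hull = cv2.convexHull(pts).reshape(-1, 2)
    return hull.mean(axis=0)

def head_displacement(reference_frame, current_frame):
    """Distance between the reference centroid (first frame) and the
    centroid of the current frame, as in the described algorithm."""
    c0, c1 = face_centroid(reference_frame), face_centroid(current_frame)
    if c0 is None or c1 is None:
        return None
    return float(np.linalg.norm(c1 - c0))
```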
The size of the pupil is affected by two factors: the muscles in the coloured part of the eye (iris) and the amount of light directed at the eye. 5) Eye Blinking The eye blinking count is one of the most common non-verbal cues for deception detection. It is the number of times the human eye blinks, and it is usually used together with eye blinking duration as features to distinguish lying from telling the truth. One related study [8] showed that blinking count and duration increase during deception. An algorithm was designed for detecting blinking using AU 45. The algorithm starts by capturing a sequence of images and then performs landmark detection on these images, as shown in the corresponding figure. In that study [8], the distances between the landmarks are used to detect whether the eye is open or closed. For instance, the distance for eye opening can be determined from the left eye lower (eye LL), left eye upper (eye LU), right eye upper (eye RU), and right eye lower (eye RL) points, as in Equation (1): Eye opening distance = [(eye LU − eye LL) + (eye RU − eye RL)] / 2 (1) For the blinking duration calculation, the study emphasized the necessity of using a high-speed camera with a specific resolution to calculate the time required for one frame. This time is multiplied by the total number of effective frames collected during the participant interview. The following formula calculates the eye blinking duration [8]: Blinking duration = number of frames × time for one frame (2) Another study, performed by Elkins, employed blinking rate to identify deception. The results of this study on 176 subjects showed that the blinking rate increased during deception, because the subjects bore more cognitive load while thinking about how to answer the questions during the interview. The detection accuracy of this study was 93% [33]. 6) Facial Expressions The human face is a rich source of emotional expression. Each facial muscle is responsible for a specific emotion, and these muscles are encoded into AUs according to FACS. Facial expressions are the most popular and most reliable cues for DDS. Each expression can be described by its related AU, where each AU corresponds to a single facial muscle or a combination of muscles. The AUs are encoded based on FACS to design a DDS that can distinguish innocent from guilty participants. A previous work [34] presented a DDS consisting of three stages. The first stage is video recording and dataset collection, in which each participant was asked several questions with either truthful or deceptive answers. The second stage is feature extraction in the form of AUs; eight AUs are used as indicators for deception. Table-1 lists the selected features for the proposed DDS. The study was performed on 43 participants, and the recorded videos were used for training and testing the system. The detection accuracy was 84%. Another research team [35] designed an automatic deception detection system that depends on facial cues. They detected specific AUs and used them as indicators for deception: AU1, AU2, AU4, AU12, AU15 and AU45. Table-2 shows the facial expression each AU is responsible for in that study. The detection accuracy of this technique was 76.92%. Finally, the differences between verbal and non-verbal features are explained in Table-3.
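A minimal sketch combining the eye-opening distance of Equation (1) with the duration formula of Equation (2); the landmark inputs, openness threshold, and frame rate are illustrative assumptions, not values from [8]:

```python
import numpy as np

def blink_stats(eye_lu, eye_ll, eye_ru, eye_rl, fps=120.0, threshold=2.0):
    """Count blinks and total blink duration from per-frame landmark
    y-coordinates (one array entry per video frame). A frame counts as
    'closed' when the eye-opening distance of Eq. (1) drops below
    `threshold` (pixels; an assumed value, not from [8])."""
    # Eq. (1): average vertical opening of the two eyes, per frame.
    opening = (np.abs(eye_lu - eye_ll) + np.abs(eye_ru - eye_rl)) / 2.0
    closed = opening < threshold

    # One blink = one maximal run of consecutive 'closed' frames.
    rising_edges = np.sum(np.diff(closed.astype(int)) == 1)
    blink_count = int(rising_edges + (1 if closed[0] else 0))

    # Eq. (2): duration = number of closed frames x time for one frame.
    blink_duration = closed.sum() * (1.0 / fps)
    return blink_count, blink_duration

# Hypothetical openness trace: two blinks in a 12-frame clip at 120 fps.
lu = np.array([5, 5, 1, 1, 5, 5, 5, 1, 1, 1, 5, 5], dtype=float)
print(blink_stats(lu, np.zeros(12), lu, np.zeros(12)))  # -> (2, ~0.042 s)
```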
Table 3 - The main differences between verbal and non-verbal features
Verbal Features | Non-Verbal Features
Use direct communication between participants and interviewer | Use non-direct communication between participants and interviewer
The speech signal is the only feature used | Include different types of cues, such as facial expressions, eye gaze, pupil dilation, head movements, eye blinking, and full body motion
Easy to analyze | Difficult to detect and analyze
Less popular and considered less efficient, because they achieve lower detection accuracy | More popular and highly efficient, because they achieve higher detection accuracy

IV. Discussion of the Used Deception Detection Techniques A DDS mainly consists of three stages: video capturing and pre-processing, feature extraction, and, finally, classification. After applying the required system stages, each research work used a different deception detection technique. The techniques used over the last decade are listed in Table-4, with the number of participants, feature details, and detection accuracy for each work. The table provides a broad view of recent DDS works and, accordingly, helps in recognizing their pros and cons so that a decision can be made on the most efficient techniques [34]. Analysing Table-4 with respect to accuracy, the facial micro-expression extraction technique used by [36] scored the highest accuracy of 85%. However, the number of participants was only 4, which reduced the load on the classification process. The works of [34] and [37] achieved the second-highest accuracy, 84% each. The work in [34] depended on facial expressions, specifically AUs, as the base technique for deception detection, while that in [37] measured temperature change in the nose area. In [34], 43 participants were tested, while in [37], only 11 participants were tested. Accordingly, the work of [34] is considered to present the optimum deception detection technique, since it accomplished high accuracy with a relatively high number of participants. Other researchers used other DDS techniques, achieving accuracies between 70% and 80%. The techniques used were facial expression, facial micro-expression, thermal imaging, and measuring brain activity. The highest accuracy among them was 79.2%, obtained by [40] using a thermal imaging technique with 27 participants. An accuracy of 70.26% with 270 participants was obtained by [43] using a Mafia game database collected from the web, based on facial expressions. The worst (lowest) accuracy of 58.25% was obtained by [7], which detected head movement with 10 participants. Finally, works which did not report accuracy cannot be discussed and compared here. V. Conclusions This paper presented an overview of the automated deception detection systems used in different applications, such as security, hiring new employees, criminal investigation, law enforcement, and terrorism detection. Different types of cues are used for deception detection; these cues fall into one of two categories, verbal or non-verbal. Non-verbal features are more likely to be used than verbal ones, due to their efficiency and high detection accuracy.
Additional details of these features were presented in this paper. The different deception detection techniques introduced by research works performed over the past decade were listed, including their accuracy levels, numbers of participants, and types of features used. These works' results were analyzed in detail after Table-4, and, accordingly, the work in [34] yielded the optimum DDS technique due to its high accuracy and relatively high number of participants.
3,826.2
2021-05-08T00:00:00.000
[ "Computer Science" ]
Effect of Fe Concentration on Fe-Doped Anatase TiO 2 from GGA + U Calculations To comprehend the photocatalytic mechanisms of anatase Ti1−xFexO2 with various concentrations of Fe, this study performed first-principles calculations based on density functional theory with Hubbard U on-site correction to evaluate the crystal structure, impurity formation energy, and electronic structure. We adopted effective Hubbard U values of 8.47 eV for Ti 3d and 6.4 eV for Fe 3d. The calculations show that higher concentrations of Fe are easily formed in anatase TiO2 due to a reduction in the formation energy. The band gap of Fe-doped TiO2 decreases as the Fe doping level increases, as a result of the overlap among the Fe 3d, Ti 3d, and O 2p states, which enhances photocatalytic activity in the visible light region. Additionally, a broadening of the valence band and Fe impurity states within the band gap might also contribute to the photocatalytic activity. Introduction The increase in global pollution has led researchers to search for new techniques and materials to promote environmental protection. Since Fujishima and Honda's report in 1972 [1], the unique properties of TiO 2 have attracted considerable attention in the fields of air and water purification, hydrogen production, and dye-sensitized solar cells. Anatase TiO 2 has a wide band gap, capable of only absorbing ultraviolet (UV) light (≤387 nm). UV light accounts for a small fraction (∼5%) of the available solar energy; therefore, the utilization of solar energy is low. To improve the photocatalysis of TiO 2 , determining how to expand the optical absorption into the visible light region (∼45%) has become a topic of considerable interest among researchers. Considerable research has gone into modifying the band gap of TiO 2 . One of the most effective methods involves doping TiO 2 crystals with impurities, including transition metals such as V [2], Mo [3], Fe [4][5][6], Co [7,8], Pt [9], and Au [10], and nonmetals such as N [11,12], F [13], P [14], and S [15]. Among these impurity elements, the radius of Fe 3+ (0.64 Å) is similar to that of Ti 4+ (0.68 Å), so Fe is easily incorporated into the TiO 2 crystal [16]. In addition, the Fe 3+ dopant can serve as a charge trap, impeding the electron-hole recombination rate and enhancing photocatalysis within a suitable dopant concentration range [17]. Fe is considered an appropriate candidate element and has been widely studied [18][19][20][21][22][23][24][25]. Zhang et al. [18] prepared Fe-doped mesoporous TiO 2 thin films and suggested that the doped Fe forms Fe 3+ ions, which could act as e − or h + traps, thereby reducing the e − /h + pair recombination rate. Wang et al. [19] reported that the band gap of Fe-doped TiO 2 thin films decreased from 3.29 to 2.83 eV with an increase in the Fe 3+ content from 0 to 25 wt%. The decrease in unit cell volume indicates that Fe 3+ replaced Ti 4+ in the lattice, forming a solid solution. Some experimental results [20,21] have found that Fe doping in TiO 2 could narrow the band gap, thereby increasing the efficiency of photocatalysis in the visible range. Yalçin et al.
[22] performed calculations based on density functional theory (DFT) to characterize the influence of Fe 3+ doping on the electronic and structural properties of TiO 2 . The results indicate that the visible light activity of Fe 3+ -doped TiO 2 is due to the introduction of additional electronic states within the band gap. Recently, first-principles calculations were conducted for Fe-doped TiO 2 , but these have been restricted to a single concentration of Fe [23][24][25]. To the best of our knowledge, few of these studies have focused on the photocatalytic mechanisms of Fe-doped anatase TiO 2 with different Fe concentrations. Additionally, most theoretical calculations have greatly underestimated the band gap of TiO 2 due to the adoption of the conventional DFT method, which is known to give an insufficient description of the on-site Coulomb interaction between electrons occupying the Ti 3d orbitals. This study performed first-principles calculations using the generalized gradient approximation + Hubbard U (GGA + U) approach to investigate the crystal structure, formation energy, and electronic structure of anatase Ti 1−x Fe x O 2 . Calculation Models and Methods Anatase TiO 2 has a tetragonal structure with lattice parameters a = b = 3.776 Å, c = 9.486 Å. To model various concentrations of Fe, a 2 × 2 × 1 supercell was constructed with 16 Ti and 32 O atoms, as shown in Figure 1(a). The calculations for the doped systems were conducted for the 2 × 2 × 1 supercell containing one, two, and three Fe atoms at substitutional Ti sites, as shown in Figures 1(b)-1(d), corresponding to Fe concentrations of 2.08, 4.17, and 6.25 at.%, respectively. First-principles calculations were performed using the CASTEP module [26] in Materials Studio 5.0, developed by Accelrys Software Inc. Electron-ion interactions were modeled using ultrasoft pseudopotentials in the Vanderbilt form [27]. The valence configurations of the atoms were 3s 2 3p 6 3d 2 4s 2 for Ti, 2s 2 2p 4 for O, and 3d 6 4s 2 for Fe. The wave functions of the valence electrons were expanded in a plane wave basis set with a cutoff energy of 400 eV. The Monkhorst-Pack scheme [28] k-point grid sampling was set at 4 × 4 × 3 (spacing less than 0.04 Å −1 ) in the supercells. The convergence threshold for self-consistent iterations was set at 5 × 10 −6 eV. The lattice parameters and atomic positions for each supercell system were first optimized using the generalized gradient approximation (GGA) together with the method introduced by Wu and Cohen [29]. The optimization parameters were set as follows: energy change = 9 × 10 −5 eV/atom, maximum force = 0.09 eV/Å, maximum stress = 0.09 GPa, and maximum displacement tolerance = 0.009 Å. To describe the electronic structures more accurately, the GGA + U method was adopted, with the strong on-site Coulomb repulsion among the localized Ti 3d electrons described according to the following formalism [30,31]: E GGA+U = E GGA + [(U − J)/2] Σ_σ [Tr ρ σ − Tr(ρ σ ρ σ )], where ρ σ denotes the spin (σ) polarized on-site density matrix. The spherically averaged Hubbard parameter U describes the increase in energy caused by placing an additional electron at a particular site, and the parameter J (1 eV) represents the screened exchange energy. The effective Hubbard parameter U eff = U − J, which accounts for the on-site Coulomb repulsion for each affected orbital, is the only external parameter required for this approach.
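The U eff calibration described in the paper (raising U eff for Ti 3d until the computed gap matches the 3.2 eV experimental value, and likewise fitting the Fe 3d U eff to the 2.2 eV gap of Fe 2 O 3) amounts to inverting a monotonic E g (U eff) curve. A minimal sketch, assuming a hypothetical set of scanned gap values standing in for the GGA + U runs of Figure 2:

```python
import numpy as np

# Hypothetical band gaps E_g (eV) computed at several U_eff values for Ti 3d,
# standing in for the GGA+U scan of Figure 2 (the gap widens with U_eff).
u_eff = np.array([2.0, 4.0, 6.0, 8.0, 9.0])
e_gap = np.array([2.35, 2.62, 2.90, 3.14, 3.28])

target_gap = 3.2  # experimental band gap of anatase TiO2 (eV)

# Invert the monotonic E_g(U_eff) relation by linear interpolation.
u_fit = np.interp(target_gap, e_gap, u_eff)
print(f"Calibrated U_eff ~ {u_fit:.2f} eV")  # close to the adopted 8.47 eV
```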
Results and Discussion 3.1. Structural Optimization. Table 1 summarizes the optimized lattice parameters, average bond lengths, and differences in volume obtained from the structure optimization. The optimized lattice parameters for pure anatase TiO 2 , a = b = 3.778 Å and c = 9.549 Å, are in good agreement with the experimental values of a = b = 3.782 Å and c = 9.502 Å [32], indicating that our calculations are reliable. In anatase TiO 2 , each Ti atom is bonded to its four nearest and two second-nearest oxygen neighbors. The average bond lengths are denoted Ti-O 1st and Ti-O 2nd . At the same concentration of Fe, all of the Fe-O bond lengths are shorter than those of Ti-O. Therefore, the volume decreases with an increase in Fe concentration, which is consistent with the experimental results [33]. This indicates that Fe doping causes a contraction of the overall volume, which may be the result of the difference in ionic radii: 64 pm for Fe 3+ and 68 pm for Ti 4+ . 3.2. Hubbard U Parameter. The GGA + U approach uses an intra-atomic electron-electron interaction as an on-site correction to describe systems with localized d and f electrons, capable of producing a more realistic band gap. Determining an appropriate Hubbard U eff parameter is necessary in GGA + U calculations to correctly capture the intra-atomic electron correlation. As shown in Figure 2, for anatase TiO 2 , the band gap widens as the effective Hubbard U eff of Ti 3d increases from 2 to 8 eV. Here, the on-site Coulomb interaction was set to U eff = 8.47 eV for Ti 3d in the GGA + U approach, and the calculated band gap of pure anatase is 3.21 eV, which is close to the experimental value of 3.2 eV. Using the same method, U eff = 6.4 eV for Fe 3d was determined by fitting the band gap of Fe 2 O 3 (2.2 eV). 3.3. Formation Energy. To examine the relative stability of TiO 2 doped with various concentrations of Fe, the defect formation energies were calculated according to the following formula: E f = E tot (Fe-doped) − E tot (pure) − n μ Fe + n μ Ti . Here, E tot (Fe-doped) and E tot (pure) are the total energies of Fe-doped TiO 2 and pure TiO 2 ; n is the number of substitutional Fe atoms; μ Fe and μ Ti represent the chemical potentials of the Fe and Ti atoms, respectively. The formation energy depends on the growth conditions, which can be Ti-rich or O-rich. The formation energy of Fe-doped TiO 2 is lower under O-rich conditions than under Ti-rich conditions, indicating that the incorporation of Fe into TiO 2 at a Ti site is more favorable under O-rich growth. In addition, the formation energies decrease with an increase in Fe concentration under O-rich conditions, suggesting that higher concentrations of Fe facilitate the synthesis of Fe-doped anatase TiO 2 . The electrons in the VB can be excited to localized impurity states within the band gap and subsequently to the CB through the absorption of visible light. In addition, the substitutional Fe atom only transfers 3 electrons to the surrounding O atoms, which leads to unfilled O 2p orbitals and an electron remaining on the Fe 3+ ion, as shown in Figure 4. At 4.17 at.% (Figure 3(c)), the overlap of the Fe 3d, Ti 3d, and O 2p bands near the CB results in a decrease in the CBM. Therefore, the band gap of Fe-doped TiO 2 at 4.17 at.% narrows to 2.70 eV. From 4.17 at.% to 6.25 at.% (Figure 3(d)), the CBM continuously moves toward the Fermi level, and hybridization among Fe 3d, Ti 3d, and O 2p near the Fermi level also occurs, resulting in a reduced band gap.
Figure 5 shows the relationships between E g , W VB , and Fe concentration. E g decreases with an increase in the Fe doping level, which is similar to other experimental results [18,19]. It also reveals that the decrease in E g from 2.08 at.% to 6.25 at.% is more pronounced than that from 0 to 2.08 at.%. W VB increases with an increase in Fe concentration due to the contribution from the lower Fe 3d band, which benefits the hole mobility in the VB. As a result, the electron transition energy from the valence band to the conduction band decreases with Fe doping, which may induce a red shift at the edge of the optical absorption range. In addition, the valence band was found to broaden after Fe was incorporated into TiO 2 due to the contribution from the lower Fe 3d states. W VB broadens with an increase in Fe concentration. The wider valence band results in an increase in the mobility of the photogenerated electron-hole pairs. In this manner, both the narrowing of the band gap and the increased mobility of the photogenerated carriers can improve the photocatalytic activity under visible light. Conclusions This study used the GGA + U method to investigate the influence of doping concentration on the crystal structure, impurity formation energy, and electronic properties of Fe-doped anatase TiO 2 . We adopted effective Hubbard U values of 8.47 eV for Ti 3d and 6.4 eV for Fe 3d to reproduce the experimental band gaps. The calculated results imply that higher concentrations of Fe facilitate the formation of Fe-doped anatase TiO 2 . In addition, doping anatase TiO 2 with Fe can effectively narrow the band gap, thereby increasing photocatalytic activity in the visible light region, and the band gap decreases further with an increase in Fe concentration. Both the broadening of the valence band and the Fe impurity states within the band gap might also enhance photocatalytic activity. Figure 2: Relationship between the effective Hubbard parameter (U eff ) and the band gap (E g ) for anatase TiO 2 and Fe 2 O 3 . Figure 3 shows the total density of states (TDOS) and the projected density of states (PDOS) used to investigate the electronic properties of Fe-doped anatase TiO 2 . The zero-point energy is taken as the Fermi level. The band gap (E g ) of pure anatase TiO 2 is 3.21 eV, as shown in Figure 3(a), consistent with the experimental value of 3.2 eV. In pure anatase TiO 2 , the valence band (VB) mainly comprises O 2p states with a small number of Ti 3d states, while the conduction band (CB) comprises Ti 3d states with a small number of O 2p states. This indicates that there is a slight covalent bonding character between the Ti and O atoms. The valence band of TiO 2 has a large bandwidth (W VB ) of approximately 4.63 eV, reflecting the delocalized character of the O 2p electrons. At 2.08 at.% Fe concentration (Figure 3(b)), we observe Fe 3d impurity states in the band gap, ranging from 1.74 eV above the valence band maximum (VBM) to 0.52 eV below the conduction band minimum (CBM).
Table 1: Optimized lattice parameters, average bond lengths, and volume difference (ΔV) of TiO 2 doped with various concentrations of Fe.

For TiO 2 , μ Ti and μ O satisfy the relationship μ Ti + 2 μ O = μ TiO2 . Under the O-rich growth condition, μ O is determined by the total energy of an O 2 molecule (μ O = μ O2 /2) and μ Ti is determined by μ Ti = μ TiO2 − 2 μ O . Under the Ti-rich growth condition, μ Ti is the energy of one Ti atom in bulk Ti and μ O is determined by μ O = (μ TiO2 − μ Ti )/2. The values of μ Ti are −1601.80 and −1592.61 eV under the O-rich and Ti-rich conditions, respectively. μ Fe is the energy of one Fe atom in bulk Fe, and the calculated value is −856.33 eV. Table 2 summarizes the calculated formation energies for TiO 2 doped with various concentrations of Fe. It should be noted that the smaller the E f value, the easier it is to incorporate the impurity into the TiO 2 supercell. The formation energy of Fe-doped TiO 2 is reduced to a greater degree under O-rich conditions than under Ti-rich conditions.

Table 2: Formation energy of Ti 1−x Fe x O 2 for different Fe concentrations.
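As a worked sketch of the formation-energy evaluation, using the chemical potentials quoted above; the supercell total energies are hypothetical placeholders, since only the resulting E f values appear in Table 2:

```python
# Chemical potentials from the text (eV).
MU_TI_O_RICH = -1601.80
MU_TI_TI_RICH = -1592.61
MU_FE = -856.33

def formation_energy(e_doped, e_pure, n_fe, mu_ti, mu_fe=MU_FE):
    """E_f = E_tot(doped) - E_tot(pure) - n*mu_Fe + n*mu_Ti,
    for n Fe atoms substituting n Ti atoms in the supercell."""
    return e_doped - e_pure - n_fe * mu_fe + n_fe * mu_ti

# Hypothetical supercell total energies (eV), for illustration only.
e_pure = -39754.20
e_doped_1fe = -39007.95   # one Fe substituting one Ti (2.08 at.%)

for label, mu_ti in [("O-rich", MU_TI_O_RICH), ("Ti-rich", MU_TI_TI_RICH)]:
    ef = formation_energy(e_doped_1fe, e_pure, n_fe=1, mu_ti=mu_ti)
    print(f"{label}: E_f = {ef:.2f} eV")  # lower E_f -> easier incorporation
```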
3,374
2012-07-15T00:00:00.000
[ "Materials Science" ]
Altimetry-based ice-marginal lake water level changes in Greenland: Unveiling annual variations in glacial lake outburst floods linked to runoff Greenland holds more than 3300 ice-marginal lakes, serving as natural reservoirs for outflow of meltwater to the ocean. A sudden release of water can largely influence ecosystems, landscape morphology, and ice dynamics, and cause flood hazards. While large-scale studies of glacial lake outburst floods (GLOFs) have been conducted in many glaciated regions, Greenland remains understudied. Here we use altimetry data to provide the first-ever Greenland-wide inventory of ice-marginal lake water level changes, studying over 1100 lakes from 2003–2023 and revealing a diverse range of lake behaviors. Around 60% of the lakes exhibit minimal fluctuations, while 326 lakes are actively draining, collectively contributing to 541 observed GLOFs from 2008–2022. These GLOFs vary significantly in magnitude and frequency, with the highest concentration observed in the North and North East regions. Our results show substantial annual differences in the number of GLOFs, and the variations are driven by annual differences in meltwater runoff, except for the South West region. Our method detected a 1200% increase in the number of draining lakes compared to existing historical databases. This highlights a significant underreporting of GLOF events and emphasizes the pressing need for a deeper understanding of the mechanisms behind, and the consequences of, these dramatic events. Introduction Globally, proglacial lakes, including ice-marginal lakes, hold approximately 0.43 mm of sea-level equivalent 1 . In Greenland, there are more than 3300 ice-marginal lakes 2 , and over the past three decades they have increased in number, while changes in their size have remained less clear [1][2][3] . These observed changes have been suggested to be associated with changes in ice sheet surface melt 3 and retreat of the ice margin 2,4 . Covering around 10% of the Greenland Ice Sheet (GrIS) ice margin and 5% of the Peripheral Glaciers and Ice Caps (PGICs) 4 , ice-marginal lakes exert an important influence on ice dynamics. They have been shown to accelerate glacier mass loss and terminus retreat through calving 5,6 , and ice velocities at the GrIS margin are roughly 25% higher for glaciers terminating in lakes compared to on land 4 .
The water outflow from ice-marginal lakes can vary from a constant discharge to sudden outburst floods, termed jökulhlaups or glacial lake outburst floods (GLOFs). These rapid drainage events have drastic consequences, including alterations in fjord circulation 7 and downstream geomorphology 8 , changes in local ice dynamics 9,10 and bedrock displacements 11 , as well as notable societal impacts 12 . The societal impacts of GLOFs in Greenland are smaller than in other regions (such as the Himalayas, the Swiss Alps and Iceland) because of the sparse population density, with few settlements in close proximity to the Ice Sheet and/or surrounding ice caps. Moreover, small-scale studies from Greenland have shown that the dynamics of GLOF events have evolved over time, resulting in changes in timing, frequency, release volume, and/or rerouting of the released water 8,13,14 . Despite recent comprehensive studies of ice-marginal lakes across Greenland utilizing optical and Synthetic Aperture Radar (SAR) satellite imagery and Digital Elevation Models (DEMs) 2,3 , the recorded incidents of GLOFs remain low at 153, with only 25 GLOF locations documented 15 . This count is notably low considering the substantial number of ice-marginal lakes in Greenland. Moreover, it greatly contrasts with the number of reported GLOFs in other glaciated regions globally [15][16][17] in large-scale studies, likely reflecting the differing relevance of GLOFs to societal infrastructure and, in some cases, preservation of life. This suggests a scarcity of documentation rather than a scarcity of GLOF events in Greenland, highlighting the need for further investigation and emphasizing that the phenomenon is largely understudied in this region. Typically, GLOFs in Greenland have been monitored at the individual level or in small regional areas, employing in situ observations 18 and remote sensing techniques 13,19 . This study presents the first-ever comprehensive, large-scale study of water level fluctuations in ice-marginal lakes across Greenland. Distinguishing itself from previous large-scale studies relying predominantly on optical and SAR satellite imagery 2,3,20 , our approach utilizes satellite and airborne altimetry data acquired from 2003 to the present (Data and Methods). We utilize pre-defined lake outlines 21 to extract altimetry measurements of all ice-marginal lakes larger than 0.2 km 2 (n = 1387). For each lake, we construct a reliable water level time series by applying a simple statistical outlier detection framework, and calculate the largest observed water level difference (dWL) during the observation period (Data and Methods). All lakes with a dWL exceeding 4 m are manually inspected and categorized into one of three general groups: i) lakes with GLOF behavior, ii) lakes without GLOF behavior but with an overall decrease in water level, and iii) lakes without GLOF behavior but with an overall increase in water level (Data and Methods). Finally, we investigate how annual changes in GLOFs relate to variations in runoff and ice dam thickness. Ice-marginal lake changes across Greenland Out of the 1387 ice-marginal lakes included in the study, more than 80% (n = 1152) had a minimum of two observations after the statistical filtering (Fig. 1), with an observation defined as the median water level of all altimetry measurements captured within one day. Our results show that 687 ice-marginal lakes have limited surface fluctuations, between 0 m and 4 m (Fig. 1 and Fig.
2), indicating that these lakes likely have a consistent and continuous water outflow. Some of the lakes may experience larger water level fluctuations occurring in between observations; however, as ~ 82% have five or more observations (Fig. 2), we expect this number to be limited. Ice-marginal lakes with limited water level fluctuations are found all around the margin of the GrIS and the PGICs, with the highest relative concentrations in the south west (SW) (~ 72%) and south east (SE) (~ 90%) sectors. Conversely, the lowest concentration is observed in the central east (CE) sector (~ 47%), whereas the central west (CW), north west (NW), north east (NE) and north (NO) sectors have similar concentrations, ranging from 61 to 63% (Fig. 1). Our findings reveal that 465 ice-marginal lakes experience large fluctuations exceeding 4 m, with an average of 24 observations per lake. We detect 326 lakes exhibiting GLOF behavior (category i), corresponding to more than a quarter of all 1152 ice-marginal lakes with altimetry observations. We detect a total of 541 GLOF events, with 45% of the lakes draining more than once (Fig. 3). As our approach does not capture all occurring GLOFs, this represents a conservative minimum estimate (Data and Methods). Nonetheless, we still find a notable increase of 301 lakes and 388 GLOF events when compared to existing databases documenting GLOFs in Greenland over the past century 15 . The highest number of GLOF events is detected in 2019 (n = 178), accounting for one-third of all events, including 75 events from lakes that exclusively drained in 2019 (Fig. 2). During the four years with complete ICESat-2 coverage (2019-2022), we identified a total of 510 events, among which 170 are one-off events from a single lake. Five lakes drained every year, while 29 lakes experienced three events. Additionally, 81 lakes drained twice, with 55% of them draining every second year. Lakes demonstrating GLOF behavior are observed across all regions of Greenland, and in many areas we find them located in close proximity to one another (Fig. 1 and Fig. 3). The NE, NO and SW sectors have the highest numbers of ice-marginal lakes with GLOF behavior, with 101, 78 and 58 lakes, respectively. The highest relative concentration, 43%, is observed in the CE sector. By calculating the difference between the pre- and post-GLOF water levels, we obtain an estimate of the minimum drainage magnitude of each GLOF event (Data and Methods). All sectors except the SE contain lakes with a drainage magnitude larger than 50 m, with the largest absolute and relative concentration in the SW, where 11 lakes correspond to 19% of all lakes in this sector. Additionally, the SW sector also has one of the highest concentrations of lakes with a 25-50 m drainage magnitude, along with the CE, both at 21%. Ice-marginal lakes without GLOF behavior but with a general decrease in water level during the observational period (category ii) are mainly found in the northern sectors (NE, NW and NO), whereas those with a general increase (category iii) are located in the NE and partly the CE sector (Fig. 1). Additionally, we observe a large number of lakes with an overall water level increase located close to Bredebrae and Storstrømmen in the NE sector (Fig. 1, NE zoom-in).
Discussion Water level time series, trends and simultaneous GLOFs Our analysis of water level time series for more than 1150 ice-marginal lakes reveals a diverse range of lake behaviors, along with large differences within lakes of the same category. This variability is influenced by several factors, such as lake size, shape and location, catchment area, runoff, damming glacier size and thickness, and, notably, the density and timing of observations. The latter has proved hugely important for properly gauging the water level variations for a large quantity of lakes on a Greenland-wide scale. Within the category of lakes exhibiting GLOF behavior, we find that those experiencing recurrent events tend to drain at decreasing water levels over time. While in some instances this may be linked to the timing of the observations taken prior to the event, we observe a similar pattern for lakes with dense observation coverage, as exemplified by Iluliallup Tasia and Lake Isvand (Fig. 4). Furthermore, upon examining optical satellite images of the selected lakes, we also identify a reduction in the pre-GLOF lake area (Fig. 4). Similar patterns have been observed in studies of individual lakes in Greenland and attributed to a thinning of the damming glacier 8,10,13 . Understanding whether this constitutes a general trend across Greenland requires a longer and more consistent dataset, which will become available as more data is continuously collected. However, large-scale studies of GLOFs from other glaciated regions detect only a moderate correlation between magnitude and glacier thinning 17 . The thinning of the damming glacier also influences those lakes which show no GLOFs and a general decrease in water level during the observational period (category ii) (Fig. S6). This can be induced by the lowering altitude of a potential spillway. However, in cases where a substantial decline in water level is observed from the early to the more recent observations, there is a potential risk of overlooking GLOFs that occur between observations. Lakes displaying a general increase in water level (category iii) are likely influenced by the thickening or advance of the damming glacier (Fig. S7), potentially coupled with changes in runoff from the catchment area into the lake. Alternatively, the rise in water level could be due to GLOFs occurring before the acquisition of the first altimetry observations, indicating that we are only obtaining measurements during the subsequent filling period. This was confirmed using optical satellite images for a small cluster of lakes dammed by Budol Isstrøm (Fig. 5). Two of the lakes drained simultaneously in late August 2017, while the third drained in the spring or summer of 2018, all prior to the first ICESat-2 observation. Given the observed close proximity of various draining lakes (Figs. 1 and 3), we hypothesize that simultaneous GLOFs are relatively common and may occur at multiple locations across Greenland. A similar case of simultaneous GLOFs is observed at A.B. Drachmann Glacier, the neighboring glacier of Budol Isstrøm, where two lakes drained simultaneously in 2022, following more than 3 years of water level increase (Fig.
S2). The simultaneous occurrences of these GLOFs might be initiated by changes in the damming glacier, such as an acceleration in ice velocity leading to the opening of cavities 23 or a decrease in (sub)glacial meltwater 24 , possibly induced by a sudden drop in air temperature 8 . Alternatively, it may be a cascading effect, where the sudden release of meltwater from one lake triggers an immediate response in others, as has been observed for supraglacial lakes 25 . However, the mechanism responsible for the simultaneous GLOFs is believed to be highly localized, as none of the other lakes dammed by Budol Isstrøm or A.B. Drachmann Glacier drained during the same period. Annual GLOF response to changes in meltwater runoff Throughout the observational period, we find marked annual fluctuations in the number of observed GLOFs (Fig. 6). From 2008 to 2018, the GLOF count is limited by the availability of observations. However, following the introduction of ICESat-2 in October 2018, there is a notable increase in both the annual number of observations and GLOFs (Fig. 6). Between 2019 and 2022, the number of observations remains almost constant, and thus cannot explain the substantial differences in the annual number of observed GLOFs during this period. Concurrently, there has been continuous thinning of the damming glaciers across all regions (Fig. 6 and S3). Ice dam thinning has been attributed to drainages 8,10,23 , which would imply a consistent increase in the annual number of GLOFs from 2019 to 2022. We find that the annual number of GLOFs follows the annual variability in RACMO-derived runoff (Fig. 6 and Fig. 7), indicating that the number of GLOFs is higher in years with high runoff, and vice versa when runoff is low. The correlation is evident both at a Greenland-wide scale and at a regional scale. Previous work 3 has suggested increased runoff as a factor driving regional changes in total lake volume along the CW and SW margins from 1987-2010, but it has never before been linked to large-scale variations in GLOFs. Larger volumes of runoff accelerate the filling of ice-marginal lakes, causing them to reach their maximum threshold more rapidly, and indicate that the system responds almost instantaneously to these changes. However, regional differences in the covariance and the negative trend in the SW region imply that fluctuations in runoff have a varying influence on the GLOF cycle in the different regions. In the SW region, the close clustering of three out of four years (2020, 2021, and 2022) in Fig. 7 indicates that the orientation of the trend is sensitive to the data from 2019. The SW has the highest runoff, receiving more than two times the amount of meltwater during an average year (2020) compared with some of the northern regions during extreme melt years (2019) (Fig. S4). The extreme runoff in the SW in 2019 does not result in more GLOFs, pointing to potential alternative controls. The additional meltwater received during extreme years could possibly result in earlier fill-ups and, consequently, earlier drainage dates in all regions. Determining whether the lakes experience earlier outbursts would require longer and more temporally consistent time-series data. However, recent large-scale studies of GLOF changes during the past decade from other glaciated regions report earlier outbursts 17 . Given the increase in runoff over the past 40 years (Fig.
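The covariance analysis described for Fig. 7 (normalizing each region's annual GLOF counts and runoff by their 2019-2022 totals, then correlating) can be sketched as follows; all yearly values here are hypothetical placeholders, not the paper's data:

```python
import numpy as np

years = [2019, 2020, 2021, 2022]

# Hypothetical annual GLOF counts and mean runoff (Gt) for two regions.
glofs = {"NE": np.array([60, 35, 45, 40]), "SW": np.array([20, 14, 12, 12])}
runoff = {"NE": np.array([95, 55, 70, 60]), "SW": np.array([310, 150, 160, 155])}

for region in glofs:
    # Normalize by the 2019-2022 totals so regions are comparable;
    # each series then sums to one, as in the cross-regional analysis.
    g = glofs[region] / glofs[region].sum()
    r = runoff[region] / runoff[region].sum()
    corr = np.corrcoef(g, r)[0, 1]
    print(f"{region}: Pearson r = {corr:.2f}")
```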
S5) and the anticipated acceleration in the coming decades in Greenland 26 , we hypothesize that the number of annual GLOFs has grown during the past decades and will persist as an upward trend in the future. This may be further accelerated by persistent future thinning of the damming ice. However, as runoff from the GrIS is currently 60% more variable from year to year compared to the recent three decades 27 , we would also expect large interannual variations. This contrasts with recent large-scale observations from Alaska, which show an unchanged frequency of GLOFs 16 , and points to potential differences in GLOF behavior between glaciated regions in the Arctic. Lake outlines and identification of ice-marginal lakes To accurately estimate changes in lake water levels using altimetry observations, precise and contemporary lake outlines are essential. We use lake outlines from the recently released mapping of Greenland conducted by The Danish Agency for Data Supply and Infrastructure (SDFI) 21 . The dataset has a high accuracy and internal consistency and is based on high-resolution satellite images (0.5 x 0.5 m) captured between 2018 and 2022, a period during which more than 96% of the altimetry data used in our analysis was captured. The dataset contains more than 150,000 lakes, each assigned a unique lake ID, with 10,073 identified as ice-marginal. This identification is based on the lakes' proximity to the GrIS and peripheral glaciers, with the criterion of being situated within a 100-meter buffer from the corresponding SDFI ice mask. We remove all lakes with an area smaller than 0.2 km 2 , as smaller lakes have limited altimetry coverage and consequently contain a high fraction of potentially inaccurate observations, leaving a total of 1387 ice-marginal lakes to be included in the study. Altimetry data Lake water levels are determined by combining altimetry observations from four different datasets (Table 1). We used altimetry data products such as ATL06 and GLAH06, as they are pre-processed and have previously been used for studying fluctuations in lake water levels 10,13,28 . All ICESat-2 data was downloaded from the NSIDC homepage 29 between July 1st and July 10th, 2023, with the most recent observations captured on April 16, 2023 (Table 1). The ATL06 product is chosen over the ATL13 dataset, which is specific to inland surface water, as the latter has extremely sparse coverage compared to ATL06. Conversely, the ATL03 photon dataset exhibits an exceptionally high density of observations, necessitating substantial computational capacity, likely with minimal additional information. Estimation of water level time series All altimetry datasets are merged and spatially filtered using the lake outlines, resulting in a comprehensive dataset of 15,745,605 altimetry measurements spanning all 1387 ice-marginal lakes. We apply a simple outlier detection framework to remove erroneous measurements located inside the lake area and calculate lake-specific water level time series. The framework is based on the statistical variability of our observations and is designed to effectively handle lakes with varying numbers of observations: 1. For each lake, we calculate the median elevation of each unique day (Observation median) based on all measurements acquired on that day. Subsequently, we determine the median elevation across all unique days (lake median WL) (Fig. 8) to prevent bias due to variations in the number of measurements across different days. 2.
Next, we calculate the absolute difference from all Observation medians to the lake median WL and calculate the median absolute deviation (MAD) (Fig. 8). We then define upper and lower thresholds using Eqs. (1) and (2), which we use to identify potential outliers:

Upper threshold = lake median WL + C * MAD   Eq. (1)
Lower threshold = lake median WL − C * MAD   Eq. (2)

This detection process employs a conservative outlier detection approach with C set to 3. We opt for the MAD method as it offers greater robustness in handling the large outliers observed in our dataset (Fig. 8). 3. If any Observation median falls outside the upper or lower bounds, we mark it as a potential outlier. However, we do not filter the observations immediately, but perform additional tests to check whether they are actual outliers: 4. If over 30% of the measurements during a day are within the thresholds, we remove the measurements outside and recalculate a new Observation median (Fig. 8). 5. If over 70% fall outside, we test the validity of the measurements. We filter out all measurements if they fail to meet the following criteria: i) total measurements > 3, ii) Observation STD < 20 m, and iii) difference between Observation median and lake median WL < 100 m (Fig. 8). 6. Lastly, we perform a final check on all Observation medians varying by more than 50 m from the lake median WL, to capture potentially undetected outliers. If the Observation median varies by more than 10 m from both the previous and following observation (date) within a 200-day window, the Observation median is removed (Fig. 8). This is based on the assumption that observations obtained within close temporal proximity are likely to be more similar. Following the outlier removal, we recalculate the Observation median water level of all remaining lake measurements and subsequently generate time series illustrating the variations in water level for each individual lake. Next, we calculate the largest observed water level difference (dWL) (max. water level − min. water level) of each lake to identify lakes with notable water level changes. Determining lake characteristics All lakes with a water level difference larger than 4 m were selected (n = 503) for subsequent analysis, as we are interested in lakes with substantial fluctuations. Within this subset, we examine the changes in water level and correct undetected outliers by manually removing them or by minimizing the lake area used for the spatial filtering using a negative buffer on the original lake area. In rare instances, we refer to optical imagery to confirm whether specific altimetry observations are indeed outliers. Following this additional filtering, we are left with 465 lakes with a water level difference exceeding 4 m. We classify these 465 lakes based on their water level changes and divide them into three general groups: i) lakes with GLOF behavior, ii) lakes without any apparent GLOF behavior but with an overall falling water level during the observational period, and iii) comparable to (ii) but with an overall rise in water level during the time series. There may be overlaps between categories, as lakes experiencing GLOFs might also exhibit rising or falling water levels. However, we assign each lake to only one category due to the relatively short observational time span of the majority of our observations. For each of the lakes exhibiting GLOF behavior, we manually determine significant key parameters such as the number of drainages, drainage year, and the pre- and post-drainage water levels. For some lakes, such as Lake Hullet (Fig.
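A condensed sketch of steps 1-3 of this framework (daily Observation medians, lake median WL, and the C = 3 MAD thresholds of Eqs. (1) and (2)); the per-day refinements of steps 4-6 are omitted, and the tabular input format is an assumption:

```python
import numpy as np
import pandas as pd

def screen_water_levels(df, c=3.0):
    """df: columns ['date', 'elevation'] for one lake's altimetry points.
    Returns daily Observation medians flagged as inliers/potential outliers
    using the MAD thresholds of Eqs. (1) and (2)."""
    obs = df.groupby("date")["elevation"].median().rename("obs_median")

    lake_median = obs.median()                    # lake median WL
    mad = np.median(np.abs(obs - lake_median))    # median absolute deviation

    upper = lake_median + c * mad                 # Eq. (1)
    lower = lake_median - c * mad                 # Eq. (2)

    out = obs.to_frame()
    out["potential_outlier"] = (obs > upper) | (obs < lower)
    return out

# Hypothetical input: three overpass days for one lake; the last day
# contains an erroneous high elevation that gets flagged.
df = pd.DataFrame({
    "date": ["2019-06-01"] * 3 + ["2019-08-15"] * 3 + ["2020-07-02"] * 2,
    "elevation": [412.1, 412.3, 411.9, 398.5, 398.7, 398.4, 655.0, 412.8],
})
print(screen_water_levels(df))
```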
4), limited observations make it challenging to discern seasonal variations and precisely estimate the magnitude and timing of GLOFs. Nevertheless, GLOFs occurring in 2020, 2021, and 2022 can be identified. Conversely, for lakes with more frequent observations, such as Iluliallup Tasia and Lake Isvand (Fig. 4), we can more accurately determine the seasonal evolution and GLOF characteristics. We define the magnitude of each GLOF event as the difference between the pre- and post-drainage water levels. The drainage magnitude serves as a valuable indicator for assessing the scale of these events. However, as it depends on the timing of the observations, it should be interpreted as a minimum magnitude. Additionally, it does not provide a direct measurement of the volume of released water, as the metric depends on the lake's bathymetry. Consequently, ice-marginal lakes with extensive surface areas and drainage magnitudes of less than 10 meters can still discharge significant volumes of water, while smaller lakes with larger drainage magnitudes may release smaller volumes. Accuracy assessment of the water level time series To validate our findings, we conduct a comparison between the water level fluctuations derived from altimetry data and the observed changes in lake area in optical satellite images for 110 draining lakes. As some of these 110 lakes drained more than once during the observation period, our validation includes a total of 184 GLOF events. For each of these events, we manually examined optical satellite images to confirm whether a drainage in the altimetry-based water level coincided with alterations in the lake area. Out of these, we successfully confirm 95% of the GLOFs. For lakes with more detailed mapping of changes in the lake area (Figs. 4 and 5), we observed a high agreement with the observed fluctuations in water level and the timing of the GLOFs. Nevertheless, there may be instances where we erroneously document a drainage event using our altimetry record. This could be attributed to various factors, such as erroneous altimetry observations, frontal fluctuations of the damming glaciers, or the presence of large floating icebergs in the lakes. When comparing the generated water-level time series with findings from comprehensive, lake-specific studies 8,10 , it becomes apparent that our method may not detect all drainage events, nor the specific timing of events (Fig. 9). In some cases, this is attributed to GLOFs occurring during periods when altimetry observations are limited, such as before the launch of ICESat-2. Additionally, drainage events may be missed due to a lack of observations, particularly for smaller lakes that receive only sparse altimetry coverage throughout the year (Fig. 9, Russel Glacier ice-dammed lake). Expectedly, our large-scale approach does not capture all drainage events or their precise timing, as detailed lake-specific studies do. This emphasizes the trade-off between detail and analyzing a broader dataset spanning numerous lakes across Greenland. Consequently, our results on GLOF occurrences represent a conservative minimum estimate. For some lakes, we observe time series with unusual and abrupt water-level fluctuations, such as the sudden increase observed in 2011 at Lake Tininnilik, which drained in 2010 (Fig. 9). These instances primarily occur following drainage events, when the lake area is reduced. During lower water levels, we include numerous altimetry observations from outside the post-drainage lake area, i.e.
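A minimal sketch of how GLOF events and their minimum magnitudes (pre- minus post-drainage water level) could be flagged from an Observation-median time series; the 4 m threshold echoes the dWL screening above, but treating every larger between-observation drop as an event is a simplification, not the paper's manual procedure:

```python
import numpy as np

def detect_glofs(dates, water_levels, min_drop=4.0):
    """Flag candidate GLOFs as drops larger than `min_drop` metres between
    consecutive observations; magnitude = pre- minus post-drainage level,
    a minimum estimate since the true extremes may fall between passes."""
    events = []
    for i in range(1, len(water_levels)):
        drop = water_levels[i - 1] - water_levels[i]
        if drop > min_drop:
            events.append({
                "window": (dates[i - 1], dates[i]),
                "magnitude_m": float(drop),
            })
    return events

# Hypothetical Observation medians for one lake.
dates = ["2019-05", "2019-07", "2019-09", "2020-06", "2020-08"]
levels = np.array([410.2, 412.5, 396.0, 409.8, 395.5])
print(detect_glofs(dates, levels))  # two candidate events, ~16.5 m and ~14.3 m
```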
from the surrounding bathymetry/bedrock. Consequently, observations from the same overflight often exhibit a much larger internal variance, whereas observations from different days may differ significantly due to variations in the drained lake bathymetry. In some cases, refining the lake outline for a more precise spatial filtering of the observations considerably improves the water level time series. Lakes showing a general decrease in water level, which typically coincides with a decrease in lake area, may also be influenced by spatial filtering issues. In addition, if the lake-terminating glacier front has undergone substantial retreat or advance during the observational period, there is a risk that measurements might include elevation observations of the ice rather than the lake water level. However, decreasing the lake size too drastically may end up excluding observations with potentially important information. Striking the right balance in selecting the most appropriate criterion for outlier detection is crucial. Being overly conservative in the statistical outlier removal may also result in the inadvertent exclusion of abrupt water level changes as outliers. Automated water detection frameworks applied to optical images, commonly used in studies of ice-marginal lakes 2,3 , rely heavily on cloud-free images with a distinct water body reflectance. Thus, these approaches would likely not have been able to capture the observed changes in Fig. 5, as the lakes are often covered by ice or snow, which is not uncommon for ice-marginal lakes located in the northern regions. This limitation emphasizes one of the primary advantages of using altimetry data for large-scale, Greenland-wide studies of lake changes, as it is largely independent of clouds, snow, ice, and lake water turbidity. Annual runoff The mean annual runoff is determined at Greenland-wide and regional scales using RACMO2.3p2 at a 1 km resolution 30 . Ice dam elevation The elevation of the ice dams is calculated using the PRODEM 31 dataset, which contains annual summer elevations of the ice sheet marginal zone between 2019 and 2022 at a 500 m resolution 32 . For all lakes exhibiting GLOF behavior, we identify the centroid point and extract the annual elevation value from the closest PRODEM pixel located 1500 m from the lake centroid, to avoid erroneous observations from the frontal, fluctuating part of the ice margin. For each lake, we rank the ice dam elevations from one to four, assigning the highest rank (1) to the year when the ice dam had its maximum elevation. Finally, we calculate the mean annual rank of all ice dams. This implies that a higher mean annual rank corresponds to thinner glaciers. Figure 4: Time series of water level and lake area changes for Iluliallup Tasia, Lake Isvand and Lake Hullet. These lakes have previously been found to produce substantial GLOFs 22 . Error bars indicate the STD of all measurements.
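The ice-dam ranking just described can be sketched in a few lines; the elevation table below is hypothetical, standing in for PRODEM values extracted near each dam:

```python
import pandas as pd

# Hypothetical PRODEM summer elevations (m) near three ice dams, 2019-2022.
elev = pd.DataFrame(
    {2019: [812.0, 645.2, 1004.1],
     2020: [810.5, 644.8, 1003.0],
     2021: [809.9, 643.9, 1002.4],
     2022: [808.7, 643.1, 1001.5]},
    index=["dam_A", "dam_B", "dam_C"],
)

# Rank years per dam: rank 1 = year with the maximum ice-dam elevation.
ranks = elev.rank(axis=1, ascending=False)

# Mean annual rank across all dams; a higher mean rank in a given year
# indicates thinner ice dams that year.
print(ranks.mean(axis=0))
```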
Declarations

Data and code availability

Evolution of lakes dammed by Budol Isstrøm. A) Time series of water level and lake area changes; note the concurrent GLOF of lake 29377 and a second lake in 2017. B) Optical Sentinel-2 images of the observed maximum and minimum lake area of each lake from 2017-2023. Due to ice and snow cover on the lakes, the minimum and maximum lake extent may be difficult to determine. The observed lake changes would not be detected using automated water detection frameworks applied to optical images, which highlights one of the primary advantages of altimetry data. Note the general agreement between fluctuations in lake area and water level.

Variations in runoff and annual number of GLOFs. A) Annual changes in GLOFs and runoff; the runoff anomaly is relative to a 2000-2022 baseline. B) and C) show the covariance between the annual number of GLOFs and annual mean runoff at B) a Greenland-wide level and C) within and across regions. For cross-regional comparison, we normalize the annual GLOFs and runoff by calculating the ratio to the total amount across all years (2019-2022) within a region; thus, the values for each region sum to one on both axes. Note that the SE region is not included in the correlation plots (B and C) due to the low number of annual GLOFs.

Table 1: Overview of the types of altimetry data included.
“Saving Lives, Protecting Livelihoods, and Safeguarding Nature”: Risk-Based Wildlife Trade Policy for Sustainable Development Outcomes Post-COVID-19

The COVID-19 pandemic has caused huge loss of life, and immense social and economic harm. Wildlife trade has become central to discourse on COVID-19, zoonotic pandemics, and related policy responses, which must focus on “saving lives, protecting livelihoods, and safeguarding nature.” Proposed policy responses have included extreme measures such as banning all use and trade of wildlife, or blanket measures for entire Classes. However, different trades pose varying degrees of risk for zoonotic pandemics, while some trades also play critical roles in delivering other key aspects of sustainable development, particularly related to poverty and hunger alleviation, decent work, responsible consumption and production, and life on land and below water. Here we describe how wildlife trade contributes to the UN Sustainable Development Goals (SDGs) in diverse ways, with synergies and trade-offs within and between the SDGs. In doing so, we show that prohibitions could result in severe trade-offs against some SDGs, with limited benefits for public health via pandemic prevention. This complexity necessitates context-specific policies, with multi-sector decision-making that goes beyond simple top-down solutions. We encourage decision-makers to adopt a risk-based approach to wildlife trade policy post-COVID-19, with policies formulated via participatory, evidence-based approaches, which explicitly acknowledge uncertainty, complexity, and conflicting values across different components of the SDGs. This should help to ensure that future use and trade of wildlife is safe, environmentally sustainable and socially just.

INTRODUCTION

Background

The COVID-19 pandemic has caused a worldwide state of emergency, with immense human suffering, loss of life, and socio-economic instability. Several early cases of COVID-19 were traced to a wet market in Wuhan, China, which traded domestic and wild animals (Wu et al., 2020). These early cases raised concerns about the role of wildlife trade in the emergence of COVID-19 and zoonotic diseases more broadly. A wide range of policy responses have been suggested. Extreme ones include calls to ban use and trade of wildlife entirely (Singh Khadka, 2020), or blanket global measures for entire Classes of wildlife, in the belief that this will protect public health, while also improving animal welfare and delivering conservation goals (The Lion Coalition, 2020;Walzer, 2020). Others have called for more balanced or targeted approaches, directed toward critical control points in the supply chain, or specific species which are more likely to harbor zoonotic viruses (Petrovan et al., 2020;Roe and Lee, 2021). Some governments have acted decisively to implement new policy measures. For example, China's top legislature adopted a decision to "thoroughly ban the illegal trading of wildlife and eliminate the consumption of wild animals to safeguard people's lives and health." This decision covers all terrestrial wild animals, while fish, wild plants, amphibians and reptiles, as well as animal products for non-edible use, remain exempt from this measure, with their use regulated under other instruments (Li, 2020;Koh et al., 2021).
Vietnam temporarily banned imports of wildlife and wildlife products (with some exemptions for various non-edible products), and called for enforcement of existing laws to eliminate advertising, buying, selling and consumption of illegal wildlife products (Prime Minister of Vietnam, 2020). Similarly, a resolution was passed in Bolivia re-stating bans on wildlife trade and consumption as a matter of public health (Ministerio de Medio Ambiente y Agua, 2020). In Gabon, a more targeted approach has been adopted, via a ban on consumption of bats and pangolins (AFP, 2020). However, while bats have been identified as a likely primary reservoir of COVID-19, evidence that the pandemic emerged due to wildlife trade remains inconclusive (Andersen et al., 2020;Huang et al., 2020;Shereen et al., 2020). Moreover, wildlife trade can both help and hinder the delivery of a broad range of health, livelihood and nature conservation outcomes, underpinning multiple UN Sustainable Development Goals (SDGs). While saving lives through pandemic prevention is undoubtedly a top policy priority, silver-bullet approaches such as blanket bans fail to acknowledge the heterogeneous public health risks present across species and contexts, and the diverse roles of wildlife trade in delivering sustainable development outcomes (Challender et al., 2015;UNEP and ILRI, 2020;Wang et al., 2020). These top-down approaches also fail to account for the complexity, uncertainty and plurality of values associated with wildlife trade, with non-compliance and the emergence of illicit markets potentially undermining such approaches (Fournie et al., 2013;Bonwitt et al., 2018;Zhu and Zhu, 2020). Instead, policy responses to the pandemic should focus holistically on "saving lives, protecting livelihoods, and safeguarding nature" (IPBES, 2020), all of which are fundamental to delivering the SDGs. To broaden the discourse, we describe how wildlife trade affects sustainable development in diverse, complex and dynamic ways, with synergies, trade-offs and feedbacks within and between the SDGs. Based on this, we argue that a risk-based multi-sector approach to wildlife trade policy post-COVID-19 can support health, livelihoods, and the conservation of nature. We suggest how decision-makers might evaluate these trade-offs and synergies for different species and contexts in order to formulate risk-based policies through six illustrative case studies. Finally, we offer some general principles and processes for using such evaluations in decision-making in the face of uncertainty, complexity and plurality of values. Overall, we encourage decision-makers to take a more holistic and participatory approach to wildlife trade, and to adopt risk-based policies which minimize public health risks, while enhancing benefits across other dimensions of wildlife trade for sustainable development.

The Diverse Roles of Wildlife Trade in Meeting the Sustainable Development Goals

Wildlife trade is the sale or exchange of wild animals, fungi and plants, and their derivatives (Broad et al., 2002). It is extremely diverse and dynamic, encompassing a wide range of species, actors and supply chains at various scopes and scales, with different markets varying in their legality, sustainability and social legitimacy ('t Sas-Rolfes et al., 2019).
For example, local trade of wild fungi in Ozumba, Mexico, is safe, sustainable, contributes to local livelihoods, and maintains traditional ethnobiological knowledge (Pérez-Moreno et al., 2008); game ranching makes a significant contribution to South Africa's GDP, and can incentivize land and wildlife stewardship (Pienaar et al., 2017). In contrast, international trade in sea cucumbers is driving stock collapses, which is undermining coastal livelihoods and is associated with illegal fishing activities (Purcell et al., 2013;González-Wangüemert et al., 2018). Similarly, high-value trade in pangolin parts has depleted some populations in Asia, with much trafficking attention now focused on Africa (Challender et al., 2020). With this diversity, wildlife trade has direct positive and negative contributions to the '5Ps' of the SDGs (People, Prosperity, Peace, Partnerships and Planet), and indirect contributions via SDG interactions, feedbacks and policy interventions (Figure 1).

"Saving Lives, Protecting Livelihoods": Direct Contributions Toward SDGs for People and Prosperity

The hunting, transportation and consumption of some wild animals can increase the risk of zoonosis emergence, and thus hinder progress toward good health and well-being (SDG 3) (Swift et al., 2007;UNEP and ILRI, 2020). Zoonotic pandemics can cost billions or even trillions of dollars in economic and social burden, also hindering progress toward no poverty and decent work (SDGs 1 and 8). For example, in the 2014 Ebola outbreak in West Africa, over 11,000 people lost their lives with a total economic burden estimated at US$ 53 billion (Huber et al., 2018), while the economic opportunity costs of the COVID-19 pandemic could amount to $10trn in forgone Gross Domestic Product (GDP) over 2020-21 (The Economist, 2021). Overexploitation also undermines progress toward responsible consumption and production (SDG 12) and can create poverty traps, thus weakening the capacity of ecosystems to support good health, well-being and poverty alleviation (SDGs 1 and 3) (Pienkowski et al., 2017).

FIGURE 1 | Illustrative examples of some general positive (green) and negative (red) contributions of wildlife trade to the Sustainable Development Goals (SDGs). Direct contributions are denoted by arrows in the center of the diagram, while interactions between the SDGs are denoted by arrows around the outside (with trade-offs in red and synergies in green). This diagram is illustrative only; it is not intended to provide a complete review of all types of wildlife uses and trades, and their contributions and interactions.

Conversely, wildlife trade also supports the diets and livelihoods of hundreds of millions of people, helping to deliver no poverty, zero hunger and decent work and economic growth (SDGs 1, 2 and 8, respectively) (Roe et al., 2020;Wang et al., 2020). For example, American bullfrog (Lithobates catesbeianus) is a common delicacy in China, with a farming industry valued at around US$ 120 million per year, which employed 24,000 people in 2016 (Chinese Academy of Engineering, 2017). In some cases, wildlife trade chains primarily involve female traders -for example, in Ghana, bushmeat wholesalers and market traders in urban areas are all women (Mendelson et al., 2003) -and these livelihood opportunities create important contributions to gender equality (SDG 5).
Wildlife trade also has socio-cultural significance in rural and urban contexts worldwide (Alves and Rosa, 2013), such that restricting access to wildlife can harm social justice, particularly amongst indigenous and marginalized communities, thus hindering progress toward reduced inequalities (SDG 10), peace, justice and strong institutions (SDG 16) and partnerships for the goals (SDG 17) (Antunes et al., 2019). Alternatively, sustainable wildlife management, which is developed and implemented under good governance conditions and through fair participatory processes, can have positive impacts on security and support SDGs 16 and 17 (Cooney et al., 2018;Roe and Booker, 2019; Figure 1).

"Safeguarding Nature": Direct Contributions Toward SDGs for Planet

Wildlife trade can both help and hinder the protection of life below water (SDG 14) and on land (SDG 15). For example, nearly three-quarters of threatened or near-threatened species are being over-exploited for trade and/or subsistence purposes (Maxwell et al., 2016), representing a leading global threat to biodiversity (Tilman et al., 2017). For several Critically Endangered taxa, such as rhinos, pangolins and wedgefish, trade-driven overexploitation represents the greatest threat to their survival (Maxwell et al., 2016;Kyne et al., 2019;Challender et al., 2020). Capture and trade can also harm the welfare of individual wild animals, particularly the live animal trade, which can cause high stress and mortality (Baker et al., 2013). Conversely, well-managed, sustainable trade can have benefits for biodiversity (Heid and Márquez-Ramos, 2020;McRae et al., 2020). For example, regulated trade in vicuña wool fiber in Bolivia allowed the recovery of the species from near-extinction, with direct benefits from harvesting for local communities and an estimated contribution of US$ 3.2 million to the national economy per annum (Cooney, 2019). Similarly, carefully managed trade of saltwater crocodiles has aided population recovery in Australia, with population density at least doubling since the introduction of an egg harvesting initiative [which also provides US$ 515,000 per year in income to Aboriginal communities (Fukuda et al., 2011;CITES and Livelihoods, 2019b)]; regulated hunting of bighorn sheep in the USA and Mexico has helped once-dwindling populations to recover at least three-fold, whilst funding conservation of associated ecosystems (Hurley et al., 2015); and game ranching in South Africa incentivizes private land stewardship (Pienaar et al., 2017; Figure 1), all of which pose little-to-no public health risk. In general, wildlife trade policies that incentivize sustainable use typically have more immediate positive effects on wildlife populations than outright trade bans (Heid and Márquez-Ramos, 2020).

Indirect Impacts on the SDGs Through Interactions, Policy Interventions and Feedbacks

The above examples also indicate interactions between the SDGs, such as trade-offs and feedbacks, which arise from wildlife trade. SDGs can interact in many ways, with potential cascading effects (Nilsson et al., 2016, 2018), and those which are most pertinent to COVID-19 and wildlife trade relate to counteracting interactions between food security, public health and life on land. For example, while trade and consumption of horseshoe bats may provide nutritional benefits for some people, they can also pose widespread public health risks (Mickleburgh et al., 2009;Wong et al., 2019), creating a trade-off between SDGs 2 and 3, and within SDG 3.
In other cases, the substitution of wildlife with domestic livestock could drive agricultural expansion, and exacerbate anthropogenic drivers of zoonosis emergence (Allen et al., 2017;Booth et al., 2021), thus hindering progress toward improved health, responsible consumption, and life on land (SDGs 3, 12 and 15). Conversely, these interactions can also be reinforcing. For example, sustainable use of wild-sourced natural resources may contribute to food security (SDG 2), and reduce land use change and carbon emissions from commercial agriculture, thus contributing to life on land (SDG 15) with potential synergies for climate action (SDG 13) (Figure 1). Wildlife trade policy interventions can also create feedbacks and unintended consequences for the SDGs. For instance, restricting wildlife trade can have conservation benefits (SDGs 14 and 15), but may harm food security, health and well-being (SDGs 2 and 3) (Larrosa et al., 2016;Bonwitt et al., 2018;Short et al., 2019). Overly stringent or socially illegitimate regulation can also lead to non-compliance and black markets, which can erode security and institutions (SDG 16) (Bonwitt et al., 2018;Oyanedel et al., 2020), and can backfire, leading to further declines in populations of threatened species (Leader-Williams, 2003). Overall, wildlife trade and its contributions to society are complex, uncertain and divergent. Designing policy interventions in response to COVID-19 therefore requires a holistic multi-sector approach, which explicitly acknowledges trade-offs, feedbacks and pluralistic values, and seeks to minimize direct public health risks from zoonoses, whilst optimizing benefits across other SDGs.

A WAY FORWARD: DATA AND PROCESS FOR HOLISTIC POLICY RESPONSES

Minimizing disease risk whilst delivering other SDGs requires that policy responses explicitly acknowledge the broader socio-ecological context of wildlife trade (Bonwitt et al., 2018;Eskew and Carlson, 2020;Zhu and Zhu, 2020). The nature and magnitude of the costs and benefits of wildlife trade will depend on the species and context. As such, considering the range of costs, benefits and associated risks in an integrated way could help to formulate robust policy responses that minimize the risk of future pandemics, contribute positively to SDG outcomes, and identify pinch points for targeting management interventions. We illustrate this through six case studies, and then offer some general suggestions regarding data, principles and process.

Case Study Examples

We first explore how direct and indirect contributions to relevant SDGs might be explicitly considered in decision-making for different species and contexts, based on qualitative assessments for six case study examples (Table 1 and Figure 2). We selected these case studies to represent a range of geographic and taxonomic diversity, and a plurality of costs and benefits across the 5Ps of the SDGs; and because published data is available on implications of trade for at least three of the 5Ps of the SDGs. For each case study, we provide a qualitative judgment of the positive contributions (benefits) and negative contributions (costs) of each type of wildlife trade to the SDGs. These are categorized as high, moderate or low, according to available data on: the extent of the contribution, the intensity of the
contribution, and its perceived likelihood of occurrence (Table 1), as per common risk assessment processes used in animal and human health (Narrod et al., 2012;Beauvais et al., 2018). To acknowledge uncertainty, we also offer a qualitative judgment, where: low uncertainty corresponds to robust and complete data available, with strong consistent evidence provided in multiple references; moderate uncertainty corresponds to some data available, but with few references and/or some inconsistencies; high uncertainty corresponds to scarce or no data available, with anecdotal evidence and/or highly inconsistent conclusions (Beauvais et al., 2018;Booth et al., 2020). We emphasize that these case studies are not based on exhaustive literature reviews, expert and stakeholder consultation, or comprehensive quantitative data, nor are the case studies fully representative of the wide range of species, geographies and contexts in which wildlife trade takes place. Rather, they are illustrative examples of the types of issues and data that should be considered within real-world decision contexts. We encourage researchers and decision-makers to use all available data, values and expertise to consider the range of costs and benefits within their own decision-making contexts, and to transparently define and disclose their own evaluation criteria and associated thresholds when conducting context-specific risk assessments for policy formulation.

Trade in horseshoe bats (Rhinolophidae) in South China currently poses a high public health risk in terms of extent, severity and likelihood (Han et al., 2016;Wong et al., 2019) and creates potential negative impacts for bat populations and habitats (SDG 15, Zhang et al., 2009). These high potential downside costs may outweigh socio-economic benefits: while bats are consumed as supplements in some rural diets (SDG 2), often consumption is not targeted (Mickleburgh et al., 2009), making this benefit limited in terms of extent and intensity (Figure 2A). Thus, a ban on all trade and consumption of bats in South China may be appropriate, though enforcement challenges and the input and values of rural communities would need to be carefully and explicitly considered (Table 1 and Figure 2B). Similarly, the high public health risks and limited benefits of great ape trade indicate that bans may be an appropriate pathway to simultaneously protect health (SDG 3) and life on land (SDG 15) (Keita et al., 2014;Plumptre et al., 2019). However, it is already illegal to hunt and trade great apes in most of their range states, so interventions may need to focus on implementation of existing regulations, or additional regulation with a public health lens, considering the concerns of affected residents and lessons from previous interventions (e.g., Bonwitt et al., 2018). In contrast, trade in bighorn sheep (Ovis canadensis) in North America and rays (Batoidea) in The Gambia do not pose immediate public health concerns in terms of extent and severity of disease outbreak. However, these trades provide significant benefits in terms of food security (SDG 2) and livelihoods (SDGs 1 and 8), though careful management is needed to ensure utilization is compatible with responsible consumption and production (SDG 12), and life below water (SDG 14) and on land (SDG 15) (Hurley et al., 2015;Moore et al., 2019).
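To make this rubric concrete, a case study's costs and benefits can be encoded as structured records before any aggregation. The Python sketch below is a hypothetical encoding, not the authors' assessment tool: the class names are invented here, and the example values only paraphrase the qualitative judgments reported for the horseshoe-bat case. The remaining case studies below follow the same rubric.

```python
from dataclasses import dataclass, field
from enum import Enum

class Level(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass
class SDGScore:
    """One cost or benefit of a trade for one SDG, rated on the rubric in
    the text: extent, intensity and likelihood (high/moderate/low), plus a
    qualitative uncertainty rating reflecting the strength of evidence."""
    sdg: int                 # SDG number, e.g. 3 = good health and well-being
    direction: str           # "cost" or "benefit"
    extent: Level
    intensity: Level
    likelihood: Level
    uncertainty: Level

@dataclass
class CaseStudy:
    taxa: str
    context: str
    scores: list[SDGScore] = field(default_factory=list)

# Hypothetical encoding paraphrasing the horseshoe-bat judgments above:
bats = CaseStudy("Rhinolophidae", "South China", [
    SDGScore(3,  "cost",    Level.HIGH,     Level.HIGH,     Level.HIGH,     Level.LOW),
    SDGScore(15, "cost",    Level.MODERATE, Level.MODERATE, Level.MODERATE, Level.MODERATE),
    SDGScore(2,  "benefit", Level.LOW,      Level.LOW,      Level.MODERATE, Level.MODERATE),
])
```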
Trade in other species, such as live waterfowl (Anseriformes) traded in live bird markets in Egypt, represents a moderate public health risk (SDG 3). Influenza A (H5N1) is pathogenic with a high likelihood of transmission from animal-to-animal and animal-to-human; however, human-to-human transmission is limited, such that the pandemic potential and thus extent of the cost is likely to be limited. However, this trade also provides myriad benefits for people, as a source of protein, income and cultural value (SDGs 1 and 2) (Kayed et al., 2019; Figure 2C). In this context, a regulated trade may be most appropriate, with strict hygiene standards, routine surveillance, and no flock mixing (Fournie et al., 2013). Evidence from live bird markets in Vietnam suggests that regulated trade may be more effective at minimizing public health risks and preventing illegal or illicit trade than poorly enforced bans (Fournie et al., 2013), thus creating a better delivery mechanism for protecting health (SDG 3) and peace, justice, and strong institutions (SDG 16) (Table 1 and Figure 2D). More detailed background information for each of these case studies is available in the SI. We emphasize that these worked examples are qualitative assessments to illustrate the plurality of values, context and uncertainties, and do not serve as formal policy recommendations.

Process Considerations

Given the plurality of values associated with different types of wildlife trade, iterative and participatory approaches will be needed to identify the most suitable and effective policy options. We offer a general process, which could be applied in the planning stages of a Plan-Do-Check-Act or adaptive management approach. Steps in this process include: defining the problem, gathering data, assessing synergies and trade-offs, acknowledging uncertainty and incorporating feasibility; all of which would inform a decision, followed by implementation, monitoring and adaptation (Figure 3). This entire process can be strengthened by participation of policy-affected people, with expert elicitation methods, and application of integrated frameworks to draw together disparate data, and transparently communicate value judgments, risk and uncertainty (Milner-Gulland and Shea, 2017;Shea et al., 2020; Figure 3).

Defining the Problem

As per the 'species and context' column in Table 1, any decision-making process should first clarify the taxa in question, the scope of the policy decision and the socio-economic context. This will aid with identifying policy-affected people and stakeholders to include in the process, and the plurality of values that should be considered. The taxa in question could be considered as a broad taxonomic group, where biological characteristics, trade dynamics and public health risks are relatively homogeneous (e.g., Batoidea, Table 1), or as a single species (e.g., Ovis canadensis, Table 1), where necessary due to exceptional characteristics and context. The scope should also consider the market dynamics and governance context. This may need to be informed by a prioritization exercise, to create a shortlist of which taxa, geographic regions and/or markets warrant policy reform, which can be informed by available literature on hotspots, anthropogenic drivers and animal hosts of zoonotic diseases [e.g., see Allen et al. (2017) and Han et al. (2016)].

Gathering Data

As per Table 1 (and the SI), a range of different datasets can be used to evaluate the costs and benefits of wildlife trade for the SDGs.
Where available, quantitative data can be used. For example, risks for health and well-being (SDG 3) could be measured through estimated disability-adjusted life years (DALY) lost as a result of a pandemic (Narrod et al., 2012), or the total estimated economic and social burden attributed to a zoonotic outbreak. For example, in the case of great apes, Huber et al. (2018) estimated the total mortality and economic burden attributed to the 2014 Ebola outbreak in West Africa at 11,000 lost lives and US$ 53 billion (Table 1, SI). Similarly, in the case of coronaviruses in horseshoe bats, the current COVID-19 pandemic has led to an estimated 2 million lives lost worldwide (at the time of writing), and an estimated US$ 10 trillion in foregone GDP (The Economist, 2021). Likewise, other costs and benefits for people, such as poverty, hunger and inequality (SDGs 1, 2, 5 and 10) can be measured through both subjective and objective measures of well-being attributed to wildlife use (Milner-Gulland et al., 2014). Again, this can be measured in dollar values, such as the total income derived from the trade and total number of people employed (e.g., the American bullfrog case study, Table 1, SI), or in terms of contributions to DALY, such as via the benefit of wildlife consumption to childhood nutrition (Golden et al., 2011). The costs and benefits for life on land and life below water (SDGs 14 and 15) can be measured in terms of extinction risk or rate of population change at the species level, as attributed to wildlife trade and associated policy responses (e.g., see the bighorn sheep case study, Table 1, SI), or in terms of welfare-adjusted life years (WALY) for individual animals (Ripple et al., 2016;Teng et al., 2018). In other cases, it may be more appropriate to use semi-quantitative or qualitative data, such as expert and stakeholder judgments. Such approaches are particularly useful in data-limited risk assessments (Beauvais et al., 2018;Booth et al., 2020), for consensus-building when integrating perspectives and evidence from diverse sources and stakeholders (Booy et al., 2017), and for accounting for risk and uncertainty (Shea et al., 2020). Importantly, consultative processes not only help to obtain data, but also weigh priorities, explore the feasibility of management options, set societal thresholds and the burden of proof needed for policy (in)action, engage diverse stakeholders and address inequalities (Booy et al., 2017;Defries and Nagendra, 2017); all of which will be needed to turn evidence into action. As well as indicating the direction and magnitude of costs and benefits, uncertainty and data gaps should be explicitly acknowledged. When using qualitative data, this could include qualitative judgments of uncertainty (as in Table 1). In quantitative assessments, uncertainty can be communicated using iterative or statistical methods, such as Value of Information Analysis, which is used to value the contributions of different types of research exercises in terms of expected reduced uncertainties (Runge et al., 2011). Data gathering may be an iterative process, wherein available data is collated, data gaps are identified, and further research and/or expert and stakeholder consultation is conducted to fill data gaps. This can also be supported by a participatory process, and adoption of an integrated framework to collate and assess data (Booy et al., 2017;Booth et al., 2020;Li et al., 2020).
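At the quantitative end of this spectrum, the expected burden of a spillover pathway is simply probability times impact. The snippet below is a deliberately simple illustration with made-up inputs, not a published model; the outbreak probability and impact figures are scenario assumptions a decision-maker would supply.

```python
def expected_burden(p_outbreak_per_year, daly_loss, econ_cost_usd):
    """Expected annual burden of a spillover pathway as probability times
    impact. All three inputs are analyst-supplied scenario assumptions."""
    return {"expected_daly_per_year": p_outbreak_per_year * daly_loss,
            "expected_cost_usd_per_year": p_outbreak_per_year * econ_cost_usd}

# Purely illustrative inputs, loosely scaled to the outbreak figures cited above:
print(expected_burden(p_outbreak_per_year=0.01,
                      daly_loss=5.0e6,
                      econ_cost_usd=5.3e10))
```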
Assessing Synergies and Trade-Offs

As we have highlighted, it is not only important to consider the direct impacts of wildlife trade on public health and the SDGs, but also interactions and feedbacks. For example, bat trade may provide nutritional benefits for some people, but pose risks of zoonotic disease outbreaks for others (Mickleburgh et al., 2009;Wong et al., 2019); while a ban on wild-sourced wildfowl, to protect wild populations from overexploitation, could drive expansion of higher-risk illicit markets (Fournie et al., 2013), or agricultural expansion of poultry farms, which exacerbate other anthropogenic drivers of biodiversity loss and zoonosis emergence (Allen et al., 2017;Tilman et al., 2017; Figure 2). Frameworks and methods are available for exploring interactions between the SDGs, which have already been applied to other complex socio-ecological systems (e.g., Nilsson et al., 2016;Nash et al., 2020), and could easily be applied to wildlife trade decision-making. A highly quantitative approach to assessing synergies and trade-offs could involve assessing all positive and negative contributions of wildlife trade to the SDGs in terms of expected DALYs, and conducting a cost-benefit analysis (Narrod et al., 2012). However, this may be unfeasible in many cases, due to data limitations; and risks being overly reductive, where certain values cannot be accounted for within this metric. Instead, a more realistic and inclusive approach could be an integrated framework with a simple high-to-low or traffic light categorization system, with qualitative or semi-quantitative assessments of the magnitudes of different costs and benefits (as outlined in Table 1), and various weightings applied to each category of cost/benefit based on uncertainties, risks and value judgments. Combining these different assessments and their weightings can then help to build consensus and make an informed judgment, even where the metrics for different costs and benefits are diverse and difficult to compare (Beauvais et al., 2018;Booth et al., 2020;Li et al., 2020).

Acknowledging Uncertainty and Setting Thresholds

Rigorously evaluating all costs and benefits may be challenging, particularly in data-limited contexts. Pre-defining the burden of proof, and acceptable levels of uncertainty for action or inaction, can help with iterative and adaptive decision-making. When establishing the burden of proof, a "do no harm" precautionary approach should be adopted as best practice (Cooney and Dickson, 2012). However, in many cases it will not be possible to identify optimal solutions which do no harm across all SDGs. Rather, it may be necessary to identify step-by-step solutions which are most acceptable to stakeholders in a given time or context (Head, 2008). Decisions may also entail moral dilemmas, such as weighing up human disease risk against animal extinction risk, or human disease risk now against human disease risk in the future. This is particularly difficult in the face of uncertainty, such as cases where the likelihood of a pandemic is deemed very low, but its scope and severity are hypothetically large. In these situations, harm minimization may be more pragmatic. Decision-makers may wish to set thresholds of 'permissible harm' in each SDG, based on priorities and societal perspectives. If certain thresholds are reached -such as an unacceptable risk to human health, or an unacceptable cost to the economy -then that issue takes precedence over others.
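A minimal sketch of such an integrated scheme is shown below, combining the traffic-light categorization with analyst-supplied weights and a threshold veto of the kind just described. All weights, labels and numbers are hypothetical; in practice they would be set through the participatory process discussed later, not by a single analyst.

```python
LEVEL_VALUE = {"low": 1, "moderate": 2, "high": 3}

def weighted_score(entries):
    """Aggregate qualitative ratings into one signed score. Each entry is
    (sdg, direction, level, weight); the weights encode value judgments
    and risk tolerances, set through deliberation rather than measurement."""
    total = 0.0
    for _sdg, direction, level, weight in entries:
        sign = -1.0 if direction == "cost" else 1.0
        total += sign * weight * LEVEL_VALUE[level]
    return total

def breaches_threshold(entries, sdg, max_level):
    """Threshold veto: True if any cost to the given SDG exceeds the
    permissible level, in which case that issue takes precedence
    regardless of the aggregate score."""
    return any(d == "cost" and s == sdg
               and LEVEL_VALUE[lv] > LEVEL_VALUE[max_level]
               for s, d, lv, _w in entries)

# Hypothetical weighting in which public health (SDG 3) counts double:
bat_trade = [(3,  "cost",    "high",     2.0),   # zoonosis risk
             (15, "cost",    "moderate", 1.0),   # population impacts
             (2,  "benefit", "low",      1.0)]   # untargeted consumption
print(weighted_score(bat_trade))                 # negative: costs dominate
print(breaches_threshold(bat_trade, sdg=3, max_level="moderate"))  # True
```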
Thresholds of permissibility will be shaped by culture and social norms, and should therefore be adapted to each decision context, and transparently communicated. Methods from multi-criteria decision analysis, which help to explicitly evaluate multiple conflicting criteria in decision-making (e.g., Huang et al., 2011;Runge et al., 2011), could help to evaluate multiple conflicting values and objectives regarding wildlife trade policy, and identify thresholds for permissible costs under different SDGs. In many cases, there may also be a pressing need for management action, yet insufficient time or resources to collect detailed information, creating trade-offs between knowing and doing (Knight and Cowling, 2010). Decision-makers must strike a balance between reactionary crisis-driven interventions, which may be suitable in the short-term, though they can lead to perverse outcomes in the medium-term (Bonwitt et al., 2018), and evidence-based preventative measures, which lead to better outcomes in the long-term. The adage 'hard cases make bad law' should be considered here; i.e., the extreme case of COVID-19 may be a poor basis for a general law covering a wider range of less extreme wildlife trade scenarios. 'Wicked problems' such as this call for adaptive management rather than definitive top-down technical solutions, so that policy interventions can be updated as feedbacks play out and knowledge of the system expands (Head, 2008;Defries and Nagendra, 2017).

Incorporating Feasibility

Policy formulation should also consider costs and feasibility of implementation, based on resources for monitoring and enforcement, and legitimacy of new measures as felt by the stakeholders most likely to be affected (Challender et al., 2015;Bonwitt et al., 2018;Oyanedel et al., 2020) (e.g., see 'implementation issues' outlined in Table 1). Lack of capacity and political will within government agencies can undermine laws, and is a commonly cited reason for the failure of many existing wildlife trade regulations (Dellas and Pattberg, 2013). As such, new policies may require investment in implementing agencies, to support monitoring and enforcement. Limited resources for implementation further emphasize the need for risk-based, problem-oriented approaches, with enforcement resources directed toward critical control points (Krumkamp et al., 2009). Interventions must consider the needs and preferences of affected people, the underlying drivers of wildlife use and trade, and the legitimacy of any new regulations. Failure to do so is not only unethical but may result in misguided policy responses that do not address the root causes of unsustainable wildlife trade and zoonoses emergence, resulting in non-compliance, with even greater risks to wildlife and public health (e.g., Fournie et al., 2013;Bonwitt et al., 2018;Oyanedel et al., 2020). Social research may help to identify and reduce drivers of non-compliance with wildlife laws or key barriers to behavior change (Travers et al., 2019).

Making Decisions; Implement, Monitor, Adapt

Finally, all information and options need to be drawn together to make a policy decision, which is likely to deliver the greatest overall benefits to the SDGs. If a participatory process and an integrated decision framework have been applied throughout, these tools can facilitate consensus and/or informed judgment on which to base a final decision (see below). If the burden of proof has not been met, it may be necessary to iterate the process, with further research and deliberation.
Once a policy decision has been made, a range of instruments and interventions will be required for implementation, such as investments in monitoring and enforcement, infrastructure and technology, or training and incentives. Monitoring of SDG outcomes after the policy intervention will help to determine its impact, and inform adaptive management.

Participatory Processes

Past experience with complex, uncertain and divergent public policy problems suggests that the process is at least as important as the evidence base (Head, 2008;Booy et al., 2017;Defries and Nagendra, 2017). Participatory processes can help to collate and evaluate data on the range of costs and benefits of wildlife trade across multiple SDGs and for multiple sectors of society. Group-based deliberation can also support valuation of costs and benefits, and co-learning amongst different groups (Kenter et al., 2011;Shea et al., 2020), thus facilitating multi-sector decision-making amongst local and national governments, inter-governmental platforms and policy-affected people. Participatory processes for designing wildlife trade interventions can also build legitimacy and foster support for policy decisions, thus improving implementation, uptake and compliance (Weber et al., 2015;Roe and Booker, 2019).

Integrated Frameworks

All of the above could be supported by integrated frameworks, which can help to draw together and evaluate disparate data; facilitate multi-sector engagement; highlight information gaps, uncertainties and value judgments; and thus guide transparent evidence-based decisions and collective action. For example, integrated frameworks have previously been used for risk management in human and animal health (Narrod et al., 2012;Beauvais et al., 2018), wildlife policy and management (Booy et al., 2017;Booth et al., 2020) and interfaces between the two (Coker et al., 2011). Existing frameworks are also available for mapping interactions between SDGs, which are intuitive, broadly replicable and could be easily adapted to a wildlife trade context (Nilsson et al., 2016, 2018;Nash et al., 2020). For example, Nilsson et al. (2016) offer a simple semi-quantitative scale for exploring the influence of one SDG on another, while Nash et al. (2020) suggest extensions to the current SDG assessment framework to better acknowledge interactions between SDGs for planet, prosperity and people. Importantly, integrated frameworks are flexible and can be used iteratively as part of participatory and adaptive processes, allowing incorporation of diverse values and uncertainty. For example, decision-makers can develop primary indicators for costs and benefits alongside secondary indicators on value judgments and uncertainty, and further indicators to evaluate feasibility, such as practicalities, costs and likely impacts of different policy responses (Booy et al., 2017;Booth et al., 2020). This could help to manage conflicting values and data, by explicitly assessing the relative weight or importance of different priorities, and thus improve the transparency of decision-making processes.

DISCUSSION

In the wake of COVID-19, there are calls for policy interventions to minimize public health risks related to zoonotic diseases through measures including banning wildlife trade. However, uncertainty remains regarding the role of wildlife trade in the emergence of COVID-19 (Cohen, 2020;Huang et al., 2020).
Moreover, wildlife trade does not represent a homogeneous risk to public health, and can be beneficial to both biodiversity and people (Hurley et al., 2015;Cooney, 2019;McRae et al., 2020). As such, wildlife trade policies in response to COVID-19 must consider the trade-offs within and between public health and other dimensions of the SDGs. We have presented how decision-makers might evaluate these trade-offs and synergies for different species and contexts, in order to formulate risk-based policies. Explicitly considering the diversity of costs and benefits of wildlife trade along supply chains could guide decision-makers toward more appropriate policy interventions for heterogeneous species, contexts and scales, to maximize different sustainable development outcomes without compromising others.

Implementing a Risk-Based Approach to Wildlife Trade Policy: Practical Challenges and Potential Solutions

Despite the benefits of adopting a risk-based approach for formulating wildlife trade policy, challenges remain for practice and implementation. These include data needs and gaps, and effective and equitable compliance management. For instance, the process we have outlined (Figure 3) will be more data-intensive and time-consuming than taking rapid, reactive (and potentially ill-informed) decisions, which may be necessary in times of crisis such as a global pandemic. A middle ground may be to adopt crisis measures in the short-term, with a shift toward more nuanced measures in the medium-term, once a range of potential policy options have been identified and evaluated. Data gaps may also hinder this process. For example, a lack of data on species' population statuses or the benefits from informal trade could create information asymmetries in cost-benefit analyses. Similarly, there are unknown unknowns, for example from new or undescribed zoonotic pathogens, which are difficult to predict or account for. Such data gaps underline the importance of adaptive management (step 7, Figure 3), so that policies can be adapted as situations change or new information comes to light. A further challenge relates to how people and institutions respond to new policies, particularly if they are negatively affected, and therefore how to design effective and equitable compliance management systems. For example, if trade in a species is restricted, and existing traders face large barriers to adaptation, they could face large absolute costs in terms of income forgone. Though these costs should be minimized via a risk-based approach, they cannot always be completely avoided, and could create strong incentives for non-compliance or negative impacts on the well-being of certain groups. In such cases, a 'no net loss to human well-being' approach could be adopted (Griffiths et al., 2019), whereby opportunity costs are evaluated and compensation is provided to ensure vulnerable people are no worse off. Taxa- and location-specific policies can also create additional challenges for monitoring and enforcement, such as identifying prohibited species or monitoring diffuse and complex markets. These issues can be addressed via more significant investments in infrastructure, technology and human capacity for wildlife trade monitoring and bio-security, which are likely to become more serious political priorities following the COVID-19 pandemic.
In most cases, 'smart regulation' will be needed, whereby a combination of instruments is used to create an appropriate policy mix, which can flexibly, efficiently and equitably incentivize multiple stakeholders and institutions (Young and Gunningham, 1997;Gunningham and Sinclair, 2017). Wildlife trade is also a highly emotive topic, and policy decisions can be influenced by strong public opinions, which are not necessarily rational or data-driven (Hart et al., 2020). More transparent approaches to decision-making are needed to address wildlife trade in the face of public health crises and beyond, wherein decision criteria and costs and benefits are clearly outlined and publicly available.

Global Problems Require Global Solutions: The Role of Multilateral Agreements

Moving forwards, new or revised multi-lateral agreements may be needed to strengthen cross-sectoral coordination and political commitment at the intersection of wildlife use and sustainable development, with key stakeholders currently in the process of deciding what is needed and how it might be delivered. For example, discussions have begun on the role of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) in protecting human health, by regulating animal health in international trade (Ashe and Scanlon, 2020;CITES, 2021). However, relying on CITES would likely result in an overly narrow focus on CITES-listed species, whilst missing heavily traded taxa not under the purview of the Convention (e.g., farmed mink) and, critically, other key drivers of zoonotic disease emergence, such as intensive animal agriculture and land-use change. In contrast, the Convention on Biological Diversity (CBD) has a broader remit, and is soon to establish the post-2020 agenda (CBD, 2020). However, the CBD arguably lacks compliance mechanisms and political commitment for instituting and incentivizing the necessary transformational policies, to unite multiple sectors and cut across multiple aspects of sustainable development (Leach et al., 2018;Díaz et al., 2019). Rather, a new and more integrated agreement, which perhaps builds on the Agreement on Climate Change, Trade and Sustainability (ACCTS) and the World Organisation for Animal Health, may be necessary to foster serious political will toward the cross-sectoral challenge of "saving lives, protecting livelihoods, and safeguarding nature," as a matter of global urgency.

Next Steps for Wildlife Trade and Beyond

In the medium-term, we must better understand the transmission pathways of zoonotic diseases in traded wild species, and the extrinsic and intrinsic drivers of zoonosis emergence across species and supply chains. Interactions and trade-offs between wild-sourced and domesticated food systems, and the substitution relationships between different protein sources, should also be better understood. This will help to predict potential displacement effects of policy interventions, and overcome some of the challenges highlighted above. More broadly, there is a need to expand the scope of policy responses to zoonotic disease risk, beyond the current narrow focus on wildlife trade. Evidence indicates that land-use change and agricultural expansion are major drivers of the emergence of zoonotic diseases (Han et al., 2016;Allen et al., 2017). Rather than a narrow focus on wildlife trade, the COVID-19 crisis should serve as a wake-up call to re-think many aspects of humanity's relationship with nature.
A paradigm shift toward holistic risk-based management of wildlife trade, embedded within a broader socio-ecological systems perspective, could ensure that future use and trade of wildlife is safe, environmentally sustainable and socially just. AUTHOR CONTRIBUTIONS HB was responsible for conceptualization. HB, MA, SB, MK, TK, YL, AO, RO, and TP were responsible for analysis and writing the original draft. DC and EM-G were responsible for validation and review, editing, and supervision. HB and TP were responsible for designing graphics. All authors contributed to the article and approved the submitted version.
Research & Application of Ash Condensation Analysis Technology

On the basis of self-developed cigarette ash analysis software, the ash condensation ability of different types of cigarettes was analyzed in this paper through calculation of the cigarette ash index. The results show that the cigarette ash index has the advantages of simple calculation and quick response of the ash analysis software, with a small coefficient of variation between the ash condensation indexes of different cigarettes and no obvious difference in the ash condensation index among different samples of the same batch. Therefore, the cigarette ash condensation index can be used as an important index and effective means to measure the performance of cigarette ash; meanwhile, the index can be applied to the comprehensive assessment of the ash condensation performance of cigarette paper.

Introduction

Cigarette paper is one of the main materials in the cigarette-making process, and it significantly affects not only the wrapping of the tobacco, the cigarette's appearance, combustion velocity and smoldering, but also the sensory quality and chemical composition of the smoke [1][2][3]. Cigarette paper not only carries the key image and connotation of the cigarette product, but also has a special and direct influence on the cigarette smoke due to its involvement in combustion. On the one hand, the gas composition after combustion of the cigarette paper is an important component of cigarette smoke, which is smoked together with the mainstream smoke; on the other hand, the intrinsic quality of the cigarette paper also affects the combustion status of the cigarette, which plays a decisive role in the ash appearance and grey degree of the cigarette paper after combustion. How to improve the sensory quality and the integrity of the ash are two major problems for cigarette paper [4][5][6][7][8][9].

After combustion, the ideal cigarette ash should show no obvious cracks or tilted parts, and the ash should not break during the smoking process [10,11]. Excellent cigarette ash performance mainly presents as a perfect ash appearance, with the ash still bonded together with a high degree of integrity after combustion. The ideal ash results from full combustion and is white but not scattered. So far, there is no systematic solution or evaluation system for evaluating the appearance and integrity of cigarette ash, or for optimizing the performance of cigarette ash.

In order to further study the performance of cigarette ash and cigarette paper, an independently researched and developed ash analysis system was applied, and the concept of the cigarette ash index is put forward in this paper. Based on observation of the combustion appearance of the samples and comparison of the ash indexes of different cigarettes, the ash index was calculated for different samples from the same batch, and the feasibility of applying the ash index to reflect the ash properties of cigarette paper was verified. The application prospects of the cigarette ash analysis system and the factors influencing cigarette ash integrity were analyzed.

Experimental materials and methods

A KBF240 constant temperature and humidity incubator (German Binder company) was mainly used for conditioning the cigarette samples before combustion: the samples were placed in the incubator at a temperature of (22 ± 1) °C for 48 h, with the balance moisture set at 20% ± 2%.
The cigarette ash analysis system YNZY-JYNH-V2.1, independently designed and developed by China Tobacco Yunnan Industrial Co., Ltd., was applied. The system is mainly composed of a camera module and a computer analysis module. The camera module captures instantaneous images of the cigarette ash, and the computer analysis module analyzes the captured images and calculates the ash condensation index.

Experimental cigarettes of a certain brand were purchased from markets and numbered 1# to 6#.

Results and discussion

Table 1 Ash condensation indexes of different samples

Ash Condensation Analysis

Fig. 1 shows the ash condensation effects of cigarette papers with different ash condensation grades (from left to right, samples 1# to 6#). As can be seen from Fig. 1, there is a certain difference in ash condensation among the cigarette papers of different brands, and a qualitative analysis of the ash condensation quality of the cigarettes can be made. Cheng Zhangang et al. held that the tightness of the ash directly affects the cigarette's ash-holding capacity, and that the amount and size of ash fragments have a direct impact on ash fall during the smoking process, which is the decisive factor of ash condensation capacity [12]. Cracking of the cigarette and ash smoothness are potential factors in determining the quality of cigarettes. In the definition of ash capacity in some references, the appearance factor accounts for 75%. However, the calculation and determination of ash capacity still rely on subjective scoring of different properties, and the scale and scoring are influenced largely by the subjective judgment of the assessors. Therefore, how to quantify objectively and accurately the condensation performance of different cigarette papers has always been the key to determining the ash condensation capacity of cigarettes.

Calculation of Ash Condensation Index and Analysis of Variation Coefficient

The ash condensation index AI was defined by the authors to characterize the ash condensation process. Taking Fig. 2 as an example, the ash condensation index AI is the ratio of the uncracked area to the total ash area after image recognition [13]:

AI = (1 − S_crack / S_total) × 100% (1)

where S_crack is the cracked area and S_total is the total ash area identified in the image.

Fig. 2 Ash condensation calculation

Comparing the data in Table 1 with the ash appearance results in Fig. 1, it is not difficult to find that the greater the ash condensation index, the better the ash condensation effect. The index AI can intuitively and accurately reflect the ash condensation ability. In order to further investigate the differences among the ash condensation indexes of different samples in the same batch, the authors calculated the variation coefficient CV of the ash condensation index as follows [13]:

CV = SD / MN × 100% (2)

where SD is the standard deviation of the same set of samples and MN is the mean value for the same batch of different samples. Each group of samples was measured five times, and the corresponding standard deviation, mean value and variation coefficient were calculated as shown in Table 2.
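Equations (1) and (2) are straightforward to implement once the image has been segmented into ash and crack pixels. The following Python sketch is illustrative only: the segmentation itself, the mask names, and the sample numbers are assumptions introduced here, not the software's actual implementation.

```python
import numpy as np

def ash_condensation_index(ash_mask, crack_mask):
    """Eq. (1): AI = (1 - S_crack / S_total) * 100%. Both arguments are
    boolean images assumed to come from the system's image recognition:
    ash_mask marks all ash pixels, crack_mask the cracked subset."""
    s_total = np.count_nonzero(ash_mask)
    s_crack = np.count_nonzero(crack_mask & ash_mask)
    return (1.0 - s_crack / s_total) * 100.0

def variation_coefficient(values):
    """Eq. (2): CV = SD / MN * 100%, where SD is the standard deviation
    and MN the mean of repeated measurements of the same sample."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0

# Five repeat measurements of one sample (illustrative numbers only):
print(variation_coefficient([86.4, 87.1, 85.9, 86.8, 86.2]))  # well below 3%
```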
It can be seen from Table 2 that the variation coefficient of different samples from the same brand is less than 3%, which indicates that the difference in the ash condensation index among different samples of the same batch is small enough, and the ash condensation index can be applied as a measure of the ash condensation quality of cigarettes. Variation of the ash condensation index is mainly caused by random errors in the production process. However, in order to ensure measurement accuracy, multiple measurements can be taken to reduce the impact of random errors, so that the maximum accuracy of the ash condensation index can be guaranteed in application.

Advantages and Application Trend of the Ash Analysis System

In conclusion, the ash condensation index has the advantages of being intuitive, accurate and analytical in measuring the ash condensation effect of cigarette paper. The variation coefficient of the same batch of samples is less than 3%, and the ash condensation index is suitable for the evaluation of the ash condensation of cigarette products.

At the same time, the ash condensation analysis system based on the principle of ash condensation index measurement has many advantages, such as high image resolution, fast analysis speed and high precision, and it is suitable as an important means of evaluating cigarette paper. The existing evaluation methods are basically subjective judgments based on the appearance of the ash, which are neither accurate nor objective. How to develop the evaluation method and the ash condensation analysis system into a more accurate and more effective tool for measuring cigarettes is still a key technology to be solved in the future. At the same time, there is still room for improvement of the system in resolution and calculation accuracy, and future system optimization should focus on how to optimize the relevant parameters and the data processing and analysis capability more rationally.

Conclusions

(1) After comparing the ash condensation index with the related ash appearance figures, the ash condensation index can be used as a measure of the effect of cigarette paper's ash condensation. The data obtained through the ash condensation index are highly intuitive, accurate and analytical. The variation coefficient of different sample data is less than 3%.

(2) The ash condensation analysis system developed by the authors has many advantages, such as high image resolution, fast analysis speed and high precision, and it is suitable as an important means to evaluate and analyze the ash condensation of cigarettes; it could be combined with other evaluation methods as an effective tool for evaluating the ash condensation of cigarette paper.

Table 2 Standard deviation, mean value and variation coefficient of samples from the same brand
Generation of Subdiffraction Optical Needles by Simultaneously Generating and Focusing Azimuthally Polarized Vortex Beams through Pancharatnam–Berry Metalenses

Needle beams have received widespread attention due to their unique characteristics of high intensity, small focal size, and extended depth of focus (DOF). Here, a single-layer all-dielectric metalens based on the Pancharatnam–Berry (PB) phase was used to efficiently generate and focus an azimuthally polarized vortex beam at the same time. Then, additional phase or amplitude modulation was respectively adopted to work with the metalens to produce optical needles. By decorating the PB metalens with a binary optical element (BOE), an optical needle with a full-width-at-half-maximum (FWHM) of 0.47 λ and a DOF of 3.42 λ could be obtained. By decorating the PB metalens with an annular aperture, an optical needle with a long DOF (16.4 λ) and subdiffraction size (0.46 λ) could be obtained. It is expected that our work has potential applications in super-resolution imaging, photolithography, and particle trapping.

Introduction

In recent years, the generation of tiny focal spots with high intensity has been an important research topic in nanophotonics [1,2]. Focal spots with subdiffraction size and extended depth of focus (DOF), also called needle beams or optical needles, can be applied in super-resolution imaging [3], photolithography [4], and particle trapping [5]. Generally, the generation of needle beams mainly comes from the focusing of radially polarized beams (RPBs) [6,7]. The corresponding total focal fields are dominated by significantly enhanced longitudinal electric field components, resulting in strong longitudinally polarized optical needles. Compared with longitudinally polarized optical needles, subwavelength transversely polarized optical needles are especially popular in specific fields such as ultra-high-density magnetic storage and atomic trapping [8]. Transverse polarization is usually associated with the focusing of an azimuthally polarized vortex beam rather than an RPB. Hao et al. demonstrated that a sharper and purely transversely polarized focal spot could be obtained by focusing a phase-encoded azimuthally polarized beam (APB); this spot is smaller than that of an RPB or a linearly polarized beam (LPB) [9]. Transversely polarized needle beams can be further formed by highly focused azimuthally polarized vortex beams through phase elements [10], amplitude filters [11], or axicons [12]. Since the generation of azimuthally polarized vortex beams involves polarization and phase modulation, it requires complex and bulky optical elements.

Metasurfaces can not only provide a new channel for manipulating the phase, amplitude, and polarization of incident light, but also reduce the thickness of optical devices, leading to miniaturized and integrated optical elements. Various optical elements, from lenses [13] and vortex plates [14] to polarization-converting devices [15], have been demonstrated using metasurfaces. A metalens, which possesses the main function of a lens, can also perform complete phase modulation. A polarization-insensitive metalens has been designed to obtain an extended DOF of the focal spot and longitudinal high-tolerance imaging [16]. A needle beam with a long DOF and subwavelength size was generated by illuminating a polarization-insensitive metalens with an APB [17]. However, the production of pre-prepared vector beams requires complex optical elements.
In contrast, metalenses based on the Pancharatnam–Berry (PB) phase have the capability to control the polarization of light and subsequently produce vector beams [18,19]. These metalenses are based on polarization-dependent nanorods that locally act as wave plates (usually half−wave plates), but have different rotation angles across the metalens [19]. Moreover, PB metalenses can provide control of the polarization and phase simultaneously [20], and focused vector beams have been generated with them [21,22]. In this work, single-layer all-dielectric PB metalenses were used to simultaneously generate and focus azimuthally polarized vortex beams to form subdiffraction needle beams. The all−dielectric metalens was composed of TiO2 nanobricks and a SiO2 substrate. In addition to its high refractive index, TiO2 also has negligible absorption loss, in contrast to the high loss of silicon metalenses in the visible band [13]. At present, such metasurfaces can be made using electron beam lithography and atomic layer deposition [23]. There will be some fabrication imperfections when fabricating the metasurface, resulting in slight changes in the structure. Capasso et al. [24] demonstrated that an imaging spatial resolution of about 200 nm was achieved by integrating a liquid-immersed metalens into a commercial scanning confocal microscope. Even though the structure of the fabricated nanobrick was slightly changed, it still achieved the expected function, which indicates that the designed nanobrick had performance robustness against fabrication imperfections. Based on the PB phase, simple LPBs, which are more readily available in the laboratory than vector beams, can be converted to APBs. According to the transmission phase, the metalens can simultaneously control the discrete phases for vortex generation and light focusing. We first verified that the designed PB metalens could convert LPBs into azimuthally polarized vortex beams and focus them simultaneously. Then, a binary optical element (BOE) or an annular aperture was used in combination with the metalens to generate optical needles. The obtained optical needles may apply to super-resolution imaging, photolithography, and particle trapping.

Theoretical Study on Focusing the Azimuthally Polarized Vortex Beam

Under the illumination of the azimuthally polarized vortex beam (topological charge = 1), the electric field distributions near the focal field can be obtained by vector diffraction theory [25] via Equations (1) and (2). In Equation (1), A is a constant. In Equation (2), θmax = arcsin(NA) is the maximum focusing angle determined by the numerical aperture (NA = sin[arctan(R/f)], where R is the radius of the metalens), and the value of NA is 0.9. P(θ) is the pupil function, Jn is the nth-order Bessel function of the first kind, k = 2π/λ is the wave number, and λ is the corresponding incident wavelength. l0(θ) signifies the amplitude of the incident light at the pupil plane. For a planar wavefront, l0(θ) can be regarded as a constant, while for a Bessel–Gaussian beam l0(θ) is given by Equation (3), where β is the ratio of the pupil radius to the beam waist and is set to 1. It can be seen from Equation (1) that an additional radial electric field component is generated when the spiral phase is introduced. This is because polarization and phase are related, and the introduction of a phase difference in the incident wavefront results in a change in the polarized distribution.
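The displayed Equations (1)–(3) did not survive text extraction. For orientation only, a commonly quoted Richards–Wolf-type form of the transverse focal-field components for an azimuthally polarized vortex beam with unit topological charge, consistent with the definitions above but with prefactors and sign conventions varying between references (this is our hedged reconstruction, not the authors' verbatim equations), is:

```latex
% Hedged reconstruction: transverse focal-field components of a tightly
% focused azimuthally polarized vortex beam (charge 1), up to constant
% prefactors; l0 is the Bessel-Gaussian pupil amplitude of Equation (3).
\begin{aligned}
E_r(r, z)       &\propto \int_0^{\theta_{\max}} \sqrt{\cos\theta}\,\sin\theta\, l_0(\theta)\, P(\theta)\,
                  \bigl[J_0(kr\sin\theta) + J_2(kr\sin\theta)\bigr]\, e^{\,ikz\cos\theta}\, d\theta, \\
E_\varphi(r, z) &\propto \int_0^{\theta_{\max}} \sqrt{\cos\theta}\,\sin\theta\, l_0(\theta)\, P(\theta)\,
                  \bigl[J_0(kr\sin\theta) - J_2(kr\sin\theta)\bigr]\, e^{\,ikz\cos\theta}\, d\theta, \\
l_0(\theta)     &= \exp\!\left[-\beta^2 \left(\frac{\sin\theta}{\sin\theta_{\max}}\right)^{\!2}\right]
                  J_1\!\left(\frac{2\beta\sin\theta}{\sin\theta_{\max}}\right).
\end{aligned}
```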
Due to this radial component, the polarization near the focal plane is spatially variable and complex (the polarization distribution related ellipticities of our generated needle beams were calculated as shown in the Supplementary Materials) and the polarization singularity at the center of the focal plane disappeared, which was similar to the bright and sharp focal spot of the RPB. As can be seen in Figure 1a, the electric field distributions of the focal spot of the azimuthally polarized vortex beam were theoretically calculated. The radial and azimuthal components constitute the total electric field in the focal plane. The focal fields of both the radial and azimuthal components possess bright focal spots, which are different from the doughnut-shaped focal spots of APBs with polarized singularities. As shown in Figure 1b,c, the full-width-at-half-maximum (FWHM) and DOF of the focal spot were 0.57 λ and 1.57 λ, respectively, which is close to the optical diffraction limit.
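To make the preceding integrals concrete, below is a minimal numerical sketch in Python that evaluates the hedged Richards–Wolf-type form quoted above, so the same caveats apply: the prefactors and conventions are ours, and only relative profiles such as the FWHM are meaningful. The `pupil` hook stands in for P(θ)T(θ) when a BOE or aperture is added later.

```python
# Numerical sketch of the transverse focal field of an azimuthally polarized
# vortex beam (charge 1) focused at NA = 0.9, using the hedged integrals above.
import numpy as np
from scipy.special import jv

NA = 0.9
k = 2 * np.pi                      # wave number in units of 1/lambda
theta_max = np.arcsin(NA)
beta = 1.0                         # pupil-to-waist ratio of the Bessel-Gaussian
theta = np.linspace(1e-6, theta_max, 2000)

def l0(t):
    """Bessel-Gaussian pupil amplitude with beta = 1 (cf. Equation (3))."""
    s = np.sin(t) / np.sin(theta_max)
    return np.exp(-(beta * s) ** 2) * jv(1, 2 * beta * s)

def focal_field(r, z, pupil=lambda t: 1.0):
    """Return (E_r, E_phi) at radius r and defocus z (both in lambda)."""
    common = (np.sqrt(np.cos(theta)) * np.sin(theta) * l0(theta)
              * pupil(theta) * np.exp(1j * k * z * np.cos(theta)))
    arg = k * r * np.sin(theta)
    e_r = np.trapz(common * (jv(0, arg) + jv(2, arg)), theta)
    e_phi = np.trapz(common * (jv(0, arg) - jv(2, arg)), theta)
    return e_r, e_phi

# Focal-plane intensity profile; both components are bright at r = 0, as in
# Figure 1a, because J0(0) = 1 while J2(0) = 0.
r = np.linspace(0.0, 2.0, 400)
I = np.array([abs(er) ** 2 + abs(ep) ** 2
              for er, ep in (focal_field(ri, 0.0) for ri in r)])
I /= I.max()
fwhm = 2 * r[np.argmax(I < 0.5)]   # first radius dropping below half maximum
print(f"estimated focal-plane FWHM ~ {fwhm:.2f} lambda")
```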
The BOE was introduced to improve the performance of the needle beam by enhancing the longitudinally polarized component of the focal field [6], so we used a BOE to optimize the focal spot in order to generate an optical needle. When the phase distribution of the BOE is loaded onto the metalens, P(θ) in Equation (2) is replaced by P(θ)T(θ), where the transfer function T(θ) = exp(iφBOE). The phase of the BOE, φBOE, takes only the values 0 or π, which appear alternately. We used a five-belt BOE. The four angles θi (i = 1, 2, 3, 4), corresponding to four radial positions ri = sin θi/NA, were optimized by the particle swarm optimization algorithm [26], which fixed the parameters of the BOE. Meanwhile, the upper and lower limits of the integral in Equation (2) change with θi. A Bessel–Gaussian beam is incident, which means the corresponding amplitude l0(θ) is given by Equation (3). In Figure 1b,c, the FWHM and DOF of the focal spot were 0.51 λ and 2.17 λ, respectively. Obviously, the BOE can help reduce the FWHM and prolong the DOF of the focal spot. In addition, an annular aperture also has a similar effect on the modulation of the focal spot [17,22]. After applying the annular aperture, the lower limit of Equation (2) becomes θmin and the incident light is a plane wave. When the annular aperture σ = 0.9 (σ represents the ratio of the radius of the opaque gold plate to R, θmax = arcsin(0.9), θmin = 0.9θmax), the FWHM and DOF of the focal spot reached 0.42 λ and 15 λ, respectively, both of which were significantly improved compared with the results of the lens only.

Design of Metalens

The schematic diagram of the generator of the needle beam is shown in Figure 2a. The PB metalens can convert the incident LPB (y−polarized) to an APB, accompanied by vortex formation and light focusing. The building blocks of the metalens are composed of TiO2 nanobricks and SiO2 substrates, as shown in Figure 2b. Dielectrics were chosen due to their high transmission efficiency compared to metals. The refractive indices of TiO2 and SiO2 were 2.46 and 1.46 at 532 nm, respectively. In order to prove the capability of local field manipulation by unit cells, we utilized the finite difference time domain (FDTD) method to calculate the magnetic energy density of nanobricks, as shown in Figure 2d,e. Because of the high refractive index contrast between the nanobrick and its surroundings, light energy is concentrated in the nanobrick, so that each nanobrick can be considered as a waveguide truncated on both sides. Thus, this leads to differences in the phase shifts (Φx and Φy) and effective refractive indices [18] of the two waveguide modes polarized along L and W of each rectangle. Therefore, the LPB acquires a phase delay due to the different velocities of the polarization components along L or W of the nanobrick. Precisely because of this phase delay between the electric field components, the polarization of the transmitted beam changes. As a result, unit cells can work as local wave plates with an optical axis.
To realize the conversion from LPBs to APBs, these cells are required to act as half−wave plates (Figure 2c). When y−polarized beams propagate through these cells, the transmitted light can be calculated using the Jones matrix method, as follows [27]:

\begin{pmatrix} \cos 2\varphi & \sin 2\varphi \\ \sin 2\varphi & -\cos 2\varphi \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} \sin 2\varphi \\ -\cos 2\varphi \end{pmatrix} \qquad (6)

In Equation (6), the Jones matrix denotes a half−wave plate with its fast axis rotated by an orientation φ with respect to the y-axis. The Jones matrix of the APB is (sin ψ, −cos ψ)ᵀ, which means that Equation (6) is the electric field vector of the APB when ψ = 2φ (where ψ = arctan(y/x) is the azimuthal angle). Only when there is a π phase difference between the two linear polarizations aligned with L and W of the rectangle can each cell act as a local half−wave plate. To achieve this function in the metalens, a periodic array of nanobricks is illuminated by x− and y−polarized light at 532 nm to obtain the relationship between the sizes (L and W) of the nanobricks and the phase shifts (Φx and Φy) as well as the transmission efficiencies (Tx and Ty). The L and W of the nanobricks were both swept in the range of 20-320 nm while the height H and the period size p were kept at 600 nm and 370 nm, respectively. Periodic boundary conditions were set along the x− and y−directions of the nanobricks to avoid the influence of the interaction of neighboring unit cells. The mesh steps were 10 nm. As shown in Figure 3a,b, eight different nanobricks with a π phase difference between Φx and Φy were selected. Meanwhile, these nanobricks could maintain 0-2π phase coverage, and a phase increment of π/4 was maintained between adjacent nanobricks. As shown in Figure 3c,d, the transmission efficiencies of the two incident beams were essentially larger than 90%, which ensures that the proposed metalens maintains a high transmission regardless of the rotation angle of the unit cells. As a result, nanobricks can work as high-efficiency half-wave plates.
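As a quick consistency check on Equation (6), the following sketch (plain NumPy; the function name is ours, not from the paper) verifies numerically that a half-wave plate rotated by φ = ψ/2 maps a y-polarized Jones vector onto the azimuthal polarization (sin ψ, −cos ψ)ᵀ at every azimuth ψ:

```python
# Quick numerical check of Equation (6): a half-wave plate whose fast axis is
# rotated by phi = psi / 2 maps y-polarized light onto the azimuthal Jones
# vector (sin psi, -cos psi)^T at every azimuth psi.
import numpy as np

def hwp(phi):
    """Jones matrix of a half-wave plate with fast axis at angle phi."""
    return np.array([[np.cos(2 * phi), np.sin(2 * phi)],
                     [np.sin(2 * phi), -np.cos(2 * phi)]])

y_pol = np.array([0.0, 1.0])                      # incident y-polarized beam

for psi in np.linspace(0.0, 2 * np.pi, 8, endpoint=False):
    out = hwp(psi / 2) @ y_pol                    # field behind the nanobrick
    apb = np.array([np.sin(psi), -np.cos(psi)])   # azimuthal polarization
    assert np.allclose(out, apb)
print("phi = psi/2 reproduces the azimuthal polarization at all azimuths")
```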
Based on the transmission phase, the metalens can add the vortex and lens phases to the beam. In order for the designed metalens to load the vortex phase and focus the beam, the selected eight nanobricks were arranged according to a phase distribution in which f and m represent the focal length and topological charge, respectively (a standard form of this profile is sketched below). Here, m = 1. Based on the PB phase, the metalens can change the polarization of the incident light. In order to convert the LPB to an APB, the above eight nanobricks were rotated by an angle of ψ/2 at their local positions. Therefore, the azimuthally polarized vortex beam is expected to be generated and focused simultaneously.

Generating and Focusing the Azimuthally Polarized Vortex Beam with the Metalens

With the above designed metalens, we simulated the intensity distributions of the electric field in the propagating and focal planes (Figure 4b,c) by the FDTD method. In the following simulations, the radius R and focal length f of the metalens were 10 µm and 4.84 µm, respectively, NA was 0.9, and the mesh steps were 20 nm in the x− and y−directions and 30 nm in the z−direction. A perfectly matched layer was set along the x−, y−, and z−directions. In Figure 4b,c, the FWHM and DOF of the focal spot were 0.55 λ and 1.9 λ, respectively. The intensities of the simulated radial and azimuthal electric field components in Figure 4c were not zero when y = 0, indicating that both of them had bright focuses, which is consistent with the phenomena in Figure 1a, but different from the doughnut-shaped focuses of APBs. Furthermore, in combination with the phase distributions of the designed metalens and the polarization distributions of the transmitted light shown in Figure 4a,d, the proposed metalens may successfully convert an LPB into an azimuthally polarized vortex beam and focus the transmitted light simultaneously.
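The phase profile itself did not survive extraction; the standard vortex-plus-hyperbolic-lens form, which matches the stated roles of f and m and is the usual choice for this kind of metalens (our reconstruction, not the authors' verbatim equation), is:

```latex
% Hedged reconstruction of the metalens phase profile (vortex + focusing):
\varphi(x, y) = m\,\psi \;-\; \frac{2\pi}{\lambda}\left(\sqrt{x^{2} + y^{2} + f^{2}} - f\right),
\qquad \psi = \arctan\!\left(\frac{y}{x}\right)
```

with each nanobrick additionally rotated by ψ/2 to supply the PB polarization conversion described above.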
The Needle Beam Generated by BOE with Metalens

According to Figure 1b,c in the theoretical calculation, the BOE can not only reduce the size of the focal spot, but also extend the DOF, so we loaded the phase distributions of the BOE onto the metalens to produce the needle beam. After loading, the BOE phase φBOE is added to the phase distribution of the metalens. The multi−region BOE used in this work is shown in Figure 5a. The total electric field distribution of the generated needle beam at the propagating plane is shown in Figure 5b. As can be seen from Figure 5b-d, the DOF and FWHM of the optical needle produced in this way were 3.42 λ and 0.47 λ, respectively, and both results were better than the results of focusing an azimuthally polarized vortex beam by the metalens only, which is consistent with the results in Figure 1b,c. Therefore, it can be considered that the single−layer metalens decorated with the phase distributions of the BOE can generate an optical needle. Interested readers may refer to the supplementary document for further study of the results and discussions on the polarization evolution of the generated optical needle.
The Needle Beam Generated by an Annular Aperture with Metalens

According to the theoretical results in Figure 1b,c, the annular aperture can help to extend the DOF and form a sharper focus. An annular aperture was therefore introduced to work with the metalens to generate the needle beam. As can be seen from Figure 6a, the metalens had an inner opaque gold plate to form the annular aperture σ. The total electric field distribution at the axial cross−section is shown in Figure 6b. Obviously, the DOF of the focal spot obtained by this method was significantly longer than that in Figure 4b, which is consistent with the theoretical calculation results in Figure 1c. In combination with Figure 6b,c, a needle beam with a FWHM of 0.46 λ and a DOF of 16.4 λ was obtained under σ = 0.9. In this method, the phase, polarization, and amplitude are controlled simultaneously to achieve an optical needle with subdiffraction size and long DOF. Interested readers may refer to the supplementary document to further study the results and discussion on the polarization evolution of the generated optical needle. To show the impact of σ on focusing, we studied the opaque metal plate with different radii but a fixed R of 10 µm. The DOF and FWHM varying with σ are plotted in Figure 6d. With the increase in σ, the DOF gradually extends and the FWHM becomes smaller, and these two characteristics of the needle beam are improved.
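Continuing the earlier numerical sketch, the annular aperture only changes the pupil: angles below θmin = σ·θmax are blocked and the incident amplitude becomes a plane wave. A minimal variant (again ours, for illustration):

```python
# Annular-aperture variant of the focal-field sketch: sigma = 0.9 blocks all
# angles below theta_min = sigma * theta_max, and l0 becomes a constant
# (plane-wave illumination) instead of the Bessel-Gaussian profile.
import numpy as np

sigma = 0.9
theta_max = np.arcsin(0.9)
theta_min = sigma * theta_max

def annular_pupil(theta):
    """P(theta) for the ring theta_min <= theta <= theta_max, zero elsewhere."""
    return np.where(theta >= theta_min, 1.0, 0.0)

# With focal_field() from the earlier sketch, pass pupil=annular_pupil and
# replace l0 by a constant; sweeping sigma then reproduces the qualitative
# DOF-versus-FWHM trade-off plotted in Figure 6d.
```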
However, in terms of the properties of the needle beam, a bigger σ is not always better, because the electric field intensity of the side lobes of the needle beam also increases with σ. It is effective to obtain optical needles using the annular aperture, but this method is not efficient and could be optimized by a higher−order mode with multi-zone binary phase pupil filters [28].

Conclusions

In conclusion, we proposed dielectric metalenses based on the PB phase capable of generating and focusing an azimuthally polarized vortex beam simultaneously. It was shown that the proposed additional phase or amplitude modulation working with a metalens can achieve optical needles with narrow FWHM and extended DOF. The FWHM and DOF of the focal spot of the azimuthally polarized vortex beam were 0.55 λ and 1.9 λ when focused by the metalens only. An optical needle with a FWHM of 0.47 λ and a DOF of 3.42 λ was generated by loading the phase distribution of the BOE onto the metalens. By combining the annular aperture and the metalens, an optical needle with a DOF of 16.4 λ and a subdiffraction FWHM of 0.46 λ could be obtained. Through the capability of the metalens to modulate the amplitude, phase, and polarization at the same time, an optical needle with a subdiffraction size and long DOF can be generated. These efficient and flexible methods to obtain needle beams can potentially benefit applications in super-resolution imaging, photolithography, and particle trapping.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nano12224074/s1, Figure S1: The cross-section of the ellipticity of the local polarization ellipse of the optical needle generated by the BOE and annular aperture (AA) combined with the metalens, respectively. Reference [29] is cited in the supplementary materials.
6,815.2
2022-11-01T00:00:00.000
[ "Physics" ]
Improved annotation with de novo transcriptome assembly in four social amoeba species Annotation of gene models and transcripts is a fundamental step in genome sequencing projects. Often this is performed with automated prediction pipelines, which can miss complex and atypical genes or transcripts. RNA sequencing (RNA-seq) can aid the annotation with empirical data. Here we present de novo transcriptome assemblies generated from RNA-seq data in four Dictyostelid species: D. discoideum, P. pallidum, D. fasciculatum and D. lacteum. The assemblies were incorporated with existing gene models to determine corrections and improvements on a whole-genome scale. This is the first time this has been performed in these eukaryotic species. An initial de novo transcriptome assembly was generated by Trinity for each species and then refined with the Program to Assemble Spliced Alignments (PASA). The completeness and quality were assessed with the Benchmarking Universal Single-Copy Orthologs (BUSCO) and Transrate tools at each stage of the assemblies. The final datasets of 11,315-12,849 transcripts contained 5,610-7,712 updates and corrections to >50% of existing gene models, including changes to hundreds or thousands of protein products. Putative novel genes were also identified, and alternative splice isoforms were observed for the first time in P. pallidum, D. lacteum and D. fasciculatum. In taking a whole-transcriptome approach to genome annotation with empirical data we have been able to enrich the annotations of four existing genome sequencing projects. In doing so we have identified updates to the majority of the gene annotations across all four species under study and found putative novel genes and transcripts which could be worthy of follow-up. The new transcriptome data we present here will be a valuable resource for genome curators in the Dictyostelia and we propose this effective methodology for use in other genome annotation projects.

Background

Whole genome sequencing projects are now within the scope of single laboratories. The Genomes OnLine Database [1] reports (as of 13th May 2016) that there are 76,606 sequenced organisms, of which 12,582 are eukaryotes. However, only 8,047 are reported as being complete. Annotation of gene models is a requirement for a complete genome [2]. There are several complementary strategies for achieving gene annotation in novel genomes, including gene prediction [3,4], expressed sequence tag (EST) libraries [5] and RNA sequencing (RNA-seq) data [6]. Gene prediction methods are limited in the complexity of the gene models they are able to produce; alternative splice sites are unpredictable and untranslated regions (UTRs) have subtle signals [7]. EST libraries, if available, are usually fragmented and incomplete. RNA-seq data is dependent on good alignments to the reference. De novo transcriptome assembly is equally able to fulfil this function, although it can be computationally challenging [8][9][10]. Transcriptome assembly methods can be either reference-guided or reference-free [11,12]. Reference-guided methods have the advantage of simplifying the search space, but are dependent on the relevance, quality and completeness of the reference. Reference-free methods do not have any dependencies, but need to deal with sequencing errors sufficiently well to avoid poor assemblies [11][12][13].
We present the application of de novo transcriptome assembly to four eukaryotic species: Dictyostelium discoideum, Polysphondylium pallidum, Dictyostelium fasciculatum and Dictyostelium lacteum. The genome of D. discoideum was published in 2005; it is 34 Mb in size and has been assembled into six chromosomes, a mitochondrial chromosome, an extra-chromosomal palindrome encoding ribosomal RNA (rRNA) and three 'floating' chromosomes [14]. The genome was generated via dideoxy sequencing, and contigs were ordered into chromosomes by HAPPY mapping [14,15]; it still contains 226 assembly gaps. In contrast, the similarly sized genomes of P. pallidum, D. lacteum and D. fasciculatum were sequenced more recently using both dideoxy and Roche 454 sequencing. Their assembly was assisted by a detailed fosmid map and primer walking, leading to only 33 to 54 gaps per genome, although these assemblies are more fragmented, with 41, 54 and 25 supercontigs, respectively [15,16]. The D. discoideum genome has been extensively annotated via the Dictybase project [17], whereas the gene models for P. pallidum, D. fasciculatum and D. lacteum, available in the Social Amoebas Comparative Genome Browser [18], are primarily based on computational predictions. The social amoeba D. discoideum is a widely used model organism for studying problems in cell, developmental and evolutionary biology due to its genetic tractability, allowing elucidation of the molecular mechanisms that underpin localized cell movement, vesicle trafficking and cytoskeletal remodeling as well as multicellular development and sociality. The social amoebas form a single clade within the Amoebozoa supergroup and are divided into four major taxon groups according to molecular phylogeny based on SSU rRNA and α-tubulin sequences [15]. The four species under study here represent each of the four groups: D. discoideum (group 4), P. pallidum (group 2), D. fasciculatum (group 1) and D. lacteum (group 3). Genome annotations are not static and benefit from the application of additional evidence and new methodologies [7,19]. Therefore we present, for the first time, substantially updated annotations based on a de novo transcriptome assembly for the D. discoideum, P. pallidum, D. fasciculatum and D. lacteum genomes.

Sample preparation

Sequencing data were obtained from four RNA-seq experiments. The D. discoideum data were obtained from an experiment comparing gene expression changes between wild-type cells and a diguanylate cyclase (dgcA) null mutant at 22 h of development [20]. The P. pallidum data were obtained at 10 h of development in an experiment comparing wild-type and null mutants in the transcription factor cudA (Du, Q. and Schaap, P., unpublished results). In this experiment P. pallidum cells were grown in HL5 axenic medium (Formedium, UK), starved for 10 h on non-nutrient agar, and harvested for total RNA extraction using the Qiagen RNAeasy kit. The data for D. lacteum and D. fasciculatum were obtained from developmental time series [21]. For these series, cells were grown in association with Escherichia coli 281, washed free from bacteria, and plated on non-nutrient agar with 0.5% charcoal to improve synchronous development. Total RNA was isolated using the Qiagen RNAeasy kit at the following stages: growth, mound, first fingers, early-mid culmination, fruiting bodies. D. lacteum RNAs were also sampled at three time points intermediate to these stages.
Illumina paired-end sequencing

Total RNA was enriched for messenger RNA (mRNA) using poly-T oligos attached to magnetic beads and converted to a sequencing-ready library with the TruSeq mRNA kit (Illumina), according to the manufacturer's instructions, and sequenced as 100 basepair (bp) paired-end reads using an Illumina HiSeq instrument. For the D. discoideum and P. pallidum samples, 1 μg of total RNA was used as starting material, with 4 μl of a 1:100 dilution of External RNA Controls Consortium (ERCC) ExFold RNA Spike-In Mixes (Life Technologies) added as internal controls for quantitation for the RNA-seq experiment, and sequenced at the Genomic Sequencing Unit, Dundee. In total there were 433 M, 413 M, 171 M and 319 M reads respectively for D. discoideum, P. pallidum, D. fasciculatum and D. lacteum.

Data processing and de novo transcriptome assembly

The quality of the raw reads was checked with FastQC [22] and the reads were found to have high quality scores across their full length. No trimming of the data was performed, as aggressive trimming can negatively impact the quality of assemblies [23]. All reads for each species were separately combined prior to de novo assembly. Being a more mature genome, the D. discoideum data were used to verify the methodology, thereby giving a reference point for the other, less well characterised, species. Figure 1 shows a schematic of the overall workflow. Trinity version 2013.11.10 [8] was used for de novo assembly, and normalisation of the read data was performed with a k-mer of 25, aiming for 50x coverage of the genome. Following normalisation there remained 5.3 M, 8.3 M and 16.0 M read pairs in D. discoideum, P. pallidum and D. lacteum, respectively. D. fasciculatum reads were not normalised as there were fewer than the recommended 300 million reads as per the Trinity manual. Trinity was run on the normalised reads using the --jaccard_clip parameter, and --min_kmer_cov was set to 4 in an attempt to reduce the number of fused transcripts in P. pallidum only; in the other species the parameter made little difference. Any transcripts with BLAT (BLAST-like alignment tool) v35x1 [24] hits to the ERCC spike-in sequences were removed from the initial D. discoideum and P. pallidum assemblies. D. fasciculatum and D. lacteum were not cultured axenically and thus the samples were contaminated by their bacterial food source. In order to remove the bacterial contamination, the D. fasciculatum and D. lacteum transcripts were filtered with the TAGC (taxon-annotated GC-coverage) plot pipeline [25]. TAGC determines for each contiguous sequence (contig) the proportion of GC bases, its read coverage and its best phylogenetic match. With this information it is possible to identify which transcripts are most likely to be contaminants and remove them. First, all the transcripts were aligned to the BLAST 'nt' database using megablast. Then, using the Trinity-assembled transcripts, the BAM file of the reads mapped back to the transcripts and the transcripts-to-species mapping, non-target transcripts were removed. The contaminant transcripts were differentiated on the coverage vs GC plots (see Additional file 1: Figure S1).
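A minimal Python sketch of the TAGC-style decision just described follows; the target clade and GC cut-off below are illustrative assumptions, not the study's values, since in practice the clouds are chosen by inspecting the coverage-versus-GC plot.

```python
# Minimal sketch of TAGC-style (taxon-annotated GC-coverage) contaminant
# filtering. Each contig carries a GC fraction, a read coverage and the taxon
# of its best megablast hit. Thresholds and taxa here are placeholders.
from dataclasses import dataclass

@dataclass
class Contig:
    name: str
    sequence: str
    coverage: float   # mean read depth from mapping the reads back
    taxon: str        # best-hit taxon from megablast, or "no-hit"

def gc_fraction(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def is_contaminant(contig: Contig,
                   target_taxa=("Amoebozoa",),  # assumed target clade
                   max_gc=0.55) -> bool:        # dictyostelid genomes are AT-rich
    """Keep target-clade contigs; flag off-target hits whose GC is suspicious."""
    if contig.taxon in target_taxa:
        return False
    # off-target, GC-rich contigs sit in the bacterial cloud of the TAGC plot;
    # unclassified ("no-hit") contigs are kept unless their GC is also high
    return gc_fraction(contig.sequence) > max_gc

contigs = [
    Contig("t1", "ATATATGCATTA", coverage=120.0, taxon="Amoebozoa"),
    Contig("t2", "GCGCGGCCGCGC", coverage=8.0, taxon="Proteobacteria"),
]
kept = [c.name for c in contigs if not is_contaminant(c)]
print(kept)   # -> ['t1']
```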
The normalised set of reads was aligned with bowtie (0.11.3, with parameters applied as per the Trinity script alignReads.pl) [26] to the whole transcript set and the total number of reads matching each transcript was stored (see Additional file 1: Figure S2 for the read distributions for each dataset).

Transcript refinement

Program to Assemble Spliced Alignments (PASA) v2.0.0 [27] was used to refine the Trinity transcripts into more complete gene models including alternatively spliced isoforms. Initially developed for EST data, PASA has been updated to also work with de novo transcriptome data. Using the seqclean tool available with PASA, all the transcripts were screened and trimmed for low complexity regions, poly(A) tails and vector sequences. GMAP (Genome Mapping and Alignment Program) [28] and BLAT [24] were used to align the transcripts to their respective genomes. Trinity transcripts that failed to align to the existing genome in both GMAP and BLAT were removed as 'failed'. The remaining 'good' transcripts at this stage are termed the PASAaa dataset. Next, PASA takes existing annotations and compares them to the PASAaa dataset. PASA uses a rule-based approach for determining which transcripts are consistent or not with the existing annotation and updates the annotation as appropriate: new genes, new transcript isoforms or modified transcripts. PASAua is the term used for the PASA-assembled transcripts after updating with the existing annotation.

Assembly quality check

At each stage, the transcript datasets were assessed with Benchmarking Universal Single-Copy Orthologs (BUSCO) [29] and Transrate v1.0.0 [13]. These methods take complementary approaches in assessing completeness and/or accuracy. Transrate v1.0.0 uses the read data and optionally the reference sequence as input. BUSCO defines a set of 429 core eukaryotic genes. These genes are used as a proxy for minimum completeness, based on the assumption that a eukaryotic genome or transcriptome assembly should encode a large proportion of the core set of genes. The BUSCO (v1.1b1) tool uses hidden Markov models (HMMs), defined for each of the core genes in the set, returning whether there are complete or partial matches within the de novo transcripts. When run in genome mode, BUSCO additionally uses Augustus [3] to generate a predicted gene set against which the HMMs are tested. Transrate calculates the completeness and accuracy by reporting a contig score and an assembly score. The contig score measures the quality of an individual contig, whereas the assembly score measures the quality of the whole assembly.

Fig. 1 The de novo transcriptomics assembly workflow. The reads are input at the top in green, all computational steps are in blue and all data or quality control outputs are shown in grey. PASA is the Program to Assemble Spliced Alignments tool [27]. See main text for description of PASAaa and PASAua steps. BUSCO is Benchmarking Universal Single-Copy Orthologs [29].

Orphan RNAs

The full set of Trinity transcripts constitutes the best approximation of the assembly of transcripts expressed in the RNA-seq sequencing data. The transcripts were aligned against the existing genome and complementary DNA (cDNA) references (from Dictybase (D. discoideum) and SACGB [18] (D. fasciculatum, D. lacteum and P. pallidum)) using BLAT.
Any transcripts not matching the existing references were searched against the NCBI 'nt' database with BLAST [30], and the longest predicted ORF in any remaining transcripts without a match to 'nt' was searched with PSI-BLAST against the NCBI 'nr' database. This exhaustive search allowed the categorisation of transcripts into 'annotated' (transcript with a match to the known genome and/or cDNA), 'known' (match to a related species), 'artefact' (match to a non-related, non-Dictyostelid species) and 'putative novel' (the remainder) datasets.

PCR and subcloning

D. discoideum genomic DNA (gDNA) was extracted using the GenElute mammalian genomic DNA extraction kit (Sigma). Polymerase chain reaction (PCR) reactions were run for 30 cycles with 50 ng of gDNA and 1 μM of primers, with 45 s annealing at 55°C, 2 min extension at 70°C and 30 s denaturation at 94°C. The reaction mixtures were size-fractionated by electrophoresis, and prominent bands around the expected size were excised, purified using a DNA gel extraction kit (Qiagen) and subcloned into the PCR4-TOPO vector (Invitrogen). After transformation, DNA minipreps of clones with the expected insert size were sequenced from both ends.

Results and discussion

De novo transcript assembly

Table 1 shows a summary of the Trinity output for the D. discoideum, P. pallidum, D. lacteum and D. fasciculatum de novo transcriptome assemblies. Overall, the raw assemblies are similar in terms of total transcripts, GC content, and contig N50 or E90N50 (N50 for the top 90% expressed transcripts). D. discoideum is slightly anomalous in N50, E90N50, mean length and transcripts ≥ 1,000 bp, with all features being smaller than in the other three assemblies. The mean length over all the annotated Dictybase coding sequences is 1,685 bp, which is substantially larger than in the assembled transcripts (867 bp), suggesting that the D. discoideum transcripts are fragmented. Figure 2 shows the distribution of transcript lengths for D. discoideum, P. pallidum, D. fasciculatum and D. lacteum (cyan) when compared to the available cDNA datasets (magenta). The D. discoideum cDNAs are manually curated, whereas the others are predicted. The transcript sets are enriched in short transcripts (<1000 bp) as compared to their cDNAs, with the effect being most marked in D. discoideum, D. fasciculatum and D. lacteum (Fig. 2a,c and d). The P. pallidum assembly is more similar to its cDNA reference dataset (Fig. 2b). Interestingly, the longest assembled transcript in D. discoideum (21,679 bp) was found to be approximately half of the mitochondrial chromosome. We speculate that as the mitochondrion is gene rich and highly expressed, Trinity was unable to resolve overlapping reads from adjacent genes, thereby joining them all into one 'supercontig'. The subsequent steps in the assembly were performed with PASA [27], which uses reference genome and transcript datasets to generate a refined and updated transcriptome assembly. The first stage takes the transcriptome assemblies, aligns them against the genome and clusters them into gene structures according to their genome alignments. Any transcripts which do not align adequately to the genome are filtered out by PASA, under the assumption that they are misassemblies. This dataset will be referred to as 'PASA annotated assemblies', PASAaa. In unfinished and complex genomes, it is possible that there are missing gene loci in the genome reference. The missing loci may appear in a de novo transcriptome assembly and would be filtered out by PASA.
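For readers unfamiliar with the two length statistics quoted from Table 1 above, here is a small Python sketch; N50 is standard, and E90N50 is implemented exactly as the text defines it (the N50 of the most highly expressed transcripts that together account for 90% of total expression). The example lengths and TPM values are illustrative only.

```python
# Sketch of the Table 1 length statistics: N50 and E90N50.
def n50(lengths):
    """Largest length L such that transcripts of length >= L hold half the bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

def e90n50(lengths, expression):
    """N50 restricted to the top-expressed transcripts reaching 90% of expression."""
    ranked = sorted(zip(expression, lengths), reverse=True)
    target = 0.9 * sum(expression)
    subset, running = [], 0.0
    for expr, length in ranked:
        subset.append(length)
        running += expr
        if running >= target:
            break
    return n50(subset)

lengths = [3000, 1500, 1200, 800, 500, 300]
tpm = [200.0, 150.0, 90.0, 40.0, 15.0, 5.0]
print(n50(lengths), e90n50(lengths, tpm))   # -> 1500 1500
```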
The second stage uses the aggregated and filtered set of transcripts to refine the existing annotations for each of the species. At this stage, the gene models are updated with new or extended UTRs, new alternatively spliced isoforms are added, and introns are added or removed. New genes are identified, and existing genes are split or merged as required by the de novo assembly data. This dataset is referred to as 'PASA updated annotations', PASAua. Table 2 shows the results of each stage of the assembly workflow, from Trinity through each of the PASA steps, compared to the existing set of gene models from DictyBase (D. discoideum) or Augustus predictions (P. pallidum, D. lacteum and D. fasciculatum). It is clear that at each stage the assemblies become more similar to the existing gene models (Table 2). For example, in all the species the total number of transcripts was 3-4-fold larger in the Trinity data than in the existing annotations. Although de novo assembly has the potential to identify novel genes and transcripts, a 3-fold increase is unlikely. By the end of PASAua, the transcript counts were within 1,500 of the existing models, with D. fasciculatum, D. lacteum and P. pallidum having more genes than in their Augustus-predicted models, and D. discoideum having 760 fewer genes than in the DictyBase-curated models. This is to be expected, as the gene prediction algorithms are unlikely to have found all transcripts, whereas the D. discoideum curated set will include genes expressed under certain conditions only (e.g. developmental time points) that were not part of the experiment included here. Mean transcript lengths increased through the workflow. In particular, for D. discoideum, the mean Trinity transcript length was 871 bp and the final PASAua length was 1,787 bp, indicating that the high fragmentation observable as an excess of short transcripts (Fig. 2) has been reduced. Similarly, the total number of identified exons was reduced from the initial Trinity dataset. Overall, the initial Trinity assemblies have been refined from a fragmentary and redundant dataset to a more full-length and less redundant set of transcripts, which are more similar to the existing reference datasets in terms of total transcript counts, mean length, number of exons and exons per transcript (Table 2).

Quality assessment

Benchmarking Universal Single-Copy Orthologs (BUSCO) and Transrate are tools which allow the assessment of completeness and accuracy of transcriptome assemblies. A set of 429 core eukaryotic genes (CEGs) was defined by BUSCO for the purpose of assessing completeness in eukaryotic genomes [29]. CEGs are conserved across taxa and the majority should be present in most eukaryotic species. A large fraction of missing BUSCO genes could be indicative of an incomplete assembly. Figure 3 shows the comparison of complete and partial BUSCO matches in all four species for the genome reference, Trinity assembly, PASAaa refined transcripts and PASAua updated annotations. In the ideal situation all the BUSCOs would be detected in an assembly; however, high sequence divergence or absence in the species will give a lower maximum detection level. The whole genome BUSCO score represents the upper limit for any of the assemblies. All except the PASAaa datasets have >80% complete or fragmented BUSCOs and are close to the whole genome count, suggesting the transcriptome assemblies are nearly complete.
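A small sketch of how such BUSCO category counts can be tallied from the tool's tab-separated full-table output follows; note that the exact column layout and status vocabulary vary between BUSCO versions, so the parsing below (ID in column 1, status in column 2, statuses Complete/Duplicated/Fragmented/Missing) is an assumption to adapt, and the path is hypothetical.

```python
# Tally BUSCO categories from a tab-separated full table. Assumed layout:
# comment lines start with '#', column 1 is the BUSCO ID, column 2 the status.
from collections import Counter

RANK = {"Complete": 0, "Duplicated": 1, "Fragmented": 2, "Missing": 3}

def busco_summary(table_path):
    best = {}
    with open(table_path) as handle:
        for line in handle:
            if line.startswith("#") or not line.strip():
                continue
            busco_id, status = line.split("\t")[:2]
            # a BUSCO may appear on several lines; keep the best status seen
            if RANK.get(status, 4) < RANK.get(best.get(busco_id), 4):
                best[busco_id] = status
    counts = Counter(best.values())
    total = sum(counts.values()) or 1
    return {s: (n, round(100 * n / total, 1)) for s, n in counts.items()}

# e.g. busco_summary("run_pasaua/full_table_pasaua.tsv")  # hypothetical path
```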
It is noticeable that the number of identified BUSCOs is consistently lower in the PASAaa data for all four species (Fig. 3). This drop is due to the strict PASA filtering during transcript assembly. PASAaa only retains transcripts which align to the reference with 95% identity and 90% length coverage. Manual checking of the BUSCOs that are identified in the Trinity data, but not in PASAaa, reveals that they are all labelled as failed alignments. This suggests that either BUSCO is overly permissive in defining the orthologues or that PASAaa is overly aggressive in filtering transcripts. PASAua appears to 'rescue' this behaviour, presumably by including good annotations for genes that are poorly assembled in the Trinity data. Transrate assesses transcript quality by calculating several contig-level metrics based on the input RNA-seq data, and measures how well the read data support the contigs. Contigs are scored individually and then combined into an overall assembly score which ranges from 0 to 1. An optimal score is also reported, which predicts the best potential assembly score achievable by removing the worst scoring contigs in the dataset. An assembly score of 0.22 and an optimised score of 0.35 were found to be better than 50% of 155 published de novo transcriptome assemblies [13]. A high Transrate score with a small improvement in the optimal score indicates a good de novo assembly, which is unlikely to be improved without further data or information. Figure 4 compares the distribution of Transrate contig scores from the Trinity assembly, PASAaa refinement, PASAua update and reference transcript/coding sequence (CDS) datasets for each of the four species. In contrast to the BUSCO data, the PASAaa data shows an improvement in Transrate contig scores when compared to the raw Trinity output, meaning that the PASAaa transcripts are more consistent with the data and suggesting that perhaps BUSCO is too permissive when assigning orthologues rather than PASAaa being too aggressive with its filtering. Notably the reference sequence datasets ('CDS', Fig. 4) for D. discoideum and P. pallidum show a lower median score than the PASAua data, indicating that PASAua is working well in combining the data with the existing annotations. There is little difference in D. lacteum. In D. fasciculatum the CDS data shows the best Transrate score of any of the assemblies. Figure 5 compares the Transrate assembly scores and optimal scores between PASAua and the annotated CDS over the four species. The assembly scores range from 0.31 (D. fasciculatum) to 0.42 (D. discoideum) and the optimal scores range from 0.32 (D. fasciculatum) to 0.53 (D. discoideum). It is clear that PASAua has better Transrate scores (Fig. 5a filled circles) than the annotated CDS (Fig. 5a filled triangles), except for D. fasciculatum, with all the PASAua assemblies scoring better than 50% of published transcriptome assemblies (Fig. 5a dotted black line). The optimal scores for PASAua are also all better than 50% of published transcriptome assembly data (Fig. 5a dotted cyan line). For D. fasciculatum, the difference between the assembly score (0.31) and the optimal score (0.32) is small (Fig. 5a green filled circles), suggesting that there is little improvement to the assembly possible given the read data for this species. Using the optimal score, Transrate defines a set of 'good' contigs which best fit the data. The proportion of PASAua good contigs ranges from 79.9% (D. discoideum) to 97.2% (D. fasciculatum) which, for all species, is a higher proportion than the annotated CDS (Additional file 1: Table S3).
Transrate additionally has a reference-based measure, which aligns the transcripts to the reference protein sequences; the results are shown in Fig. 5b. The y-axis in Fig. 5b shows the proportion of reference protein sequences covered by transcript sequences at several thresholds (at least 25%, 50%, 75%, 85% and 95% of the reference sequence length).

Fig. 5 Transrate assembly scores and reference coverage metric. a Compares the Transrate [13] assembly score and the optimised score between the CDS and PASAua [27] datasets in the four species (Ddis: D. discoideum, Ppal: P. pallidum, Dfas: D. fasciculatum, Dlac: D. lacteum). The dotted lines represent the Transrate scores that would be better than 50% of 155 published de novo transcriptomes as found by Smith-Unna and co-workers [13]: 0.22 overall score (black horizontal dotted line) and 0.35 optimal score (cyan horizontal dotted line). b The proportion of reference protein sequences covered by transcripts in the CDS and PASAua datasets by at least 25%, 50%, 75%, 85% and 95% of the reference sequence length.

Interpretation

What does an RNA-seq-based de novo assembly achieve when there is an already existing annotation, either manually curated or generated via prediction? Is it worth it? Table 3 details the results following PASA refinement of the existing gene models. Despite being a manually curated genome, the D. discoideum gene models were extensively modified by PASA, with 7,182 being updated. Most of the updates in D. discoideum (6,750, 94%) are the result of UTR additions at the 5′ and 3′ ends of genes, which were mostly missing in the existing models. The assemblies in the other species have a similar number of updates, but UTR-only updates to transcripts are a smaller fraction of the total. 187 new alternatively spliced transcripts, in 170 genes, were identified in D. discoideum (Table 3). There are currently 70 alternatively spliced transcripts, in 34 genes, annotated in Dictybase, so this new data represents a 2.7-fold increase in the number of alternatively spliced transcripts and a 5-fold increase in genes. This number in D. discoideum could be an underestimate, as the D. fasciculatum, D. lacteum and P. pallidum assemblies all have ~1,000 alternate splice isoforms. Figure 6 provides examples, in each of the four species, of changes to the transcript models determined by PASA that are well supported by all the data. Each panel highlights a different type of change to the reference model. Gene DDB_G0295823 has a single transcript (DDB0266642, Fig. 6a) with two exons and a single intron. The RNA-seq data (brown), Trinity assembly (purple) and PASAaa refinement (red) identify extensions to the model, adding 5′ and 3′ UTRs to the annotation (green, narrow bars). The Trinity transcript (purple) is on the opposite strand to the reference transcript (black) and is corrected by PASA (red & green). The example in P. pallidum (Fig. 6b) shows three new alternatively spliced products of the gene (Fig. 6b, green bars labelled 1, 2, 3). The three new models have the same coding region, but differ in their 5′-UTRs: two with differently sized introns and one without an intron. The new models also include a longer second coding exon (Fig. 6b, arrow), which increased the sequence of the protein product by 9 amino acids.
Figure 6c shows an example, in D. fasciculatum, where an alternatively spliced transcript alters the protein product. The alternatively spliced isoform (Fig. 6c, labelled 1) removes the first intron and extends the 5′-UTR when compared to the updated gene model (labelled 2). The CDS is shortened by 45 amino acids through the use of an alternate start site, but the rest of the protein is identical. In the RNA-seq data it appears that this new alternative transcript is not the dominantly expressed isoform in the context of the whole organism. The final example is the merging of two D. lacteum genes into one (Fig. 6d). The black bars show two distinct genes (DLA_11596 and DLA_04629), but the RNA-seq data (brown) and the Trinity assembly (purple bars) show uninterrupted expression across the intergenic region between the two genes (arrow). The PASA refinement and re-annotation (red and green bars) encapsulate the expression as a contiguous region, with the coding region being in-frame over the two existing gene models. The annotation for the upstream DLA_11596 gene in SACGB [17] gives its best bi-directional hit in Uniprot/TrEMBL as gxcN in D. discoideum (DDB0232429, Q550V3_DICDI). gxcN codes for a 1,094 amino acid protein whereas DLA_11596 codes for a 762 amino acid protein, and the pairwise alignment of DLA_11596 with DDB0232429 shows no overlap over the C-terminal 300 residues. The PASAua gene fusion of DLA_11596/DLA_04629 (Fig. 6d) codes for a longer, 1,029 amino acid protein which aligns across the full length of DDB0232429 in a pairwise alignment. We suggest that the existing gene model, DLA_11596, is a truncated form of a D. discoideum gxcN orthologue and that the fusion with the downstream DLA_04629 gene represents the more accurate gene model. Given that D. discoideum has been extensively studied and the annotation curated by Dictybase, it is of note that our pipeline identified putative changes which altered the protein sequence of 554 genes (4.5% of total reference models) (Table 3). D. discoideum has been the focus of many functional studies, including about 400 deletions in genes that are required for normal multicellular development [16]. Comparing the 554 D. discoideum genes with modified proteins to the developmentally essential genes, we found 16 genes (2.9%) that overlapped (see Additional file 2: Figure S3 for domain diagrams). Out of the 16, nine are either truncated or extended at the N- or C-terminus. In the remaining seven proteins, there is loss or gain of exons. Five proteins were updated with additional exons: DDB_G0268920, DDB_G0269160, DDB_G0274577, DDB_G0275445 and DDB_G0277719, and two proteins have an exon deletion: DDB_G0271502 and DDB_G0278639. Investigating these protein changes in more detail revealed some errors in the underlying genome sequence, which resulted in some unusual gene models. Figure 7 shows clcD (chloride channel protein, DDB_G0278639) as an example. In the domain architecture of clcD, there are two CBS (cystathionine beta-synthase) domains present at positions 827-876 and 929-977 in the transcript sequence. In the updated sequence the protein is truncated and these two domains have been removed. This is likely to be incorrect, since all eukaryotic CLC proteins require the two C-terminal CBS domains to be functional [31]. How did this change occur in the de novo transcript assembly? In the existing annotation, there is an impossibly short two-base intron between the CLC domain and the first CBS domain. Splicing requires a two-base donor and a two-base acceptor at either end of the splice site, meaning at least four bases are required, not including any insert sequence.
Careful investigation of the RNA-seq genome-aligned reads reveals a single-base insertion immediately after the intron in 22 out of 23 reads overlapping the region (yellow inset, Fig. 7). The RNA-seq data turns the two-base intron into a three-base, in-frame codon, inserting an isoleucine into the protein sequence and retaining the CBS domains. By implication there is a missing base in the genome reference, which interrupts the open reading frame with a premature stop upstream of the CBS domains (arrow, Fig. 7). PASA cannot deal with missing bases in the reference and erroneously truncates the now out-of-frame coding region four codons downstream of the missing base, at a TGA stop codon. It also cannot create an impossible intron, which a human annotator presumably added in order to keep the transcript in-frame and retain the conserved CBS domains. PASA did make an error updating this gene, but it does not seem possible for it to have dealt with the missing base any other way. Inspection of all the D. discoideum gene models identified 119 sites in 102 genes with introns shorter than 5 bp (see Additional file 3: Table S1). Of these genes, five have three tiny introns each; four of them are either poorly expressed genes or the introns lie in poorly expressed regions within genes. One gene (DDB_G0279477), however, is well expressed across its full length. The gene contains two 3 bp introns and one 1 bp intron. The two 3 bp introns contain a TAA sequence encoding a stop codon, but according to the RNA-seq data the codons should be TTA (Leu), with evidence from 56 and 33 reads at the two sites, respectively, 100% of which contain the TTA codon. The 1 bp intron region is covered by 38 reads. While, by definition, one would not expect to see introns in RNA-seq data, it does seem highly unlikely for a 1 bp intron to exist given our current knowledge of mRNA splicing: canonical GU-AG dinucleotides and a branch point >18 bp upstream from the 3′ splice site. For this gene, there are clear errors in the genome sequence, which have led to the creation of an erroneous gene model to compensate for them. It is arguable that none of the 119 <5 bp introns are genuine; rather, they are artificial constructs to fix problems with the gene models. We recommend that gene annotators revisit these genes and consider updating the models [7,32] and the underlying genome, using RNA-seq data as evidence [33,34]. In addition to what we have shown here, it would be possible to use the RNA-seq data to directly improve the genome assembly of the four dictyostelid species mentioned herein. Xue et al. [35] have shown with their 'L_RNA_Scaffolder' tool that improved scaffolding of complex genomes such as human and zebrafish is possible with RNA-seq, indicating the feasibility in more gene-dense species. The protein changes in D. fasciculatum, D. lacteum and P. pallidum number in the thousands (Table 3), highlighting that computational gene prediction is only a first step in annotating a genome. A reliable genome annotation requires evidence from many sources of information [19]. The types of protein changes seen in these three species range from inappropriately fused or split genes (see Fig. 6 bottom panel for an example) via insertions/deletions to changes in protein coding start/stop codon positions resulting in extended or truncated coding sequences. All the PASAua outputs are in the form of GFF files viewable within any genome browser.
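Since the annotations are distributed as GFF files, a screen like the tiny-intron scan described above is easy to reproduce. The following is a minimal, hedged sketch, assuming a hypothetical GFF3 file in which exon features carry a Parent attribute naming their transcript; it is not the authors' actual script:

```python
# Minimal sketch: flag introns shorter than 5 bp in a GFF3 annotation.
# Assumes a hypothetical file 'annotation.gff3' with 'exon' features whose
# Parent attribute names the transcript; coordinates are 1-based inclusive.
from collections import defaultdict

exons = defaultdict(list)
with open("annotation.gff3") as gff:
    for line in gff:
        if line.startswith("#"):
            continue
        cols = line.rstrip("\n").split("\t")
        if len(cols) < 9 or cols[2] != "exon":
            continue
        start, end = int(cols[3]), int(cols[4])
        attrs = dict(f.split("=", 1) for f in cols[8].split(";") if "=" in f)
        exons[(cols[0], attrs.get("Parent", "NA"))].append((start, end))

for (seqid, transcript), coords in exons.items():
    coords.sort()
    # An intron between adjacent exons spans (left_end + 1) .. (right_start - 1).
    for (_, left_end), (right_start, _) in zip(coords, coords[1:]):
        intron_len = right_start - left_end - 1
        if 0 < intron_len < 5:
            print(f"{transcript}\t{seqid}\t{left_end + 1}-{right_start - 1}\t{intron_len} bp")
```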
We have made an IGB Quickload server available for easy browsing of the data (http://www.compbio.dundee.ac.uk/Quickload/Dictyostelid_assemblies). In the D. discoideum, D. fasciculatum, D. lacteum and P. pallidum datasets 44, 19, 21 and 175 novel putative genes were identified by PASA, respectively (Table 3). These novel genes are in genomic loci with no currently annotated gene model or where an existing model is substantially modified. The 44 D. discoideum novel genes, defined by 47 transcripts, were examined by eye in IGB [36] against all known D. discoideum reference datasets, including predicted gene models (see Additional file 4: Table S2). Of the 47 transcripts, 8 are novel alternate splice transcripts (Additional file 4: Table S2). Although 'novel' suggests there is no existing annotation at the locus of interest, if a gene update is sufficiently different from the reference gene model, PASA may consider that locus as a novel gene. In most of these cases the new transcript represents a corrected model for a previously computationally predicted gene. Many of the predicted gene models were annotated in Dictybase as pseudogenes and were originally ignored by PASAua, which only considers protein coding genes. Fragments of the pseudogenes do encode ORFs and PASA has reported them as being novel genes (Additional file 4: Table S2), but it is not possible to be sure from these data whether the protein products are expressed in vivo. Out of the 47, it appears only 6 are truly novel, as they do not overlap any previously annotated transcripts: novel_model_13, novel_model_23, novel_model_30, novel_model_31, novel_model_38 and novel_model_39. All except novel_model_23 have a sequence match to existing genes, suggesting that they are paralogues. The longest novel unannotated model is 510 amino acids in length (novel_model_31) and appears to be a duplicate copy of the leucine-rich repeat protein lrrA present on chromosome 2. Notwithstanding the large number of updates to the existing D. discoideum annotations, it is clear from Table 3 that there are substantially more changes in the other three species. In particular, the numbers of modified protein sequences are 4-, 9- and 10-fold larger in P. pallidum (2,252), D. lacteum (4,741) and D. fasciculatum (5,393), respectively. Similarly, there are 7-, 5- and 6-fold more novel alternate splice isoforms in the three species, respectively. For P. pallidum (1,321), D. lacteum (1,088) and D. fasciculatum (842), the gene models were predicted with Augustus (G. Glöckner, personal communication) which, given the updates found with PASA, suggests that although the predicted gene models are in the correct locus, many are inconsistent with empirical RNA-seq evidence. With respect to novel genes annotated by PASA, it is notable that D. fasciculatum and D. lacteum have fewer than either D. discoideum or P. pallidum. It is unclear why this would be. Many genes were inspected by eye with IGB [36] and overall the annotations appear appropriate, but there are many occasions where human intervention would make further improvements.

Orphan RNAs

As mentioned above, PASA requires that transcripts align to the genome before it can consider them for further analysis. It makes sense to use the genome as a filter for valid transcripts; however, this makes the assumption that the genome is complete. Any gaps in the genome that include genes will result in filtering out perfectly valid transcripts.
To determine whether this has happened here, we isolated the transcripts that did not align to the genome and used a process of elimination to identify those transcripts that could be genuine. Table 4 breaks down the number of orphan RNAs and whether they match non-dictyostelid genes ('artefact'), genes in other dictyostelids ('known') or neither ('novel'). D. fasciculatum and D. lacteum have far more 'novel' non-genome transcripts (6,559 and 6,465, respectively) than D. discoideum (69) or P. pallidum (26). This is likely due to the fact that these species, which were cultured on bacteria, contain chimeric misassemblies of bacterial and dictyostelid transcripts. Despite this, they still have 525 and 945 'known' transcripts which have sequence matches to other dictyostelids, higher than seen in D. discoideum (14) and P. pallidum (82). These transcripts are probably the best candidates for experimental assessment as genuinely non-genome transcripts. We further investigated the 69 D. discoideum 'novel' transcripts with a more sensitive PSI-BLAST search on their longest ORFs and queried their cognate proteins for functional domains using SMART. Table 5 shows the 11 most interesting hits based on the sequence match, read count and ORF length. They are all well expressed and have ORF lengths consistent with functional proteins. Three novel transcripts (comp4660_c0_seq1, comp4660_c4_seq1 and comp5569_c2_seq1) show similar sequence matches to DDB_G0292950 via PSI-BLAST searching, in spite of very low sequence similarity between them. DDB_G0292950 codes for a hypothetical protein which is not conserved in other dictyostelids and is poorly expressed (RPKM <1) at all time points in dictyExpress [37]. The three transcripts match across different parts of DDB_G0292950, indicating that they are different parts of the same larger gene. All transcripts identified in Table 5 were selected for experimental validation via PCR amplification, and 8/11 were confirmed. The comp5787_c28_seq1 transcript has putative homologues in D. fasciculatum, D. purpureum and P. pallidum, as shown in Fig. 8. Sequence conservation is high, as is conservation of the Importin-beta N-terminal domain (IBN_N) and HEAT-like repeat (HEAT_EZ) domain architecture, although the D. discoideum sequence appears to have an additional HEAT repeat domain (Fig. 8).
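The orphan-RNA triage into 'artefact', 'known' and 'novel' described above can be expressed compactly once the orphan IDs and their best database hits are available. A minimal sketch under stated assumptions follows; file names, column layout and the lineage keywords are illustrative, not the authors' actual pipeline:

```python
# Sketch of the orphan-RNA triage: transcripts with no hit are 'novel',
# hits to other dictyostelids are 'known', anything else is an 'artefact'
# (e.g. a bacterial food-source sequence).

def classify(lineage):
    if lineage is None:
        return "novel"        # no sequence match at all
    if any(k in lineage for k in ("Dictyostelium", "Polysphondylium")):
        return "known"        # matches a gene in another dictyostelid
    return "artefact"         # matches a non-dictyostelid sequence

# Hypothetical inputs: a list of orphan transcript IDs and a ranked, tabular
# search report (query_id <tab> subject_id <tab> subject_lineage per line).
with open("orphan_transcripts.txt") as handle:
    orphan_ids = [line.strip() for line in handle]

best_hit = {}
with open("orphans_vs_nt.tsv") as report:
    for line in report:
        query, _subject, lineage = line.rstrip("\n").split("\t")[:3]
        best_hit.setdefault(query, lineage)   # keep the top-ranked hit only

counts = {"artefact": 0, "known": 0, "novel": 0}
for transcript in orphan_ids:
    counts[classify(best_hit.get(transcript))] += 1
print(counts)
```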
Genomic cloning of orphan Dictyostelium discoideum mRNAs

The newly assembled transcripts that could not be mapped onto the genome are either contaminants or genuine mRNAs for which the genomic counterpart is in an assembly gap of the genome. To investigate the latter option, we used PCR to attempt to amplify the genes from D. discoideum genomic DNA (gDNA). Oligonucleotide primers were designed to amplify regions of about 0.5–1.4 kb of 11 transcripts (Additional file 1: Table S4). The amplified size can, however, be larger due to the presence of introns. For eight transcripts, corresponding gDNAs could be amplified; in two cases, two transcripts turned out to be part of the same gene (Additional file 2: Figure S3), giving six genes in total, all of them protein coding. For three transcripts, comp470_c0_seq1, comp4678_c1_seq1 and comp2066_c1_seq1, no PCR products were obtained, but the first two transcripts contained multiple stop codons in all reading frames and are likely assembly errors. The amplified PCR products were sub-cloned and sequenced from both ends. Sequences were assembled and aligned with the transcript sequence. Apart from just a few mismatches, the transcript and gDNA sequences were identical (Additional file 1: Table S5). Only one amplified fragment contained introns (Additional file 1: Table S5). Six out of seven of the protein coding orphan transcripts therefore had a counterpart in the genome. Overall, deciphering the genomes of organisms is a key step in being able to probe their biology. With the advent of high-throughput sequencing technologies this has become a simpler problem to solve. Yet it is still not trivial to finish a genome assembly without any gaps [38]. The genome sequence on its own, however, imparts very little functional information and requires annotation of genes, transcripts and regulatory regions to be scientifically useful [7]. Many gene annotation methods are dependent either on homology to related species [30,39] or on gene-finding prediction algorithms [40,41], or ideally both. However, the first method will miss all unusual or species-specific genes, while both methods fall short of accurately predicting intron-rich genes, genes with alternative or non-canonical splice sites or genes with very short exons. The ability to generate a whole transcriptome for a given species and use it to empirically annotate the genome has the power to confirm and correct any errors introduced with other methods. This has been achieved with expressed sequence tags (ESTs) in the past [42], but now can be performed with RNA-seq short-read data [32]. This evidence-based methodology is non-trivial and is not perfect. There are examples where the data is not adequately represented in the final transcript set when interpreted by the human eye. In addition, PASA only defines protein-coding genes, meaning that all non-coding RNAs (ncRNAs) will be ignored and will not be in the final annotation unless already identified in the reference. Identifying ncRNAs is difficult as they have no obvious products or well-defined sequence features [43]. This does not negate their importance or relevance to the Dictyostelia.

Conclusion

In this study, we present a de novo transcriptome assembly in four social amoeba species for the first time, and with these data we have:
- Created a final set of 11,523 (D. discoideum), 12,849 (P. pallidum), 12,714 (D. fasciculatum) and 11,315 (D. lacteum) transcripts.
- Substantially updated the existing transcript annotations by altering models for more than half of all the annotated transcripts.
- Identified changes to thousands of transcripts in the predicted gene models of P. pallidum, D. lacteum and D. fasciculatum, many of which affect the protein coding sequence.
- Identified and validated six novel transcripts in D. discoideum.
- Putatively identified dozens to hundreds of novel genes in all four species.
- Identified errors in the genome sequence of at least two D. discoideum genes (clcD and DDB_G0279477), with the possibility of at least another 104 genes having sequence errors.
- Found hundreds of putatively alternatively spliced transcripts in all species, something which has not been identified before in P. pallidum, D. lacteum or D. fasciculatum.
By combining methodologies we now have a better and more complete description of the transcriptome for these four species. This is not an end-point, however, but a further step towards fully finished genomes. More data and more manual refinement will be required to improve the annotations further.

Fig. 8 Protein sequence alignment of comp5787_c28_seq1 with homologues from D. fasciculatum, D. purpureum and P. pallidum.
a Jalview [45] multiple sequence alignment together with the Jpred secondary structure prediction and its associated confidence, 'JNETCONF' [46]. Green arrows represent extended strands and red bars represent helical regions. In the alignment the IBN_N (purple) and HEAT_EZ (red) domains are highlighted. b MrBayes [47] phylogenetic tree annotated with the SMART [48] domain architectures determined. Each amino acid in the multiple alignment is coloured according to the Clustal X [49] colour scheme.
9,861.6
2017-01-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Co-interaction of nitrofuran antibiotics and the saponin-rich extract on gram-negative bacteria and colon epithelial cells

Large-scale use of nitrofurans is associated with a number of risks related to a growing resistance to these compounds and the toxic effects following from their increasing presence in wastewater and the environment. The aim of the study was to investigate the impact of a natural surfactant, saponins from Sapindus mukorossi, on the antimicrobial properties of nitrofuran antibiotics. Measurements of bacterial metabolic activity indicated a synergistic bactericidal effect in samples with nitrofurantoin or furazolidone to which saponins were added. Their addition led to a more than 50% greater reduction in viable cells than in the samples without saponins. On the other hand, no toxic effect against human colon epithelial cells was observed. It was found that exposure to antibiotics and surfactants caused the cell membranes to become dominated by branched fatty acids. Moreover, the presence of saponins reduced the hydrophobicity of the cell surface, making the cells almost completely hydrophilic. The results have confirmed a high affinity of saponins to the cells of Pseudomonas strains. Their beneficial synergistic effect on the action of antibiotics from the nitrofuran group was also demonstrated. This result opens promising prospects for the use of saponins from S. mukorossi as an adjuvant to reduce the emission of antibiotics into the environment. Supplementary Information The online version contains supplementary material available at 10.1007/s11274-023-03669-2.

Introduction

Nitrofurantoin (NFT), nitrofurazone (NFZ), furaltadone (FTD) and furazolidone (FZD), i.e. antibiotics from the nitrofuran group, are recognized as among the most popular antimicrobials all over the world. Nitrofurantoin alone was prescribed 2,957,359 times in 2019 in the USA, which is not an overwhelming figure in itself, but it can still be concluded that nitrofuran drugs are widely used (Nitrofurantoin Drug Usage Statistics, United States, 2013–2019). NFT has been used in the treatment of urinary tract infections, while FZD has been used to treat diarrhea, cholera and bacteremic salmonellosis. In 1995, their use as a food additive for farm animals was banned due to concerns about the carcinogenicity of the drug residues and their potentially harmful effects on human health (Vass et al. 2008). The increase in antibiotic intake is one of the main reasons for the increase in antimicrobial resistance (AMR). Increasing therapeutic doses, forced by greater drug resistance of pathogenic strains, contribute to the increasing presence of antibiotics in the environment. This phenomenon is a feedback loop, as it results in the further spread of antibiotic resistance (Polianciuc et al. 2020). This process can be interrupted by increasing the bioavailability of the antibiotic, allowing the use of lower doses of the antibiotic while maintaining the expected effectiveness (Price and Patel 2022). Moreover, the World Health Organization (WHO) has published a list of bacteria characterized by such high drug resistance that effective treatments for the infections they cause are running out (Asokan et al. 2019). At the top of the list are Acinetobacter baumannii, the Enterobacteriaceae, and also Pseudomonas aeruginosa (Ropponen et al. 2021). One of the possible methods of increasing drug bioavailability is to modify the permeability of the cell membrane using compounds with surface-active properties (Smułek and Kaczorek 2022).
One group of surfactants attracting increasing attention are saponins, surfactants of natural origin (Liao et al. 2021). The structure of their molecules is most likely responsible for their ability to integrate the hydrophobic part of the molecule into the structure of the cell membrane, while the hydrophilic part remains on the surface. There are also suggestions that saponin monomers can incorporate into the outer part of the membrane and increase the distance between membrane components on the surface, which leads to a positive membrane curvature and the formation of specific domains, whose size increases with time (Rojewska et al. 2023). The presence of sugar chains helps develop membrane defects and gradually increases the membrane permeability, even by creating holes in the membrane (Lorent et al. 2014; Rojewska et al. 2020). The aim of our study was to determine the influence of the co-action of a saponin-rich Sapindus mukorossi extract and the nitrofuran antibiotics nitrofurantoin (NFT) and furazolidone (FZD) on the cellular response of gram-negative bacteria of the Pseudomonas genus. Pseudomonas aeruginosa is outside the action spectrum of nitrofurans, and therefore we were able to observe the properties of living cells and the effect of the drug-saponin interaction on the cell membrane. The toxicity of the saponin extract on human intestinal epithelial cells was also assessed. Intestinal cells are the first barrier limiting drug penetration into the body from the gastrointestinal tract. Analysis of the influence of saponins on these cells may help establish ways of reusing nitrofuran antibiotics in combination with biosurfactants. As absorption of nitrofurans by the human body is severely restricted, it is important to check the toxic effects of saponins on the cells constituting the intestinal barrier in the drug absorption process (Huttner and Harbarth 2017). Although there are many studies on the modification of the permeability of model membranes as well as on the bactericidal properties of the saponins themselves, there have been few reports so far on a possible synergistic effect between antibiotics and saponins (Jurek et al. 2019; Lorent et al. 2014; Rojewska et al. 2020; Sreij et al. 2018). Such an effect may contribute to achieving the desired therapeutic effect at a lower dose of the drug. Thus, in our study we focused on modifying the properties of the outer membrane of gram-negative bacteria by employing the co-interaction of the antibiotics with saponins. Using electrophoretic light scattering and dynamic light scattering, together with analyses of Congo red adsorption and crystal violet permeability, we tried to understand more deeply the possible mechanism of interaction of antibiotics and surfactants with biological membranes. For this purpose, we also performed lipidomic analysis. The basic analyses of the metabolic activity of both bacterial cells and human intestinal epithelial cells provide important information about the toxicity and safety of saponins derived from Sapindus mukorossi, and also indicate a possible synergistic effect between the chosen antibiotics and saponins. The obtained results provide new, important information on the possible interaction of surfactants with nitrofuran antibiotics and allow a better understanding of their interference in the lipid profile of biological membranes. The use of saponins may contribute to reducing the growing bacterial resistance to antibiotics through the use of compounds of natural origin.
Chemicals

All chemicals used in the study, including the two nitrofurans NFT and FZD, were of the highest purity grade and were purchased from Merck KGaA (Darmstadt, Germany). The nutrient agar, nutrient broth and other microbiological supplements came from BTL sp. z o.o. (Łódź, Poland). Sapindus mukorossi nuts were obtained from Mohani (Psary, Poland). The saponin-rich extract was obtained via methanol extraction as described by Smułek et al. (2016).

Cytotoxicity analysis

The cytotoxicity of S. mukorossi extract combined with antibiotics was assessed using human CCD 841 CoN (ATCC® CRL-1790™) cells derived from normal colon mucosa and obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA). The cells were cultured in Dulbecco's Modified Eagle's Medium (Sigma-Aldrich, Poznań, Poland) supplemented with 10% fetal bovine serum (Gibco BRL, USA) and 1% non-essential amino acid solution 100× (Sigma-Aldrich). They were grown at 37 °C in a humidified atmosphere (5% CO2, 95% air) and subcultured twice a week after reaching ca. 80% confluence. A trypsin-EDTA solution (0.25%) was used to harvest the CCD 841 CoN cells. In the cytotoxicity experiments, the cells were grown in 96-well plates at an initial density of 1.5 × 10⁴ cells cm⁻². Twenty-four hour cell cultures were treated with S. mukorossi extract at concentrations ranging from 0 to 1000 μg mL⁻¹ with the addition of antibiotics at a final concentration of 5 μg mL⁻¹. After 48 h of treatment, cell viability and metabolic activity were assessed using the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) test (Sigma-Aldrich), as described by Smulek et al. (2020). Briefly, the MTT solution was added to each well to obtain a concentration of 0.5 mg MTT mL⁻¹. The microplate was incubated at 37 °C for 3 h, and then the formazan crystals were extracted with acidic isopropanol for 20 min at room temperature. Absorbance was measured at 570 and 690 nm using a Tecan M200 Infinite microplate reader (Tecan Group Ltd., Männedorf, Switzerland).

Bacterial strains and culture conditions

Three strains of bacteria of the genus Pseudomonas were used in the study: Pseudomonas plecoglossicida IsA (NCBI GenBank Accession No. KY561350), Pseudomonas sp. MChB (NCBI GenBank Accession No. KU563540) and Pseudomonas sp. OS4 (NCBI GenBank Accession No. KP096512). The bacteria were stored on nutrient agar plates. For incubation, the bacterial biomass was suspended in a nutrient broth until it reached the exponential growth phase. The bacterial cultures were then centrifuged at 4500 rcf and re-suspended in a PBS (Phosphate-Buffered Saline) solution at a constant neutral pH. Growth curves of pure bacterial cultures and of cultures in the presence of the xenobiotics are presented in the Supplementary Data with additional explanation. The measurements were performed with a microplate reader (Multiskan Sky, Thermo Fisher Scientific, Waltham, MA, USA) and 96-well clear-bottom sterile microplates, as described by Pacholak et al. (2023). Aliquots of 100 μL of the prepared bacterial cultures were transferred to the microplate wells. The plates were maintained at 30 °C with pulse shaking, and the OD600 of each well was read every 10 min for 24 h. In order to determine the action of 5 mg L⁻¹ of the FZD and NFT antibiotics on bacterial cells, in combination with 10 mg L⁻¹ of saponins or without saponins, liquid cultures were prepared. The bacteria were centrifuged out of the growth medium and then re-suspended in 1 mL of the described mixtures for 24 h.
The control sample consisted of the bacteria suspended in the PBS solution. After 24 h, the tests described in the further experiments were carried out.

Fatty acid profile of bacterial strains

The procedure of fatty acid methyl ester (FAME) extraction, analysis and identification using gas chromatography, and the data interpretation, were analogous to the methodology described by Nowak and Mrozik (2016). The mean fatty acid chain length was expressed by the following equation:

mean fatty acid chain length = Σ(%FA × C) / 100,

where %FA is the percentage of the fatty acid and C is the number of carbon atoms. To prevent alterations caused by occasionally detected fatty acids, the analysis of FAMEs included only fatty acids with a content of at least 1%. The obtained results were evaluated by analysis of variance, and statistical analyses were performed on three biological replicates of data obtained from each treatment. The statistical significance (p < 0.05) of differences was tested by one-way ANOVA, considering: (1) the effect of each treatment on the tested bacterial strains and (2) the influence of NFT and FZD on each bacterial strain. Next, differences between particular samples were assessed by post-hoc comparison of means using the least significant difference (LSD) test.

Cell surface properties measurements

The analyses below were performed on cultures of bacteria suspended for 24 h in solutions containing the drug and saponins. The analysis of cell surface properties included evaluation of Congo red binding to microbial cells, according to Ambalam et al. (2012). Moreover, a cell membrane permeability test using crystal violet was performed, as well as an MTT enzymatic activity test, as described by Smulek et al. (2020). The zeta potential was calculated from the Smoluchowski equation after measurements of electrophoretic mobility using a Zetasizer Nano ZS instrument (Malvern Instruments Ltd., UK). Additionally, the cell sizes were measured using a Mastersizer 2000 instrument (Malvern Instruments Ltd.) equipped with a Hydro 2000S unit, which enables the analysis of samples in the form of a wet dispersion. The cell diameters were measured in the range of 0.02–2000 μm. For this purpose, an appropriate quantity of the material was dispersed in a water medium and, after establishing the instrument background, the measurements were made. An atomic force microscope Park NX10 from Park Systems (Suwon, South Korea) was used to analyze changes in the cell topography of the bacteria, as described by Pacholak et al. (2023).

Statistical analysis

The results presented in the study were calculated as average values from at least three independent experiments. Analysis of variance and Student's t-test were applied to determine the statistical significance of differences between the average values. The differences were considered statistically significant at p < 0.05. All calculations were conducted using Excel 2019 (Microsoft Office) software. The FAME profiles were also subjected to principal component analysis (PCA). All analyses were performed using the Statistica 13.3 PL software package.

Cytotoxicity analysis

The cytotoxicity of saponins from S. mukorossi and of the nitrofuran antibiotics was determined in normal human cells to evaluate their safety in antibiotic therapy. The effect of the saponin-rich plant extract on the proliferation, viability and metabolic activity of colon epithelial CCD 841 CoN cells is shown in Fig. 1.
The experiments revealed that the S. mukorossi saponin extract at concentrations up to 10 μg mL⁻¹ is not cytotoxic to the normal intestinal cells. The lowest cytotoxic dose, that is, the concentration of the extract causing a decrease in cell viability by 10%, was calculated as 16.79 ± 2.87 µg mL⁻¹ (Table 1). Figure 1 shows a dose-response relationship between the saponin-rich extract and the colon mucosa cells. Based on the experimental results and model fitting, a half-maximal inhibitory concentration (IC50) was determined at 32.72 ± 2.63 µg mL⁻¹. For comparison, the IC50 values of the antibiotics were estimated to be 1.28 µg mL⁻¹ and 2.92 µg mL⁻¹ for FZD and NFT, respectively (Table 1). Notably, the saponin extract combined with the antibiotics at a concentration of 10 µg mL⁻¹ did not significantly affect their cytotoxicity; the extract supplementation did not change the cytotoxic potential of the antibiotics at any concentration tested (Fig. 2). The obtained results indicate that the use of S. mukorossi saponins at non-cytotoxic doses does not cause any additional cytotoxic effect on normal human colon cells. Therefore, a saponin extract concentration of 10 µg mL⁻¹ was chosen for further experiments. As follows from a survey of the literature, the results presented in our study are among the few concerning the impact of saponin-rich extracts on unmutated human cells. The vast majority of studies reported to date have described the cytotoxicity of saponins mainly against cancer cells, to assess their antitumor potential. For example, Hashemi et al. (2021) have observed that ginsenosides belonging to the saponin group have the ability to induce apoptosis and arrest the cell cycle. Moreover, gypensapogenin H derived from Gynostemma pentaphyllum significantly inhibited the growth of human breast cancer cells (MDA-MB-231), while exhibiting low toxicity to normal human breast epithelial MCF-10a cells (Zhang et al. 2015). Zhang et al. (2022) have reported that the asiaticoside saponin at concentrations of 20, 40 and 80 µg mL⁻¹ is not toxic to human retinal pigment epithelium ARPE-19 cells. Duewelhenke et al. (2007) have indicated suppression of the proliferation of primary human osteoblasts by the antibiotics fluoroquinolones, macrolides, clindamycin, chloramphenicol, rifampin, tetracycline and linezolid at doses up to 20 or 40 µg mL⁻¹. It has been suggested that a significant problem with some groups of antibiotics, such as the fluoroquinolones, is that they cause mitochondrial dysfunction induced by oxidative stress in human cells (Nadanaciva et al. 2010; Xiao et al. 2019).
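The IC50 values quoted above come from fitting a dose-response model to the MTT viability data. The paper does not specify the exact model used, so the following is only a hedged sketch, assuming background-corrected absorbance (A570 − A690), viability normalized to the untreated control, and a standard four-parameter logistic curve; all numbers are made-up placeholders:

```python
# Illustrative IC50 estimation from MTT data (not the authors' analysis code).
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.0, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])  # µg/mL
a570 = np.array([0.92, 0.90, 0.88, 0.80, 0.52, 0.28, 0.16, 0.12])   # placeholder
a690 = np.full_like(a570, 0.08)                                      # placeholder

signal = a570 - a690                      # background-corrected absorbance
viability = 100.0 * signal / signal[0]    # % of untreated control (conc = 0)

def four_pl(c, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

mask = conc > 0                           # fit treated wells only
popt, _ = curve_fit(four_pl, conc[mask], viability[mask],
                    p0=(100.0, 0.0, 30.0, 1.0), maxfev=10000)
print(f"Estimated IC50: {popt[2]:.1f} µg/mL")
```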
Bacterial cell viability

The strains utilized in this study exhibit high resistance to both antibiotics. The minimal inhibitory concentration (MIC) for NFT and FZD against the IsA strain reached 100 µg mL⁻¹, while for the OS4 and MChB strains it exceeded 200 µg mL⁻¹. Given the solubility limitations of the antibiotics, further experiments were carried out at concentrations well below these values. To investigate the saponin-antibiotic co-action more deeply, the enzymatic activity of Pseudomonas spp. was assessed by the MTT assay, and the results are presented in Fig. 3. For each tested strain, a decrease in the enzymatic activity was observed after the addition of the FZD antibiotic. Interestingly, a very slight increase in the activity of the Pseudomonas sp. MChB strain exposed to NFT was observed, which may suggest no toxic effect in this case. However, the addition of saponins caused a significant decrease in the enzymatic activity relative to that of the samples without the saponins added. The highest decrease in enzymatic activity was observed for the P. plecoglossicida IsA strain, of even up to 39% relative to that of the control sample. The addition of saponins to the NFT also caused a toxic effect (a decrease in cell activity by 41% relative to that of the control sample) on the Pseudomonas sp. MChB strain, on which the drug itself did not cause such an effect. Such results may suggest the presence of a synergistic effect between nitrofuran antibiotics and saponins, which enhances the toxic effect of the antibiotic. Moreover, pure saponins also caused a decrease in the metabolic activity of the P. plecoglossicida IsA strain, of up to 52% compared to that of the control. Both phenomena may be related to the formation of holes and the incorporation of saponins into the outer cell membrane, which is the first barrier between the bacterial cell and the environment. Zaynab et al. (2021) have suggested that saponins have a toxic effect on both gram-positive and gram-negative bacteria as well as fungi. The results obtained in our study correlate well with those obtained by Khan et al. (2018): saponins obtained from green tea seeds show antibacterial activity against Escherichia coli, Salmonella spp. and Staphylococcus aureus. For most of the tested strains, a decrease in OD600 to a value close to 0 was observed at a saponin concentration equal to the minimal inhibitory concentration, as well as a decrease in OD600 with increasing concentration of saponins. On the other hand, Zdarta et al. (2019) have observed no toxic effect of saponins derived from Hedera helix, and even a stimulating effect towards Raoultella ornithinolytica and Achromobacter calcoaceticus. The fairly large variety of saponins, as well as the presence of a sugar part in the molecule, which can be used as an alternative carbon source by bacteria, explain the possibility of obtaining different toxicity results for different bacterial strains. Regarding the cell morphology (refer to the AFM images in the Supplementary Data) of the control samples, a regular structure without defects and uniform in all directions can be observed. The dimensions of the cells of all strains are approximately 2 µm in length. Notably, exposure to NFT resulted in distinct changes such as visible furrows and irregularities, particularly prominent in the IsA and OS4 strains, and to a lesser extent in MChB. The addition of saponins does not appear to have a significant impact on the cell topography, as the effects are primarily attributable to NFT itself. These changes may indicate cellular damage caused by the antibiotic; however, complete cell lysis is not evident, as supported by the obtained growth curves, which indicate no cytostatic effect. Similar observations were made by Pacholak et al. (2023), who found that Stenotrophomonas acidaminiphila N0B, Pseudomonas indoloxydans WB and Serratia marcescens ODW152 exhibit discernible alterations in cell structure after prolonged exposure to nitrofurantoin, with greater severity observed after 28 days.

Bacterial fatty acid profile

Because the study focuses on the properties of the bacterial cell membrane and wall, the first step was to determine the profile of membrane fatty acids (FA) and the reaction of the strains to contact with the antibiotics. Figure 4 and Table 2 present the share of particular groups of FA in the membranes. In the Pseudomonas plecoglossicida IsA strain, straight-chain and saturated FA dominated. The branched FA had the relatively smallest share in the total amount of FA.
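The mean chain-length metric defined in the Methods is a simple weighted average, which can be computed directly from such a profile. A minimal sketch with made-up percentages (illustrative only, not the measured profiles):

```python
# Mean chain length = Σ(%FA × C) / 100, from the Methods section.
profile = {          # fatty acid -> (% of total FA, carbon chain length C)
    "16:0":       (35.0, 16),
    "18:1 w7c":   (28.0, 18),
    "15:0 iso":   (22.0, 15),
    "17:0 cyclo": (15.0, 17),
}

mean_chain_length = sum(pct * c for pct, c in profile.values()) / 100.0
print(f"Mean fatty acid chain length: {mean_chain_length:.2f}")
```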
In the case of the two other strains, Pseudomonas sp. MChB and Pseudomonas sp. OS4, the branched FA accounted for over 90% and 50% of total FA, respectively. In contrast to IsA and OS4, in whose cell membranes hydroxy and cyclopropane FA can be found, the FA profile of the Pseudomonas sp. MChB cell membrane lacks these groups and is otherwise limited to branched and unsaturated FA and the 12:0 aldehyde (Fig. 5). It should be emphasized that the addition of saponins significantly changed the membrane fatty acid profile of the IsA and OS4 strains, but had a rather limited influence on the MChB strain membrane. After the addition of saponin, the percentage of branched fatty acids in the OS4 strain membrane increased from 56 to 94% for Sap/FZD and to 97% for Sap and Sap/NFT, whereas the share of unbranched fatty acids decreased from 35 to 3.06% and 6.75% for Sap/NFT and Sap/FZD, respectively. For the MChB strain cell membrane, a small fluctuation can be noticed for unsaturated acids after treatment with saponins: the share of unsaturated acids increased from 0.79 to 2.29% for Sap/FZD and to 2.9% for Sap and Sap/NFT, at the expense of branched and straight-chain fatty acids. Nonetheless, the share of branched FA remained at about 95% in each sample of the MChB strain. The IsA strain cell membrane has the most complex composition, with 45% straight-chain, 10.5% hydroxylated, 11% cyclopropane, 5.5% branched, 8.5% 12:0 aldehyde and 27.5% unsaturated FA. In this case, the addition of pure saponin did not change the membrane FA profile. Interestingly, a synergistic effect between saponin and FZD/NFT might occur, as only in these combinations did the branched FA share increase, from 5.5% to 18.30% and 16.74% for Sap/NFT and Sap/FZD, respectively. It should also be noted that after the Sap/FZD treatment the share of straight-chain FA increased to 48%. This further confirms that bacteria modify their cell biomembranes in response to a toxic environment. The modulation of the length, branching and saturation of the FA acyl chains is one of the main bacterial mechanisms of response to stress conditions. An increased amount of unsaturated and branched FA increases membrane fluidity and enhances diffusion processes through the membrane (Willdigg and Helmann 2021). Such a change can be noticed for the IsA and OS4 strains after treatment with saponins. Górny et al. (2019) have studied the interaction of naproxen with Bacillus thuringiensis B1 and observed, after contact with the pharmaceutical, a decrease in the content of unsaturated FA, which were replaced by hydroxy FA. Similarly, Pacholak et al. (2021) have found for Pseudomonas hibiscicola strain FZD2 that, after contact with NFT and furazolidone, the proportion of branched FA increased at the expense of unsaturated and straight-chain FA. Hence, the results obtained in our study indicate that the tested Pseudomonas strains remodel their membranes in a similar manner. The exception was the MChB strain, for which the antibiotic seemed to have no significant effect on the cell membrane; what is more, no changes in its FA profile were observed after exposure to saponins. Moreover, in the P. plecoglossicida IsA cell membranes the share of straight-chain FA increased in response to the toxic environment.

Cell membrane permeability

Low permeability of the bacterial cell membrane is one of the key factors limiting the effectiveness of antibiotics. Lowering the membrane permeability is also one of the possible mechanisms of the cellular response to stressful conditions.
The cell membrane permeability was assessed using the absorption of crystal violet by bacterial cells, on a scale where 100% means complete dye absorption and 0% means no crystal violet absorption, i.e., a complete stop of the transport process through the cell membrane. The effect of reduced cell membrane permeability can be observed for treatment with pure antibiotic solutions without the addition of saponins (Table 3). The greatest decrease in dye absorption was obtained for the Pseudomonas sp. OS4 strain: from 59.9 to 36.2% for FZD. A subtle increase in cell membrane permeability was obtained for the P. plecoglossicida IsA strain, from 65.6 to 66.4%. Such a small increase is within the limit of measurement error and may indicate practically no effect. A significant increase in permeability for each of the tested strains was observed after adding the extract of S. mukorossi. In the case of the P. plecoglossicida IsA strain, the obtained results suggested an increase in cell membrane permeability by about 20–25 percentage points as compared to the control sample. Comparing the results for pure antibiotics, the addition of saponins increased the permeability by 23 percentage points for P. plecoglossicida IsA exposed to FZD alone versus FZD with S. mukorossi extract, and even by 53 percentage points for the Pseudomonas sp. MChB strain and the NFT drug. Knudsen et al. (2008) have come to similar conclusions. In their study, they observed that the presence of soybean saponins increases intestinal epithelial permeability, as determined by both reduced transepithelial resistance and increased apparent permeability of [14C]mannitol (Knudsen et al. 2008). The increase in permeability may be related, as suggested by Jacob et al. (1991) and Zheng and Gallot (2021), to the ability of saponins to solubilize cholesterol, thereby creating tears without disturbing the remaining structure of the biomembrane. Sudji et al. (2015), on the other hand, have suggested that it is the presence of cholesterol in the cell membrane that determines the action of saponins: in the absence of cholesterol, saponins do not increase the membrane permeability (Sudji et al. 2015).

Cell surface properties

Zeta potential may be one of many determinants of cell behavior under stress conditions. Both antibiotics, NFT and FZD, caused an increase in the electrokinetic potential accumulated on the surface of bacterial cells (Table 3). Only for Pseudomonas sp. MChB with FZD added was a slight decrease in zeta potential observed, from −16.2 to −16.6 mV. In the case of treatment with pure antibiotics, the zeta potential fluctuated only slightly, by approx. 1–2 mV; the addition of saponins from Sapindus mukorossi, by contrast, caused a significant decrease in the zeta potential of bacterial cells. For each of the tested strains, there was a twofold decrease in the value of the zeta potential. Saponin molecules accumulated on the cell surface, embedding themselves in the outer cell membrane, may be responsible for such a large decrease. Amphiphilic saponin molecules can be incorporated into biological membranes by their hydrophobic (aglycone) part, while the glycone part remains outside the outer membrane region, thus decreasing the stability of the colloidal solution of bacterial cells, which lowers the zeta potential (Lorent et al. 2014; Rojewska et al. 2020). Similar observations have been made by Muniyan et al. (2017).
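For reference, the Smoluchowski conversion mentioned in the Methods (zeta potential from electrophoretic mobility) is a one-line calculation. A minimal sketch, with a made-up mobility value and standard water constants at 25 °C:

```python
# zeta = eta * mu / (eps0 * eps_r), the Smoluchowski approximation.
EPS0 = 8.854e-12       # vacuum permittivity, F/m
EPS_R = 78.5           # relative permittivity of water at 25 degC
ETA = 0.89e-3          # dynamic viscosity of water at 25 degC, Pa*s

def zeta_smoluchowski(mobility_m2_per_Vs):
    """Zeta potential in volts from electrophoretic mobility."""
    return ETA * mobility_m2_per_Vs / (EPS0 * EPS_R)

mu = -1.0e-8           # placeholder mobility, m^2 V^-1 s^-1
print(f"zeta = {zeta_smoluchowski(mu) * 1e3:.1f} mV")
```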
The Congo red adsorption test permitted evaluation of the cell adhesivity (Table 3). Comparing the results obtained for the cells treated with NFT, as much as 72.03% of the dye was adsorbed on the cells of the MChB strain. The lowest cell adhesivity value was obtained for the OS4 strain with NFT: 51.44%. It is worth noting that in this series of tests the cell adhesivity value did not drop below 50%, which proves that the dye is incorporated into the structure of the bacterial cell membrane. The addition of the extract caused a significant drop in the Congo red adsorption, even to below 2%. The highest value of Congo red adsorption, of almost 30%, was obtained for the samples of the MChB strain exposed to saponins only. As noted by Rojewska et al. (2020), the decrease in the Congo red adsorption due to the presence of saponins is most likely related to the incorporation of biosurfactant molecules into the structure of the cell membrane. This results in a restriction of the space available for the incorporation of the dye molecules (Rojewska et al. 2020).

Conclusions

The results obtained in our study provide a broad perspective on the effects of nitrofuran derivatives and saponins from Sapindus mukorossi on living cells. The aim of the experiments carried out was to check the possibility of enhancing the effect of antibiotics through the supporting activity of natural surfactants on the bacterial membrane. As found, the synergistic effect was particularly evident in the systems containing both FZD and saponins, leading to a pronounced reduction in the metabolic activity of cells of the Pseudomonas sp. OS4 and P. plecoglossicida IsA strains. On the other hand, the strongest biocidal effect on Pseudomonas sp. MChB was observed in the samples with NFT and saponins. At the same time, the tested compounds were found to affect the fatty acid profile of the cell membranes in different ways. The cell membrane of the Pseudomonas sp. MChB strain did not show significant modifications as a result of contact with antibiotics and saponins, but the cell membranes of the Pseudomonas sp. OS4 strain were found to be dominated by branched fatty acids after exposure to antibiotics and surfactants. The proportion of branched-chain fatty acids also increased significantly in the P. plecoglossicida IsA strain. The changes were also evident at the level of cell surface properties. The saponins very strongly reduced the hydrophobicity of the cell surface and decreased its zeta potential. This result may indicate strong adsorption of saponins on the cell surface. In view of the potential use of saponins as antibiotic adjuvants in pharmaceutical preparations, the toxicity of the antibiotics and/or saponins to colon epithelial cells was also investigated, which permitted determination of a safe dose of saponins from S. mukorossi at which they are not toxic and do not increase the toxicity of the antibiotics. The results open promising prospects for the use of saponins from S. mukorossi as an adjuvant reducing the emission of antibiotics into the environment.

Competing interests The authors declare no competing interests.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Zeta potential [mV] of the three tested strains after 24 h exposure (values followed by different letters within a column differ significantly):
Ctrl: −10.1 ± 0.5 a; −12.3 ± 0.6 a; −14.5 ± 0.7 a
NFT: −9.8 ± 0.5 a; −9.7 ± 0.5 b; −13.6 ± 0.7 a
FZD: −9.5 ± 0.5 a; −8.8 ± 0.4 b; −14.8 ± 0.7 a
NFT + Sap: −36.1 ± 1.8 b; −27.2 ± 1.4 c; −32.3 ± 1.6 b
FZD + Sap: −33.8 ± 1.7 b; −24.9 ± 1.2 c; −32.6 ± 1.6 b
Sap: −17.7 ± 0.9 c; −16.2 ± 0.8 d; −17.6 ± 0.9 c
7,096.6
2023-06-05T00:00:00.000
[ "Environmental Science", "Medicine", "Biology" ]
Sensitivity of ice loss to uncertainty in flow law parameters in an idealized one-dimensional geometry

Acceleration of the flow of ice drives mass losses in both the Antarctic and the Greenland Ice Sheet. Projections of possible future sea-level rise rely on numerical ice-sheet models, which solve the physics of ice flow, melt, and calving. While major advancements have been made by the ice-sheet modeling community in addressing several of the related uncertainties, the flow law, which is at the center of most process-based ice-sheet models, is not in the focus of the current scientific debate. However, recent studies show that the flow law parameters are highly uncertain and might be different from the widely accepted standard values. Here, we use an idealized flow-line setup to investigate how these uncertainties in the flow law translate into uncertainties in flow-driven mass loss. In order to disentangle the effect of future warming on the ice flow from other effects, we perform a suite of experiments with the Parallel Ice Sheet Model (PISM), deliberately excluding changes in the surface mass balance. We find that changes in the flow parameters within the observed range can lead to up to a doubling of the flow-driven mass loss within the first centuries of warming, compared to standard parameters. The spread of ice loss due to the uncertainty in flow parameters is of the same order of magnitude as the increase in mass loss due to surface warming. While this study focuses on an idealized flow-line geometry, it is likely that this uncertainty carries over to realistic three-dimensional simulations of Greenland and Antarctica.

Adaptation and coastal protection cannot rely on the median estimate of sea-level rise, since there is a 50% likelihood that it will be exceeded. Rather, an estimate of the upper uncertainty range is crucial. The most recent IPCC Special Report on the Ocean and Cryosphere in a Changing Climate provides projections of sea-level rise for the year 2100 of 0.43 m (0.29-0.59 m) and 0.84 m (0.61-1.10 m) for the RCP2.6 and RCP8.5 scenarios, respectively (Pörtner et al., 2019). Other studies find slightly different (Goelzer et al., 2011, 2016; Huybrechts et al., 2011) and partly wider ranges (Levermann et al., 2020). Such projections are typically performed with process-based ice-sheet models which represent the physics in the interior and the processes at the boundaries of the ice sheet. In contrast to these processes at the boundaries of the ice sheet, many rheological parameters of the ice are typically not represented as an uncertainty in sea-level projections. The theoretical basis of ice flow, as implemented in ice-sheet models, has been studied in the lab and by field observations for more than half a century and is perceived as well established (Glen, 1958; Paterson and Budd, 1982; Budd and Jacka, 1989; Greve and Blatter, 2009; Cuffey and Paterson, 2010; Schulson and Duval, 2009; Duval et al., 2010). Glen's flow law, which relates stress and strain rate in a power law, is the most widely used in ice-flow models. It is described in more detail in Section 2.1. Some alternatives to the mathematical form of the flow law have been proposed: multi-term power laws like the Goldsby-Kohlstedt law or similar (Peltier et al., 2000; Pettit and Waddington, 2003; Ma et al., 2010; Quiquet et al., 2018) and anisotropic flow laws (Ma et al., 2010; Gagliardini et al., 2013) might be better suited to describe ice flow over a wide range of stress regimes.
However, they have not been picked up widely by the ice-modeling community, possibly because this would require introducing another set of parameters which are not very well constrained. Of all flow parameters, the enhancement factor is varied most routinely and its influence on ice dynamics is well understood (Quiquet et al., 2018; Ritz et al., 1997; Aschwanden et al., 2016). However, recent developments suggest that the other parameters of the flow law are also less certain than typically acknowledged in modelling approaches: a review of the original literature on experiments and field observations shows a large spread in the flow exponent n (which describes the nonlinear response of the deformation rate to a given stress), which can be between 2 and 4. New experimental approaches suggest a flow exponent larger than n = 3, which has been the most accepted value so far (Qi et al., 2017). Further, via an analysis of the thickness, surface slope and velocities of the Greenland Ice Sheet from remote sensing data, Bons et al. (2018) relate the driving stress to the ice velocities in regions where sliding is negligible, and can thus infer a flow exponent n = 4 under more realistic conditions. The activation energies Q in the Arrhenius law (which describe the dependence of the deformation rate on temperature) can also vary by a factor of two (Glen, 1955; Nye, 1953; Mellor and Testa, 1969; Barnes et al., 1971; Weertman, 1973; Paterson, 1977; Goldsby and Kohlstedt, 2001; Treverrow et al., 2012; Qi et al., 2017). Here we assess the implications of this uncertainty in simulations with the thermomechanically coupled Parallel Ice Sheet Model (the PISM authors, 2018; Bueler and Brown, 2009; Winkelmann et al., 2011), showing that variations in flow parameters have an important influence on flow-driven ice loss in an idealized flow-line scenario. This paper is structured as follows: in Section 2 we recapitulate the theoretical background of ice-flow physics and describe the simulation methods used. The results of the equilibrium and warming experiments in a flow-line setup with different flow parameters are presented in Section 3. Section 4 discusses the results and the limitations of the experimental approach, draws conclusions and suggests possible implications of these results.

Theoretical background of ice flow physics

The flow of ice cannot be described by the equations of fluid dynamics alone, but needs to be complemented by a material-dependent constitutive equation which relates the internal forces (stress) to the deformation rate (strain rate). Numerous laboratory experiments and field measurements show that the ice deformation rate responds to stress in a nonlinear way. Under the assumptions of isotropy, incompressibility and uni-axial stress this observation is reflected in Glen's flow law, which gives the constitutive equation for ice,

ε̇ = A τ^n,  (1)

where ε̇ is the strain rate, τ the dominant shear stress, n the flow exponent and A the softness of ice (Glen, 1958). Both the flow exponent and the softness are important parameters which determine the flow of ice. Usually, the exponent n is assumed to be constant through space and time. Until today, there is no comprehensive understanding of all the physical processes determining the softness A. It may depend on water content, impurities, grain size and anisotropy as well as on the temperature of the ice, among other things.
Within the scope of ice-sheet modeling, A is typically expressed as a function of temperature via an Arrhenius law,

A(T) = A_0 exp(-Q / (R T)),  (2)

where A_0 is a constant factor, Q is an activation energy, R is the universal gas constant and T the temperature relative to the pressure melting point (Greve and Blatter, 2009; Cuffey and Paterson, 2010). Due to pre-melt processes the softness responds more strongly to warming at temperatures close to the pressure melting point, which is often described by a piece-wise adaptation of the activation energy Q (Barnes et al., 1971; Paterson, 1991), with a larger value of Q at temperatures T > -10 °C. When using these piece-wise defined values of Q for warm and for cold ice in the functional form of the flow law, the respective factors A_0 ensure that the function is continuous at T = -10 °C. A_0 is therefore dependent on the value of the flow exponent n and on both values of Q, for cold and for warm ice. The scalar form of Glen's flow law (Equation (1)) is only valid for uni-axial stress, acting in only one direction. For a complete picture the stress is described as a tensor of order two. The generalized flow law reads

ε̇_jk = A τ_e^(n-1) τ_jk,

where ε̇_jk are the components of the strain rate tensor and τ_jk are the components of the stress deviator; τ_e is the effective stress, which is closely related to the second invariant of the deviatoric stress tensor,

τ_e = (1/2 Σ_jk τ_jk τ_jk)^(1/2).

Each component of the strain rate tensor depends on all the components of the deviatoric stress tensor through the effective stress τ_e.

Ice flow model PISM

The simulations in this study were performed with the Parallel Ice Sheet Model (PISM), release stable v1.1. PISM uses shallow approximations of the discretized physical equations: the shallow-ice approximation (SIA) (Hutter, 1983) and the shallow-shelf approximation (SSA) (Weis et al., 1999). The simulations performed here mostly use the SIA mode: the geometry of a two-dimensional ice sheet sitting on a flat bed and the SIA mode serve to study the effects of changes in the flow parameters on internal deformation and to separate those effects from changes in sliding, etc. Including the shallow-shelf approximation reproduces and even enhances the effect of changes in the activation energies Q (see Section 3.5).

Uncertainty in flow exponent and activation energies

The flow exponent n and the activation energies for warm and for cold ice, Q_w and Q_c, determine the deformation of the ice as a response to stress or temperature. A recent review (Zeitz et al., submitted; see also the literature cited in the introduction above) reveals a broad range of potential flow parameters n, Q_w and Q_c. The activation energy for cold ice, Q_c, is varied between 42 kJ/mol and 85 kJ/mol (a typical reference value is Q_c = 60 kJ/mol). The activation energy for warm ice, Q_w, is varied between 120 kJ/mol and 200 kJ/mol (a reference value is Q_w = 139 kJ/mol). For the flow exponent n, values as low as 1 have been reported, but since many experiments and observations confirm a nonlinear flow of ice, n has been varied between 2 and 4, with a reference value of n = 3. The reference values above correspond to the default values in many ice-sheet models (the PISM authors, 2018; Greve, 1997; Pattyn, 2017; Larour et al., 2012; de Boer et al., 2013; Fürst et al., 2011; Lipscomb et al., 2018).

Adaptation of the flow factor A_0

The flow factor A_0 in the flow law must be adapted to fulfill the following conditions: First, the continuity of the piece-wise defined softness A(T) must be ensured for all combinations of Q_w, Q_c and n.
Secondly, a reference deformation rate ε̇_0 at a reference driving stress τ_0 and a reference temperature T_0 (the PISM authors, 2018) should be maintained regardless of the parameters. This is because the coefficient and the power are non-trivially linked when a power law is fitted to experimental data. These conditions give

A_0,new = ε̇_0 τ_0^(−n) exp(Q/(R T_0)).

If the reference temperature is T_0 < −10 °C, the values for cold ice, A_0,c and Q_c, are used in the equation above, or else A_0,w and Q_w are used. The corresponding A_0,new for cold and warm ice, respectively, is calculated from the continuity condition at T = −10 °C. Here we choose τ_0 = 80 kPa as a typical stress in a glacier and T_0 = −20 °C. Choosing another τ_0 on the same order of magnitude has only little effect on the differences in dynamic ice loss. Choosing another T_0, on the other hand, influences how the softness changes with the activation energy Q, see Supplemental Figure S1. With T_0 closer to the melting temperature, the difference in softness at the pressure melting point decreases; thus the ice loss is less sensitive to changes in the activation energy Q.

Experimental design

The study is performed in a flow-line setup, similar to Pattyn et al. (2012), where the computational domain has an extent of 1000 km in x-direction and 3 km in y-direction (with a periodic boundary condition). The spatial horizontal resolution is 1 km. The ice rests on a flat bed of length L = 900 km with a fixed calving front at the edge of the bed, such that no ice shelves can form (Figure 1). In contrast to Pattyn et al. (2012), the temperature and the enthalpy of the ice sheet are allowed to evolve freely. The model is initialized with a spatially constant ice thickness and is run into equilibrium for different combinations of flow parameters Q_c, Q_w and n. The ice surface temperature is altitude dependent, T_s = −6 °C/km · z − 2 °C, where z is the surface elevation in km. The accumulation rate is constant in space and time for each simulation. Similar to the MISMIP setup, there is no geothermal heat flux prescribed. In the warming experiments, for each ensemble member an instantaneous temperature increase of ΔT ∈ [1, 2, 3, 4, 5, 6] °C is applied to the ice surface for the duration of 15,000 years (until a new equilibrium is reached), while the climatic mass balance remains unchanged. This means that the temperature increase can lead to an acceleration of ice flow, but is prohibited from inducing additional melt. This idealized forcing allows us to disentangle the effect of warming on the ice flow from climatic drivers of ice loss.

The thickness profile of the equilibrium state is similar to the Vialov profile (see e.g. Cuffey and Paterson (2010); Greve and Blatter (2009)). However, in contrast to the isothermal Vialov profile, here the temperature of the ice is allowed to evolve freely, leading to a non-uniform softness of the ice (the PISM authors, 2018). The extent in x-direction is given by the geometry of the setup, a flat bed with a calving boundary condition at the margin, and the height and shape of the ice sheet depend on the flow parameters n, Q_w and Q_c and the accumulation rate a.

Effect of activation energies in model simulations compared to analytical solution

In order to gain a deeper understanding of the influences of Q_c and Q_w on the equilibrium shape of ice sheets, we here compare the simulated results to analytical considerations based on the Vialov profile.
At a fixed accumulation rate of a = 0.5 m/yr, each flow parameter combination leads to an equilibrium state with a thickness profile similar to the Vialov profile but with differences in maximal thickness and volume (Figure 2 a). Overall, high activation energies increase ice-flow velocities and reduce the ice-sheet volume. The activation energy for warm ice, Q_w, affects the volume and the velocities more strongly than the activation energy for cold ice, Q_c. A high Q_w leads to softer ice close to the pressure melting point (Supplemental Figure S1) and at the base of the ice sheet, which leads to higher velocities and a lower equilibrium volume of the ice sheet, while a low Q_w leads to stiffer ice close to the pressure melting point and at the base of the ice sheet, and in consequence the velocities decrease and the volume increases (Figure 2, b and c). For a fixed Q_w, the volume appears to decrease linearly and the velocity appears to increase linearly with increasing Q_c.

The maximal thickness of an isothermal ice sheet can be estimated with the Vialov profile,

h_m = 2^(n/(2n+2)) (a/Γ)^(1/(2n+2)) L^(1/2), with Γ = 2 A(T) (ρg)^n / (n+2), (9)

with the Glen exponent n, the ice-sheet extent 2L, the softness A evaluated at the pressure-adjusted temperature T, the gravity g, and the ice density ρ (Greve and Blatter, 2009). The Vialov thickness of a temperate ice sheet (isothermal at the pressure melting point), where the softness is evaluated at the pressure melting point depending on the activation energies Q_c and Q_w (see Equation (2)), gives a lower bound to the thickness, given the same geometry and flow parameters. The simulated maximal thickness is larger than this lower bound; the ratio of the maximal thickness h_m from the PISM simulation to the lower bound from the Vialov profile depends on both Q_w and Q_c. The ratio increases with higher Q_w and decreases with higher Q_c (Figure 3 b). The ice-sheet thickness of the polythermal ice sheet, as simulated with PISM, matches well the Vialov thickness calculated with Equation (9) if an effective temperature T_eff < 0 °C is assumed. The effective temperature T_eff which matches simulations best varies for different Q_w. For Q_w = 120 kJ/mol, an effective temperature of T_eff = −5 °C matches well the equilibrium thickness of the polythermal ice sheets. For Q_w = 200 kJ/mol, an effective temperature of T_eff = −3.3 °C matches well the equilibrium thickness of the polythermal ice sheets. These differences can be partly explained by the altitude-dependent surface temperature: the maximal thickness of the ice sheets varies by approximately 800 m, which leads to a difference in ice surface temperature of approximately 4.8 °C between the thickest and the thinnest ice and thus influences the temperature within the ice sheet.

Ice-sheet initial states

In order to keep the initial ice volume largely fixed (with variations of less than one percent) in the warming experiments, we adapt the accumulation rate for each parameter combination of Q_c and Q_w. Since simulations with high activation energies Q_w have a smaller equilibrium volume at the same accumulation rate than simulations with standard activation energies, the accumulation rate a is increased to maintain an equilibrium volume close to the reference value. Simulations with low activation energies Q_c have a higher volume at the same accumulation rate, so the accumulation rate a is decreased. In the case of an isothermal ice sheet the maximal thickness and the volume can be computed analytically as shown above in Equation (9).
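The analytical lower bound can be reproduced numerically. The sketch below implements the piecewise Arrhenius softness of Equation (2), the A_0 adaptation of Section 2 (continuity at −10 °C plus a fixed reference deformation rate at τ_0 = 80 kPa and T_0 = −20 °C), and the Vialov maximal thickness of Equation (9); the reference strain rate ε̇_0 and the half-length L = 450 km are illustrative assumptions of this sketch, not values taken from the paper.

```python
import math

R = 8.314             # universal gas constant, J/(mol K)
RHO, G = 910.0, 9.81  # ice density (kg/m^3), gravity (m/s^2)

def adapted_A0(n, Q_c, Q_w, tau0=80e3, T0=253.15, eps0=1e-10):
    """Flow factors A_0 (cold/warm) such that eps0 = A(T0) * tau0^n holds and
    A(T) is continuous at T = 263.15 K. eps0 (1/s) is an illustrative choice."""
    Tc = 263.15
    if T0 < Tc:  # reference lies in the cold branch (as with T0 = -20 C)
        A0_cold = eps0 / tau0**n * math.exp(Q_c / (R * T0))
        A0_warm = A0_cold * math.exp((Q_w - Q_c) / (R * Tc))  # continuity at -10 C
    else:
        A0_warm = eps0 / tau0**n * math.exp(Q_w / (R * T0))
        A0_cold = A0_warm * math.exp((Q_c - Q_w) / (R * Tc))
    return A0_cold, A0_warm

def softness(T, n, Q_c, Q_w):
    """Piecewise Arrhenius softness A(T) of Eq. (2); T in K relative to pressure melting."""
    A0_cold, A0_warm = adapted_A0(n, Q_c, Q_w)
    if T < 263.15:
        return A0_cold * math.exp(-Q_c / (R * T))
    return A0_warm * math.exp(-Q_w / (R * T))

def vialov_hm(n, A, L, a):
    """Vialov maximal thickness (Eq. (9)) with Gamma = 2 A (rho g)^n / (n + 2)."""
    gamma = 2.0 * A * (RHO * G) ** n / (n + 2)
    return 2.0 ** (n / (2 * n + 2)) * (a / gamma) ** (1.0 / (2 * n + 2)) * math.sqrt(L)

a = 0.5 / (365.25 * 24 * 3600)             # accumulation: 0.5 m/yr in m/s
A_melt = softness(273.15, 3, 60e3, 139e3)  # softness at the pressure melting point
print(vialov_hm(3, A_melt, 450e3, a))      # lower-bound thickness for L = 450 km
```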
In our model simulations, however, the temperature distribution within the ice can evolve freely; thus the softness is not uniform and an analytical solution cannot be found. In order to find the right adaptation of the accumulation rates, we start from the ice profile of the isothermal approximation as a first guess and run the model into equilibrium. If the relative difference between the new equilibrium volume and the standard equilibrium volume exceeds 1%, we further change the accumulation rate and repeat the equilibrium simulation, always starting from the same initial state (see the sketch at the end of this subsection). The final equilibrium states found via this iterative approach differ by at most 0.8% in ice volume (Supplemental Figure S2), and the difference in maximal thickness is less than 100 m (Figure 4, a and b).

For the combination of high activation energies Q_w and Q_c, the relative differences d_x = (x − x_0)/x_0 of both the adapted accumulation rates a and the mean surface velocities v increase by more than 300% (Fig. 4, c and d), and for the combination of low activation energies Q_c and Q_w both the adapted accumulation rates a and the surface velocities v are approximately 50% lower compared to the case with standard parameters. The accumulation rate and the velocities change in the same way since they balance each other in equilibrium. A change in accumulation rates controls the vertical velocity profile and thus influences the thermodynamics in the ice, which leads to differences in the temperatures of the ice sheet (pressure-adjusted temperature distributions shown in Supplemental Figure S4 a). The change in temperature is most prominent at the top of the ice sheet, where higher accumulation rates (associated with high activation energies) lead to lower temperatures and vice versa. Thus the temperature change introduced by increased accumulation counteracts the effect of increased softness. In order to estimate how changed temperature on the one hand and changed flow parameters on the other hand impact the resulting ice softness, either one was kept fixed. The effect of the temperature changes on the ice softness is negligible compared to parameter changes (see Supplemental Figure S4 b, c and d).

The maximal thickness of the simulated polythermal ice sheet is approximately 13-16% larger than the lower bound estimated with a temperate ice sheet (Figure 5, a and b) with the same flow parameters and accumulation rates. Similar to the case with fixed accumulation rates, the simulated thickness matches the Vialov thickness well if an effective temperature T_eff < 0 °C is assumed. The effective temperature that matches simulations best varies for different Q_w, from −5 °C for Q_w = 120 kJ/mol to −3.6 °C for Q_w = 200 kJ/mol. This difference cannot be sufficiently explained by variations in surface temperature due to the difference in ice-sheet thickness. Rather, the higher effective temperatures are linked to increased flow velocities of the ice, which in turn might lead to strain heating. In simulations with a high Q_w the simulated thickness has a higher discrepancy to the estimated lower bound (assuming a temperate ice sheet) than simulations with a low Q_w. In contrast to the case with fixed accumulation rate (Figure 3), the ratio between the estimated and the simulated thickness depends only very little on Q_c.
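The iterative adjustment of the accumulation rate can be summarized as a simple control loop. In the sketch below, `run_to_equilibrium` is a hypothetical stand-in for a full equilibrium PISM simulation (not a real PISM call), and the proportional update rule is an assumption of this sketch, since the paper does not specify one.

```python
# Sketch of the iterative accumulation-rate tuning used to fix the initial volume.

def tune_accumulation(run_to_equilibrium, params, v_target, a_first_guess,
                      tol=0.01, max_iter=20):
    """Scale the accumulation rate until the equilibrium volume is within tol
    (here 1%) of the target, always restarting from the same initial state."""
    a = a_first_guess
    for _ in range(max_iter):
        v = run_to_equilibrium(a, params)   # hypothetical equilibrium simulation
        rel_diff = (v - v_target) / v_target
        if abs(rel_diff) <= tol:
            return a, v
        # Volume grows with accumulation, so a proportional correction is a
        # reasonable (assumed) update rule: shrink a if the volume is too large.
        a *= 1.0 - 0.5 * rel_diff
    raise RuntimeError("accumulation tuning did not converge")
```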
Flow-driven ice loss under warming

Disentangling the purely flow-driven ice losses from the influences of melting, different initial temperature profiles and variations in sliding requires several conditions: 1) The initial volume is fixed, which is here attained through adjustment of the accumulation rate for the different flow parameter combinations as explained in Section 3.2. 2) The surface mass balance is fixed, i.e., we do not allow for additional melt, and the accumulation rate does not change with warming. 3) Sliding is effectively inhibited (which is here ensured by applying an SIA-only condition).

The effect of the temperature increase is limited to warming at the ice surface, which can propagate into the interior of the ice sheet through diffusion and advection. Warming makes the ice softer and thus accelerates the flow and ice discharge. Since temperature diffusion in an ice sheet is a very slow process, we apply the temperature anomaly for a total duration of 15,000 years. The total mass balance is evaluated and compared to the standard parameter simulation after 100, 1000 and 10,000 years of warming. A new equilibrium state is reached after 10,000 years for all parameter combinations (see longer time series in Supplemental Figure S3).

In the experiments, the ice sheet loses mass for all warming levels and all parameter combinations. However, the amount and rate of the ice loss depend on the flow parameters. Figure 6 shows the ice-sheet response to a warming of 2 °C. For a fixed flow exponent of n = 3 the fastest ice loss is observed for the flow parameter combination of Q_c = 85 kJ/mol and Q_w = 200 kJ/mol, and the slowest ice loss for Q_c = 42 kJ/mol and Q_w = 120 kJ/mol. Simulations with Q_w = 200 kJ/mol reach a new, temperature-adapted equilibrium already after 2,000 yrs, while simulations with lower Q_w continue to lose mass.

The sensitivity to variations in flow parameters is measured via the relative difference in flow-driven ice loss, d_m = (Δm − Δm_0)/Δm_0, where the reference Δm_0 is always given by the simulation with standard parameters under the same temperature increase (Figure 7). While the long-term response to warming, after 10,000 years, is not very sensitive to the particular choice of flow parameters, the rate of flow-driven ice loss is. The largest relative differences in ice loss are found in the first century after the temperature increase (Figure 7, a), indicating that a high Q_w speeds up the flow-driven ice loss. Under 2 °C of warming, ice loss after 100 years is enhanced more than two-fold (i.e. increased by up to 118%) in simulations with Q_w = 200 kJ/mol, while a low Q_w reduces the relative ice loss by up to 37%.

The effect of the flow parameters on flow-driven ice loss upon warming is robust for different temperature increases. Ice losses as well as the spread in flow-driven ice loss both increase for higher warming levels (see Figure 8). For a warming of ΔT = 1 °C the idealized ice sheet loses 0.09% of its ice after 100 yr and 0.35% after 1000 yr for standard parameters. For a warming of ΔT = 6 °C the ice sheet loses 0.46% after 100 yr and 1.89% after 1000 yr for standard parameters (solid red line). For comparison, the Greenland Ice Sheet has lost approximately 0.18% of its mass in the period between 1972 and 2018, which includes all processes: increase in flow, melting, and sliding.
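For reference, the sensitivity metric d_m defined above is computed as follows; the mass-loss numbers in the example are hypothetical and merely reproduce the quoted 118% case.

```python
def relative_difference(dm, dm0):
    """d_m = (dm - dm0) / dm0, with dm0 the flow-driven loss under standard parameters."""
    return (dm - dm0) / dm0

# Hypothetical example: an ensemble member losing 2.18 mass units where the
# standard-parameter run loses 1.0 gives d_m = 1.18, i.e. a 118% enhancement.
print(relative_difference(2.18, 1.0))  # -> 1.18
```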
The effect of flow parameter changes on the purely flow-driven ice loss after 100 years is of the same order of magnitude as the effect of surface warming by several degrees. In particular, the uncertainty ranges of ice loss for warming of 2 °C and warming of 6 °C overlap (Figure 8 b) when solely considering the ice loss driven by changes in flow and excluding surface mass balance changes.

After 100 years, for a temperature anomaly of ΔT = 2 °C, a higher n seems to mitigate the effect of the activation energy on differences in ice loss, while a lower n seems to enhance this effect (Figure 9). However, the effect of variations in activation energy on the average surface velocity is almost independent of the choice of the flow exponent n (Figure 10). The influence of the activation energies Q_c and Q_w on ice flow is similar even with different flow exponents n. This is robust for different warming scenarios from +1 to +6 °C. A higher flow exponent n, which leads to a more pronounced nonlinearity in ice flow, does not enhance but reduces variations in dynamic ice loss. Compared to the nonlinear stress dependency τ^n in the flow law, the temperature-dependent softness A(T) = A_0 · exp(−Q/(RT)) becomes less important with increasing flow exponent n.

Robustness of results to changes in accumulation and sliding

The overall effect of uncertainties in the activation energies Q remains robust even if an additional driver of ice loss is taken into account. In a simulation where, in addition to warming of 2 °C, we also reduce the accumulation rate by 50%, the ice losses increase: the relative increase of mass loss mounts from 118% to 190% (Figure 11, which compares the relative differences for 2 °C warming with an additional 50% reduction of the accumulation rate to the results without changes in the accumulation rate; see also Figure 7; the ice sheet has reached a new equilibrium after 10,000 years). On longer time scales, the spread in ice loss is reduced (after 10,000 years of forcing, when the ice sheet has reached a new equilibrium, the relative spread is below ±10%).

When sliding is taken into account via the shallow-shelf approximation for sliding ice (see the PISM authors (2018)), the uncertainty in flow parameters leads to relative changes in ice loss from −30% to +470% after 100 years, which is a considerably larger spread than without sliding. The relative differences decrease with time, but remain larger than without sliding. After 1000 years the ensemble member with low activation energies has lost 40% less ice than the standard parametrization, and high activation energies almost double the ice loss (+90%). After 10,000 years, when the ice sheets have reached a new equilibrium, the relative differences still range from −16% to +40% (see Figure 12).

Discussion and Conclusion

In this study we present a first attempt to disentangle and quantify the effect of uncertainties in the flow law parameters, in particular the activation energies Q and the flow exponent n, on ice dynamics. The effect of ice rheology in ice-sheet models has been addressed in several studies with different experimental setups and different time frames. In particular, the effect of the enhancement factors, which are often used to approximate the change in ice flow due to anisotropy, has been explored (Ritz et al., 1997; Ma et al., 2010; Humbert et al., 2005; Quiquet et al., 2018).
In addition, the effects of the initial conditions (Seroussi et al., 2013; Nias et al., 2016; Humbert et al., 2005) and of the mathematical form of the flow law itself (Quiquet et al., 2018; Peltier et al., 2000; Pettit and Waddington, 2003) have been studied. These studies have been crucial for the understanding of different enhancement factors in the shallow-ice and the shallow-shelf approximation (Ma et al., 2010), for the reconciliation of the aspect ratios of the Greenland Ice Sheet and the Laurentide Ice Sheet during the last glacial maximum (Peltier et al., 2000), and for the ice flow in Antarctica and the Greenland Ice Sheet (Ritz et al., 1997; Seroussi et al., 2013; Quiquet et al., 2018; Nias et al., 2016; Humbert et al., 2005).

However, the approach presented in this manuscript is different in two important aspects: Firstly, a systematic study of not only the flow exponent n but also the activation energies Q has not been performed so far. Secondly, the idealized experimental setup, as presented in this study, allows us to disentangle the effects of the flow itself from other drivers and other sources of uncertainty. Several conditions need to hold to this end: The ice sheet is sitting on a flat bed and its maximal extent is determined by a calving front at the borders of the bed, so that no ice-ocean interactions or impacts of the bed geography influence the ice flow. Sliding is generally inhibited (the ice dynamics is described by the shallow-ice approximation, with zero basal velocity), so that no changes in sliding velocity influence the ice flow. The accumulation rate is fixed and independent of the temperature change, so that the ice loss is only driven by changes in flow and not by melting. These idealizations allow a clear understanding of the impact of the flow exponent and the activation energies on ice flow. In addition, they allow us to compare the simulations of the polythermal ice sheet to the analytically solvable limit of an isothermal ice sheet by using the Vialov approximation.

In this setup the largest effect of the uncertainties in the flow parameters is observed in the first century after warming, while the effect of the uncertainties on ice loss becomes less important as the ice approaches a new equilibrium. Uncertainties in the flow parameters affect the flow-driven ice loss on the same order of magnitude as the effects of increased temperature forcing, under fixed surface mass balance. This effect remains robust even if changes in the surface mass balance are taken into account. Reducing the surface mass balance by 50%, which is comparable to the changes in total surface mass balance of the Greenland Ice Sheet from 1972 to 2012, increases the effect of the flow parameters on a timescale of 100 years, and the effect remains comparable on a timescale of 1000 years. Only as the ice sheet approaches its new equilibrium does the effect of the flow parameters become negligible. Allowing not only for flow but also for sliding, while keeping all other conditions equal, increases the effect of flow parameters substantially, leading to up to a five-fold increase in ice loss after 100 years compared with standard parameters.

Acknowledging the uncertainty in flow parameters might slightly shift the interpretation of previous studies. For instance, the effect of the initial thermal regime, as studied by Seroussi et al. (2013), could be enhanced if the activation energies were higher than assumed, by making the ice softness more sensitive to changes in temperature.
The crossover stress in the multi-term flow law presented by Pettit and Waddington (2003), at which the linear and the cubic term are of the same importance, is highly sensitive to the values of the activation energies. The positive feedback through shear heating, as studied for example by Minchew et al. (2018), could also be enhanced if activation energies were higher than usually assumed. The uncertainty in the flow-law parameters may further provoke a re-evaluation of other parameters, e.g. concerning melting and basal conditions. In particular, the thorough analysis by Bons et al. (2018) of observational data of the Greenland Ice Sheet supports a flow exponent of n = 4, not the standard value of n = 3, which is in line with recent laboratory experiments that also find n > 3 (Qi et al., 2017). Assuming a higher flow exponent n = 4 has been shown to significantly reduce the previously assumed area where sliding is possible (Bons et al., 2018; MacGregor et al., 2016). Moreover, both the flow exponent n and the activation energies Q feed into the grounding-line flux formula (Schoof, 2007). In several ice-sheet models, this formula is used to determine the position of and the flux over the grounding line in transient simulations (Reese et al., 2018). A change in the flow parameters n and Q thus has implications for the advance and retreat of grounding lines in simulations of the Antarctic Ice Sheet and possibly for the onset of the marine ice sheet instability, a particularly relevant process for the long-term stability of the Antarctic Ice Sheet. On the Greenland Ice Sheet, increased ice flow might drive ice masses into ablation regions, where the ice melts. A possible effect of uncertainty in flow parameters on this particular feedback remains to be explored. Aschwanden et al. (2019) have found that uncertainty in ice dynamics plays a major role for mass-loss uncertainty during the first 100 years of warming. While their study attributes the uncertainty mostly to large uncertainties in basal motion and only to a lesser extent to the flow via the enhancement factor, the uncertainties of the flow law and of the basal motion are not independent, as suggested by e.g. Bons et al. (2018).

While the conclusions from the idealized experiments presented here cannot be transferred directly to assessing uncertainty in sea-level rise projections, they are an important first step which helps to inform choices about parameter variations in more realistic simulations of continental-scale ice sheets.

Code and data availability. Data and code are available from the authors upon request.
Application of Daubechies Wavelet Transformation for Rain Noise Reduction on Video

Currently, the use of digital video in the field of computer science is increasingly widespread, for example in object tracking, counting the number of vehicles, classifying vehicle types, and estimating vehicle speed. The process of capturing digital video is often influenced by bad weather, such as rain. Rain in digital video is considered noise because it can block the objects being observed. Therefore, a rain noise reduction process is required for the video. In this study, the reduction of rain noise in digital video uses the Daubechies wavelet transformation through several processes, namely wavelet decomposition, a fusion process, a thresholding process and a reconstruction process. The threshold values used in the thresholding process are VisuShrink, BayesShrink, and NormalShrink. The results of the implementation and noise reduction tests show that the Daubechies db2 level 3 filter gives the result with the biggest PSNR value. The type of threshold that provides optimal results is VisuShrink.

I. INTRODUCTION

Currently, the use of digital cameras is not limited to taking photographs but extends to digital video recording. In addition to indoor use, digital video is also often used outdoors, for example for traffic control, vehicle speed estimation, vehicle type classification, and counting moving vehicles. However, bad weather often affects the resulting digital video; one example is digital video capture in the rain, where the observed object becomes obscure because it is blocked by the rain. Rain in digital video is considered noise because it can block the objects being observed. Therefore, a digital image processing application is required to reduce or even eliminate the presence of rain noise in the video.

In previous research, many researchers have conducted rain noise reduction research on video, as has been done by Garg and Nayar [1] using a robust method. Zhang [2] used a K-means cluster method and frame differencing. Nikhil Gupta [3] also conducted a study to reduce Gaussian noise by using the wavelet transform, with significant results. Chen Zhen [4] used a multilevel wavelet decomposition method and stated that the noise reduction results are better than those of the K-means cluster method and the frame difference method. In this study, the reduction of rain noise on video is conducted by applying wavelet transformation and wavelet fusion. The wavelets used in the wavelet fusion are Daubechies wavelets, coupled with a thresholding process to reduce the rain noise. The wavelet transform is used to obtain the rain detail in the video image, while wavelet fusion is used to clarify the rain noise that will be reduced through the thresholding process.

II. PROCEDURE OF RAIN REDUCTION ON VIDEO BASED ON WAVELET TRANSFORMATION AND WAVELET FUSION

A. Wavelet Decomposition

Based on [5], the wavelet transformation can decompose an image into four sub-bands with different resolution, frequency characteristics and directional characteristics. With the inspiration of image decomposition and image reconstruction, we obtain a division of the image into one low-frequency and three high-frequency components, as shown in Fig. 1, where CA_{j+1} is the low-frequency (approximation) component of the image and CD^h_{j+1}, CD^v_{j+1} and CD^d_{j+1} are the high-frequency (detail) components, CD^d_{j+1} being the one in the diagonal direction.
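As a concrete illustration of the decomposition, the following sketch uses the PyWavelets package (an assumed tooling choice; the paper does not name an implementation) to split a frame into the four sub-bands.

```python
# Single-level and multi-level 2-D Daubechies decomposition of a video frame,
# sketched with PyWavelets (an assumed library choice).
import numpy as np
import pywt

frame = np.random.rand(576, 768)  # placeholder for a 768x576 grayscale frame

# dwt2 returns the approximation CA and the three detail sub-bands
# (horizontal, vertical, diagonal) of the next decomposition level.
cA, (cH, cV, cD) = pywt.dwt2(frame, 'db2')

# Repeating the transform on CA -- or calling wavedec2 directly -- gives the
# level-3 decomposition used in the experiments.
coeffs = pywt.wavedec2(frame, 'db2', level=3)
print(cA.shape, len(coeffs))  # sub-band size and number of coefficient groups
```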
Figure 1 shows that CA_{j+1} is the part of the coefficients obtained by a low-pass filter on the rows followed by a low-pass filter on the columns; this sub-image is similar to, and smoother than, the original. CD^h_{j+1} is the part of the coefficients obtained by a low-pass filter on the rows followed by a high-pass filter on the columns, CD^v_{j+1} is the part obtained by a high-pass filter on the rows followed by a low-pass filter on the columns, and CD^d_{j+1} is the part obtained by a high-pass filter on the rows followed by a high-pass filter on the columns.

B. Wavelet Fusion

Wavelet fusion is a combination of two images obtained from the wavelet transformation. Based on [6], the wavelet transform method is very well suited for fusion because wavelet transforms can divide images into high and low frequencies at the same resolution, as shown in Fig. 2. In this research, wavelet fusion is conducted to combine two frames with the aim of making the rain noise clearer, as shown in Fig. 4. Based on [7], the wavelet fusion rule is made by measuring the activity level using a window-based method in the form of a spatial-frequency method. The spatial-frequency method is calculated based on the local gradient value and the local energy of each subband [6]. The fusion rules are based on the parameters resulting from the multiplication of the local gradient and the local energy of each component of the decomposition result. The local gradient and local energy values are calculated using (1) and (2):

G = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} sqrt( (Δ_x f(i,j))² + (Δ_y f(i,j))² ), (1)

E = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} f(i,j)², (2)

where G is the local gradient, E is the local energy, Δ_x f(i,j) is the gradient of point (i,j) in the horizontal direction, Δ_y f(i,j) is the gradient of point (i,j) in the vertical direction, and M, N are the dimensions of the image.

C. Thresholding

In the thresholding process, the detail coefficients of the decomposition at each decomposition level that have passed the fusion rule are compared with a threshold value t using the soft-thresholding and hard-thresholding functions shown in (3) and (4):

d̂ = sign(d) · max(|d| − t, 0), (3)

d̂ = d if |d| > t, and 0 otherwise. (4)

The threshold values t are obtained based on [8], [9], [10], i.e. BayesShrink, VisuShrink and NormalShrink.

D. Wavelet Reconstruction

The reconstruction of the wavelet is the inverse of the wavelet decomposition, combining the approximation coefficients and the three detail coefficients that have passed the fusion rule and the thresholding process. The process of wavelet reconstruction is shown in Fig. 3.

E. Peak Signal to Noise Ratio (PSNR)

PSNR in this study was used to compare each frame of the video after reduction with the corresponding frame of the video before it was exposed to the rain noise. Based on [11], it is calculated using (5):

PSNR = 10 · log10( (2^M − 1)² / MSE ), (5)

where (2^M − 1)² represents the squared maximum pixel value for an M-bit video frame. MSE is the mean square error, calculated using (6):

MSE = (1/(M·N)) Σ_x Σ_y ( I(x,y) − I′(x,y) )², (6)

where M, N are the image dimensions, I(x,y) is the pixel value of the video frame before it rains, and I′(x,y) is the video frame pixel value after the noise reduction process.

A. Experimental Data

The experiments are conducted on rain video with extension .avi at a speed of 25 fps with a resolution of 768 × 576; the rain has the characteristics 10,000/sec, rain size 0.00004 mm, depth 5000, speed 3000. The experiment is conducted by applying the Daubechies wavelet transformations db2, db4, db6 and db8 with decomposition level 3. The threshold types used are BayesShrink, VisuShrink and NormalShrink.
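Putting the pieces together, a per-frame sketch of the pipeline (decompose, soft-threshold the detail sub-bands, reconstruct, score with PSNR) might look as follows. The fusion step is omitted for brevity, and the median-based noise estimate inside the VisuShrink threshold t = σ√(2 ln N) is a common formulation assumed here, since the paper does not spell out its estimator.

```python
import numpy as np
import pywt

def visu_shrink_threshold(detail):
    """VisuShrink: t = sigma * sqrt(2 ln N), sigma estimated from the median
    absolute deviation of the detail coefficients (a common choice)."""
    sigma = np.median(np.abs(detail)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(detail.size))

def denoise_frame(frame, wavelet='db2', level=3):
    """Decompose, soft-threshold the detail sub-bands, and reconstruct."""
    coeffs = pywt.wavedec2(frame, wavelet, level=level)
    out = [coeffs[0]]  # keep the approximation untouched
    for details in coeffs[1:]:
        t = max(visu_shrink_threshold(d) for d in details)
        out.append(tuple(pywt.threshold(d, t, mode='soft') for d in details))
    return pywt.waverec2(out, wavelet)

def psnr(clean, restored, bits=8):
    """Equations (5)/(6): PSNR = 10 log10((2^M - 1)^2 / MSE)."""
    mse = np.mean((clean.astype(float) - restored.astype(float)) ** 2)
    return 10.0 * np.log10((2 ** bits - 1) ** 2 / mse)

# Usage with 8-bit grayscale frames:
#   denoised = denoise_frame(rainy_frame)
#   score = psnr(clean_frame, denoised)
```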
B. Results

As mentioned before, the first step taken to conduct a rain noise reduction test on a video is to select a video to be tested. Next, we select the Daubechies wavelet type to decompose the frames, aiming to separate the rain detail from the other component details according to the decomposition level. The next step uses wavelet fusion to make the rain clearer. In this research, wavelet fusion uses two frames close together, i.e. frame 1 and frame 2, frame 2 and frame 3, and so on up to the n-th frame. The results of the wavelet fusion process are shown in Fig. 5. The final step is to select the type of threshold to reduce the rain noise. The result is reconstructed and rendered back into a video in which the rain has been reduced, as shown in Fig. 6 (the left part is the frame before reduction and the right part is the frame after reduction). The results of the rain noise reduction test based on decomposition level, threshold type and computation time are shown in Table I, Table II and Table III.

The analysis of the wavelet transform performance in wavelet fusion with different threshold types, based on the test results in Table I, Table II and Table III, is as follows:
• The Daubechies wavelet transformation db2 performs well at level 3, with a PSNR of 32.0888 dB and a computation time of 32.0390 seconds.
• The threshold type that performs well is VisuShrink.

IV. CONCLUSIONS

Based on the results of the experiments, it can be concluded that the Daubechies wavelet transformation in wavelet fusion with the VisuShrink threshold type is effective for reducing rain noise on .avi video at 25 fps with a resolution of 768 × 576 and rain with the characteristics 10,000/sec, rain size 0.00004 mm, depth 5000, speed 3000.
Preparation and Pore Structure of Energy-Storage Phosphorus Building Gypsum

In this study, the pore structure of a hardened phosphorous building gypsum body was optimised by blending an air-entraining agent with the appropriate water-paste ratio. The response surface test was designed according to the test results of the hardened phosphorous building gypsum body treated with an air-entraining agent and an appropriate water-paste ratio. Moreover, the optimal process parameters were selected to prepare a porous phosphorous building gypsum skeleton, which was used as a paraffin carrier to prepare energy-storage phosphorous building gypsum. The results indicate that if the ratio of the air-entraining agent to the water-paste ratio is reasonable, the hardened body of phosphorous building gypsum can form a better pore structure. With the influx of paraffin, its accumulated pore volume and specific surface area decrease, and the pore size distribution is uniform. The paraffin completely occupies the pores, giving the energy-storage phosphorous building gypsum a compressive strength better than that of similar gypsum energy-storage materials. The heat energy captured by the energy-storage phosphorous building gypsum in the endothermic and exothermic stages is 28.19 J/g and 28.64 J/g, respectively, so it can be used to prepare energy-saving building materials.

Introduction

Energy consumption has increased with the rapid economic growth, and its main form is building energy consumption [1,2]. At present, heat- and energy-storage materials are widely used in energy-saving building materials to alleviate the problem of building energy consumption [3]. Phase-change materials can store and release a large amount of heat energy in the phase-change process [4,5], which can improve thermal comfort. With high heat-storage capacity and good thermal stability, they are considered to be among the most promising heat-storage and energy-storage materials [6-8]. Among them, paraffin phase-change materials have been widely used because of the high latent heat of the phase change, the absence of undercooling and chromatography, and being non-toxic, non-corrosive, and inexpensive, among other advantages. A new type of energy-storage building material that not only retains the advantages of the original material, but also inherits the properties of the phase-change material, can be obtained by combining the phase-change material with the traditional building material in a certain process. Phosphorus building gypsum has broad prospects for use as an energy-storage building material. Domestic and foreign scholars have studied many new energy-storage gypsum materials. The current growth rate of phosphorus gypsum is estimated to be 200 million tonnes per year, whereas the effective utilisation rate is only 10-15%, according to the relevant statistical prediction [9]. The storage capacity is still increasing, causing considerable environmental pollution [10,11]. Currently, one of the most effective methods is to prepare energy-storage building materials from it. However, because of the influence of the environment and parameters, finding the appropriate ratio is difficult and the experimental process is long, which is not conducive to the experimental design. Therefore, in this study, after determining the influence of a single factor on the performance of the hardened phosphorous building gypsum body, the response surface method was used to design the experiment, and the optimal ratio of the gypsum skeleton was studied.
In summary, this study puts forward a new idea of using the water-paste ratio and an air-entraining agent to optimise the pore structure of the hardened phosphorus building gypsum (PBG) body and of using it to store phase-change materials. Through the central composite response surface model design and significance evaluation, the optimal process conditions are selected to prepare the porous phosphorus building gypsum (PPBG), which is used as the paraffin carrier to prepare the energy-storage phosphorus building gypsum (ESPBG), and the pore structure of the porous phosphorus building gypsum is further explored to obtain a new energy-storage gypsum meeting the basic strength requirements. The microstructure, pore size distribution, cumulative pore volume, pore type, and pore structure of the porous phosphorous building gypsum and energy-storage phosphorous building gypsum are studied through scanning electron microscopy (SEM) and the Brunauer-Emmett-Teller (BET) method. The compatibility, chemical characterisation, and thermal properties of the energy-storage phosphorous building gypsum are investigated through Fourier-transform infrared (FTIR) spectroscopy, X-ray diffraction (XRD), and differential scanning calorimetry (DSC) to ensure the feasibility of the optimised pore structure's application in the PBG.

Raw Materials

The phosphorous building gypsum produced by Yunnan Yantianhua Co., Ltd. (Kunming, China) was used in this study. The phosphorous building gypsum was made from phosphorus gypsum by water washing, citric acid treatment, lime neutralisation, and dehydration at 145 °C for 6 h. The chemical composition of the phosphorous building gypsum was characterised by X-ray fluorescence spectrometry. Table 1 presents the test results. The cellulose ether (foam stabiliser) was a 200,000-viscosity cellulose ether produced by Zhongshan Huizhong Chemical Technology Co., Ltd. (Zhongshan, China). The paraffin used was liquid paraffin manufactured by Dongguan Shengbang Plastic Co., Ltd. (Dongguan, China); the main elements in the paraffin were C, H, and O, and the main minerals were (CH2)x, C46H94, and C16H34O [16]. The air-entraining agent was a rosin concrete air-entraining agent (a pore-forming agent) produced by Shenyang Shengxinyuan Building Materials Co., Ltd., and the main elements in it were C, H, and O. Ordinary tap water was used as the mixing water.

Preparation and Characterisation of Energy-Storage Phosphorous Building Gypsum

2.2.1. Single-Factor Experiment

First, phosphorous building gypsum slurry was prepared according to the experimental mix proportions presented in Table 2. Following the specifications stated in the 'Determination of Mechanical Properties of Building Gypsum' (GB/T17669), part of the phosphorous building gypsum slurry was poured into a 40 × 40 × 160 mm triple die to test its compressive strength. The other part was poured into a 20 × 20 × 20 mm six-joint cement paste die to test the paraffin absorption rate. The experimental samples were removed after 24 h and cured to a constant weight at a constant temperature of 50 °C. Following the same specifications, 40 × 40 × 160 mm test blocks with the optimal ratio in the single-factor experiment and the central composite RSM were tested using a cement compression machine. The loading rate was 0.8 kN/s.

Adsorption Rate Test

The adsorption process was adopted to determine the paraffin adsorption rate, as shown in Figure 1.
The prepared 20 mm × 20 mm × 20 mm phosphorous building gypsum test blocks were introduced into the dryer and then vacuum-dried for 20 min with the phase-change paraffin valve closed. The paraffin valve was opened during the process of dipping the phase-change paraffin. The paraffin was then finally absorbed into the dryer with the test block under a negative pressure (−0.06 MPa) in a water bath at a constant temperature (50 °C), with the previous valve kept closed. When the dryer had been dipped for 30 min without obvious bubbles, the specimens were taken out and cooled to normal temperature for moulding, consequently obtaining the ESPBG. The specimens were then baked in an oven at 50 °C for 4 h to release the excess paraffin from the test block pores and prevent leakage and damage to strength during use [22]. Equation (1) was used to calculate the paraffin absorption rate of the phosphorous building gypsum test block [34]:

ω = (W1 − W2)/W2 × 100%, (1)

where W1 is the mass (g) of the material in the saturated absorption state and W2 is the mass of the material in the dry state (g).

Microstructure Test

The microstructure of the test blocks containing PBG, PPBG, and ESPBG was analysed using a ZEISS Gemini 300 SEM. Prior to observation, the samples were dried and gold-plated in an ion sputter coater for 2-5 min, with an accelerating voltage of 3-20 kV.

Thermal Performance Test

The phase transition temperature and thermal energy of the paraffin and ESPBG blocks were measured via differential scanning calorimetry (DSC), where the block size should not exceed 3 mm in diameter and 2 mm in height. The paraffin wax needed to be frozen into solid paraffin wax before testing. In the test process, a standard crucible (pure aluminium) and a nitrogen atmosphere were used. The isothermal curves of the endothermic stage at 0-55 °C and the exothermic stage at 55-0 °C were recorded, with a constant heating and cooling rate of 2 °C/min. The test time was 2 h.

Pore Structure Test

The pore structure data tests were conducted with a fully automatic BET specific surface and porosity analyser (model: MAC ASAP2460). The PBG, PPBG, and ESPBG test blocks were analysed. The test conditions were the data of all pores (i.e., specific surface area plus pore size distribution, including mesopores and micropores) and the adsorption-desorption curves of N2, with degassing at a temperature of 120 °C for 4 h.

Composition Test

The phase composition of the PBG, PPBG, and ESPBG was analysed using an X-ray diffractometer.
The samples were all in powder form, and the test was in the range of 10-100° with a scanning speed of 10°/min.

Compatibility Test

The absorbance of the paraffin and of the PBG and ESPBG powders after 100 hot and cold cycles was measured in the wavenumber range of 400-4000 cm⁻¹ using Bruker's MPA and Tensor 27.

Thermal Cycling Test

The high-low-temperature alternating box (HD-E702-100) of Dongguan Haida Instrument Co., Ltd. (Dongguan, China), was used to test the stability of the ESPBG under thermal cycling. The prepared high-low cycle samples were placed on a plate and then moved to the low-temperature alternating box, at temperatures of 20-65 °C. The heating temperature was stabilised and maintained for 20 min, followed by 100 cycles, after which the samples were removed. The functional groups were analysed via Fourier-transform infrared spectroscopy.

In the 0.5-0.60 range, the water-paste ratio was inversely proportional to the compressive strength and directly proportional to the absorption rate. The corresponding compressive strength was 14.34-10.61 MPa, and the corresponding absorption rate was 10.29-13.22%. Figure 2b depicts the relationship of the air-entraining agent, compressive strength, and adsorption rate of the phosphorous gypsum. In the 0-1% range, the content of the air-entraining agent was negatively correlated with the absolute dry compressive strength and positively correlated with the adsorption rate. The corresponding compressive strength was 10.61-4.37 MPa, and the corresponding adsorption rate was 13.22-27.62%. At the same time, within the range satisfying the functional relationship, the 2 h strength for the two single factors was within the requirements of the building gypsum standard (GB/T 9776-2008); hence, these factors can be used to prepare gypsum products.

Central Composite Response Surface Model Design and Significance Evaluation

The experimental factors of the central composite response surface were designed according to the single-factor experiment results and Equation (2). The horizontal level was set as 0, and the two sides of the horizontal level were set as −1 and 1, with the compressive strength and the adsorption rate as the response values. Table 3 depicts the level table. Table 4 shows the experimental results of the central composite response surface. This experiment had 13 experimental points, 4 of which were zero points, while the other 9 were analysis factors. Response value Y1 (Equation (3)) denotes the absolute dry compressive strength, while response value Y2 (Equation (4)) represents the adsorption rate. The compressive strength and the adsorption rate in Table 3 were fitted using Design-Expert V8.0.5 software. The following form of regression equation was obtained:

Y = β0 + Σ_{i=1}^{k} βi Xi + Σ_{i=1}^{k} βii Xi² + Σ_{i<j} βij Xi Xj,

where β is the unknown coefficient; k is the number of design variables; Y is the predicted response value; β0, βi, and βii are the offset term, linear, and second-order coefficients, respectively; and βij is the interaction coefficient. Tables 5 and 6 present the results of the variance analysis of the quadratic regression model obtained using Design-Expert V8.0.5.
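The quadratic model of Equations (3) and (4) can also be fitted by ordinary least squares outside of Design-Expert; the sketch below uses numpy with hypothetical coded design points and response values (the actual Table 4 data are not reproduced here).

```python
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    """Least-squares fit of Y = b0 + b1*X1 + b2*X2 + b11*X1^2 + b22*X2^2 + b12*X1*X2,
    the two-factor form of the central-composite regression model."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Hypothetical coded design points (X1: water-paste ratio, X2: air-entraining agent)
# and hypothetical compressive-strength responses, for illustration only.
x1 = np.array([-1, -1, 1, 1, -1, 1, 0, 0, 0, 0, 0, 0, 0], dtype=float)
x2 = np.array([-1, 1, -1, 1, 0, 0, -1, 1, 0, 0, 0, 0, 0], dtype=float)
y = np.array([14.1, 5.2, 12.8, 4.9, 11.5, 10.4, 13.0, 4.6, 8.3, 8.4, 8.2, 8.3, 8.3])

beta = fit_quadratic_surface(x1, x2, y)
print(beta)  # -> b0, b1, b2, b11, b22, b12
```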
According to the analysis of variance in Tables 5 and 6, F1 = 17.70 and P1 = 0.0008 < 0.01 in the quadratic regression model of the compressive strength, while F2 = 17.20 and P2 = 0.0008 < 0.01 in the quadratic regression model of the adsorption rate. These results show that the relationships of Y1 and Y2 with the equations in the quadratic regression model are extremely significant in accordance with the statistical law. P1 and P2 for the lack-of-fit terms were >0.05; hence, the model had a certain reliability. In the quadratic regression model of Y1, the order of the absolute values of the first-order coefficients was X2 > X1, showing the degree of influence that the two factors have on the absolute dry compressive strength, i.e., air-entraining agent > water-paste ratio. The order of the absolute values of the first-order coefficients in the quadratic regression model of Y2 was likewise X2 > X1, showing the degree of influence that the two factors have on the adsorption rate, i.e., air-entraining agent > water-paste ratio.

Analysis of the Three-Dimensional Surface and Contour in the Response Surface

Figures 3 and 4 depict the response surface curves and contours, respectively, of the influence of the water-gypsum ratio and air-entraining agent interaction on the compressive strength and absorption rate of the phosphorous gypsum according to the response surface analysis method [35]. Figures 3a and 4a illustrate that, under the condition of an unchanged water-paste ratio, the compressive strength gradually decreased with the increase in the air-entraining agent content and then tended to stabilise. With the air-entraining agent content unchanged, the compressive strength gradually decreased and then stabilised with the increase in the water-paste ratio. The figures also show the trend of the two influences on the compressive strength, i.e., air-entraining agent > water-paste ratio, in line with the law of regression equation Y1. Figures 3b and 4b show that, under the interaction, the influence of the two factors on the adsorption rate presents a convex curved surface. Under the condition of a constant water-paste ratio, the adsorption rate presented a trend of first rising and then falling with the increase in the air-entraining agent content. With the increase in the water-paste ratio, the adsorption rate presented a gradual upward trend under the condition of a constant air-entraining agent dosage. The results showed that the trend of influence on the adsorption rate, air-entraining agent > water-paste ratio, was in accordance with the law of regression equation Y2. The closer a point was to the diagonal line, the closer the predicted value was to the experimental value. The two groups of data were close to one another, verifying the reliability of the model.
In the design of the central composite response surface model using Design-Expert V8.0.5 software, the importance degree of the adsorption rate was set as +++, whilst that of the absolute dry compressive strength was set as ++. Figures 3 and 4 show the following optimal process conditions: water-paste ratio = 0.6; air-entraining agent content = 0.66%; predicted adsorption rate = 26.03%; and predicted compressive strength = 4.38 MPa. The experimental value of the adsorption rate was 26.06%, while that of the compressive strength was 4.40 MPa. The experimental values were in good agreement with the predicted values, proving that the model has a certain reliability and practicability.

Compressive Strength of the PBG, PPBG, and ESPBG

The hardened phosphorous building gypsum body is denoted as the PBG. The porous phosphorus building gypsum prepared using the optimal parameters determined via the central composite response surface model design method is denoted as the PPBG. The energy-storage phosphorus building gypsum prepared by letting the PPBG absorb paraffin is denoted as the ESPBG. Figure 6 depicts the compressive strength comparison of the PBG, PPBG, and ESPBG.
Under the same mixture ratio, the compressive strength of the PPBG decreased by 59.50% compared with that of the PBG, due to the pore structure optimisation of the PBG by the water-paste ratio and the air-entraining agent, which resulted in an increase in porosity and a decrease in strength. Compared with those of the PPBG and the PBG, the compressive strength of the ESPBG increased by 75.65% and decreased by 28.85%, respectively, because the phase transition temperature of the paraffin wax has a small change range: the PPBG's porosity was reduced after the paraffin filled the pore structure, and at normal temperature the paraffin exists in solid form, as can also be seen from the microstructure in Figure 6. Compared with the PBG, part of the strength was lost after the paraffin filling. This was caused by some of the closed pores in the PBG, which could not be filled. However, compared with the existing energy-storage gypsum materials, the ESPBG has a better compressive strength and a better decline rate. Figure 7a,b present the comparison results, showing that the ESPBG has a certain applicability.

Figure 8 depicts the microscopic morphology of the PBG, PPBG, and ESPBG. Figure 8a presents an SEM image of the PBG, illustrating that the PBG has a flat outline without obvious pores.
When enlarged, the figure shows a small number of pores. The SEM images of the PPBG at different scales (Figure 8b,c) show that the PPBG has an obvious pore structure and some cylindrical pores, which can only be seen when the figure is enlarged. Comparing Figure 8a with Figure 8b,c, the pores were found to increase. This indicates that the PBG hole pattern changed and the pore structure was good after the optimisation of the water-paste ratio and air-entraining agent. In other words, the PPBG is an excellent porous skeleton, which is greatly significant for phase-change material storage. Figure 8d displays an SEM image of the ESPBG. Compared with Figure 8c, we found almost no visible pores after the paraffin impregnation. The swirling macropores were also filled with paraffin, indicating that paraffin was fully immersed in the pores of the PPBG. However, some pores were still insufficiently filled. One of the main reasons for this was that the PPBG contained some closed pores which the paraffin could not fill.

Figure 9a,b show the DSC curves of the paraffin wax and the ESPBG, respectively. The melting temperature of the paraffin wax was 15.6 °C, which increased to 18.50 °C in the ESPBG, an increase of 2.90 °C. For the freezing temperature, the paraffin crystallisation temperature was 29.264 °C, which decreased to 27.14 °C in the ESPBG, a reduction of 2.124 °C. Previously, some researchers have linked the change in the phase transition temperature (T_melt/T_freeze) of energy-storage materials to the combination of the carrier of the energy-storage material and the paraffin [36,37]. If the combination between the two is cohesive, the result will be an increase in the phase transition temperature, and vice versa. Compared with paraffin, the melting temperature in the ESPBG increased, indicating that the combination of the PPBG and paraffin has a certain mutual attraction. Figure 9a,b also show that the DSC curve trends of the paraffin and the ESPBG were similar, indicating that T_melt and T_freeze are inherent characteristics of paraffin phase-change materials. The heat energy of the paraffin was 108.17 J/g in the endothermic stage and 103.6 J/g in the exothermic stage. The heat energy of the ESPBG in the endothermic and exothermic stages was 28.19 J/g and 28.64 J/g, respectively, reflecting the paraffin heat energy retained in the ESPBG (26.06%). The energy-storage effect effectively reached the thermal performance of the energy-storage aggregate and the gypsum board in the literature [15,17,38], and the material can be used for building energy-saving materials, subsequent PPBG structures, energy-storage aggregates, and gypsum board.
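The quoted retention of 26.06% follows directly from the DSC enthalpies; the check below reproduces it from the values reported in the text.

```python
# Quick check of the reported paraffin retention: ESPBG enthalpy over paraffin
# enthalpy in the endothermic stage, using only values quoted in the text.
h_paraffin = 108.17  # J/g, paraffin, endothermic stage
h_espbg = 28.19      # J/g, ESPBG, endothermic stage
print(round(100 * h_espbg / h_paraffin, 2))  # -> 26.06 (%)
```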
Figure 10a shows the N2 adsorption-desorption isotherms of the PBG, PPBG, and ESPBG, reflecting the specific surface area and the pore structure of the materials [39]. At a relative pressure P/P0 of 0-0.2, the adsorption capacities of the PBG, PPBG, and ESPBG were stable with increasing pressure, indicating that microporous structure was rare or absent. For 0.2 < P/P0 < 0.9, the adsorption isotherm curve of the PBG changed slowly and did not coincide with the desorption curve. The main reason for this result may be the condensation of some capillary pores. Figure 10b depicts the pore structure, mainly comprising slit pores (i.e., open on both ends). The slit pores were caused by the irregular overlap between the flake and the rod crystals in the PBG. Two types of slit pores exist: open and closed. The adsorption isotherm curve of the PPBG changed quickly and did not coincide with the desorption curve. This type of curve reflects a typical cylindrical pore structure (i.e., open and closed), as shown in Figure 10c. The main reason for this result is that condensation and decondensation may need to be completed within the effective hole radius; hence, the trends of the adsorption and desorption curves are inconsistent. The adsorption isotherm curve of the ESPBG coincided with the desorption curve, indicating that the pores in the PPBG were filled with the absorbed paraffin. When P/P0 > 0.9, the adsorption curve of the phosphorous building gypsum increased over a small range with a smaller slope, showing that the phosphorous building gypsum contained mesopores. The adsorption curve of the PPBG increased over a small range with a large inclination, indicating that it contained mesoporous and macroporous structures. Meanwhile, the adsorption curve of the ESPBG was almost unchanged, showing that a pore structure was difficult to form.
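For readers who want to reproduce surface areas of the kind quoted below from such isotherms, here is a minimal sketch of the standard BET calculation; the isotherm points are made up for illustration and are not the paper's data:

```python
import numpy as np

# Minimal sketch (hypothetical isotherm points): estimating the BET specific
# surface area from N2 adsorption data in the usual 0.05-0.30 P/P0 range.
p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])   # P/P0 (hypothetical)
v_ads = np.array([0.9, 1.1, 1.25, 1.4, 1.55, 1.7])       # cm^3(STP)/g (hypothetical)

# Linearized BET: 1/[v((P0/P)-1)] = (c-1)/(v_m c) * (P/P0) + 1/(v_m c)
y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))
slope, intercept = np.polyfit(p_rel, y, 1)
v_m = 1.0 / (slope + intercept)                           # monolayer capacity, cm^3(STP)/g

N_A, sigma_N2, V_molar = 6.022e23, 0.162e-18, 22414.0    # /mol, m^2 per N2 molecule, cm^3(STP)/mol
s_bet = v_m * N_A * sigma_N2 / V_molar                   # m^2/g
print(f"BET surface area: {s_bet:.2f} m^2/g")
```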
Figure 10 also shows the specific surface areas of the PBG, PPBG, and ESPBG as 3.25 m²/g, 14.64 m²/g, and 0.22 m²/g, respectively; the total pore volumes were 0.031 cm³/g, 0.068 cm³/g, and 0.002 cm³/g, respectively. The micropore volume in the PBG was 0.000213 cm³/g, accounting for approximately 0.67% of the total volume. Figure 11a,b show that the pore size distributions of the PBG, PPBG, and ESPBG were 1.74-27.71 nm (1 Å = 0.1 nm), 2.19-100 nm, and 4.35-33.2 nm, respectively, and that the cumulative pore volumes in the pore diameter range of 0-1000 nm were between 0 and 0.022 cm³/g, 0 and 0.042 cm³/g, and 0 and 0.0019 cm³/g, respectively. The results in Figure 12a,b illustrate that the PBG was mainly mesoporous, with fewer micropores and macropores; its cumulative pore volume, and hence the amount of paraffin that could be stored, was small. The PPBG results showed that it was mainly composed of macropores and contained fewer mesopores. In their research, Yin et al. [40] pointed out that the pore size corresponding to a peak in the pore size distribution curve is more likely to occur. Accordingly, the PPBG has more 2.5 nm mesopores and 53.81 nm and 90.45 nm macropores in the pore size range of 2.19-100 nm, together with a better cumulative porosity. The analysis data of the PBG, PPBG, and ESPBG illustrated that the PPBG has an excellent pore structure. With the addition of paraffin, the total pore volume of the ESPBG, as well as the pore size distribution and the specific surface area, gradually decreased, indicating that paraffin flowed into the PPBG pores. Additionally, the pore type, pore size distribution, and cumulative pore volume of the hardened gypsum body can affect the relationship between the microstructure and the mechanical properties. The slit pores in the PBG are dense, and the corresponding hole distribution interval and pore volume are small, according to Figures 8a, 11a, and 12a. These results sharply contrast with the optimal strength of the PBG given in Figure 6.
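The micro/meso/macropore statements follow from binning a pore-size distribution at the IUPAC thresholds. A minimal sketch with hypothetical distribution data:

```python
import numpy as np

# Minimal sketch (hypothetical data): classifying a pore-size distribution into
# micro-, meso-, and macropores using the IUPAC thresholds (<2 nm, 2-50 nm,
# >50 nm), which underlies statements like "the PBG was mainly mesoporous".
diameters = np.array([1.7, 2.5, 5.0, 10.0, 27.7, 53.8, 90.4])        # nm (hypothetical)
dV = np.array([0.0002, 0.004, 0.007, 0.006, 0.003, 0.001, 0.0005])   # cm^3/g per bin

micro = dV[diameters < 2].sum()
meso = dV[(diameters >= 2) & (diameters <= 50)].sum()
macro = dV[diameters > 50].sum()
total = dV.sum()
for name, v in [("micro", micro), ("meso", meso), ("macro", macro)]:
    print(f"{name}pores: {v:.4f} cm^3/g ({v / total:.1%})")
```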
An increase in cylindrical pores in the PPBG improved the pore size distribution and pore volume of the PBG, making it possible to store more paraffin wax, at the cost of reduced strength. However, the pore structure was filled by the influx of 26% paraffin into the open cylindrical pores of the PPBG, and almost no pore structure was observed (Figure 8d). The appropriate paraffin storage amount in the ESPBG overcame the poor mechanical properties of the PPBG and, eventually, endowed it with better mechanical properties.

Figure 13 illustrates the XRD patterns of the PBG, PPBG, and ESPBG. The main elements of the paraffin are C, H, and O, and its main minerals are (CH2)x, C46H94, and C16H34O [16]; the main elements of the rosin concrete air-entraining agent (i.e., the pore-forming agent) are C, H, and O. After the rosin-based concrete air-entraining agent was added to the PBG slurry, stable microbubbles were introduced into the slurry through physical action to form the PPBG pore structure, and the ESPBG was then prepared by adsorbing paraffin. The main components of the PPBG are CaSO4·2H2O and a few quartz impurities; the PPBG had strong diffraction peaks near 2θ = 8.55°, 19.6°, 23.55°, and 31.02°. The main components of the ESPBG are likewise CaSO4·2H2O and a few quartz impurities; the ESPBG had strong diffraction peaks near 2θ = 8.75°, 19.8°, 24.01°, and 31.61°. The ESPBG contained all of the peaks of the PBG and PPBG, but the peak intensities were relatively lower in comparison with those of the PBG and PPBG. The results showed that the addition of a small amount of air-entraining agent to the PPBG only formed bubbles, giving the PBG a good pore structure, and the peak values of the main components changed little. However, due to the adsorption of 26.06% paraffin in the ESPBG, the peak values of the material combined with paraffin were reduced [16]. Therefore, the peak intensities were relatively lower than those of the PBG and PPBG, but no new peaks were generated in the ESPBG, meeting the energy-storage material requirements.
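The reported 2θ positions can be converted to lattice d-spacings via Bragg's law. The sketch below assumes Cu Kα radiation (λ = 1.5406 Å), since the diffractometer source is not stated in the text:

```python
import numpy as np

# Minimal sketch: converting the reported 2-theta peak positions of the PPBG
# into lattice d-spacings via Bragg's law, n*lambda = 2*d*sin(theta).
# The Cu K-alpha wavelength is an assumption; the radiation is not stated.
lam = 1.5406  # angstrom, Cu K-alpha (assumed)
two_theta = np.array([8.55, 19.6, 23.55, 31.02])  # degrees, PPBG peaks from the text

d = lam / (2 * np.sin(np.radians(two_theta / 2)))
for tt, di in zip(two_theta, d):
    print(f"2theta = {tt:5.2f} deg  ->  d = {di:.3f} angstrom")
```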
FTIR Analysis of PBG, PPBG, and ESPBG

FTIR spectroscopy was used to analyse the PBG, paraffin, ESPBG, and ESPBG after thermal cycling. Figure 14 shows the results. The spectra clearly show that the infrared peaks of the ESPBG and the ESPBG after thermal cycling retain some characteristic absorption peaks of the PBG and paraffin, and the peak shapes are broadly similar. The characteristic absorption peaks and peak trends of the PBG, paraffin, ESPBG, and ESPBG after thermal cycling are listed in Table 7. The spectrum of the paraffin is smooth, and its characteristic peaks, shown in Table 7, represent C-H stretching (2953.55 cm⁻¹ and 2911.87 cm⁻¹) and C-H bending (1458.11 cm⁻¹ and 1326.50 cm⁻¹). As can be seen from Figure 14 and Table 7, the characteristic peak positions and shapes of the ESPBG and the ESPBG after thermal cycling are effectively the same, with common characteristic peaks at 3547.84 cm⁻¹, near 2920 cm⁻¹, and at 3408.20 cm⁻¹, 1459.22 cm⁻¹, 1328.62 cm⁻¹, and 1141.76 cm⁻¹; there are only slight differences at these positions, showing that the ESPBG has good thermal stability. The characteristic peaks of the raw materials (i.e., paraffin and PBG) were unchanged in the ESPBG. A comparison of the characteristic spectra of the raw materials and the ESPBG showed that there were no new peak shapes in the FTIR spectrum of the ESPBG; the peak locations were exactly superimposed on those of the PBG and paraffin [41]. Therefore, the phase-change energy-storage composite is stable.

Conclusions

In this study, the central composite response surface design was used to prepare the PPBG by selecting the optimal process parameters. The ESPBG was then prepared by using the PPBG as a paraffin carrier. The following conclusions were obtained from the experimental results and the corresponding analysis:

1. The application of the central composite response surface design improved the analysis rate of the experimental data. The optimal process conditions in this study were as follows: water-paste ratio = 0.6; air-entraining agent dosage = 0.66%; experimental value of the absorption rate = 26.06%; and experimental value of strength = 4.40 MPa. The material had a good pore structure (cylindrical pores), with many 2.5 nm mesopores and 53.81 nm and 90.45 nm macropores in the range of 2.19-100 nm. After absorbing paraffin, the pores were filled tightly, giving the energy-storage phosphorous building gypsum a good strength.

2. The compatibility between the ESPBG and paraffin was good, and the thermal energy in the endothermic and exothermic stages was 28.19 J/g and 28.64 J/g, respectively, meeting the requirements of energy-saving building materials.

Potential Applications and Prospects of PPBG and ESPBG

Overall, the pore structure of the PPBG prepared in this work is worth popularising. The structure can be developed into lightweight phosphorous building gypsum aggregate, lightweight phosphorous building gypsum board, energy-storage phosphorous building gypsum aggregate, or energy-storage phosphorous building gypsum board. The research and development of these products will promote the utilisation of phosphorous building gypsum and realise the resource utilisation of phosphorous gypsum. The cost of gypsum products is very low, and the developed ESPBG is entirely feasible among energy-saving materials. If utilised, it could alleviate the problem of the high energy consumption of building materials. The remaining problems of water resistance and of the life cycle after cold and heat cycles in energy-storage gypsum materials need to be solved in future research. In future studies of the pore structure of gypsum materials, mercury intrusion porosimetry (MIP) can be considered, which would enable the observation of more interesting pore structures.
9,144.8
2022-10-01T00:00:00.000
[ "Materials Science" ]
A new existence result for some nonlocal problems involving Orlicz spaces and its applications This paper studies some quasilinear elliptic nonlocal equations involving Orlicz–Sobolev spaces. On the one hand, a new sub-supersolution theorem is proved via the pseudomonotone operator theory; on the other hand, using the obtained theorem, we present an existence result on the positive solutions of a singular elliptic nonlocal equation. Our work improves the results of some previous studies. Problem (1.1) was proposed in [10] and generalizes some problems in [3,5,6,8,17-20]. As the authors of [10] pointed out, there are some difficulties in studying problem (1.1): (1) variational methods cannot be used directly because of the nonlocal terms; (2) the presence of the concave-convex nonlinearities leads to the invalidity of the Galerkin method; (3) there is no ready-made sub-supersolution method as in [2] and [7] because of the Φ-Laplacian operator. In [10], for the first time, using the monotone iterative technique, Figueiredo et al. obtained the sub-supersolution theorem for problem (1.1), in which they needed the important condition that h1, h2 : [0, +∞) → R are nondecreasing. As an application, the authors discussed problem (1.3) with the assumption that α, β ≥ 0 and 0 < α + β < κ − 1, and obtained the existence of a positive solution. Another interesting work appeared in [9], in which Dos Santos et al. studied a related problem; note that h1 and h2 are not assumed to be nondecreasing in that paper. Motivated by [10] and [9], we try to present the sub-supersolution approach for problem (1.1) without the assumptions that h1 and h2 are nondecreasing. Our paper is divided into four sections. In Sect. 2, some needed properties of Orlicz spaces and the main results are listed. In Sect. 3, we prove a new sub-supersolution theorem for problem (1.1) via the pseudomonotone operator theory and, using the obtained theorem, we present a new existence result on positive solutions of problem (1.3) when α ≥ 0 and −1 < β < 0, with 0 < α < κ − 1. Our work complements the conclusions in [10] and [9]: (1) we obtain the existence of a nontrivial solution of problem (1.1) when h1 and h2 have no monotonicity; (2) problem (1.3) is studied when β ∈ (−1, 0). Preliminaries and main results Now we shall list some main definitions, properties, and conclusions in the setting of Orlicz-Sobolev spaces. For more information, please refer to the literature [1,4,13,15,16,22]. In (1.1), because of assumption (ρ3), it is easy to see that the Δ2 condition holds for Φ(t) (see [10]). Then w_*(x) is called a subsolution of problem (1.1). For more information on L_Φ(Ω) and its norm, please refer to [10]. In addition, the N-functions involved satisfy the Δ2 condition and are nondecreasing on [0, +∞). For an N-function Φ, the corresponding Orlicz-Sobolev space W^{1,Φ}(Ω) is defined as a Banach space; for its properties, one can refer to [10]. Lemma 2.4 ([10]). Let λ > 0, let Φ be given by (1.2), and suppose Ω ⊂ R^N is an admissible domain. Consider the associated auxiliary problem, whose unique solution is z_λ. Here C_* > 0 and C^* > 0 depend on n, s, N, and Ω. For z_λ defined in Lemma 2.4, it follows that z_λ ∈ C¹(Ω) with z_λ > 0 in Ω. Proofs of the main results. Proof of Theorem 2.6. We consider the auxiliary problem (3.1) and make the following claims, where ρ satisfies (ρ1), (ρ2), and (ρ3). Claim 1. First, we want to show that B is continuous, bounded, and coercive.
It is easy to see that the conditions on ρ and the continuity of h1 and h2 guarantee that B is bounded and continuous. From Lemma 2.3 here and Lemma 2.1 in [12], we obtain the required estimates, and it follows that the operator B is coercive. According to Lemma 2.2.2 in [21], there is a u ∈ W_0^{1,Φ}(Ω) ∩ L^∞(Ω) such that the weak formulation holds for all w ∈ W_0^{1,Φ}(Ω); therefore, u is a (weak) solution of problem (3.1). Claim 2. We show that the solution u of problem (3.1) obtained above is a solution of (1.1). A similar argument shows that u ≥ w_*. Therefore, (3.8) is true and thus u is a solution of problem (1.1). The proof is completed. Proof of Theorem 2.7. In order to get positive solutions of problem (1.3), we study the approximating problem (3.15) for n ≥ 1 and apply Theorem 2.6. In view of the assumptions, choose ϑ0 > 0 large enough that the required inequality holds. Thus, the estimate holds in the case τ < d(x) < 2τ for ϑ > 0 large enough. (3) We consider the case d(x) > 2τ, where the estimate is obvious. It is then clear that w_* ≤ w^* if M is large enough and μ is small enough, and (w_*, w^*) is a sub-supersolution pair of problem (3.15). Theorem 2.6 now guarantees that problem (3.15) has a solution u_n which satisfies 0 < μη ≤ u_n ≤ z_λ + M. Now we consider the sequence {u_n}. From Lemma 2.2 in [12], the norms ‖u‖_{1,Φ} and |∇u|_{L_Φ} on W_0^{1,Φ}(Ω) are equivalent. From the proof of the coercivity of the operator B, we know that if |∇u|_{L_Φ} > 1, then ∫_Ω Φ(|∇u|) dx ≥ |∇u|_{L_Φ}; that is, ∫_Ω Φ(|∇u|) dx ≥ ‖u‖_{1,Φ} when ‖u‖_{1,Φ} > 1. Denoting w = u_n − u, we have ⟨−Δ_Φ u_n, u_n − u⟩ = ∫_Ω (u_n + 1/n)^β ‖u_n‖_{L_Φ}^α (u_n − u) dx. Therefore, taking the limit as n → ∞ in (3.15), we obtain −Δ_Φ u = u^β ‖u‖_{L_Φ}^α. The limit u is the solution we are looking for, and it clearly satisfies w_* ≤ u ≤ w^*. Therefore, the proof is finished.
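For orientation, the LaTeX block below restates the approximation scheme and its limit schematically; the notation (Δ_Φ, the precise form of the right-hand side) is assumed from the surrounding discussion and is not a verbatim reconstruction of the paper's displays:

```latex
% Schematic restatement (assumed notation, not verbatim from the paper):
% the approximating problems (3.15) and the limiting singular nonlocal equation.
\[
\begin{cases}
  -\Delta_\Phi u_n = \bigl(u_n + \tfrac{1}{n}\bigr)^{\beta}\,\|u_n\|_{L_\Phi}^{\alpha}
     & \text{in } \Omega,\\
  u_n = 0 & \text{on } \partial\Omega,
\end{cases}
\qquad\xrightarrow[\;n\to\infty\;]{}\qquad
  -\Delta_\Phi u = u^{\beta}\,\|u\|_{L_\Phi}^{\alpha},
  \quad w_* \le u \le w^* .
\]
```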
1,451.8
2022-09-03T00:00:00.000
[ "Mathematics" ]
CWH43 Is a Novel Tumor Suppressor Gene with Negative Regulation of TTK in Colorectal Cancer Colorectal cancer (CRC) ranks among the most prevalent forms of cancer globally, and its late-stage survival outcomes are less than optimal. A more nuanced understanding of the underlying mechanisms behind CRC's development is crucial for enhancing patient survival rates. Existing research suggests that the expression of Cell Wall Biogenesis 43 C-Terminal Homolog (CWH43) is reduced in CRC. However, the specific role that CWH43 plays in cancer progression remains ambiguous. Our research seeks to elucidate the influence of CWH43 on CRC's biological behavior and to shed light on its potential as a therapeutic target in CRC management. Utilizing publicly available databases, we examined the expression levels of CWH43 in CRC tissue samples and their adjacent non-cancerous tissues. Our findings indicated lower levels of both mRNA and protein expressions of CWH43 in cancerous tissues. Moreover, we found that a decrease in CWH43 expression correlates with poorer prognoses for CRC patients. In vitro experiments demonstrated that the suppression of CWH43 led to increased cell proliferation, migration, and invasiveness, while its overexpression had inhibitory effects. Further evidence from xenograft models showed enhanced tumor growth upon CWH43 silencing. Leveraging data from The Cancer Genome Atlas (TCGA), our Gene Set Enrichment Analysis (GSEA) indicated a positive relationship between low CWH43 expression and the activation of the epithelial–mesenchymal transition (EMT) pathway. We conducted RNA sequencing to analyze gene expression changes under both silenced and overexpressed CWH43 conditions. By identifying core genes and executing KEGG pathway analysis, we discovered that CWH43 appears to have regulatory influence over the TTK-mediated cell cycle. Importantly, inhibition of TTK counteracted the tumor-promoting effects caused by CWH43 downregulation. Our findings propose that the decreased expression of CWH43 amplifies TTK-mediated cell cycle activities, thus encouraging tumor growth. This newly identified mechanism offers promising avenues for targeted CRC treatment strategies.

Introduction

Colorectal cancer (CRC) is the third most diagnosed cancer globally and stands as the second primary cause of cancer-associated fatalities [1]. As of 2020, CRC was responsible for 10% of all new cancer cases and accounted for 9.4% of cancer-associated deaths [2]. The onset of this disease is influenced by a multitude of factors, spanning genetic predispositions, environmental triggers, and lifestyle choices [3]. Although strides have been made in multidisciplinary treatments, leading to enhanced patient survival rates, the five-year overall survival statistic for cases of metastatic CRC remains a disheartening 10.5% [4]. Contemporary treatment approaches for advanced-stage CRC encompass chemotherapy, targeted therapeutic interventions, and immunotherapy [5]. Nonetheless, the persistent issue of drug resistance serves as a significant barrier to successful treatment, often resulting in tumor relapse and metastasis. Hence, there is an urgent need for reliable markers for early detection and surveillance of disease progression in CRC, along with a more comprehensive understanding of the mechanisms propelling tumor growth to devise effective treatment options.
Glycosylphosphatidylinositol (GPI) is a type of glycerophospholipid that functions as a lipid anchor, attaching to the C-terminus of proteins and facilitating their delivery to the external side of the plasma membrane. The foundational structure of GPI consists of inositol phospholipids, glycans made up of one glucosamine and three mannose units, finished with an ethanolamine phosphate (EtNP) [6]. GPI anchoring plays a pivotal role in processes like mammalian embryogenesis, development, neurogenesis, fertilization, and immune response [7]. The protein-coding gene Cell Wall Biogenesis 43 C-Terminal Homolog (CWH43) encodes the PGAP2 (post-GPI attachment to proteins 2)-interacting protein and is believed to participate in GPI anchor synthesis [8]. In the yeast Saccharomyces cerevisiae, the N-terminal portion of CWH43 showcases a sequence that is analogous to the mammalian PGAP2, which is instrumental in converting the lipid part of GPI anchors to ceramides [9,10]. In humans, mutations impacting the remodeling of the GPI lipid component have been linked to hereditary spastic paraplegias, a group of neurodegenerative motor neuron conditions, identified through exome sequencing [11].

Phosphotyrosine-Picked Threonine-Protein Kinase (TTK), also recognized as Mps1, constitutes an essential component of the spindle assembly checkpoint (SAC), ensuring the precise segregation of chromosomes to daughter cells during cell division [12]. TTK dysregulation has been closely associated with aneuploidy, chromosomal anomalies, and tumorigenesis [13]. Heightened TTK expression has been consistently documented across diverse cancer types, encompassing lung [14], breast [15], liver [16], kidney [17], colon [18], and gastric cancer [19]. Comprehending the multifaceted roles of TTK in cancer is imperative for the development of targeted therapeutic strategies and the enhancement of patient prognosis [20].

While the precise role of CWH43 in humans remains to be elucidated, its presence is notably enriched within the epithelial layers of the gastrointestinal system, including the stomach, colon, and rectum [21,22]. Prior research has highlighted a reduction in CWH43 expression in cases of CRC [23]. Another investigative endeavor utilized microarray gene expression profiles to create a predictor classifier. This classifier identified a declining trend in CWH43 expression from normal mucosa to adenoma, then to carcinoma, positioning CWH43 as a potential early-stage biomarker for CRC [24]. Our study unearthed diminished levels of CWH43 expression in CRC tumor samples, sourced from public databases. Notably, these expression levels exhibited correlations with clinical outcomes for CRC patients. Both in vitro and in vivo assessments have illuminated CWH43's involvement in cell proliferation and invasion processes. Digging deeper, we found links between CWH43 functions, the epithelial-mesenchymal transition (EMT), and cell cycle regulation. Intervening with an inhibitor targeting a cell-cycle-related gene mitigated the impacts of CWH43 knockdown on CRC cell viability and migratory tendencies. This research underscores the potential of CWH43 not only as a diagnostic and monitoring tool but also as a focal point in pioneering new therapeutic approaches.
CWH43 in CRC and Its Implications for Patient Outcomes

In assessing the connection between CWH43 expression and CRC, the UALCAN and GEPIA databases revealed decreased CWH43 mRNA expression in CRC tissues compared to normal ones (Figure 1A,B). A corresponding reduction in CWH43 protein was also evident in primary CRC tumors (Figure 1C). Significantly, patients with higher CWH43 expression demonstrated better overall survival rates than their low/medium expression counterparts (Figure 2A,B). These data indicate that lower CWH43 expression in CRC correlates with suboptimal long-term survival, suggesting its critical role in CRC pathogenesis.
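A minimal sketch of the kind of survival comparison described above, using the lifelines package and an invented toy data frame; the column names and values are placeholders, not the TCGA/UALCAN data:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Minimal sketch (hypothetical data): Kaplan-Meier comparison of overall
# survival between high- and low-CWH43-expression groups.
df = pd.DataFrame({
    "time": [12, 30, 45, 8, 60, 22, 50, 15],       # months (hypothetical)
    "event": [1, 0, 1, 1, 0, 1, 0, 1],             # 1 = death observed
    "cwh43_high": [True, True, True, True, False, False, False, False],
})

kmf = KaplanMeierFitter()
for label, grp in df.groupby("cwh43_high"):
    kmf.fit(grp["time"], grp["event"], label=f"CWH43 high={label}")
    print(label, kmf.median_survival_time_)

res = logrank_test(
    df.loc[df.cwh43_high, "time"], df.loc[~df.cwh43_high, "time"],
    df.loc[df.cwh43_high, "event"], df.loc[~df.cwh43_high, "event"],
)
print(f"log-rank p = {res.p_value:.3f}")
```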
CWH43's Influence on CRC Tumorigenesis and Cell Growth

The CellExpress database (GSE36133) indicated that CWH43 expression is generally lower in most CRC cell lines (Figure S1). To further validate CWH43's influence on cancer cell proliferation, we engineered HT-29 cells with reduced CWH43 levels (CWH43-KD) and HCT116 cells with elevated CWH43 expression (CWH43ov) (Figure 3A). Western blotting was used to confirm the changes in CWH43 levels. As illustrated in Figure 3B, growth activity was elevated in CWH43-KD cells compared to their scrambled controls. Conversely, enhanced expression of CWH43 in HCT116 cells led to a decline in growth activity. Our findings in both HT-29 and HCT116 cell lines therefore underscore an inverse correlation between CWH43 expression and the rate of CRC cell proliferation (Figure 3B). In a xenograft experiment, DLD-1 cells with reduced CWH43 (CWH43-KD) showed quicker growth than their control counterparts (Figure 3C).
Involvement of CWH43 in CRC Migration, Invasion, and EMT Regulation

Our tests showcased CWH43's influence on the migration and invasion capabilities of CRC cells. Knockdown of CWH43 increased migration and invasion in DLD-1 cells, whereas its overexpression inhibited these traits in HCT116 cells (Figure 4A,B). Subsequent GSEA analysis of the TCGA dataset highlighted CWH43's negative association with several signaling pathways, most notably the epithelial-mesenchymal transition (Figure S2A-C). Furthermore, CWH43's impact on the expression of key proteins like E-cadherin, β-catenin, Vimentin, and N-cadherin underpins its role in EMT regulation (Figure 4C).

CWH43's Regulatory Impact on Threonine Tyrosine Kinase (TTK) in CRC

To explore the enigmatic function of CWH43 in the development of cancer, we delved into its downstream regulatory mechanisms and linked genes in CRC. We employed RNA sequencing to identify potential target genes affected by CWH43 and, hence, to gain insights into how it impacts the progression of CRC. Comparing CWH43-KD and CWH43-overexpressing HCT116 cells to a control group revealed differentially expressed genes (DEGs) with a log2 fold change of 1 or greater (Figure 5A,B). We found 251 DEGs in the CWH43-KD cells (Table S1) and 415 in the CWH43-overexpressing cells (Table S2). To pinpoint central genes among the DEGs, we used the STRING database to build a protein-protein interaction (PPI) network. This network was then analyzed using the Cytoscape software (Figure 5C,D). Our analysis identified 27 central genes in CWH43-KD cells (Table S3) and 28 in CWH43-overexpressing cells (Table S4). We found that TTK appeared among the central genes in both CWH43-KD and CWH43ov cells.
KEGG Pathway Analysis and Its Regulatory Influence on TTK Expression in the Cell Cycle

Next, KEGG pathway analysis was carried out to determine the likely roles of these central genes in CRC. In CWH43-KD cells, the most enriched pathways included Cushing syndrome, Breast cancer, Oxytocin signaling, Cell cycle, and Resistance to platinum drugs (Figure 6A). For CWH43-overexpressing cells, the top pathways were Cell cycle, Oocyte meiosis, Progesterone-mediated oocyte maturation, DNA replication, and Cellular senescence (Figure 6B). Notably, the "Cell cycle" pathway appeared as a common enriched pathway for both the CWH43 knockdown and overexpression groups. Further scrutiny of genes related to the cell cycle pathway showed that TTK expression increased in CWH43-KD cells but decreased in CWH43-overexpressing cells (Figure 6C). In summary, our data suggest that CWH43 is involved in the cell cycle and may serve as a regulatory factor for TTK expression in CRC.

TTK Inhibitor Reversed Tumor-Promoting Effect in CWH43-KD Cancer Cells

To determine if the tumor-promoting effects of CWH43-KD on CRC operate through TTK, we employed an RT-qPCR assay. TTK expression was markedly upregulated in CWH43-KD cells when contrasted with control cells (Figure 7A). Conversely, in HCT116 cells where CWH43 was overexpressed, TTK mRNA levels declined. This suggests that CWH43 could function by negatively regulating TTK. To further investigate this mechanism, we introduced a TTK inhibitor, AZ3146. Our findings, illustrated in Figure 7B, show that introducing AZ3146 to CWH43-KD cells notably reduced the relative cell survival rate when contrasted with the vehicle control. However, the TTK inhibitor did not influence cell survival in the scrambled control cells. In line with this, transwell migration activity also decreased post-AZ3146 treatment in CWH43-KD cells (Figure 7C). These observations suggest that CWH43's tumor-suppressing capability might operate by negatively impacting TTK.
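A minimal sketch of a KEGG over-representation analysis of hub genes; the paper does not name the tool used beyond KEGG itself, so gseapy/Enrichr is an assumed stand-in and the gene list is hypothetical:

```python
import gseapy as gp

# Minimal sketch: KEGG over-representation analysis of a hub-gene list.
# The gseapy/Enrichr route and the gene list are assumptions for illustration.
hub_genes = ["TTK", "CDK1", "CCNB1", "BUB1", "PLK1"]  # hypothetical hub set

enr = gp.enrichr(gene_list=hub_genes,
                 gene_sets=["KEGG_2021_Human"],
                 outdir=None)  # no files written
top = enr.results.sort_values("Adjusted P-value").head(5)
print(top[["Term", "Adjusted P-value", "Genes"]])
```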
Discussion

In humans, previous studies have associated CWH43 with normal pressure hydrocephalus [25,26]. Yet, its role in cancer remains undefined. Our research uncovers its significant function in the development of colorectal cancer. Initially, we observed a marked reduction in CWH43 expression in CRC tissues, linking its low levels to an adverse survival outcome in CRC patients (Figures 1 and 2). Furthermore, suppressing CWH43 amplified CRC cell growth and tumor expansion in mice, whereas its increased expression curtailed CRC cell viability (Figure 3). Additionally, reduced CWH43 heightened CRC cell migration, invasion, and epithelial-mesenchymal transition (Figure 4). These effects appeared to stem from TTK regulation (Figures 5 and 6). Importantly, inhibiting TTK reversed these detrimental cellular behaviors in CWH43-suppressed CRC cells. This suggests that CWH43's tumor-suppressing potential might act via TTK modulation. Given the persistent challenges in treating metastatic CRC and drug resistance, there is an urgent need for early-detection biomarkers. Our findings propose CWH43's pivotal role in CRC development, presenting it as a potential preventive measure and therapeutic target.

The epithelial-mesenchymal transition (EMT) is a developmental program allowing stationary epithelial cells to acquire migratory and invasive capabilities. Tumor cells often exploit EMT to undergo molecular changes, transitioning partially from an epithelial to a mesenchymal phenotype [27]. EMT is intricately linked to numerous malignant traits in tumor cells, encompassing migration, invasion, stemness, and chemo-radiotherapy resistance [28]. Despite its pivotal role in tumor metastasis and sustaining hallmark features, the EMT signaling network remains incompletely understood, presenting challenges for potential clinical trials targeting EMT in cancer therapy [29]. We noted augmented migratory and invasive capabilities in CWH43 knockdown cells, concomitant with the upregulation of epithelial-mesenchymal transition (EMT) markers. This implies that the tumor-associated downregulation of CWH43 may push cancer cells towards EMT, consequently fostering metastasis. Additional investigations are merited to elucidate the intricate regulatory mechanisms governing the crosstalk between CWH43 and EMT. These insights hold promise for advancing the development of targeted therapies to inhibit EMT.
In yeast, CWH43 is crucial for lipid transformations to ceramides, facilitating ceramide integration into GPI-anchored proteins [30]. A lack of CWH43 disrupts GPI expression on yeast cell walls, which is vital for their growth and survival [31]. Similarly, in human cells, CWH43 governs GPI-anchored protein targeting [25]. Yet, its function concerning cancer remains elusive. One meta-analysis spotlighted CWH43 as one of the key genes displaying differential expression between CRC and normal mucosa using cDNA microarrays [23]. Another pinpointed CWH43 as a central gene in gastric cancer via weighted gene co-expression network analysis [32]. Aligning with these findings, our research corroborates CWH43's tumor-suppressing role, backed by both in vitro and in vivo tests.

Chromosomal separation during cell division relies on the spindle assembly checkpoint. Here, TTK (also labeled as MPS1) emerges as a critical regulator [33,34]. TTK's role in preserving genomic integrity is crucial, with its irregularities associated with various cancers such as breast, liver, and lung cancers [14,16,35-37]. For colon cancer, one investigation deduced that TTK expression is notably elevated, correlating with adverse patient prognosis and heightened cell proliferation [18]. Another asserted that increased TTK disrupts the spindle assembly checkpoint, fostering genome instability and tumor growth in colon cells [38]. Echoing these findings, our study posits that CWH43 acts by negatively influencing TTK, thereby affecting cancer development, invasion, and long-term patient outcomes.

Though we have unveiled CWH43's tumor-suppressing role in CRC and its potential interaction with TTK, the exact mechanisms remain elusive. TTK's role might extend beyond spindle assembly checkpoint maintenance. For instance, one study found TTK expression peaking in stage II clinical CRC tissues rather than in later stages [39]. Another highlighted TTK's unique regulatory role in tumor cell viability, mediated by its interaction with mitochondria. Other research found that CRC with microsatellite instability (MSI) presented TTK frameshift mutations [40], with TTK expression heightened in MSI-high status cancers versus those that are MSI-low [18]. In recent years, an expanding corpus of research has spotlighted TTK as a promising target for cancer therapy [41,42]. The inhibition of TTK prompts a premature exit of cancer cells from mitosis, culminating in heightened chromosome segregation errors and the genesis of aneuploid cells. Through successive rounds of cell division, cumulative chromosome segregation errors can ultimately trigger apoptosis in cancer cells [43]. Consequently, TTK has garnered substantial attention as a pivotal focus in cancer research, with TTK inhibitors undergoing escalating evaluation in clinical trials [44]. This underscores the need to further investigate the intricate CWH43-TTK relationship in CRC development.

In summary, our research indicates that decreased CWH43 expression may contribute to CRC progression by activating TTK. This highlights the potential of CWH43 as a promising target for CRC treatment, warranting more in-depth studies.
Gene Expression Level, Protein Expression Level, and Patient Survival Related to CWH43 in Colorectal Cancer

The University of Alabama at Birmingham cancer data analysis portal (UALCAN) is an online interactive portal that enables easy exploration and analysis of gene expression, cancer proteomics, and patient survival data obtained from The Cancer Genome Atlas (TCGA) database [45] and the Clinical Proteomic Tumor Analysis Consortium (CPTAC) database. UALCAN is accessible at http://ualcan.path.uab.edu (accessed on 5 June 2023). In addition, Gene Expression Profiling Interactive Analysis (GEPIA) provides differential expression analysis of tumor versus normal tissue, as well as functions for analysis by cancer type or pathological stage and patient survival analysis, based on TCGA and The Genotype-Tissue Expression (GTEx) project [46]. GEPIA is available at http://gepia.cancer-pku.cn/ (accessed on 24 July 2023). These tools were used to identify the relationship between CWH43 and colorectal cancer.

Functional Enrichment Analysis

The RNA-seq data for the TCGA COADREAD (colorectal adenocarcinoma) project, processed using the STAR workflow, were acquired from the TCGA database (https://portal.gdc.cancer.gov, accessed on 16 August 2022). We utilized the edgeR [v3.38.2] package to perform differential gene expression analysis between the high and low expression groups of CWH43 in the TCGA COADREAD data [48]. Gene Set Enrichment Analysis (GSEA) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were performed using the clusterProfiler [v4.4.4] package [49]. The generated results were then visualized using the ggplot2 [v3.3.6] package.

Transfection and Generation of Stable Colonies

To knock down CWH43 expression, short hairpin RNA (shRNA; TRCN0000417738 and TRCN0000429495) targeting human CWH43 (NM_025087) was obtained from the National RNAi Core Facility at Academia Sinica in Taiwan. CWH43-shRNA and non-target shRNA were transfected into HCT 116, DLD-1, and HT-29 cells, and stably transfected cells were selected using puromycin for 2 weeks. The expression level of CWH43 was determined through quantitative reverse transcription polymerase chain reaction (RT-qPCR). To overexpress CWH43, pCMV6-Entry-CWH43 (CAT#: RC224386, OriGene Technologies, Inc., Rockville, MD, USA) was transfected into HCT 116 cells through electroporation. Stably transfected cells were selected after adding G418, and the cells were used for subsequent experiments after confirming CWH43 overexpression through RT-qPCR assay and Western blotting.

Examination of Cell Viability

The sulforhodamine B (SRB) cytotoxicity assay was used to examine cell viability. Cells at a density of 2 × 10⁴, comprising vector control, CWH43-knockdown, and CWH43-overexpressing cells, were seeded into 24-well plates (Falcon, Munich, Germany) and incubated at 37 °C in a 5% CO2 humidified incubator to attach overnight. After incubation for 48 h, the cells were fixed with 10% (wt/vol) trichloroacetic acid at 4 °C overnight and then stained with 0.4% w/v protein-binding SRB for 30 min at room temperature. The stained cells were washed twice with 1% acetic acid and air-dried overnight. The protein-bound dye was dissolved in 10 mM Tris base solution, and the optical density (OD) was measured at 515 nm using a microplate reader (Bio-Rad Laboratories, Hercules, CA, USA). The baseline was defined as cells treated with control, while fold changes were calculated as the OD values of CWH43-overexpressing or -knockdown cells relative to the baseline.
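The fold-change calculation described above is simple enough to spell out. A minimal sketch with invented OD readings:

```python
# Minimal sketch (hypothetical OD515 readings): fold-change viability from the
# SRB assay, computed relative to the vector-control baseline as described above.
od_control = [0.52, 0.55, 0.50]   # control cells, triplicate
od_kd = [0.81, 0.78, 0.84]        # CWH43-KD cells, triplicate

baseline = sum(od_control) / len(od_control)
fold_change = (sum(od_kd) / len(od_kd)) / baseline
print(f"CWH43-KD viability fold change: {fold_change:.2f}x")
```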
Transwell Migration and Invasion Assay

BD Falcon cell culture inserts and BD BioCoat Matrigel invasion chambers precoated with BD Matrigel matrix (BD Biosciences, Franklin Lakes, NJ, USA) were used, respectively, for the in vitro cell migration and invasion assays. We seeded aliquots of 1 × 10⁵ cells suspended in 500 µL of serum-free RPMI medium into the upper compartment of each chamber, and the lower compartments were filled with 1 mL of RPMI medium containing 10% fetal bovine serum and 1% penicillin and streptomycin. After incubation for 48 h at 37 °C in a 5% CO2 incubator, each well and chamber was washed once with 1 mL of 1× PBS. The cells were fixed in less than 1 mL of methyl alcohol solution for a few seconds. The cells in the top chamber (non-migrated) were mechanically removed with cotton swabs. The cells on the reverse side were stained with 0.1% crystal violet. After the plate was incubated at room temperature for 8 h, the crystal violet was removed, and the number of stained cells was counted using a microscope (Olympus IX; Olympus, Tokyo, Japan) at 10-fold magnification. The number of migrated cells was counted using a handheld cell counter.

Animal Model

CB17 severe combined immunodeficient (SCID) male mice were randomly divided into experimental and control groups (n = 6 per group). The mice were inoculated with 2.5 × 10⁷ DLD-1 cells, resuspended in a 50% mixture of Matrigel (BD Biosciences) in HBSS (Life Technologies, Carlsbad, CA, USA), into the right flank subcutaneous tissue. Tumor dimensions and body weights were measured twice a week, and tumor volume was calculated using the following formula: tumor volume (mm³) = length × width²/2. After 4 weeks, the mice were sacrificed and the tumor nodules were counted and weighed. All animal use protocols were approved by the Institutional Animal Care and Use Committee of Taipei Medical University (LAC-2019-0340).

RNA Extraction, cDNA Synthesis, and Quantitative Polymerase Chain Reaction (qPCR) Analysis

Total RNA was isolated from fresh-frozen colorectal cancer cell lines using RNAzol® RT following the manufacturer's protocol (Molecular Research Center, Inc., Cincinnati, OH, USA). Subsequently, 8 µg of total RNA was subjected to reverse transcription (RT) reactions in a 20 µL reaction volume using a cDNA Synthesis Kit (Invitrogen Life Technologies, Carlsbad, CA, USA). The resulting cDNA was utilized for quantitative RT-PCR analysis of gene expression employing the Power SYBR-Green real-time RT-PCR system and the ABI 7500 FAST™ detection system (Applied Biosystems, Foster City, CA, USA). The quantification of target gene expression was normalized to the expression levels of the GAPDH gene. The primer sequences used for the analysis are presented in Table 1.
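The text states GAPDH normalization without naming the exact method; the common 2^(-ΔΔCt) approach is assumed in this minimal sketch, together with the stated tumor-volume formula, all values being hypothetical:

```python
# Minimal sketch (hypothetical Ct values): GAPDH-normalized relative expression
# via the standard 2^(-ddCt) method (assumed; the paper only states GAPDH
# normalization), plus the tumor-volume formula from the xenograft section.
ct_target_kd, ct_gapdh_kd = 24.1, 18.0       # CWH43-KD sample (hypothetical)
ct_target_ctrl, ct_gapdh_ctrl = 26.5, 18.1   # scrambled control (hypothetical)

d_ct_kd = ct_target_kd - ct_gapdh_kd
d_ct_ctrl = ct_target_ctrl - ct_gapdh_ctrl
rel_expr = 2 ** (-(d_ct_kd - d_ct_ctrl))
print(f"Relative TTK expression (KD vs control): {rel_expr:.2f}x")

length, width = 9.0, 6.0                     # mm (hypothetical)
volume = length * width**2 / 2               # tumor volume (mm^3), formula from the text
print(f"Tumor volume: {volume:.1f} mm^3")
```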
RNA Sequencing

RNA sequencing was performed as described before [19]. Total RNA was extracted from colorectal cancer (CRC) cells using Trizol (Invitrogen, Carlsbad, CA, USA) following the manufacturer's protocol. Biotools Biotech Co., Ltd. (New Taipei City, Taiwan) performed the RNA sequencing. In brief, ribosomal RNA was depleted from the RNA samples using the Epicentre Ribo-Zero rRNA Removal Kit (Illumina, San Diego, CA, USA), after which cDNA synthesis, adaptor ligation, and enrichment steps were executed in accordance with the instructions provided by the NEBNext® Ultra™ RNA Library Prep Kit for Illumina (NEB, Ipswich, MA, USA). The resultant libraries were sequenced on an Illumina NovaSeq 6000 (Illumina, San Diego, CA, USA) with paired-end 150 bp sequencing. Raw reads obtained from sequencing were subjected to quality filtering using Trimmomatic to obtain a set of clean reads. Subsequently, the clean reads were aligned to the reference genome using HISAT2, and the raw read counts for each gene were determined using featureCounts. For the normalization of expression levels, RLE/TMM/FPKM methods were employed. Differentially expressed genes were identified utilizing a two-fold change threshold with an adjusted p-value below 0.05. Expression data from the RNA-seq analysis can be found in Table S5 for the CWH43-knockdown condition and in Table S6 for the CWH43-overexpression condition.

PPI Network Analysis

PPI networks were established using the STRING database (http://www.string-db.org/, accessed on 29 December 2022) [50]. This resource offers both confirmed and predicted protein interactions derived from multiple sources, including genomic contexts, co-expressions, high-throughput experiments, and prior knowledge. A significance threshold of 0.4 (medium confidence) was chosen for screening. The resulting PPI pairs were imported into the Cytoscape software (version 3.8.2), and subsequent analysis was conducted using the CytoNCA plugin (http://www.cytoscape.org, accessed on 29 December 2022) [51]. Hub genes, which represent highly interconnected genes, were identified by calculating their degree value (the number of edges connecting a gene), employing a cutoff of ≥5 for the CWH43 knockdown group and ≥88 for the CWH43 overexpression group.
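A minimal sketch of the two selection steps described above (DEG filtering and degree-based hub identification), with toy tables and edges in place of the real RNA-seq output and STRING export:

```python
import pandas as pd
import networkx as nx

# Minimal sketch (hypothetical tables): the DEG filter (|log2FC| >= 1, adjusted
# p < 0.05) and degree-based hub selection. Column names and the toy edge list
# are assumptions, not the paper's actual data.
deg_table = pd.DataFrame({
    "gene": ["TTK", "CDK1", "GENE3"],
    "log2fc": [1.8, 1.2, 0.4],
    "padj": [0.001, 0.01, 0.2],
})
degs = deg_table[(deg_table.log2fc.abs() >= 1) & (deg_table.padj < 0.05)]

# PPI edges as exported from STRING (toy example)
edges = [("TTK", "CDK1"), ("TTK", "CCNB1"), ("CDK1", "CCNB1"),
         ("TTK", "BUB1"), ("TTK", "PLK1"), ("TTK", "AURKB")]
g = nx.Graph(edges)
hubs = [n for n, d in g.degree() if d >= 5]   # cutoff from the knockdown analysis
print("DEGs:", list(degs.gene), "| hub genes:", hubs)
```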
Figure 1. Comparison of CWH43 expression in normal and CRC samples. Analysis of CWH43 mRNA expression in colon adenocarcinoma (COAD) and standard tissues from (A) UALCAN and (B) GEPIA resources (dots represent jittered points). (C) Protein levels of CWH43 in colon cancer as seen via UALCAN, sourced from the Clinical Proteomic Tumor Analysis Consortium database. The Z-value represents the standard deviation from the median across samples. *** p < 0.001.

Figure 2. Correlation between CWH43 expression levels and survival in CRC patients. Kaplan-Meier survival plots depicting overall survival of CRC patients based on high versus low CWH43 expression from the (A) UALCAN (specifically for colon cancer) and (B) GEPIA resources (encompassing both colon and rectal cancer).

Figure 3. Impact of CWH43 on CRC cell growth. Protein levels of CWH43 in (A) upregulated HT29 cells and downregulated HCT116 cells. (B) The SRB assay was utilized to gauge the relative survival rates of the cells. (C) Xenograft model showing tumor size and weight of the scrambled control and CWH43-KD groups. All experiments were conducted in triplicate, and all data are expressed as mean + SD. Two-tailed Student's t-tests were used to assess statistical significance. * p < 0.05, ** p < 0.005.

Figure 4. Role of CWH43 in restraining CRC cell metastasis. Evaluation of the migration and invasion capacities of standard versus CWH43-altered cells in (A) HCT116 and (B) DLD-1 samples (100× magnification). (C) Western blot analysis was used to determine expression levels of β-catenin, vimentin, N-cadherin, and E-cadherin in CWH43-suppressed DLD-1 cells. Statistical significance was evaluated using two-tailed Student's t-tests on all experiments conducted in triplicate. * p < 0.05, ** p < 0.005.

Figure 6. KEGG pathways influenced by core genes. A bubble chart visualizes KEGG pathways for (A) CWH43-KD and (B) CWH43ov cells. (C) The hsa04110 pathway (pertaining to the cell cycle) emerged as a consistently influenced pathway. Red signifies upregulated genes, blue denotes downregulated genes, and grey represents no notable change.

Figure 7. Counteractive role of TTK on CWH43-manipulated cells. (A) Quantitative PCR (qPCR) assessed TTK expression in CWH43-KD and CWH43ov cells. (B) The TTK inhibitor diminished growth in the CWH43-KD cells but left the scrambled control group unaffected. (C) The TTK inhibitor markedly curtailed migration in CWH43-KD cells (100× magnification). The experiments were conducted independently in triplicate. Statistical significance was determined by a two-tailed Student's t-test. ** p < 0.005.
Counteractive role of TTK on CWH43-manipulated cells.(A) Quantitative PCR (qPCR) assessed TTK expression in cells with CWH43-KD and CWH43ov cells.(B) The TTK inhibitor diminished growth in the CWH43-KD cells but left the scrambled control group unaffected.(C) The TTK inhibitor markedly curtailed migration in CWH43-KD cells (100× magnification).The experiments were conducted independently in triplicate.Statistical significance was determined by a two-tailed Student's t-test.** represent p < 0.005, respectively.
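As a small illustration of the differential-expression filter described in the RNA Sequencing section (two-fold change, adjusted p-value below 0.05), here is a minimal Python sketch; the file and column names ("log2FoldChange", "padj") are illustrative assumptions, not the authors' actual outputs:

```python
import pandas as pd

# Filter a differential-expression result table by |log2 fold change| >= 1
# (i.e., two-fold change) and adjusted p-value < 0.05.
def filter_degs(results_csv: str) -> pd.DataFrame:
    df = pd.read_csv(results_csv)
    keep = (df["log2FoldChange"].abs() >= 1.0) & (df["padj"] < 0.05)
    return df[keep]

# Example (hypothetical file name):
# degs = filter_degs("cwh43_kd_vs_control_deseq2.csv")
```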
7,665
2023-10-01T00:00:00.000
[ "Biology", "Medicine" ]
Spurious ergodicity breaking in normal and fractional Ornstein–Uhlenbeck process
The Ornstein–Uhlenbeck process is a stationary and ergodic Gaussian process that is fully determined by its covariance function and mean. We show here that the generic definitions of the ensemble- and time-averaged mean squared displacements fail to capture these properties consistently, leading to a spurious ergodicity breaking. We propose to remedy this failure by redefining the mean squared displacements such that they reflect unambiguously the statistical properties of any stochastic process. In particular, we study the effect of the initial condition in the Ornstein–Uhlenbeck process and its fractional extension. For the fractional Ornstein–Uhlenbeck process, representing typical experimental situations in crowded environments such as living biological cells, we show that the stationarity of the process delicately depends on the initial condition.

Introduction
The Ornstein-Uhlenbeck process is one of the most fundamental physical processes, originally devised to describe the velocity distribution and relaxation of a Brownian particle under the influence of a velocity-dependent friction. The Ornstein-Uhlenbeck process belongs to the class of Gaussian and Markovian processes, and it is described in terms of the stochastic Langevin equation [1-3]

dx(t) + λx(t) dt = σ dB_t. (1)

Here dB_t is the increment of the well-known Brownian motion (Wiener process) B_t, and λ and σ are positive constants; 1/λ defines a natural dynamic time scale, and σ is the intensity of the fluctuations. Under certain conditions discussed below the Ornstein-Uhlenbeck process is the only non-trivial process in the class of Gauss-Markov processes that has a stationary solution [5]. Physically, overdamped Brownian particles in an optical tweezers trap [6] or tethered to an anchor by a flexible polymer [7] are adequately described in terms of an Ornstein-Uhlenbeck process. The Ornstein-Uhlenbeck process is also used as a phenomenological model for the confinement observed in the tracer diffusion in critical random environments [8]. A wide field of applications of the Ornstein-Uhlenbeck process lies in finance. The Ornstein-Uhlenbeck process was adopted in the 1970s by Vašíček to model the evolution of the interest rate of financial markets [4]. Extending this Vašíček model, Hull and White took into account explicitly time-dependent drift μ and relaxation rate λ [9]. There are other variants of the Vašíček model, for instance, the jump-extended Vašíček model in which an exponential jump noise following a Poisson distribution is added to equation (1) [10]. There also exist extensions of the Ornstein-Uhlenbeck process to non-Gaussian processes with applications in finance [11], including option pricing [12], commodity derivative pricing [13] and electricity pricing [14]. Such models have also been utilised to model neural activity [15] or to study the statistics of neuron spikes [16]. The Ornstein-Uhlenbeck process corresponds to the continuous-time analogue of a discrete-time autoregressive AR(1)-process [17-19]. In a direct extension of the Ornstein-Uhlenbeck process (1) one replaces the white Gaussian noise dB_t by power-law correlated fractional Gaussian noise [20]. In the absence of the damping term this so-called fractional Brownian motion captures the motion of diffusive particles in viscoelastic environments, such as artificially crowded media [21-23], lipid bilayer membranes [24-26], or the cytoplasm of living biological cells [27-29].
The correlations in the noise effect anomalous diffusion of the form ⟨x²(t)⟩ ≃ t^α [30,31]. Combined with a Hookean restoring force exerted by optical tweezers, a tracer particle in a biological cell [28,32] then follows the fractional Ornstein-Uhlenbeck process [33]. Formally, the fractional Ornstein-Uhlenbeck process is still Gaussian and stationary, yet it is strongly non-Markovian. As we will see, this causes fundamental differences. We note that in finance power-law correlations are frequently observed in the structure and price dynamics of stocks [34], commodity prices [35], and returns of the closing values of financial indices [36-38]. There exist several studies modelling such long-range correlated processes with ARFIMA, GARCH, and FIARCH processes, and quantifying the cross-correlation of mutually dependent processes [38,39]. With modern microscopic techniques it is possible to track single sub-micron tracer particles and even single molecules through complex media such as live biological cells [6,40]. The time-series extracted from such single-particle trajectories are typically evaluated in terms of time-averaged physical observables [41,42]. To address the motion of a Brownian or fractional Brownian particle under the action of an external potential by analysing a single trajectory of its movement, it is essential to understand whether the physical process governing the motion of the particle is ergodic, or not [31,43,44]. To infer the ergodic property of a given Gaussian process it is sufficient that the associated two-time covariance function solely depends on the difference of the two times [45]. This property rests on the fact that for Gaussian processes all properties can be deduced from the mean and covariance function [46,47]. An indirect approach to deduce the ergodic property of the process is to compare the behaviour of the mean squared displacement (MSD) and the time-averaged MSD [31,48-50]. We here scrutinise the exact ergodic and stationary behaviour of the regular and fractional Ornstein-Uhlenbeck processes and show that they fundamentally differ in some of their behaviour, despite the fact that both are ergodic. In particular, we elucidate the precise role of the initial condition and invalidate the general belief that the assertion of an equilibrium initial condition necessarily recovers the stationary property of the process. We first analyse the detailed statistical properties from the covariance of the Ornstein-Uhlenbeck process in section 2, including the ensemble- and time-averaged MSDs and the effect of the initial condition. Section 3 provides an analogous analysis for the fractional Ornstein-Uhlenbeck process. In section 4 we discuss our results and conclude. Some mathematical details are deferred to the appendix.

Ornstein-Uhlenbeck process
We define the Ornstein-Uhlenbeck process in terms of the stochastic differential equation (1), in which dB_t is the increment of Brownian motion B_t with the covariance function [51]

Cov(B_{t_1}, B_{t_2}) = min(t_1, t_2). (2)

In this formulation, Gaussian white noise corresponds to the time derivative of the increment, dB_t/dt. After solving the stochastic differential equation (1), x(t) is formally obtained as

x(t) = x_0 e^{-λt} + σ ∫_0^t e^{-λ(t-s)} dB_s, (3)

where x_0 = x(t = 0) defines the initial condition. Since B_t is a continuous process, via integration by parts the above equation is recast into

x(t) = x_0 e^{-λt} + σB_t - σλ ∫_0^t e^{-λ(t-s)} B_s ds, (4)

with B_0 = 0.
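As a concrete illustration of equation (1), the following minimal Python sketch (our addition, with illustrative parameter values) integrates the Ornstein-Uhlenbeck process with the Euler-Maruyama scheme and checks that the variance of an ensemble of trajectories relaxes to the stationary value σ²/(2λ) derived below:

```python
import numpy as np

# Euler-Maruyama integration of eq. (1): dx = -lambda*x dt + sigma dB_t.
# All parameter values are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
lam, sigma = 1.0, 1.0
dt, n_steps, n_traj = 1e-3, 5000, 2000     # total time t = 5 >> 1/lambda

x = np.zeros(n_traj)                       # sharp initial condition x0 = 0
for _ in range(n_steps):
    x += -lam * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_traj)

# The ensemble variance should have relaxed to sigma^2/(2*lambda) = 0.5.
print(x.var(), sigma**2 / (2 * lam))
```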
The MSD for a random process x(t) is defined as

Ω²(t) = ⟨[x(t) - x_0]²⟩, (5)

where ⟨·⟩ denotes the average over an ensemble of trajectories. The MSD can also be written in terms of the covariance function,

Ω²(t) = Cov(x(t), x(t)) - 2Cov(x(t), x_0) + Cov(x_0, x_0). (6)

For the Ornstein-Uhlenbeck process with unbiased initial condition, ⟨x_0⟩ = 0, the MSD then assumes the following expression,

Ω²(t) = Var(x_0)(1 - e^{-λt})² + (σ²/(2λ))(1 - e^{-2λt}), (7)

where Var(X) stands for the variance of a random variable X. Note that in the limit λ → 0 of free Brownian motion this notation leads to the MSD lim_{λ→0} Ω²(t) = σ²t. The time-averaged MSD (TAMSD) is defined as [31,41,48]

δ²(Δ) = (1/(T - Δ)) ∫_0^{T-Δ} [x(t + Δ) - x(t)]² dt, (8)

where T is the total measurement time and Δ is called the lag time. For the Ornstein-Uhlenbeck process, the (ensemble-averaged) TAMSD takes the form

⟨δ²(Δ)⟩ = (σ²/λ)(1 - e^{-λΔ}) + [Var(x_0) - σ²/(2λ)](1 - e^{-λΔ})² (1 - e^{-2λ(T-Δ)})/(2λ(T - Δ)). (9)

Properties of the Ornstein-Uhlenbeck process
Since the Ornstein-Uhlenbeck process is a Gaussian process, it suffices to know the covariance function and mean to infer its properties. The mean of x(t) according to equation (4) is

⟨x(t)⟩ = ⟨x_0⟩ e^{-λt}.

Recall that a Gaussian process is stationary and ergodic if the covariance function at two times exclusively depends on the time difference, that is, Cov(x(t_1), x(t_2)) = G(|t_2 - t_1|) in terms of a continuous function G. For the Ornstein-Uhlenbeck process the covariance reads

Cov(x(t_1), x(t_2)) = Var(x_0) e^{-λ(t_1+t_2)} + (σ²/(2λ))[e^{-λ|t_2-t_1|} - e^{-λ(t_1+t_2)}]. (10)

The covariance (10) satisfies the requirements of stationarity if (i) t_1 or t_2 are significantly larger than 1/λ, or (ii) if Var(x_0) = σ²/(2λ). The first condition is asymptotic with respect to 1/λ: the process loses the memory of its initial condition after the correlation time 1/λ. The second condition is valid for all times; it corresponds to starting the process with the equilibrium distribution. The equilibrium stationary distribution can be deduced from the Fokker-Planck equation of the Ornstein-Uhlenbeck process [52],

∂P(x, t)/∂t = λ ∂[xP(x, t)]/∂x + (σ²/2) ∂²P(x, t)/∂x², (11)

where P(x, t) is the probability density function of the process. For a sharp initial condition x_0 the solution for P(x, t) is [3,52]

P(x, t) = [λ/(πσ²(1 - e^{-2λt}))]^{1/2} exp(-λ[x - x_0 e^{-λt}]²/[σ²(1 - e^{-2λt})]). (12)

In the stationary limit t ≫ 1/λ the stationary probability density function is given by P(x) = [λ/(πσ²)]^{1/2} exp(-λx²/σ²), for which the variance becomes

Var_st = σ²/(2λ). (13)

Assume that the distribution of x_0 satisfies the stationary distribution, Var(x_0) = σ²/(2λ). From equations (7) and (9) one arrives at

Ω²(t) = (σ²/λ)(1 - e^{-λt}) (14)

and

⟨δ²(Δ)⟩ = (σ²/λ)(1 - e^{-λΔ}). (15)

The fact that MSD and TAMSD are equivalent for an equilibrium stationary initial distribution is the direct consequence of the stationary property of the process, which can be directly inferred from the covariance (10). Thus, MSD and TAMSD indeed coincide. Yet there exists an intrinsic problem regarding the way MSD and TAMSD are defined, and the equivalence between the two is only valid under the strict conditions that the equilibrium initial condition is met and that the process is both Gaussian and Markovian; see the next section for the non-Markovian fractional Ornstein-Uhlenbeck process. Compare the pairs of equations (7) and (9) as well as (14) and (15). In the first pair, (7) and (9), we note the existence of two time scales in the TAMSD, Δ and T, the latter of which does not exist in the definition of the MSD. This effects a disparity in inferring the stationary state in a consistent way from MSD and TAMSD. From the MSD, the stationary state is reached when the expression ceases to depend on t, that is, when t ≫ 1/λ. In contrast, for the TAMSD the stationarity condition depends on the interplay between lag time Δ and measurement time T. The observation time T identifies the total time the process has been monitored to evolve, and one identifies the stationary state of the process when T ≫ 1/λ. Yet Δ signifies the magnitude of the time window in the sliding average, comparing two instances of the process. Necessarily, Δ < T; however, the lag time Δ also needs to be compared with the natural dynamical time scale imposed by 1/λ.
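To make the disparity between the two observables tangible, here is a short numerical sketch (our addition; parameter values are illustrative) that estimates both quantities for a non-equilibrium initial condition, Var(x_0) ≠ σ²/(2λ). The MSD settles at Var(x_0) + σ²/(2λ), whereas the TAMSD settles at σ²/λ, as quantified by the asymptotes discussed below:

```python
import numpy as np

# Compare the generic MSD (ensemble average) with the TAMSD (sliding average
# along trajectories) for the OU process; illustrative parameter values.
rng = np.random.default_rng(1)
lam, sigma, dt = 1.0, 1.0, 1e-2
T_steps, n_traj = 20000, 500               # T = 200 >> 1/lambda
x0 = rng.normal(0.0, 2.0, size=n_traj)     # Var(x0) = 4 != sigma^2/(2 lam)

xs = np.empty((T_steps, n_traj))
xs[0] = x0
for k in range(1, T_steps):
    xs[k] = xs[k - 1] * (1 - lam * dt) + sigma * np.sqrt(dt) * rng.standard_normal(n_traj)

msd_plateau = np.mean((xs[-1] - xs[0]) ** 2)           # ~ Var(x0) + sigma^2/(2 lam) = 4.5
lag = 500                                              # Delta = 5/lambda
tamsd = np.mean((xs[lag:] - xs[:-lag]) ** 2, axis=0)   # one TAMSD value per trajectory
print(msd_plateau, tamsd.mean())                       # ~ 4.5 versus ~ sigma^2/lam = 1.0
```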
There exist two distinct regimes: (i) if Δ is much smaller than 1/λ, the fluctuations present in the system during this time interval have not relaxed. Therefore any statistical inference cannot be justified, although the overall process has reached stationarity for T ≫ 1/λ. (ii) Stationarity is reached when T ≫ 1/λ and Δ ≫ 1/λ, as long as Δ ≪ T is simultaneously fulfilled. Obviously, for the trivial case T < 1/λ the process cannot be stationary. When the initial condition is chosen to be the equilibrium distribution, Var(x_0) = σ²/(2λ), we see from equations (14) and (15) that the situation is different: here stationarity is reached once t ≫ 1/λ for the MSD and Δ ≫ 1/λ for the TAMSD. Note that the signature of T disappears (an indication of stationarity). The caveat here is that, for the MSD, asserting the equilibrium initial condition, which implies the stationary property of the process, does not imply the independence of the MSD of t, in contrast to the case of the TAMSD, in which the dependency on T disappears. This discrepancy also manifests itself in the asymptotes of MSD and TAMSD when the equilibrium initial condition is not asserted. The asymptotes of MSD and TAMSD in the stationary state read

lim_{t≫1/λ} Ω²(t) = Var(x_0) + σ²/(2λ) (16)

and

lim_{Δ≫1/λ} ⟨δ²(Δ)⟩ = σ²/λ, for Δ ≪ T. (17)

Indeed, for the MSD the stationary value asymptote depends on the variance Var(x_0) of the chosen initial distribution. This contradicts the common intuition that, once the process reaches its stationary state, any trace of the initial condition must have vanished. In contrast, Var(x_0) is absent from the limiting value of the TAMSD. Knowing that the Ornstein-Uhlenbeck process is stationary and ergodic, these observables, suggesting non-ergodic behaviour, are thus unsuitable. In particular, the above difference could potentially lead to wrong conclusions for the ratio of noise strength σ² and trap strength λ, depending on which measure is chosen for the evaluation of an experiment. This discussion elucidates the fundamental difference between the generic definitions of the MSD and the TAMSD, which essentially quantify different properties of a random process. Thus, while the MSD quantifies the dispersal of an ensemble of walkers at a given time instant t with respect to the initial condition, the TAMSD quantifies how increments of the process evolve as function of the lag time. We now embark on modified definitions of these most widely used physical observables for stochastic processes for the case of the Ornstein-Uhlenbeck process.

Generalised definitions of the ensemble-averaged MSD
We propose to recalibrate the definition of the MSD in the generalised form

Ω²_Δ(t) = ⟨[x(t + Δ) - x(t)]²⟩ - ⟨x(t + Δ) - x(t)⟩², (18)

where the subscript Δ indicates the generalisation. This modified MSD describes the dispersal of the process from time t to t + Δ; in other words, the dispersal of increments in which the mean effect of the initial condition and the drift are removed. We can rewrite expression (18) in terms of the covariance function in the form

Ω²_Δ(t) = Cov(x(t + Δ), x(t + Δ)) + Cov(x(t), x(t)) - 2Cov(x(t + Δ), x(t)). (19)

In the limit λ → 0 of free Brownian motion, this definition produces lim_{λ→0} Ω²_Δ(t) = σ²Δ, which is the same expression as obtained for the classical definition, albeit with t replaced by Δ. In this generalised formulation the integrand of the TAMSD (8) is, on ensemble average, exactly the generalised expression of the MSD given by equation (18), that is,

⟨δ²(Δ)⟩ = (1/(T - Δ)) ∫_0^{T-Δ} Ω²_Δ(t) dt, (20)

which readily yields equation (9).
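The identity (20) is easy to verify numerically. The following sketch (our addition, with illustrative values) estimates Ω²_Δ(t) as the ensemble variance of the increments at each t and compares its time average with the directly computed TAMSD:

```python
import numpy as np

# Numerical check of eq. (20): the time average of the generalised MSD of
# eq. (18) reproduces the ensemble-averaged TAMSD; illustrative values.
rng = np.random.default_rng(2)
lam, sigma, dt = 1.0, 1.0, 1e-2
T_steps, n_traj, lag = 3000, 2000, 100      # lag time Delta = 1/lambda

xs = np.empty((T_steps, n_traj))
xs[0] = rng.normal(0.0, 2.0, size=n_traj)   # non-equilibrium initial condition
for k in range(1, T_steps):
    xs[k] = xs[k - 1] * (1 - lam * dt) + sigma * np.sqrt(dt) * rng.standard_normal(n_traj)

inc = xs[lag:] - xs[:-lag]                  # increments x(t+Delta) - x(t)
gen_msd = inc.var(axis=1)                   # Omega^2_Delta(t), mean increment removed
tamsd = (inc ** 2).mean()                   # ensemble-averaged TAMSD, eq. (8)
print(gen_msd.mean(), tamsd)                # agree, since the mean increment is ~ 0
```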
We observe that for equilibrium initial conditions, Var(x_0) = σ²/(2λ), the generalised expressions for MSD and TAMSD yield exactly the same result, (σ²/λ)(1 - e^{-λΔ}).

Fractional Ornstein-Uhlenbeck processes
The fractional Ornstein-Uhlenbeck process is the extension of the normal Ornstein-Uhlenbeck process (1), in which the increments of Brownian motion are substituted by the increments of fractional Brownian motion B^H_t. Here H is the Hurst exponent, which is allowed to vary in the interval H ∈ (0, 1] [20]. The fractional Ornstein-Uhlenbeck process is therefore given by the stochastic differential equation [33]

dx + λx dt = σ dB̃^H_t. (21)

Here dB̃^H_t is the increment of fractional Brownian motion B̃^H_t. The tilde is introduced here to denote the extension of fractional Brownian motion to the negative time domain, such that B̃^H_t is defined for all t ∈ ℝ. Fractional Brownian motion with Hurst parameter H ∈ (0, 1] is a continuous centred Gaussian process defined by the covariance function [20]

Cov(B^H_{t_1}, B^H_{t_2}) = (1/2)(|t_1|^{2H} + |t_2|^{2H} - |t_2 - t_1|^{2H}). (23)

For H = 1/2, B^H_t reduces to standard Brownian motion B_t with covariance (2). The formal solution of equation (21) is

x(t) = x_0 e^{-λt} + σ ∫_0^t e^{-λ(t-s)} dB̃^H_s. (24)

Since fractional Brownian motion is a continuous process this integral exists [59]. Integrating by parts, the equation above can be rewritten in terms of B̃^H_t in the form

x(t) = x_0 e^{-λt} + σB̃^H_t - σλ ∫_0^t e^{-λ(t-s)} B̃^H_s ds. (25)

Then the covariance function follows by inserting equation (25) and using the fBm covariance (23). We note that while free fractional Brownian motion, corresponding to the limit λ → 0, is ergodic [49,60,61], transient non-ergodicity occurs when the process is confined. Namely, for a harmonic external confinement (the Ornstein-Uhlenbeck process, that is) it was shown analytically and experimentally that the relaxation of the MSD is exponential while a slower power-law relaxation is observed for the TAMSD [23,62]. When x_0 is fixed or the distribution of x_0 is independent of the fractional Brownian motion, all terms involving x_0 B̃^H_t vanish. Therefore the covariance function simplifies to a sum of single and double integrals over the fBm covariance (equation (27)). After calculating the integrals (see appendix A for details) the covariance of the fractional Ornstein-Uhlenbeck process assumes a closed form, equation (28), in terms of M(a, b, z), Kummer's function of the first kind (the confluent hypergeometric function of the first kind [63]). The integral representation of this function is given by

M(a, b, z) = [Γ(b)/(Γ(a)Γ(b - a))] ∫_0^1 e^{zu} u^{a-1} (1 - u)^{b-a-1} du. (29)

For H = 1/2 the covariance function (28) consistently reduces to expression (10) of the regular Ornstein-Uhlenbeck process (note that M(2, 3, x) = 2(1 - e^x + xe^x)/x²). On closer inspection of the covariance function, unlike for the case of the regular Ornstein-Uhlenbeck process above, in which the equilibrium distribution of the initial condition yields a stationary covariance function (see equation (10)), we notice that there is no possible form of Var(x_0) such that the covariance function (28) would depend exclusively on the time difference between the two time points of the process. In other words, there is no initial condition such that Cov(x(t_1), x(t_2)) = G(|t_2 - t_1|) for any given t_1 and t_2. Asserting an equilibrium initial condition does not fulfil the requirement of an ergodic and stationary process for any t ≥ 0. Indeed, let us assume that the x_0 have an equilibrium distribution corresponding to the normal distribution N(0, ξ²) with variance [33,53,59]

ξ² = σ² Γ(2H + 1)/(2λ^{2H}). (30)

Here the underlying integral is calculated in appendix D. This result can also be obtained from the stationary solution of the process introduced below; integration by parts then yields the result (30). Observe that by substituting this variance of x_0 in equation (28) the covariance function would still depend on the absolute times t_1 and t_2. To provide a hint why this is the case, recall our earlier assumption on x_0.
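Two of the special-function statements above can be checked directly with scipy (our addition, not the paper's code): the Kummer identity used for the H = 1/2 reduction, and the reduction of the equilibrium variance (30) to σ²/(2λ) for H = 1/2:

```python
import numpy as np
from scipy.special import hyp1f1, gamma

# (i) Kummer identity quoted above: M(2, 3, x) = 2*(1 - e^x + x e^x)/x^2.
x = 1.7
print(hyp1f1(2, 3, x), 2 * (1 - np.exp(x) + x * np.exp(x)) / x**2)

# (ii) Equilibrium variance of the fractional OU process, eq. (30):
#      xi^2 = sigma^2 * Gamma(2H+1)/(2*lambda^(2H)); for H = 1/2 this
#      reduces to the normal OU result sigma^2/(2*lambda), eq. (13).
lam, sigma, H = 2.0, 1.3, 0.5
xi2 = sigma**2 * gamma(2 * H + 1) / (2 * lam ** (2 * H))
print(xi2, sigma**2 / (2 * lam))
```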
Our assumption that x_0 and B̃^H_t are not correlated yielded a covariance function which is not stationary for finite t_1 and t_2; it asymptotically approaches the stationary covariance function when t_1 and t_2 tend to infinity. Furthermore, observe that x_0 and B̃^H_t are correlated in the case of the fractional Ornstein-Uhlenbeck process, since the driving noise has a long-range memory. This is also reflected in the generalised MSD and TAMSD. Since the closed analytical expressions for the generalised MSD and TAMSD are too cumbersome to be presented here, we refer to appendix B and observe that the generalised expressions for MSD and TAMSD indeed differ from one another. As we show now, in the stationary state ergodicity is indeed fulfilled. To proceed, we note that the fractional Ornstein-Uhlenbeck process has the stationary solution [53]

x_s(t) = σ ∫_{-∞}^t e^{-λ(t-s)} dB̃^H_s, (33)

indicated by the subscript s. Note that to achieve this stationary solution the domain of t has been changed to t ∈ (-∞, ∞). For this case

x(t) = x_s(t) + [x_0 - x_s(0)] e^{-λt}, (34)

from which it is inferred that every stationary solution x_s(t) of the Langevin equation (21) has the same distribution as x(t) in the long-time limit. Consequently, we deduce the covariance function for the stationary solution, equation (35) (see appendix C for details). Obviously, the covariance function for the stationary solution depends only on the time difference between the two time points. With the use of equations (19) and (35) the generalised MSD and TAMSD are given by

Ω²_Δ(t) = ⟨δ²(Δ)⟩ = 2[G_s(0) - G_s(Δ)], (36)

where G_s denotes the stationary covariance (35). From this equivalence we conclude that the fractional Ornstein-Uhlenbeck process is ergodic in the sense of the generalised MSD. Figure 1 details the functional behaviour of the different MSDs. In the left panels for the non-stationary case, as expected, the disparity between the generic MSD (5) and the TAMSD (8) is distinct. In contrast, using the generalised MSD (18) for the stationary solution, the expected ergodic behaviour is restored. For completeness, figure 2 shows how the two different versions of the MSD and the TAMSD approach the plateau value for different values of H. As can be seen, for normal diffusion with H = 1/2 the relaxation is always exponential. In contrast, we recover a power-law relaxation for the TAMSD and for the generalised definition of the MSD. While this power-law form for the TAMSD was discussed earlier [62] and verified experimentally [23], the full agreement between the TAMSD and the generalised MSD is a distinct behaviour following from our definition (18) here.

Conclusions
It is commonly assumed that asserting an equilibrium initial condition is sufficient and necessary for a confined stochastic process to remain stationary at all times t ≥ 0. We here demonstrated that for the case of the fractional Ornstein-Uhlenbeck process this is in fact not true. Generally, for any process which is not a Markov process one should bear in mind that, due to long-range correlations, the assumption that the process is stationary requires one to take into account the entire history of the system. Therefore, asserting any assumption on the initial condition of the process would perturb the stationary state of the process, even in the case when this initial condition is the equilibrium distribution. Moreover, we revealed another subtle point on how to define the stationary state of the process based on the generalised definitions of the MSD and the TAMSD. It is often believed that a sufficient condition to infer that the process has reached its stationary state is given when, in the TAMSD, the observation time tends to infinity.
In this statement, though, it is neglected that Δ needs to be considered as well. Indeed, while the lag time should remain significantly below the observation time, Δ ≪ T, the lag time needs to be much larger than the natural dynamic time scale of the process, Δ ≫ 1/λ. The Ornstein-Uhlenbeck process and its fractional extension are essential in modelling physical systems in the presence of an external potential. They are Gaussian processes, with the difference that in the former case the correlations are short-lived (Markov process) while in the latter case the correlations are long-ranged. It was further demonstrated that the Ornstein-Uhlenbeck process is stationary for all t ≥ 0 if the equilibrium initial condition is asserted. In contrast, this does not hold true for the fractional Ornstein-Uhlenbeck process, due to the fact that the process is not Markovian. These results will also be important for the correct analysis of measured trajectories of generic processes driven by fractional Gaussian noise in terms of the TAMSD, for instance, under confinement [64]. Moreover, the finite-time ergodic properties of the normal Ornstein-Uhlenbeck process as studied in [65,66] should be considered in view of the generalised definitions of the MSD and TAMSD provided here.

Acknowledgments
We acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG), Grant Number ME 1535/7-1. RM acknowledges the Foundation for Polish Science (Fundacja na rzecz Nauki Polskiej, FNP) for support within an Alexander von Humboldt Honorary Polish Research Scholarship. We acknowledge the support of the German Research Foundation (DFG) and the Open Access Publication Fund of Potsdam University.

Appendix A. Covariance function of the non-stationary fractional Ornstein-Uhlenbeck process
To calculate the covariance function given by equation (27), two types of integrals need to be evaluated. The first type are the single integrals with respect to either t_1 or t_2. The other is the double integral with respect to t_1 and t_2. Throughout the integrations it is always assumed that t_2 > t_1 for simplicity. Whenever the difference between the two times is relevant, the result is written in terms of the modulus. The single integral with respect to t_1 can be evaluated in closed form in terms of Kummer's function; following the same procedure one arrives at a similar expression for the integral with respect to t_2. The second type of integral appearing in the covariance of the fractional Ornstein-Uhlenbeck process is the double integral, for which one arrives at the corresponding closed-form expression accordingly.

Appendix B. MSD and TAMSD of the non-stationary fractional Ornstein-Uhlenbeck process
After arriving at the covariance function for the non-stationary solution (28), the MSD of the fractional Ornstein-Uhlenbeck process is deduced from it via equation (6). Before calculating the TAMSD it is worthwhile checking that in the long-time limit the expression above coincides with the earlier equation (36). In the limit t → ∞ the expression in the last square brackets remains unchanged, while for the first and second square brackets it is only the second term which contributes a non-zero value, namely, (t + Δ)^{2H+1} M(2H + 1, 2H + 2, -λ(t + Δ)) and t^{2H+1} M(2H + 1, 2H + 2, -λt). Considering the latter in the aforementioned long-time limit,

t^{2H+1} M(2H + 1, 2H + 2, -λt) = (2H + 1) λ^{-(2H+1)} γ(2H + 1, λt) → Γ(2H + 2)/λ^{2H+1},

where γ(s, x) is the lower incomplete Gamma function. Therefore, in the limit t → ∞ one indeed consistently recovers the expression of the generalised MSD for the stationary solution of the fractional Ornstein-Uhlenbeck process, equation (36).
The complexity in the integration of the generalised MSD for the TAMSD is due to terms of the kind t^{2H+1} M(2H + 1, 2H + 2, -λt) and t^{2H+1} e^{-2λt} M(2H + 1, 2H + 2, λt). The integration of such terms can be achieved by inserting the integral representation (29) of Kummer's function and changing the order of the integration; the second type of integration can be performed along analogous steps. The final result for the TAMSD then follows in closed form. While the expressions for the TAMSD and the generalised MSD differ, both share the same asymptote in the long-time limit t, T → ∞. Moreover, the disparity between both is expected when the system has not yet reached stationarity.

Appendix C. Covariance of the stationary fractional Ornstein-Uhlenbeck process
For the derivation of the covariance (35) we calculate integrals analogous to those in appendix A. Following the same procedure one obtains the corresponding expression for the integral with the differential dt_1. The second type of integral appearing in the covariance function is of the form

∬ e^{λ(t_1+t_2)} (t_2 - t_1)^{2H} dt_1 dt_2.

The first two integrals are easily evaluated. In the last two integrals, by the change of variable t_1 - t_2 = q one arrives at two double integrals. The evaluation of the first double integral is straightforward and yields (e^{2λt_1}/(2λ^{2H+2})) Γ(2H + 1). Evaluating the second double integral requires changing the order of integration. For the last integral one arrives at the following,

-(e^{2λt_2}/(2λ(2H + 1))) (t_2 - t_1)^{2H+1} M(2H + 1, 2H + 2, λ(t_1 - t_2)) + (e^{2λt_1}/(2λ(2H + 1))) (t_2 - t_1)^{2H+1} M(2H + 1, 2H + 2, λ(t_2 - t_1)).

Summing up all the calculations, we obtain the stationary covariance function, equation (35).
5,961.2
2020-05-20T00:00:00.000
[ "Physics" ]
A CMA‐ES Algorithm Allowing for Random Parameters in Model Calibration
In geoscience and other fields, researchers use models as a simplified representation of reality. The models include processes that often rely on uncertain parameters, which reduce model performance in reflecting real-world processes. The problem is commonly addressed by adapting parameter values to reach a good match between model simulations and corresponding observations. Different optimization tools have been successfully applied to address this task of model calibration. However, seeking one best value for every single model parameter might not always be optimal. For example, if model equations integrate over multiple real-world processes which cannot be fully resolved, it might be preferable to consider associated model parameters as random parameters. In this paper, a random parameter is drawn from a wide probability distribution for every single model simulation. We developed an optimization approach that allows us to declare certain parameters random while optimizing those that are assumed to take fixed values. We designed a corresponding variant of the well-known Covariance Matrix Adaption Evolution Strategy (CMA-ES). The new algorithm was applied to a global biogeochemical circulation model to quantify the impact of zooplankton mortality on the underlying biogeochemistry. Compared to the deterministic CMA-ES, our new method converges to a solution that better suits the credible range of the corresponding random parameter with less computational effort.

Biogeochemical (BGC) model parameters are rarely known a priori; they are often only roughly constrained by laboratory or in-situ experiments, and integrate over many groups of organisms. Inverse model studies, which adjust parameters to fit the model to more easily observable quantities, such as nutrients or oxygen, can help to define credible intervals (e.g., Kane et al., 2011; Schartau et al., 2017). In practice, BGC parameters of global ocean models are typically adjusted manually in a given circulation, that is, the BGC parameter values are adjusted until simulated tracers yield a good match with observed data. Systematic calibrations of biogeochemical model parameters need to apply an automated optimization procedure in order to minimize an objective model-data misfit measure, but require a large number of model evaluations. However, the high computational demand of global BGC models impedes this approach, which is therefore the exception rather than the rule; moreover, parameters adjusted in a given circulation may not perform well in a different one, thereby requiring a re-calibration whenever the circulation changes (Kriest et al., 2020). The computational demand may be reduced by either applying computationally cheaper surrogate circulations (Khatiwala et al., 2005; Prieß et al., 2013), or by using efficient optimization algorithms. Several parameter optimization studies invoke gradient information of the misfit function to iteratively approach a locally optimal parameter vector from some initial estimate. The gradient information is mostly obtained by the computationally efficient adjoint method, which Lawson et al. (1995) introduced in the context of biogeochemical models. It is used to determine a direction of descent, for example, by a quasi-Newton method as in Friedrichs (2002), Spitz et al. (1998), Tjiputra et al. (2007), and Xiao and Friedrichs (2014), and a step size to change the parameter vector.
In the face of complex BGC ocean models (including discontinuous functions), the deployment of associated adjoint models is often difficult. Consequently, gradient-based methods are often unstable. Further, the convergence speed of gradient-based methods can suffer from poorly conditioned Hessian matrices, and finding good pre-conditioners can be difficult or time-consuming, nullifying a possible gain in convergence speed. Stochastic derivative-free search algorithms (e.g., used by Hurtt & Armstrong, 1996; Kidston et al., 2011; Kaufman et al., 2018) are known to be more robust. They allow for a thorough yet efficient scan of the parameter space and can also avoid getting stuck in a (first) local optimum (cf. Schartau & Oschlies, 2003; Vallino, 2000). This is of particular importance given the complex topography of the model-data misfit measure typical for complex BGC models (see, e.g., Faugeras et al., 2003; Hurtt & Armstrong, 1996). The covariance matrix adaption evolution strategy (CMA-ES, Hansen, 2006) is a stochastic search algorithm that is also applicable to poorly conditioned problems. It is popular for its competitiveness concerning effectivity and efficiency (cf. Hansen et al., 2010). Because of its advantages concerning robustness, efficiency and the balance between exploration and exploitation of the search space, Kriest et al. (2017) applied the CMA-ES algorithm in one of the first optimizations of a global ocean BGC model of intermediate complexity, in combination with a sufficient model spin-up time (3,000 years) to approach annual tracer equilibria. In this study, we modified the CMA-ES algorithm in order to enhance the concept of parameter adjustment, in view of uncertainties that arise from insufficiently resolved real-world processes. When applying parameter optimization, the decision about appropriate model complexity is often accompanied by the question which BGC parameter values should be tuned (i.e., changed during optimization in order to obtain a better model fit to observations), and which parameters should remain fixed at some reasonable, constant value during optimization. In order to save computational effort, it seems natural to exclude parameters from optimization that are of minor importance for the research question at hand and/or are unimportant with regard to the applied model-data misfit measure. Indeed, Kriest et al. (2017) observed that the most insensitive parameter subject to their optimization experiment converged the slowest. The same behavior was recently observed by Oliver et al. (2021), who applied an even faster (but less explorative) search algorithm to the same coupled BGC model setup, using twin experiment data. However, fixing some parameters from the start might impact the optimization of the other parameters. Indeed, the assumption that model parameters are constant at all may be pragmatic rather than realistic. Optimal constant parameters can result in unexpected model responses when the modeling purpose is changed. Our model processes are simplifications of reality. Therefore, a single model process might integrate over several relevant real-world processes, which may change in space and time. An example of unresolved processes is the impact of higher trophic levels on biogeochemical cycling. In general, global coupled BGC models resolve only the first two to three trophic levels (of plankton), with zooplankton mortality as upper closure term.
Here, zooplankton mortality parameterizes not solely the natural zooplankton mortality but in addition also the feeding pressure of higher trophic levels (HTL), such as fish, on zooplankton. Given that the spatial and temporal distribution of fish populations is variable, a constant mortality parameter of zooplankton (as currently applied in many BGC models) may not be well justified. Therefore, we target an optimization approach that allows us to declare some parameters to be random parameters while optimizing those that are assumed to take fixed values. In this paper, a random parameter is drawn from a probability distribution for every single model simulation (but stays constant within a model simulation). Solutions that are obtained this way might be preferable, since
1. they are optimal with regard to a large range of (uncertain) parameters, and thereby may render the model applicable in a wider range of contexts (e.g., when coupled to fish models imposing spatially varying feeding pressures on zooplankton), so as to minimize the deterioration of the performance of the biogeochemical model;
2. setting a parameter to one or another credible value may bias the optimization of the other parameters toward local optima, with consequences for the biogeochemical cycling (as, e.g., in Kriest et al., 2017), a risk that the random treatment avoids.
However, exploring the model-data misfit across a large range of randomly varying parameters would drastically increase the computational demand of the misfit evaluation. Indeed, global ocean BGC models are a paragon of this problem, as single model simulations already have a high computational demand. We provide an efficient way to deal with the given problem by introducing the new R-CMA-ES (a modification of the CMA-ES, where R stands for random) in this study. We will examine the effects of the proposed optimization procedure by applying it to the same global BGC model setup as used in Kriest et al. (2017) and Oliver et al. (2021). The paper is organized as follows. In Section 2 we briefly describe the CMA-ES algorithm as well as our modifications in order to efficiently solve problems with random parameters. While Section 2 is conceptual, the technical descriptions of the algorithms and their pseudo codes can be found in Appendix A and Appendix B, respectively. Further, in Appendix C we test our CMA-ES variant on a set of mathematical benchmark instances and selected random parameters in order to provide evidence that the algorithm serves the intended purpose. In Section 3 we introduce the global ocean biogeochemical model which we choose as our sample application, along with an RMSE-type model-data misfit measure which has been used by Kriest et al. (2017) to calibrate the model. Based on an analysis of the calibration experiments of that former study, we declare one parameter a random parameter and derive the associated integrated model-data misfit measure (the expected model-data misfit measure with regard to the random parameter), which we want to optimize here. In Section 4 the results of our BGC model calibrations with one random parameter are compared to the calibrations carried out by Kriest et al. (2017). We discuss the convergence behavior of both optimization approaches as well as scientific findings of the new calibration experiment, and close with some conclusions.
Methods
For our approaches to optimization throughout this paper, we distinguish three types of model parameters:
• fixed parameters: parameters that stay fixed at a single, scalar value throughout the optimization. These are not further considered in this paper.
• non-random parameters: parameters whose values are to be optimized and change through optimization. The parameter values are drawn from a probability distribution with changing standard deviation (see below, Section 2.1), and are hereafter denoted by p.
• random parameters: parameters whose values are drawn at random from a (wide) probability distribution. In contrast to non-random parameters, the standard deviation of the probability distribution is kept constant throughout the optimization. However, equivalent to non-random parameters, random parameters remain constant during each simulation; their value can only change for each new simulation. Random parameters are hereafter denoted as q and characterized in detail in Section 2.2.
A suitable division of the parameters into random and non-random parameters requires some preliminary considerations. For instance, all parameters that are relevant for the research question at hand can be analyzed with regard to their covariances and/or their impact on the model-data misfit function, using multiple model runs (e.g., the simulations of a parameter sensitivity analysis or a deterministic model calibration experiment). Based on this analysis, an appropriate parameter (set) can then be declared random.

The CMA-ES Algorithm
Here, we adapt the Covariance Matrix Adaption Evolution Strategy (CMA-ES) to allow for an efficient model calibration with random parameters. More precisely, we build upon the (μ/μ_w, λ)-CMA-ES (see Hansen, 2006; Hansen, 2016); from now on, we will simply write CMA-ES to refer to the (μ/μ_w, λ)-CMA-ES. We start with a brief recapitulation of the CMA-ES. A detailed description and the pseudo code (reduced by some subtleties) of the original algorithm can be found in Appendix A. The CMA-ES algorithm allows the optimization of n model parameters without the requirement to calculate model derivatives and the associated derivatives of a cost function with respect to the parameters. It is popular for its robustness and efficiency in solving optimization problems that are characterized by a difficult topography of a misfit function f: ℝⁿ → ℝ (cf., e.g., Hansen et al., 2010). The algorithm maintains a multi-variate normal distribution over the n-dimensional parameter space of p ∈ ℝⁿ. Similar to the definition of a uni-variate normal distribution N(m, σ²) by its mean m and its standard deviation σ, a multi-variate normal distribution N(m, C) is uniquely defined by a mean vector m and a positive definite covariance matrix C. Figure 1 illustrates the situation for the uni-variate and the bi-variate case. The algorithm's normal distribution is initialized such that it covers a sufficient part of the parameter space. The algorithm then iteratively samples a population, that is, a set of λ candidate solutions, from the normal distribution. From the λ samples the μ = ⌊λ/2⌋ best-ranked samples with respect to f are selected. The subset of selected samples is used to calculate an empirical normal distribution (i.e., an empirical mean vector and an empirical matrix of covariances), using rank-dependent weights w_i which (usually) sum up to 1. The suitable choice of λ, μ, w_i, and many other operational constants of the algorithm is mainly determined empirically (cf. Hansen, 2016), and details are given in Table A1 in Appendix A.
The normal distribution that has been estimated from the sampled parameter vectors is in turn used to update the algorithm's normal distribution toward a region with better misfit values. This procedure of sampling and updating the normal distribution is repeated until convergence (or for as many iterations as necessary/desired), that is, until f becomes sufficiently small and all parameter variances vanish. In Figure 1, the associated probability densities are indicated by the gray curve and mesh-grid, respectively; we also highlight the areas of one standard deviation, which is an interval on the x-axis (limited by the blue vertical lines) in the uni-variate case and an ellipse in the plane (blue ellipse) in the bi-variate case. In both cases, we realized some random samples (black dots) from the respective normal distribution. Figure 2 illustrates the convergence behavior of CMA-ES for a uni-variate misfit function (upper six panels) and a corresponding bi-variate misfit function (lower three panels), respectively.

Figure 2. Convergence examples of the CMA-ES algorithm for the uni-variate Griewank function (upper 6 panels) and the bi-variate Griewank function (lower 3 panels). In both cases we draw λ = 10 samples (indicated as dots) per iteration from the normal distribution. The better half of μ = 5 samples (black dots) is used to update the distribution. In the uni-variate case, the function is represented as a black curve and the normal distribution is indicated as a gray curve. In the bi-variate case, the function values are represented by the gray-scale color scheme with increasing values from dark to light shades. Here, like in Figure 1, blue ellipses denote the standard deviation of the normal distribution in the two-dimensional parameter space.

We see that the variance of the distribution can increase as long as the good samples show a wide spread (e.g., iteration 3 in the one-dimensional case) but decrease if the good samples are close together, like in the vicinity of the optimum (iterations 22 and 28 in the one-dimensional case). The general applicability of CMA-ES to large-scale, real-world problems has been shown by, for example, Kriest et al. (2017, 2020). These studies also benefited from additional features of CMA-ES that help to accelerate shifts of the distribution's mean into directions of descent and also support reliable updates of the distribution with a small population size λ. These features are given in more detail in Appendix A.
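To fix the ideas, the following strongly simplified Python sketch (our addition, not the authors' implementation) mimics the sample-select-update cycle described above on a toy quadratic misfit; it omits the step-size control, evolution paths, and rank-one updates of the full CMA-ES (Hansen, 2016):

```python
import numpy as np

# Toy (mu/mu_w, lambda)-style ES loop: sample a population from N(m, C),
# select the best mu samples, and re-estimate mean and covariance from them.
def es_minimize(f, m, C, n_iter=100, lam=10, seed=3):
    rng = np.random.default_rng(seed)
    n, mu = m.size, lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))   # rank-dependent weights
    w /= w.sum()                                          # weights sum up to 1
    for _ in range(n_iter):
        X = rng.multivariate_normal(m, C, size=lam)       # lambda candidate solutions
        X = X[np.argsort([f(x) for x in X])][:mu]         # best-ranked mu samples
        D = X - m                                         # deviations from old mean
        C = (w[:, None] * D).T @ D + 1e-12 * np.eye(n)    # empirical covariance
        m = w @ X                                         # empirical (weighted) mean
    return m

f = lambda x: np.sum((x - 1.0) ** 2)                      # toy misfit, optimum at (1, 1)
print(es_minimize(f, m=np.array([3.0, -2.0]), C=4.0 * np.eye(2)))
```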
The R-CMA-ES Algorithm
The CMA-ES provides a unique set of n optimal model parameters p* that minimize a given misfit function. However, as mentioned above and shown earlier (cf. Kriest et al., 2017), a given misfit function can be rather insensitive to some parameters, thereby slowing down the convergence and resulting in poorly defined optimal parameter values. Moreover, the parameters may represent a number of unresolved processes which are only vaguely defined; therefore, these parameters are associated with a large inherent uncertainty. We now aim to account for this uncertainty by introducing a random element into optimization, by extending and modifying CMA-ES as follows: Let n be the number of non-random parameters p ∈ ℝⁿ whose values are to be optimized by a given misfit function, in analogy to CMA-ES described in Section 2.1. We consider these parameters to represent processes that are well-understood and well-defined by the model, and that can be constrained well by the given misfit function. Let further r be the number of random parameters q ∈ ℝʳ that represent unresolved processes, and whose values cannot be constrained by the given misfit function. We may further assume without loss of generality an overall parameter vector (p; q), whose first n components consist of the non-random parameters and the following r of the random components: (p_1, ..., p_n, q_1, ..., q_r) ∈ ℝ^{n+r}. Our intention behind R-CMA-ES is to efficiently optimize p given randomly varying q. In other words, we seek a parameter vector p* that is optimal over the entire range of q, where the mean value over the range of q can be generalized by applying a probability density function pdf(q). Such an optimal solution would then be minimal with respect to F: ℝⁿ → ℝ, defined by

F(p) = ∫ f(p; q) pdf(q) dq, (1)

which is a weighted integral over a given misfit function f: ℝ^{n+r} → ℝ. For example, if we deal with only one random parameter q (r = 1) and allow q to take uniformly distributed values within a credible interval [a, b], then Equation 1 becomes

F(p) = (1/(b - a)) ∫_a^b f(p; q) dq.

When the evaluation of f is already computationally demanding, its computation over a wide range of q is prohibitive. Therefore, rather than applying the normal CMA-ES directly to F, we seek to design a modified CMA-ES that operates on the cheaper (point-based) misfit measure f(p; q), in order to converge to a parameter vector p* that minimizes F. The next two subsections describe our corresponding algorithmic modifications, including a final sentence of justification at the very end of each subsection.
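For contrast, a direct (naive) approach would approximate Equation 1 by Monte Carlo sampling of q for every candidate p, multiplying the number of model evaluations accordingly; this is the cost that R-CMA-ES is designed to avoid. A minimal sketch of such a direct estimator (our addition, with a toy misfit):

```python
import numpy as np

# Naive Monte Carlo estimate of the integrated misfit F(p) of Equation 1 for a
# uniformly distributed random parameter q in [a, b]. Every estimate of F costs
# n_samples evaluations of f, which is prohibitive for expensive models.
def integrated_misfit(f, p, a, b, n_samples=10000, seed=4):
    rng = np.random.default_rng(seed)
    q = rng.uniform(a, b, size=n_samples)
    return np.mean([f(p, qi) for qi in q])

f = lambda p, q: -p - q                            # toy misfit, see the next subsection
print(integrated_misfit(f, p=0.3, a=0.0, b=1.0))   # ~ F(p) = -p - 1/2 = -0.8
```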
Distribution Handling for Random Parameters
The basic idea is to modify the update procedure of the maintained multi-variate normal distribution of parameters such that it gets updated only for the non-random parameters but retains its initial distribution property with regard to the random parameters. More precisely, we desire that the standard deviation interval of the marginal distribution w.r.t. the random parameter always remains the entire credible interval of that parameter. Figure 3 sketches the intended behavior for a two-dimensional test function (Himmelblau's function): the top row shows the convergence of the CMA-ES algorithm; the bottom row illustrates the corresponding behavior of R-CMA-ES if the first parameter is random and the second parameter is optimized. In the example, the CMA-ES algorithm converges to a point (parameter vector p*) that minimizes Himmelblau's function, that is, the mean of the normal distribution approaches the optimal point while all variances approach zero. In the modified algorithm, the normal distribution is supposed to "converge to a line," leaving the variance of the first (random) parameter unchanged while forcing the variance of the second parameter to approach zero. The intended adaption with regard to the normal distribution over random and non-random parameters is justified by the following fact: if the normal distribution converged in the sense that the variances of all non-random parameters approached zero, then the expected f((p; q))-value of any sample (p; q) is exactly the integrated value, F(p). However, in order to ensure that R-CMA-ES does actually converge to a parameter vector p that yields a small integrated misfit value F(p), we also have to adapt the sampling and selection procedure of the algorithm.

Modification of the Sampling and Selection Procedure
In an iteration of the CMA-ES, the good μ samples that are used to update the probability distribution are selected with regard to an f-value ranking. The same kind of selection is, however, not preferable if we want to update the normal distribution toward a solution with a small value of the integrated misfit function F defined by Equation 1. As an example, which is depicted graphically in Figure 4, we consider the two-dimensional linear function f(p, q) = -p - q on the feasible domain A = [0, 1]² and declare q to be random. The misfit function f(p, q) takes its minimum in the upper right corner (1, 1)ᵀ of A, and the integrated misfit function F, defined by

F(p) = ∫_0^1 f(p, q) dq = -p - 1/2,

is minimized for p = 1, that is, on the right boundary of A. With the selection criterion that is based on the ranking according to the f-values, the μ selected samples can often occur quite balanced on both sides of the current distribution mean, implying too small distribution updates toward the right boundary of A, where the actual minimum (line) of A would be found. At the same time, the selected samples might not cover the full credible interval of the random parameter, as f pushes the selection toward its minimum in the upper right corner. Using this kind of selection (sketched in Panel (a) of Figure 4) for the new algorithm, it will likely not converge to the desired minimum line of F (cf. Figure 4). We therefore prescribe the values of the random parameter to stay independently normally distributed with regard to the subset of μ selected samples (like it is the case with regard to all λ samples). One possibility to attain the desired independency of selected parameter vectors is mirrored sampling and pairwise selection. Brockhoff et al. (2010) introduced the idea of mirrored sampling as a derandomization technique which can improve the convergence speed of evolution strategies. Auger et al. (2011) combined mirrored sampling with pairwise selection in order to avoid premature convergence caused by canceling effects on cumulative step-size adaptations. Recently, Wang et al. (2019) proposed mirrored orthogonal sampling, which further improved the convergence behavior of CMA-ES. We adapt the idea of mirrored sampling to our situation and generate pairs of samples that are mirrored at the axis (hyperplane) of the random parameter(s). From each pair of mirrored samples we select the one with the smaller f-value. An example of this kind of mirrored sampling is sketched in panel (c) of Figure 4; there, the combined sampling and selection procedure forces the distribution to approach the right boundary of the search space as desired (Figure 4, panel (d)). To summarize, applying the adapted mirrored sampling to the CMA-ES, in conjunction with the afore-described distribution handling for random parameters, supports the convergence to a parameter vector p* that actually minimizes F instead of f.

Figure 4. Example with the linear misfit f(p, q) = -p - q, where p is a non-random parameter to be optimized and q is a random parameter. The gray-scale color scheme represents the function values f(p, q) with increasing values from dark to light shades. We draw λ = 20 samples per iteration (shown as dots) and select μ = 10 samples (black dots) for the distribution update. Blue ellipses denote the standard deviation of the normal distribution. We also show the principal axis of the ellipses that corresponds to the random parameter. Panel (a) shows iteration three of an optimization using independent samples that are ranked and selected with regard to their f-values.
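The following toy sketch (our addition) illustrates one way to realize the mirrored, pairwise selection on the linear example: each pair shares the same value of the random parameter q and is mirrored in the non-random parameter p at the distribution mean, so that selection pressure acts on p while the selected q-values keep their unbiased normal spread:

```python
import numpy as np

# Mirrored sampling with pairwise selection for one non-random parameter p and
# one random parameter q; all numerical values are illustrative.
rng = np.random.default_rng(5)
f = lambda p, q: -p - q
mp, mq, lam = 0.5, 0.5, 20          # current distribution means, population size

selected = []
for _ in range(lam // 2):
    p = mp + 0.2 * rng.standard_normal()
    q = mq + 0.5 * rng.standard_normal()
    pair = [(p, q), (2 * mp - p, q)]                 # mirrored in p, identical q
    selected.append(min(pair, key=lambda s: f(*s)))  # pairwise selection

# The mean of the selected p-values is pushed toward the boundary p = 1,
# while the selected q-values remain centred around mq.
print(np.mean([s[0] for s in selected]), np.mean([s[1] for s in selected]))
```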
Boundary Handling
Biogeochemical model parameters are usually confined to a reasonable range; for example, negative growth or mortality rates are biologically meaningless. Therefore, when optimizing these parameters, usually credible boundaries are defined. Yet, during the selection of discrete parameter values from the assumed distribution, the algorithm might select parameters slightly outside this range. Similar to Kriest et al. (2017), we exclude these values by adding a penalty term to the misfit. For the current study we apply a simpler boundary handling, which can be applied, without the need of further adaptions, to the case with random parameters. The procedure is as follows: We assume that every parameter vector p ∈ ℝⁿ must be contained in the set

A = [a_1, b_1] × ... × [a_n, b_n],

where each interval [a_i, b_i] is the credible interval of parameter p_i. We consider these constraints by adding to the objective function a penalty term which depends on the distance of p to its own best approximation ϕ(p) in A. The best approximation ϕ(p) ∈ A of p is the vector in A which has (amongst all vectors in A) the smallest distance to p. It can be calculated component-wise by setting (ϕ(p))_i = min(b_i, max(a_i, p_i)) for all i = 1, ..., n. Thus, for p ∈ A we have ϕ(p) = p, and for p ∉ A the best approximation ϕ(p) lies on the boundary of A. The re-definition of f with regard to the penalty term is given by

f̃(p) = f(ϕ(p)) + c ‖p - ϕ(p)‖,

where c is a (large) constant. We apply this kind of boundary handling with both the classical CMA-ES and R-CMA-ES. Since we deal with boundary constraints of the form p_i ∈ [a_i, b_i], we can restrict CMA-ES to operate on the unit cube [0, 1]ⁿ. In this case, we obtain a parameter vector p ∈ A (with A as defined above) from a sample x ∈ [0, 1]ⁿ by scaling and shifting every component, p_i = a_i + x_i(b_i - a_i). We tested R-CMA-ES on a set of mathematical benchmark functions. The results of our test-bed confirmed the intended behavior of the algorithm. Our tests are summarized in Appendix C. They encouraged us to use R-CMA-ES in a real-world application, and to compare its effects on algorithm performance, optimal model parameters and potential effects on biogeochemical turnover.
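A compact sketch of this boundary handling (our addition; the bounds and the quadratic form of the distance penalty are illustrative choices):

```python
import numpy as np

# Boundary handling: scale unit-cube samples into the credible box A, project
# onto the best approximation phi(p), and penalise the distance to it.
a = np.array([1.6, 0.1])             # illustrative lower bounds
b = np.array([4.8, 2.0])             # illustrative upper bounds
c = 1e6                              # large penalty constant

def scale(x):                        # sample x in [0, 1]^n -> parameter p
    return a + x * (b - a)

def phi(p):                          # component-wise best approximation in A
    return np.minimum(b, np.maximum(a, p))

def f_penalised(f, p):
    p_in = phi(p)
    return f(p_in) + c * np.sum((p - p_in) ** 2)

f = lambda p: np.sum(p ** 2)         # toy misfit
p = scale(np.array([0.5, 1.2]))      # second component maps outside A
print(f_penalised(f, p))
```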
The Global Ocean Biogeochemical Model Setup
Kriest et al. (2017) applied the CMA-ES to optimize six parameters of the BGC Model of Oceanic Pelagic Stoichiometry (MOPS; Kriest & Oschlies, 2015). The approach was facilitated by the Transport Matrix Method (TMM; Khatiwala, 2007, 2018; Khatiwala et al., 2005) as a numerically efficient tool to represent the global ocean circulation. The TMM represents advection and mixing in terms of transport matrices which are precomputed from an online ocean circulation model simulation. Here, as in Kriest et al. (2017) and also in Oliver et al. (2021), we apply monthly mean transport matrices from a 2.8° global configuration of the MIT general circulation model (MITgcm), having 15 depth levels (Marshall et al., 1997). MOPS coupled to the TMM simulates globally the concentrations and biogeochemical turnover of seven tracer components, namely phyto- and zooplankton, dissolved and particulate organic matter, phosphate, nitrate and oxygen. To showcase the impact and performance of R-CMA-ES, we consider the same model setup as in Kriest et al. (2017), and optimize biogeochemical model parameters against a misfit function that includes global observations of nutrients and oxygen. Each biogeochemical model setup is simulated for 3,000 years, after which the model tracers approach a steady annual cycle.

The Misfit Function f_rmse
The misfit function considered in Kriest et al. (2017) is a weighted root mean squared error between simulated annual mean tracer concentrations of oxygen, phosphate, and nitrate and their observed equivalents after being mapped to the model grid with N = 52,749 ocean grid boxes. For each grid box and each tracer the corresponding misfit term is weighted with regard to both the volume V_i of the grid box divided by the total ocean volume V_T, and the global average observed tracer concentration ō_j (j = 1 for phosphate, j = 2 for nitrate, and j = 3 for oxygen). Denoting the model output for a given parameter vector as m_{i,j} and the corresponding observation as o_{i,j}, i = 1, ..., N, j = 1, 2, 3, the misfit function is

f_rmse = Σ_{j=1}^{3} (1/ō_j) [ Σ_{i=1}^{N} (V_i/V_T)(m_{i,j} - o_{i,j})² ]^{1/2}. (2)

Zooplankton Mortality as Random Parameter
We here diverge from the setup by Kriest et al. (2017) and apply R-CMA-ES to consider zooplankton mortality κ_Zoo (see Kriest & Oschlies, 2015, equations 9-11) as a random parameter. In particular, we aim to optimize the remaining five parameters (the light and nutrient affinity of phytoplankton, the maximum zooplankton grazing rate, the oxygen demand of remineralization, and the exponent describing the particle flux curve) such that the model shows a good fit to the misfit function f_rmse of Equation 2 over a wide range of zooplankton mortalities. The choice of this parameter can be justified from a modeling point of view. As noted above, zooplankton mortality does not only represent the natural mortality but also the predation by higher trophic levels, which is likely to vary because, for example, of different fish populations preying upon zooplankton. Further, the model assumes a single zooplankton functionality, while in nature many different zooplankton organisms contribute, with probably different mortalities. Moreover, the parameter optimization experiment OBS-NARR by Kriest et al. (2017) identified κ_Zoo to be the most insensitive among the six optimized parameters. Table 1 depicts the optimal parameter values of experiment OBS-NARR when only simulations which deviate from the best fit f_rmse(p*) by less than 5% (according to (f_rmse(p)/f_rmse(p*) - 1)·100) are included. Parameter κ_Zoo has the widest range of variability ([0.772, 5.315]), exceeding both boundaries of the credible interval [1.6, 4.8] used by Kriest et al. (2017); relative to the credible interval, the range of variability is 142%. Further, together with the half-saturation constant for PO_4 uptake, κ_Zoo was the only parameter that showed a strong and long-lasting trend during the optimization experiment OBS-NARR, while the other four parameters approached their optimum value much earlier (cf. Figure 5). Therefore, we decided to optimize the other five parameters considered by Kriest et al. (2017) such that the expected model-data misfit with regard to κ_Zoo ∈ [1.6, 4.8] is minimized. More precisely, we aim to obtain five optimal parameters

p* = argmin_p F_rmse(p), with F_rmse(p) = ∫ f_rmse(p; κ_Zoo) pdf(κ_Zoo) dκ_Zoo.

In this contribution we assume normally distributed parameter values and define κ_Zoo to be N(3.2, 1.6)-distributed, that is, normally distributed with mean 3.2 (the middle of the credible interval) and standard deviation 1.6 (half the interval length). Thus, we compare two model calibration experiments. The first one is the reference experiment OBS-NARR by Kriest et al. (2017). Our new calibration experiment is referred to as OBS-RAND.
Experiment OBS-RAND deviates from OBS-NARR by defining κ_Zoo to be random and 𝒩(3.2, 1.6) distributed, aiming to minimize the integrated misfit function F_rmse instead of the parameter point-based misfit function f_rmse. The model configuration and the computing facility are the same for both experiments.

Results and Discussion

We note that both model calibration approaches have a common set of five non-random parameters to optimize. In order to compare their performance, following optimization we carried out 100 model simulations with the five optimal parameters over the range of zooplankton mortality rates. We then selected the minimum value of Equation 2 as representative for f_rmse, and the average over all 100 simulations as representative for F_rmse. Note that for OBS-NARR, the former was the target of optimization with CMA-ES, whereas for OBS-RAND F_rmse (the integrated misfit) was the implied target. Table 2 shows the results of this analysis, together with the optimal parameter sets. The optimized model-data misfit values of experiment OBS-RAND are quite close to the optimal model-data misfit value of experiment OBS-NARR. Indeed, the optimization with R-CMA-ES (experiment OBS-RAND) improved the model-data misfit averaged over the entire range of zooplankton mortalities from 0.4566 to 0.4537 (see Table 2), while the best misfit of 0.4499 for a single (six-)parameter vector had been found by CMA-ES in the reference experiment, OBS-NARR by Kriest et al. (2017). Compared to experiment OBS-NARR by Kriest et al. (2017), our global ocean biogeochemical model calibration with a random quadratic zooplankton mortality parameter κ_Zoo hardly affects parameters related to long-term and/or large-scale model processes which are affected by circulation and latitude, such as I_C, R_{−O₂:P}, and the sinking speed parameter b*. On the other hand, the half-saturation constant for nutrient uptake K_Phy is not pushed to the upper bound of its credible interval any more, but is closer to its center. Also, the maximum grazing rate μ_Zoo of zooplankton drops by 10%.

(Table 2: Optimized Parameters and Model-Data Misfit Values of Experiment OBS-NARR and Our New Experiment With Random Zooplankton Mortality Parameter, OBS-RAND. Note: For CMA-ES, the f_rmse-value is the misfit value of the algorithm's solution. For R-CMA-ES, we state the empirically best f_rmse-value, which is obtained a posteriori by fixing the non-random parameters of the solution vector but choosing 100 different κ_Zoo-values such that each value covers an area of probability 1/100. The same 100 κ_Zoo-values were also used to approximate the F_rmse-values of both algorithms' solution vectors.)

A detailed view of the optimization trajectory is provided in Figure 5, which shows the convergence of the six free model parameters obtained by the reference experiment OBS-NARR, while Figure 6 shows the corresponding result of experiment OBS-RAND. Figure 7 illustrates the exploration of the parameter space by R-CMA-ES, as well as its convergence toward the five optimal parameters of p*. Further, we find that R-CMA-ES converged faster than CMA-ES, as indicated by the lower number of iterations required for convergence (cf. Table 2). Both of these responses might be a result of the modifications in the optimization procedure, as R-CMA-ES optimizes the expected misfit with regard to the most misfit-insensitive parameter.
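The a posteriori evaluation described in the note to Table 2 can be sketched as follows: 100 κ_Zoo-values are chosen as midpoint quantiles of 𝒩(3.2, 1.6), each covering an area of probability 1/100, and f_rmse is averaged over them to approximate F_rmse. The function run_model is a hypothetical stand-in for a full 3,000-year MOPS/TMM simulation returning the f_rmse value of Equation 2.

    import numpy as np
    from scipy.stats import norm

    def kappa_quantile_values(mean=3.2, sd=1.6, n=100):
        # n kappa_Zoo values at the midpoint quantiles of N(3.2, 1.6),
        # each covering an area of probability 1/n.
        probs = (np.arange(n) + 0.5) / n
        return norm.ppf(probs, loc=mean, scale=sd)

    def approximate_F_rmse(run_model, p_fixed, kappas):
        # Average the point misfit over the kappa samples (hypothetical
        # run_model; in practice each call is a full ocean simulation).
        return np.mean([run_model(p_fixed, k) for k in kappas])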
Figure 8 shows the model-data misfit measure, f_rmse, for 100 parameter sets with varying κ_Zoo, given the optimal vector of the non-random parameters p. In OBS-NARR the minimum misfit is obtained at the corresponding κ_Zoo-value of 4.57 (blue dot). By contrast, by minimizing the expected misfit F_rmse(p) using R-CMA-ES, we obtain a lower minimum misfit over a wider range of κ_Zoo. When partitioning f_rmse with respect to its three components, namely the misfit to observed oxygen, phosphate and nitrate (Figure 9), we see that the improvement is mainly caused by a uniformly better RMSE of oxygen, while the RMSE of nitrate and, to a smaller extent, the RMSE of phosphate increased. Because the oxygen misfit comprises about 45% and both nutrients together about 55% of the total misfit (cf. the right panel of Figure 9), the increased RMSEs of both nutrients could not compensate for the better agreement with oxygen. Examining the bias of the three tracers, we find that the oxygen bias improves by 4 to 8 mmol m⁻³ for all κ_Zoo-values in the credible interval [1.6, 4.8] in experiment OBS-RAND (Figure 10, red lines in lower right panel) compared to the values of the reference experiment OBS-NARR (black lines). Averaging over κ_Zoo in [1.6, 4.8], the oxygen bias improved from +10.1 mmol m⁻³ to +5.6 mmol m⁻³; that is, we have a smaller overestimation of the observed global oxygen content (172.9 mmol m⁻³), which is also recognizable in the spatially resolved, depth-integrated oxygen bias of Figure 11. Concerning the nutrient bias, there is almost no change for nitrate and phosphate between experiments OBS-RAND and OBS-NARR (also cf. Figure 11); that is, their higher RMSEs appear to be caused by larger pattern errors alone.

Increasing Zooplankton Mortality Decreases Global Oxygen Inventory

To understand the reasons for the improved oxygen misfit across the range of zooplankton mortalities, we have to investigate the role of those two parameters (μ_Zoo and K_Phy) that changed significantly between the two optimization approaches. We therefore set up sensitivity simulations applying the optimal values for μ_Zoo (hereafter dubbed MU) and K_Phy (KP) derived by the two optimization procedures in different combinations (see Table 3). As for OBS-NARR (hereafter called NA) and OBS-RAND (RA), for each sensitivity setup we ran a set of 100 simulations for different κ_Zoo values. By doing so, we hope to disentangle the effects of μ_Zoo, K_Phy and κ_Zoo on oxygen and biogeochemical cycling. Figure 10 shows the change of the most important tracer concentrations and fluxes with regard to the four scenarios. We find considerable effects of the mortality rate on plankton concentrations. Increasing zooplankton mortality leads to a strong decline in zooplankton concentrations, less grazing on phytoplankton and thus an increase in phytoplankton concentration. Because phytoplankton mortality is a linear function of its concentration, it increases with increasing zooplankton mortality. Egestion decreases in line with zooplankton concentration (and grazing; not shown here). The decline in zooplankton concentrations is, however, not strong enough to counteract the increase in its mortality rate, leading to an increase in the mortality flux. Zooplankton mortality contributes most to the production of sinking detritus, followed by phytoplankton mortality and egestion.
The antagonistic responses of detritus production through egestion and the respective mortality fluxes cause a net increase in export production by about 8% for scenario RA. Larger export of organic matter to deep waters, where it is respired and thus consumes oxygen, in turn causes a decline in global average oxygen by about 8% (16.6 Pmol in terms of global inventory, or 14.2 mmol m⁻³ global average oxygen) for scenario RA after 3,000 years of simulation. The effects of changes in mortality rate are similar for all four model setups that apply different combinations of μ_Zoo and K_Phy. Changes in these two parameters affect the oxygen inventory only by about 1%-2%, but we note that here we investigate a smaller range of variation in the parameters. A smaller K_Phy (higher nutrient sensitivity) as in MU causes a more efficient nutrient uptake and larger growth of phytoplankton; yet, this is not reflected in its concentration or mortality, but propagated into zooplankton grazing (not shown), egestion and mortality. The resulting increase in export production in turn causes a decline in oxygen; thus, for MU the decrease in oxygen is caused through zooplankton egestion and mortality. A smaller zooplankton grazing rate in KP causes a decline in zooplankton concentration, egestion and mortality. However, it also relieves the grazing pressure on phytoplankton, thereby enhancing its concentration and mortality. The resulting increase in export production, which decreases the average oxygen concentration, is thus caused by phytoplankton mortality. Setup RA combines both parameter changes (small K_Phy and μ_Zoo), which eventually add up to the highest export production and lowest global average oxygen concentration. Thus, the side effects of introducing a random zooplankton mortality in OBS-RAND lead, to some extent, to a modification in the model's biogeochemical cycling, which eventually results in a global oxygen inventory that is about 6 mmol m⁻³ (about 3%) lower than in OBS-NARR, which relied on a single, fixed value of zooplankton mortality. Nevertheless, the effects of zooplankton mortality on global oxygen content are about two to three times as large.

Conclusions

We introduced R-CMA-ES as a new variant of the Covariance Matrix Adaptation Evolution Strategy. R-CMA-ES allows us to declare one or more parameters to be random; that is, the algorithm seeks to adjust only the other (non-random) parameters in order to minimize a new misfit function, which is the expectation of the former misfit function, integrating over all values of the random parameters. Such a calibration can be more reasonable than searching for single optimal parameter values only. Examples are situations where (e.g., due to model simplifications) it is not clear which natural processes are actually covered by a certain parameter and to what extent. Tests with mathematical benchmark functions confirm an efficient convergence behavior of R-CMA-ES as compared to its deterministic counterpart. We applied R-CMA-ES to a global ocean biogeochemical model setup, which inspired us to develop the algorithm. The model was considered in a former optimization study by Kriest et al. (2017), who optimized six BGC parameters and observed that two of the parameters (the quadratic loss rate of zooplankton and the phytoplankton half-saturation constant for PO₄) showed a long-lasting drift during optimization.
The quadratic loss of zooplankton parameter has high uncertainty because it mimics many different processes (e.g., cannibalism within the highly aggregated zooplankton compartment; predation by fish and higher trophic levels; density-dependent population control through viral infection), as well as many different zooplankton species. Therefore, this parameter is an ideal candidate to declare random during optimization. Allowing the quadratic loss of zooplankton to vary randomly over a credible interval, R-CMA-ES converged faster than CMA-ES did in the reference experiment of the former study. Moreover, the optimization now also reflects the potential spatio-temporal variability of this parameter, for example, due to higher trophic levels such as fish, which might be of relevance when a BGC model is coupled to a model of higher trophic levels (Getzlaff & Oschlies, 2017; Hill-Cruz et al., 2022). Furthermore, while optimization OBS-NARR seems to cause a bias of the half-saturation constant of phytoplankton for phosphate toward its upper limit, the same parameter is more in the center of its credible interval when we declare the quadratic zooplankton loss random. Another significant change is observed for the zooplankton maximum grazing rate. Our model results suggest that after 3,000 years of simulation with climatological forcing, the uncertainty in zooplankton mortality causes a variation of the oxygen inventory by 8% (16.6 Pmol, or 14.2 mmol m⁻³ on average). Smaller changes in the maximum zooplankton grazing rate or the nutrient affinity of phytoplankton have a smaller effect on biogeochemical fluxes and the long-term global oxygen inventory, but suffice to improve the applied model-data misfit measure over a wide range of zooplankton mortalities. The change in oxygen induced by mortality is of the same order of magnitude as changes induced by anthropogenic climate change (Oschlies, 2021), but it occurs over a considerably longer time span. Also, the oxygen variation across the spectrum of mortality rates is as large as the deviation of many global Earth system models from observations (e.g., Bopp et al., 2013). Our optimizations have shown that it is difficult, if not impossible, to constrain zooplankton mortality with a misfit function that targets the RMSE of dissolved inorganic tracers, which is a common practice when tuning global biogeochemical ocean models. A more flexible tuning strategy as presented here could potentially help to account for this large uncertainty, and may also help to provide a sound and reliable upper closure term and interface for biogeochemical models coupled to higher trophic level (HTL) models. In this study we used an ocean biogeochemical model to showcase the potential advantages of R-CMA-ES for carrying out model calibrations in view of some uncertain model parameters. Of course, there are many more fields that face parameter uncertainty, and where a model calibration with random parameters could be useful, such as the personalization of cardiac models (Elshall et al., 2015; Lykkegaard et al., 2021) or morphodynamic models of a curved channel (Shoarinezhad et al., 2020). In general, a suitable partition of the model parameters into random and non-random parameters needs some problem-dependent pre-considerations. For example, all parameters of interest can be analyzed with respect to
their covariances and their impact on the model-data misfit function, using multiple model runs (e.g., the simulations of a parameter sensitivity analysis or of a deterministic model calibration experiment). Depending on the research question and the observed misfit sensitivities and covariances of the model parameters, a suitable parameter (set) can be declared random during an optimization, which calibrates the model in the face of (random) parameter uncertainty.

Appendix A: Details of the Classical CMA-ES

As illustrated in Section 2.1, CMA-ES iteratively samples a population of λ candidate solutions from a multi-variate normal distribution 𝒩(m, s²C), defined by a mean vector m, a positive definite covariance matrix C and an overall scaling factor s. A new normal distribution is empirically re-estimated from the better half of μ = ⌊λ/2⌋ samples, and the new probability distribution is used for a smooth update of the former distribution, which in turn is sampled in the next iteration.

A1. Sampling the Normal Distribution

Sampling a parameter vector p ∈ ℝⁿ from a multi-variate normal distribution 𝒩(m, s²C) is practically realized by choosing n independent samples from the uni-variate standard normal distribution 𝒩(0, 1) (e.g., using the Box-Muller transform) to be the components of a vector z ∈ ℝⁿ and defining

p = m + sBDz,   (A1)

where BD²Bᵀ is an eigendecomposition of C; that is, the columns B⁽ⁱ⁾ of B are orthogonal eigenvectors of C, and D² is a diagonal matrix of corresponding eigenvalues. Geometrically (cf. Hansen, 2016), B and D can be identified with the so-called standard deviation ellipsoid, which is a surface of equal probability density of the normal distribution (see, e.g., the ellipses in Figures 3, 4 and 7). The orientations of the n principal axes of the standard deviation ellipsoid are given by the eigenvectors B⁽ⁱ⁾ of C, and the lengths of its principal axes are given by the roots of the corresponding real and positive eigenvalues D_{i,i}. As in the uni-variate case, the probability that a sample lies within a given area of the search space is obtained by integrating the probability density function (pdf) over that area. For a multi-variate normal distribution 𝒩(m, C), the corresponding probability density function is given as

pdf(x) = (2π)^{−n/2} det(C)^{−1/2} exp( −½ (x − m)ᵀ C⁻¹ (x − m) ).

A2. Updating the Distribution: Basic Principle

Given any set S = {p⁽¹⁾, …, p⁽λ⁾} of λ samples, empirical (re)estimates m_emp and C_emp of the distribution parameters can be calculated such that the expectation of m_emp is m and the expectation of C_emp is C. Clearly, the estimates become more reliable the larger λ is. We may assume that the population S is increasingly ordered (ranked) with respect to the considered objective function f: ℝⁿ → ℝ, that is,

f(p⁽¹⁾) ≤ f(p⁽²⁾) ≤ ⋯ ≤ f(p⁽λ⁾).

Now, by involving only the better half of μ = ⌊λ/2⌋ samples, the distribution estimate 𝒩(m_μ, C_μ) with corresponding parameters m_μ and C_μ will be modified to reproduce these μ samples with higher probability than the other λ − μ samples. CMA-ES uses weights w₁ ≥ w₂ ≥ ⋯ ≥ w_μ with Σ_{i=1}^{μ} wᵢ = 1 to give solutions a rank-dependent weight in the updating process of both m_μ and C_μ (a more general version allows involving all solutions, applying negative weights for the poor ranks). The new mean is thus calculated as

m_μ = Σ_{i=1}^{μ} wᵢ p⁽ⁱ⁾.

A subtlety is the choice of the reference mean value used for estimating C_μ. Instead of the new empirical mean m_μ, the mean m of the former distribution is chosen and yields

C_μ = Σ_{i=1}^{μ} wᵢ y⁽ⁱ⁾(y⁽ⁱ⁾)ᵀ, with y⁽ⁱ⁾ = (p⁽ⁱ⁾ − m)/s.

This has the effect that the new distribution is elongated into directions of descent (cf., e.g., iteration 2 in the right example of Figure 2).
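As a minimal Python sketch of the sampling step of Equation A1, assuming only the notation introduced above:

    import numpy as np

    def sample_population(m, C, s, lam, rng):
        # Draw lam candidates from N(m, s^2 C) via C = B D^2 B^T:
        # z ~ N(0, I), y = B D z, p = m + s y.
        eigvals, B = np.linalg.eigh(C)            # columns of B: eigenvectors of C
        D = np.sqrt(np.maximum(eigvals, 0.0))     # principal-axis lengths
        Z = rng.standard_normal((lam, len(m)))    # rows: z ~ N(0, I)
        Y = (Z * D) @ B.T                         # rows: y = B D z
        return m + s * Y                          # rows: candidate vectors p

    # Example: P = sample_population(np.zeros(6), np.eye(6), 0.3, 10,
    #                                np.random.default_rng(1))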
A3. Updating the Distribution: Working With Small Populations

As mentioned above, reliable distribution estimates require a sufficiently large number of samples. But for a competitive computational performance, CMA-ES should get along with a rather small number of samples. Therefore the information of former populations is involved by updating the covariance matrix C to be a (convex) combination of both the current C and its estimate C_μ, that is,

C ← (1 − c_μ)C + c_μ C_μ.   (A2)

Using this formula with c_μ as in Table A1, it can be shown that 37% of the current matrix C's information dates back at least ⌊1/c_μ⌋ generations; that is, the choice of the smoothing factor c_μ decides about the backward time horizon of the update procedure. Another feature that facilitates small population sizes λ is to calculate and update a vector p_c that represents iteration-averaged changes of the distribution mean, and to use p_c for a so-called rank-one estimate C₁ = p_c p_cᵀ of the covariance matrix. The idea behind this approach is that, using C_μ, distribution elongations into directions of descent do not distinguish the sign of the directions. The use of the vector p_c (called evolution path) mitigates this effect: consecutive changes of the distribution mean into opposite directions would cancel each other out. Similar to the smoothing with factor c_μ in the update of C in Equation A2, the update of p_c is done with a smoothing factor c_c. With a further smoothing factor c₁ for the rank-one estimate C₁, the combined covariance matrix update reads

C ← (1 − c_μ − c₁)C + c_μ C_μ + c₁ C₁.

While C_μ efficiently involves information from the current population in the update process, C₁ exploits correlations between generations. The former is important in large populations, the latter is particularly important in small populations.

A4. Step Size Control

Finally, there is an additional explicit adaptation of the overall scale (the step size) of the distribution by adapting a scaling factor σ, actually using 𝒩(m, σ²C) instead of 𝒩(m, C). Similar to the evolution path p_c for the rank-one covariance matrix estimates above, the adaptation of the scale σ involves an evolution path p_σ that mirrors cumulative changes of the mean. The difference between the update formulas of the two evolution paths p_σ and p_c is that for p_σ all step sizes are re-scaled with respect to the isotropic normal distribution 𝒩(0, I), where I is the identity matrix and 0 is the zero vector. The expected step size between the mean vectors of two consecutive iterations is therefore the expected length χ of a sample of 𝒩(0, I), which is

χ = E‖𝒩(0, I)‖ ≈ √n (1 − 1/(4n) + 1/(21n²)).

An evolution path p_σ longer than χ indicates consecutive distribution drifts into correlated directions, which justifies a larger overall scale of the distribution.

A5. Operational Constants and the Pseudo Code

The classical CMA-ES is (reduced by some subtleties) outlined as Algorithm 1 (cf. Hansen, 2016):

Algorithm 1 (classical CMA-ES, outline):
    Set p_σ = p_c = 0, C = B = D = I and σ = σ₀
    while stopping criterion is not met do
        Sample the probability distribution:
            for k = 1, …, λ do
                sample z⁽ᵏ⁾ from 𝒩(0, I); set y⁽ᵏ⁾ = BDz⁽ᵏ⁾ and p⁽ᵏ⁾ = m + sy⁽ᵏ⁾
            end for
        Sort samples by (penalized) objective function values
        Update the probability distribution: update the mean, the evolution paths,
            and the covariances and scaling; derive B and D according to (A1)
    end while
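The combined covariance update of Appendix A3 can be sketched in a few lines of Python; the steps y⁽ⁱ⁾ = (p⁽ⁱ⁾ − m)/s are taken relative to the former mean, as described above, and the constants c_μ and c₁ would come from Table A1.

    import numpy as np

    def update_covariance(C, p_c, m_old, selected, weights, s, c_mu, c1):
        # Rank-mu estimate from the mu selected samples (rows of `selected`):
        # C_mu = sum_i w_i y_i y_i^T with y_i = (p_i - m_old) / s.
        Y = (selected - m_old) / s
        C_mu = (weights[:, None] * Y).T @ Y
        # Rank-one estimate from the evolution path: C_1 = p_c p_c^T.
        C_1 = np.outer(p_c, p_c)
        # Combined update: C <- (1 - c_mu - c_1) C + c_mu C_mu + c_1 C_1.
        return (1.0 - c_mu - c1) * C + c_mu * C_mu + c1 * C_1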
The pseudo code of Algorithm 1 may serve to verify our adaptations to the new situation, which we elucidate in detail in Appendix B and summarize as Algorithm 2. Here, we denote the identity matrix by I, the all-ones vector (1, …, 1)ᵀ by 1, and the zero vector by 0. Note that in Algorithm 1 the sampled candidate parameter vectors p⁽ᵏ⁾ are always assumed to be increasingly ordered with respect to the (penalized) objective function; that is, the best μ out of λ parameter vectors are used for updating both m and C.

Appendix B: Details of R-CMA-ES

As already explained in Section 2, we modify both the distribution and the sampling and selection procedure of the CMA-ES algorithm in order to efficiently optimize the integrated objective function F defined by Equation 1 instead of its point-based counterpart f. The initialization of Algorithm 2 differs from that of Algorithm 1 in that the constants μ, w, μ_eff, χ, c_σ, d_σ, c_c, c_μ, c₁, s₀ are set according to Table A1 (but with λ = 2μ) and σ = σ₀; the mean is set to m = ½ · 1 (of dimension n + r); and p_σ = p_c = 0 and C = B = D = I are set with dimension n.

B1. Modification of the Distribution

We use the following notations, most of which were already introduced in Section 2.2: we denote by n the number of non-random parameters which we want to optimize and by r the number of random parameters. We can restrict ourselves to the case that the first n components of the parameter vectors are the non-random parameters and the last r components are the random parameters. For p ∈ ℝⁿ⁺ʳ we write pₙ for the subvector (p₁, …, pₙ) ∈ ℝⁿ and p_r for the subvector (p_{n+1}, …, p_{n+r}) ∈ ℝʳ. Vice versa, if a vector p ∈ ℝⁿ and a vector q ∈ ℝʳ are given, then we denote by (p; q) the vector (p₁, …, pₙ, q₁, …, q_r) ∈ ℝⁿ⁺ʳ. In order to incorporate the variability of a parameter pᵢ in the procedure, we must fix both the mean and the variance of the random parameter, that is, pᵢ ∼ 𝒩(0.5, 0.5) in our case, as we operate on [0,1]ⁿ⁺ʳ. This can be done by using a suitable modification C′ of the scaled covariance matrix s²C (and additionally by keeping mᵢ = 0.5). A natural modification is to overwrite the ith column of s²C with 0.25 · eᵢ and the ith row of s²C with 0.25 · eᵢᵀ, where eᵢ is the ith unit vector. It implies that eᵢ becomes an eigenvector of the modified covariance matrix C′ with eigenvalue 0.25 and, thus, that one of the principal axes of the standard deviation ellipsoid is parallel to the ith coordinate axis and has length 0.5. Essentially the same result can be achieved by restricting the maintained multi-variate normal distribution to the set of non-random parameters, that is, C ∈ ℝⁿˣⁿ, calculating from each sample z ∼ 𝒩(0, I) the corresponding vector of non-random coordinates yₙ = BDzₙ (where B and D describe the eigendecomposition of C according to Equation A1), and the corresponding vector of random coordinates y_r = ½ z_r. Finally, the new scaled and shifted sample (cf. lines 12-13 in Algorithm 2) can be defined by

p = m + (s · yₙ; y_r).

B2. Modification of the Sampling and Selection Procedure

Let f: ℝⁿ⁺ʳ → ℝ be the objective function in the deterministic optimization case and Q ⊆ ℝʳ be the subspace of all random components. By pdf we denote the probability density function of the random components' (r-dimensional) normal distribution 𝒩(0.5 · 1, 0.5 · I). We wish to optimize the integrated objective function F: ℝⁿ → ℝ defined by Equation 1:

F(p) = ∫_{q∈Q} pdf(q) · f((p; q)) dq.
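A minimal sketch of two R-CMA-ES ingredients: the covariance modification of Section B1 for a single random component i, and the mirrored pairing that Section B2 below motivates. The composition of the final sample from (s · yₙ; y_r) follows our reading of the text; the pairing keeps the random coordinates identical within each pair, so selecting exactly one sample per pair leaves their distribution intact.

    import numpy as np

    def modified_covariance(C_scaled, i):
        # Overwrite row i and column i of the scaled covariance s^2*C with
        # 0.25 * e_i, so component i keeps variance 0.25 (sd 0.5 on [0, 1]).
        Cp = C_scaled.copy()
        Cp[i, :] = 0.0
        Cp[:, i] = 0.0
        Cp[i, i] = 0.25
        return Cp

    def mirrored_sample_pairs(rng, mu, n, r):
        # mu base samples z ~ N(0, I) of dimension n + r, plus their mirrors
        # (-z_n; z_r): only the non-random coordinates are flipped, so each
        # pair shares its random coordinates (cf. Section B2).
        Z = rng.standard_normal((mu, n + r))
        mirrors = Z.copy()
        mirrors[:, :n] *= -1.0
        return np.vstack([Z, mirrors])   # rows k and k + mu form a pair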
As reasoned in Section 2.2, we want the values of a random component i to stay independently 𝒩(0.5, 0.5) distributed with regard to the μ selected samples (as is the case with regard to all samples). For this purpose, we adapt the idea of mirrored sampling (Auger et al., 2011) to our situation and generate pairs of samples that are mirrored at the axis (hyperplane) of the random parameter(s), meaning that we sample the vectors z⁽ᵏ⁾ only for k ∈ [μ] and set

z⁽ᵏ⁺μ⁾ := (−zₙ⁽ᵏ⁾; z_r⁽ᵏ⁾)

(cf. lines 8-9 of Algorithm 2). An example of this kind of mirrored sampling is sketched in panel (c) of Figure 4. From each pair of samples we select exactly one sample for the update of the normal distribution, namely the better one with respect to f.

B3. Pseudo Code

We summarize R-CMA-ES as Algorithm 2. For the update formulas in lines 18-19, as well as for the calculation of C_μ, we assume that (by sorting) the first μ samples have been selected according to the procedure described above.

Appendix C: R-CMA-ES Test-Bed

We consider an ensemble of mathematical benchmark functions on which we compare CMA-ES to R-CMA-ES. Similar to our real-world application, we restrict R-CMA-ES to deal with a single random component and choose n = 6 as the problem dimension. Additionally, we use n = 10. Our test cases invoke five benchmark functions: one linear function, one bowl-shaped ("sphere") function, one valley-shaped ("Rosenbrock") function, and two functions ("Griewank" and "Rastrigin") with many local minima. The details are listed in Table C1. For problem dimension 6 we set the number of samples per iteration to λ = 10 and impose a maximum iteration number of 200; for problem dimension 10 we use λ = 12 and impose an iteration limit of 300. As a second stopping criterion, a standard deviation of at most 10⁻⁴ must be reached for all parameters. Table C2 presents the results that we obtained by applying both a Matlab implementation of CMA-ES (Hansen, 2012, cmaes.m, Version 3.61.beta) and our R-CMA-ES algorithm (also implemented in Matlab). (Note to Table C2: We used 10 optimization trials, with different random numbers, for both algorithms and each instance. In each case we show the mean result over all trials. The f-values and F-values have been calculated as in Table 2, but using 10,000 instead of 100 values for the random parameter.) Eight out of our ten test instances have the property that the optimal solution to the deterministic optimization task also belongs to the set of optimal solutions to the optimization task with a random parameter. For example, the all-ones vector 1 is the global optimum of the benchmark function f of the test instances Linear1 and Linear2, but also a solution to the problem min_{p∈[0,1]ⁿ} F(p). Benchmark functions that exhibit the mentioned property serve well in proving a good convergence behavior of R-CMA-ES in comparison to CMA-ES if R-CMA-ES attains similar (or even smaller) f-values and F-values. This is indeed the case for the eight respective test instances. The situation is different for the two Rosenbrock instances. Here, the optimal solutions with regard to f and F differ. Indeed, CMA-ES often finds solutions that have a better f-value than the solutions of R-CMA-ES; but vice versa, all solutions found by R-CMA-ES have significantly better F-values. For the well-shaped test instances Linear1, Linear2, Sphere1, and Sphere2, CMA-ES converged after fewer function evaluations than R-CMA-ES did; the factor was about 3 for the linear instances and about 1.5 for the sphere instances.
However, R-CMA-ES required fewer function evaluations for both valley-shaped instances, Rosenbrock1 and Rosenbrock2, and for the jagged instances Rastrigin1 and Rastrigin2.

Data Availability Statement

The implementation of our optimization algorithm R-CMA-ES and the benchmark-function test-bed in MATLAB, as well as the C++ implementation for model calibration on HPC platforms, can be found on GitHub. The permanent version of the code which we used for the experiments of this article is archived in a public Zenodo repository (Sauerland, 2023). For compilation, usage, and further notes, we refer to the README contained in that repository. The BGC ocean model code and the observational data we used for this study are the same as used by Kriest et al. (2017) and are also available in (Sauerland, 2023) (in the folder "MOPS"). The basic TMM and MOPS code, as well as the required input data for forcing, geometry, and initialization of the model, are available for download from (Khatiwala, 2018).
13,748
2023-08-01T00:00:00.000
[ "Geology", "Environmental Science", "Mathematics" ]
Family Astroviridae

Astroviruses are unsegmented, positive-sense RNA viruses with ~7-9 kb genomes. The family name derives from astron, the Greek word for star. These small unenveloped viruses have spikes that project about 41 nm from the surface of the capsid, giving them a star-like appearance. Human astroviruses cause gastroenteritis in children and adults. Symptoms include diarrhea, nausea, vomiting, fever, malaise, and abdominal pain; for the most part disease is self-limiting.

After reading this chapter, you should be able to discuss the following: • What are the major structural and replicative features of astroviruses? • What disease is most frequently associated with astrovirus infection? • How are human astroviruses (HAstVs) most often transmitted? • How is the astrovirus capsid protein processed to produce the proteins found in the mature virion?

Astroviruses (from the Greek astron, meaning star) were discovered in 1975, in association with an outbreak of diarrhea in humans. Since that time they have been isolated from many other mammals including pigs, cats, minks, dogs, rats, bats, calves, sheep, and deer, as well as from marine mammals such as sea lions and dolphins. Astroviruses have also been isolated from birds; they can cause significant disease in turkeys, ducks, and chickens. Astroviruses are unsegmented, positive-sense RNA viruses with ~7-9 kb genomes. These small, unenveloped viruses have spikes that project about 41 nm from the surface of the capsid, giving them a star-like appearance (Fig. 14.1 and Box 14.1). HAstVs cause gastroenteritis in children and adults. Symptoms last 3-4 days and include diarrhea, nausea, vomiting, fever, malaise, and abdominal pain. For the most part, disease is self-limiting.

GENOME ORGANIZATION

Astrovirus genomes are unsegmented positive-strand RNA; genomes are not capped but have a 3′ poly(A) tail. Genomes have three long overlapping open reading frames (ORFs) that encode polyproteins (Fig. 14.2). There are short untranslated regions at the 5′ and 3′ ends of the genome. Similar to other positive-strand RNA viruses (for example togaviruses and coronaviruses), two ORFs covering the 5′ half of the genome encode nonstructural proteins (NSPs) that include proteases, membrane-associated proteins, an NTP-binding protein, and the RNA-dependent RNA polymerase (RdRp). ORFs for NSPs are overlapping, and synthesis of the longer polyprotein product likely requires a ribosomal frameshift between ORF1a and ORF1b (Fig. 14.2). Astrovirus RNA is not capped, and based on the presence of a protein sequence similar to calicivirus VPg (in ORF1b), it is postulated that RNA synthesis is initiated using a protein primer. Astrovirus replication is cytoplasmic.

VIRION MORPHOLOGY

Astrovirus particles have T = 3 icosahedral symmetry. The capsid protein is encoded by ORF2 and expressed from a subgenomic mRNA. The capsid precursor undergoes multiple cleavages. The full-length precursor (VP90) is cleaved by cellular proteases (caspases) to generate the VP70 product. If caspase inhibitors are added to infected cells, release of virions is blocked. However, VP70-containing capsids are likely noninfectious until VP70 is further processed by trypsin-like proteases to generate mature virions containing three polypeptides (VP25, VP27, and VP34).

BOX 14.1 GENERAL CHARACTERISTICS

Astrovirus genomes are single-stranded, unsegmented positive-strand RNA, ~6.8-7.9 kb.
The genome is not capped but has a poly(A) tail. Genomes have three overlapping ORFs that encode polyproteins. Nonstructural proteins (NSPs) are cleaved by viral proteases. A third ORF encodes the capsid precursor. The capsid precursor is cleaved by host proteases. Two mRNAs are present in infected cells (a full-length and a single subgenomic mRNA). Replication is cytoplasmic. Virions (~40-45 nm in diameter) are unenveloped. Capsids have T = 3 icosahedral symmetry with spike-like projections at the vertices.

Addition of trypsin to cultured cells produces infectious particles, and similar enzymes are present in the intestine during a natural infection. The capsid core is formed by VP34, while VP25 and VP27 form the spikes on the virion surface. Binding sites for neutralizing antibodies map to VP25 and VP27. Due to the lack of robust cell culture systems, many details of astrovirus replication have not been confirmed. However, the overall replication cycle of astroviruses is predicted to be quite similar to that of other positive-strand RNA viruses. Uptake of virions is thought to be by endocytosis, and the uncoated genomic RNA would be translated to produce the viral RNA replication machinery.

DISEASES CAUSED BY ASTROVIRUSES

HAstVs are thought to be the second or third most common cause of viral diarrhea in young children. They have also been isolated from sporadic outbreaks of acute gastroenteritis in adults. A few studies have associated astroviruses with chronic diarrhea in immunocompromised children and adults. HAstVs are found worldwide. The main mode of human astrovirus transmission is by contaminated food (including bivalve mollusks) and water, although direct person-to-person transmission has also been documented (Fig. 14.3). There are multiple serotypes of human astrovirus, and the main target cells are enterocytes (epithelial cells of the intestinal tract). Astrovirus infection does not notably alter intestinal architecture and does not induce inflammation. It has been proposed that pathogenesis may be caused by apoptotic death of infected epithelial cells. Symptomatic infections are most common in children younger than 2 years of age, and it is estimated that 5%-9% of cases of viral diarrhea in young children are caused by astroviruses. In the US population the presence of anti-astrovirus antibodies is very high, indicating that most infections are asymptomatic or very mild. Outbreaks of astrovirus-associated diarrhea have been reported among elderly patients and military recruits. Food-borne outbreaks, affecting thousands of individuals, have occurred in Japan. In temperate climates astrovirus infection is highest during winter months, while in tropical regions prevalence is highest during the rainy season (Box 14.2). Rarely, astroviruses have been isolated from organs other than the gastrointestinal tract. They have been isolated from a few children with CNS disease, although disease causation has not been confirmed (Stenglein et al., 2012). However, there is a good example of CNS-associated astrovirus infection in an animal model. Shaking mink syndrome is a neurologic disorder of farmed minks. Outbreaks have occurred in Denmark, Sweden, and Finland. Examination of diseased mink revealed brain lesions (nonsuppurative encephalomyelitis), and experimental infection of brain homogenates into healthy mink recapitulated the disease, a result highly suggestive of an infectious agent. Attempts to culture an infectious agent were unsuccessful, but the agent was finally identified using metagenomics.
Nucleic acid sequences were obtained from brain material of diseased and healthy mink. Comparisons revealed an astrovirus genome associated only with diseased mink. The CNS-associated astrovirus shares about 80% nucleotide identity with an enteric mink astrovirus. In this chapter we learned that: • Astroviruses are unenveloped, positive-strand RNA viruses. Their name derives from their star-shaped virions. • Astroviruses were first identified in association with outbreaks of gastroenteritis. • HAstVs are most often transmitted by the fecal-oral route, through contaminated food and water. • The astrovirus capsid protein is processed by host proteases. One cleavage is mediated by intracellular caspases and others by extracellular trypsin-like proteases.
1,624
2017-09-01T00:00:00.000
[ "Physics" ]
Toward a Hierarchical Bayesian Framework for Modelling the Effect of Regional Diversity on Household Expenditure

Problem statement: Household expenditure analysis is highly demanded by government in order to formulate its policy. Since household data can be viewed as a hierarchical structure, with households nested in their regional residence, which varies from region to region, a contextual welfare analysis is needed. This study proposed to develop a hierarchical model for estimating household expenditure in an attempt to measure the effect of regional diversity, taking into account district characteristics and household attributes, using a Bayesian approach. Approach: Because the variation of the household expenditure data is captured by the three-parameter Log-Normal (LN3) distribution, the model was developed based on the LN3 distribution. The data used in this study are household expenditure data for Central Java, Indonesia. Since the data are unbalanced, and hierarchical models using a classical approach work well only for balanced data, the estimation was done using the Bayesian method with MCMC and Gibbs sampling. Results: The hierarchical Bayesian model based on the LN3 distribution could be implemented to explain the variation of household expenditure using district characteristics and household attributes. Conclusion: The model shows that district characteristics, which include the demographic and economic conditions of the districts and the availability of public facilities, which are strongly associated with the dimensions of the human development index (economic, education and health), do affect household expenditure through the household attributes.

INTRODUCTION

Regional income distribution can determine the ability of a region to create change and improvement for its people, such as reducing poverty. It is noted that inequality of regional income distribution will not create wealth for society in general, but only for certain groups. According to BPS (2010b), inequality of income distribution can be viewed from three sides. First, relative inequality, i.e., size distribution of income disparities. Second, rural-urban income disparities, which are usually caused by development being more oriented toward urban areas. This urban-bias development often occurs in developing countries such as Indonesia. Third is regional income disparity, which in Indonesia is generally attributed to economic development disparities between regions and inequality in the distribution of natural resources between regions. Basically, the factors that affect welfare can be broadly categorized into two main groups: behavior paradigms and policy paradigms (Akita and Pirmansyah, 2011). Behavioral paradigms relate to the responsibility of each individual or household for achieving its welfare level. In each household, there are specific factors that potentially contribute to this behavioral paradigm. The policy paradigms are associated with economic conditions, politics and government policy. In addition, non-household factors may also affect differences in the level of welfare. An example is community-level factors such as geography and the availability of public facilities (economic, education and health facilities). Income per capita is an economic indicator that is often used for measuring prosperity and well-being. Analysis of household income is essential in order to formulate government policy.
However, household income is generally very difficult to measure accurately, especially in developing countries. Basically, household income and household expenditure are not the same thing, but the relationship between the two is very strong. Akita and Pirmansyah (2011) state that consumption expenditure is more reliable than income as an indicator of a household's permanent income because it does not vary as much as income in the short term. For these reasons, the household expenditure pattern approach is widely used to analyze the pattern of household income. Indonesia has changed its governance system from centralized to decentralized since 1999. Consequently, the achievement of local government will be largely determined by the active and innovative role of local government in determining its local policy in order to achieve the prosperity and welfare of its residents. Since the Indonesian area is vast and regional conditions vary with each other, a contextual welfare analysis that takes regional diversity into account is needed in order to formulate government policy. Shahateet (2006) shows that there is a regional effect on income inequality. Central Java is one of the provinces on Java Island in Indonesia. It is known as the heart of Javanese culture because the culture of Central Java is diverse and includes a variety of cultures from other provinces in Java. The total area of Central Java is 32,800.69 km², or approximately 25.34% of the total area of Java Island (BPS, 2010a). Its poverty rate was about 16.6% of its population in 2010 (BPS, 2010a), which is higher than the average percentage of poor people in Indonesia (13.3%) (BPS, 2010a). In 2011, the local government succeeded in reducing the percentage of poor people in Central Java to around 15.76% (BPS, 2011). Administratively, the province of Central Java is divided into 35 districts, consisting of 29 regencies and 6 cities. The differences in household expenditure level between districts in Central Java can be seen in Fig. 1, which shows that the mean of household expenditure varies between districts and that districts in urban areas have a higher mean household expenditure than rural areas. The household expenditure distribution has a shape close to a right-skewed distribution such as the log-normal. Battistin et al. (2007) state that the Log-Normal distribution provides a useful theoretical model for studying certain economic populations, such as income and expenditure distributions. The two-parameter log-normal distribution, however, is insufficient to capture the variation in the empirical distribution of household data in Central Java. The three-parameter Log-Normal distribution (LN3) is therefore applied to explain the variation of the data. The probability density function of LN3 is specified as follows:

f(y | µ, τ, λ) = 1 / ((y − λ) τ √(2π)) · exp( −(ln(y − λ) − µ)² / (2τ²) ),  y > λ,   (1)

where µ > 0 is the location parameter, τ > 0 is the scale parameter and −∞ < λ < ∞ is the threshold parameter. Equation 1 shows that LN3 has an additional parameter, the threshold parameter, which shifts the whole distribution curve above zero. This characteristic matches expenditure data, which never take the value zero. Since household data are nested in their regional residence, they have a hierarchical structure. In this case, household expenditure can be influenced by factors from several different levels, i.e., factors at the household level and factors at the regional level.
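For reference, the LN3 density of Equation 1 can be evaluated with a few lines of Python; the support restriction y > λ is handled explicitly:

    import numpy as np

    def ln3_pdf(y, mu, tau, lam):
        # Density of Eq. 1: positive only above the threshold lam.
        y = np.asarray(y, dtype=float)
        out = np.zeros_like(y)
        ok = y > lam
        z = np.log(y[ok] - lam)
        out[ok] = np.exp(-(z - mu) ** 2 / (2.0 * tau ** 2)) / (
            (y[ok] - lam) * tau * np.sqrt(2.0 * np.pi))
        return out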
Hierarchical models are formulated for analyzing data with complex sources of variation (Raudenbush and Bryk, 2002). Cases with complex sources of variation frequently refer to a hierarchical structure of the data (Goldstein, 1995; Hox, 1995), in which the data are classified into a multilevel structure. Standard single-level methods are not appropriate for analyzing such hierarchical systems (Maas and Hox, 2004), because the parameter estimates are inefficient and the standard errors are negatively biased (Hox, 1995; Maas and Hox, 2004). Raudenbush and Bryk (2002), Goldstein (1995) and Hox (1995) proposed hierarchical models that combine the several levels of hierarchical data into a single statistical analysis. It is noted that hierarchical models mostly use a classical approach in the estimation process. For complex hierarchical models, however, parameter estimates using the classical approach are very difficult to derive. Raudenbush and Bryk (2002) demonstrate that a hierarchical model using a classical approach works well when the data are balanced and the number of higher-level units is large. In some applications, however, this condition does not easily hold.

MATERIALS AND METHODS

Residential conditions and facilities are frequently used as visual indicators to judge the level of socioeconomic welfare of a household. A number of studies show that several household attributes affect household expenditure, i.e., household size, education level of the household head, house area, type of wall, type of floor, source of drinking water, kitchen, toilet facilities and electricity (Iriawan and Ismartini, 2011; Haughton and Nguyen, 2010; Mok et al., 2007; Grosh and Baker, 1995). This study uses predictors based on those previous studies, called micro variables, and other predictors, called macro variables, that are investigated as influences on household expenditure. Public service facilities are an example of macro variables, since the availability of those facilities illustrates concrete steps of local government policy to enlarge people's welfare. The sample coverage area of the data used in this study is Central Java Province. A preliminary analysis of the data is shown in Fig. 2, which demonstrates that the simple regression lines for five districts in Central Java differ in both their slopes and their intercepts. This fact indicates that there is variation at the district level, i.e., the presence of regional influence, for which hierarchical analysis should be employed. This study proposes to model community characteristics and household attributes on household expenditure in Central Java Province, Indonesia, using a hierarchical Bayesian model based on the three-parameter log-normal distribution.

Data descriptions: This study relies extensively on household expenditure data collected by the National Socioeconomic Surveys (Susenas), which have been conducted regularly by Statistics Indonesia (BPS). The dependent variable used in the model is household expenditure per capita (y). There are several household attributes as micro variables (X) and district characteristics as macro variables (W) that are considered to affect household expenditure per capita. Those variables are type of house wall (X₁), type of house floor (X₂), floor area per capita (X₃), type of source of drinking water (X₄), toilet facilities usage (X₅),
type of cooking fuel (X₆), household size (X₇), level of household head education (X₈), whether the head of household works in agriculture (X₉), population density (W₁), ratio of primary schools to primary-school-age children (W₂), ratio of junior high schools to junior-high-school-age children (W₃), ratio of senior high schools to senior-high-school-age children (W₄), number of health facilities (W₅), number of medical personnel (W₆), percentage of villages having a public phone (W₇), number of cooperatives, that is, establishments whose members are people or establishments with the legal status of a cooperative and whose activities are based on people's economic movements (W₈), number of large and medium enterprises (W₉), number of small/household industries (W₁₀), gross regional domestic product at current prices per capita (W₁₁) and percentage contribution of revenue to budget revenue (W₁₂).

(Fig. 2: Simple regression lines for five districts in Central Java.)

Log-normal hierarchical models: A hierarchical model is formed by two sub-models, i.e., micro models (the models at a lower level) and macro models (models at higher levels) (Goldstein, 1995). For the two-level hierarchical model of household expenditure in Central Java, the micro model investigates the association between household expenditure and various household attributes, while the macro model examines the relation between the coefficients of the micro model and district characteristics. Suppose N is the number of households sampled from m districts and n_j is the number of households sampled in the jth district, so that N = Σ_{j=1}^{m} n_j. Let y_j be the response in the micro model and X_j the micro variables, where j = 1, 2, …, m; y_j is an n_j × 1 vector and X_j is an n_j × p matrix, where p = k + 1 and k is the number of micro variables. Since y_j follows the LN3 distribution, the micro model based on the Log-Normal distribution is specified as follows (Stata, 2009):

y′_j = X_j β_j + r′_j,   (2)

where y′_j = ln(y_j), r_j is the residual vector of the micro model with r′_j = ln(r_j), and β_j is the p × 1 coefficient vector of the micro model. The macro models can therefore be specified as follows:

β_j = W_j γ + u_j,   (3)

where W_j is the p × q matrix of macro variables, with q = l + 1 and l the number of macro variables, γ is the coefficient vector of the macro models and u_j is the residual vector of the macro models. The single-equation model combining Eq. 2 and 3 can be specified as follows:

y′_j = X_j W_j γ + X_j u_j + r′_j.   (4)

Referring to Eq. 2, 3 and 4, the two-level hierarchical Bayesian model for household expenditure in Central Java is defined as follows:

y′_ij = β_0j + Σ_{k=1}^{9} β_kj X_kij + r′_ij,  i = 1, 2, …, n_j, j = 1, 2, …, 35,   (5)

β_pj = γ_p0 + Σ_{l=1}^{12} γ_pl W_lj + u_pj,  p = 0, 1, 2, …, 9, l = 1, 2, …, 12.   (6)
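To illustrate the data-generating process behind Equations 5 and 6, the following Python sketch simulates unbalanced two-level data with an LN3-type response; all numerical values (coefficient scales, dispersions, the threshold) are hypothetical and chosen only for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    m_dist, p, q = 35, 10, 13      # districts; micro coefficients; macro coefficients
    gamma = rng.normal(scale=0.1, size=(p, q))   # hypothetical macro coefficients
    tau_u, tau_r, lam = 0.1, 0.3, 50.0           # hypothetical dispersions, threshold

    households = []
    for j in range(m_dist):
        n_j = int(rng.integers(30, 80))                        # unbalanced sample sizes
        w_j = np.concatenate([[1.0], rng.normal(size=q - 1)])  # district characteristics
        beta_j = gamma @ w_j + rng.normal(scale=tau_u, size=p)   # macro model, Eq. 6
        X_j = np.column_stack([np.ones(n_j), rng.normal(size=(n_j, p - 1))])
        y_log = X_j @ beta_j + rng.normal(scale=tau_r, size=n_j)  # micro model, Eq. 5
        y_j = lam + np.exp(y_log)   # LN3-type expenditure, shifted above threshold lam
        households.append(y_j)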
Bayesian inference: Consider Bayes' theorem (Box and Tiao, 1992; Gelman and Hill, 2007):

p(θ | z) = p(z | θ) p(θ) / p(z),   (7)

where θ and z are both random, θ is the parameter vector and z denotes the vector of observations from the sample; p(z) is the normalizing constant with respect to θ. The posterior can then be represented in proportional form as follows:

p(θ | z) ∝ p(z | θ) p(θ).   (8)

Equation 8 shows that the posterior is proportional to the combination of the prior information and the current information in the data. All information about the unknown parameters of interest is included in their joint posterior distribution. Based on Eq. 7, the joint posterior distribution of the two-level hierarchical model for household expenditure can be expressed as

p(β, γ, λ, τ_[y], τ_[β] | y) = f(y | β, λ, τ_[y]) p(β | γ, τ_[β]) p(γ, λ, τ_[y], τ_[β]) / p(y),   (9)

where p(β | γ, τ_[β]) is a first-stage prior for the random parameters and p(γ, λ, τ_[y], τ_[β]) is a second-stage prior, or hyperprior, for the hyperparameters. The proportional form of the posterior for the two-level hierarchical model is

p(β, γ, λ, τ_[y], τ_[β] | y) ∝ f(y | β, λ, τ_[y]) p(β | γ, τ_[β]) p(γ, λ, τ_[y], τ_[β]).   (10)

In Bayesian inference, all parameters need a prior distribution. The prior distributions proposed in this study are treated as independent (Box and Tiao, 1992; Carlin and Chib, 1995) and comprise a combination of conjugate and informative priors and pseudo-priors. Inference about the subset of focal parameters of interest is derived using its marginal conditional distribution. The marginal conditional distribution is calculated by integrating Eq. 10 with respect to the auxiliary unknown parameters, which tends to require complex numerical integration. To overcome that problem, the Bayesian method takes repeated samples from the full conditional posterior distributions using MCMC and Gibbs sampling (Gelman et al., 2004; Gelman and Hill, 2007; Ntzoufras, 2009). The estimation of the parameters of interest is implemented in WinBUGS 1.4, a recent software package for Bayesian computation. WinBUGS derives the iterative estimation process from the Directed Acyclic Graph (DAG) of the hierarchical model; Figure 3 shows the DAG of the two-level hierarchical Bayesian model for household expenditure in Central Java as the implementation of Eq. 5 and 6.

DISCUSSION

The two-level hierarchical Bayesian model shows that household welfare levels in Central Java can generally be indicated by several household attributes. First, household welfare can be identified from housing conditions, such as a good type of wall and floor and the floor area per capita. Second, for the majority, welfare can also be identified by the availability of daily-needs facilities, such as clean water sources, toilet ownership and a good cooking fuel. Third, the human capital of the household, for instance the number of people in the household and the education level of the household head, affects household welfare as well. The results for 18 districts show that households that are economically active mainly in the agriculture sector generally have a lower welfare level than others. According to BPS (2010c), those 18 districts mainly have a high percentage of wetland area and a high poverty level compared to other districts. For example, almost 37.73% of the area of Brebes is dominated by wetland, and its percentage of poor people is the fifth highest among the districts in Central Java (24.39%). District characteristics do affect household welfare positively through specific household attributes. Those district characteristics are the demographic and economic conditions of the districts and the availability of public facilities, i.e., economic, education and health facilities, which are strongly associated with the dimensions of the human development index. This relation shows that better availability of those public facilities yields higher welfare of the people. In terms of the economic dimension, the number of small/household industries also has a positive effect on household welfare. This is reasonable, since industry can create job opportunities for the people therein.
CONCLUSION

This study has demonstrated the use of the developed model for estimating household expenditure in order to measure the effect of regional diversity, taking into account district characteristics and household attributes, using a hierarchical Bayesian approach based on the three-parameter log-normal distribution. The results show that regional diversity does affect household expenditure. The local government's effort in providing public facilities can, statistically, improve its people's welfare. An interesting future research perspective is to investigate other specific district characteristics and household attributes that might affect household expenditure.
3,950.8
2012-06-25T00:00:00.000
[ "Economics" ]
Bmc Medical Informatics and Decision Making

Technical Development of Pubmed Interact: an Improved Interface for Medline/pubmed Searches

Background: The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM), which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results.

Background

This research continues to investigate innovations in user-computer interfaces for online storage and retrieval systems in medical research. The goal of the project is to advance the development of a Web-based medical search tool that can enhance user interaction with the MEDLINE/PubMed database and push to the forefront the different strategies and filters in Entrez PubMed that often remain hidden from novice users, such as age groups, clinical study filters and systematic reviews. The long-term objective is to study and implement clean and effective user interfaces for MEDLINE/PubMed that increase utilization and improve search outcomes without overwhelming novice users or limiting the workflow of advanced users. This manuscript reports the development, implementation and technical evaluation of the research application, PubMed Interact. An earlier version of this project is the Slider Interface for MEDLINE/PubMed searches, or SLIM [1]. SLIM is a Web-based application that implements JavaScript slider bars to set search limits and filters. It uses dynamic HTML (DHTML), the method of using static markup language and JavaScript to create interactive pages. Users can choose from several preset search parameters. They can hide and display abstracts when viewing the results. An educational feedback tool for MeSH terminologies called the 'information box' is also available. New approaches in the development of Web-based applications prompted the exploration to look beyond search forms and provide users the ability to further interact with results. Document Object Model (DOM) tree manipulation and Ajax (Asynchronous JavaScript + XML) [2] have gained popularity and recognition among Web application developers. The Document Object Model is 'a platform- and language-neutral interface that permits scripts to access and update the content, structure, and style' of an HTML document [3]. Ajax provides a script engine for Web sites to send and receive small packets of data from the server without interrupting user activity. The combination of both methods can be a robust platform for alternative Web-based search applications for medical research. PubMed Interact makes extensive use of DHTML, DOM tree manipulation and Ajax scripting to enhance interactivity and productivity. Although it extracts several features from SLIM, many of its integral features allow interactions with the retrieved set of citations. We hope this project will contribute to ongoing efforts to improve online storage and retrieval systems for medical literature.

Search interface

PubMed Interact introduces an improved version of the SLIM interface [Figure 1]. A wide text box accepts input for search terms, while seven JavaScript slider bars and a dropdown menu control search limits and parameters.
The search parameters include publication date, journal subset, age group, methodology filter, Medical Subject Headings (MeSH) mapping, human studies and language. The publication date parameter uses two sliders: the start year slider and the end year slider. With an end year slider, searches are not limited to the current year as the permanent end date. Users can set different date ranges within the past 10 years, e.g. 1998 to 2002, or limit the search to one specific year, e.g. 2003. No publication date is set by default for the start year slider, i.e. there are no date limits. The end year slider defaults to the current year. The PubMed database contains several subsets, among them the MEDLINE subset and the Core Clinical Journals. The journal subset slider controls options to search within the whole PubMed database, the MEDLINE database subset, or the Core Clinical Journals. Within each subset, users can further limit the search to articles with abstracts or to those with links to full text or free full text. The default setting searches the PubMed database without any abstract or full-text restrictions. The age group slider is a modified version of the age group dropdown menu in the Limits page of Entrez PubMed. The methodology filter slider can limit searches to case reports, clinical study categories, or systematic reviews. The case report filter uses the publication type search tag of Entrez PubMed to limit the search. The clinical study categories, also called PubMed Clinical Queries, are 10 search methodology filters based on the works of Haynes RB et al [4]. The systematic reviews subset is a pre-configured filter that finds citations for systematic reviews, meta-analyses, reviews of clinical trials, evidence-based medicine, consensus development conferences, and guidelines [5]. No limits are set by default. The MeSH mapping slider, a feature first developed in SLIM as the search mapping slider, is intended for intermediate to advanced users of PubMed familiar with search tags and MeSH term operations. A customized PHP function extracts the mapped MeSH terms from the original search and modifies the search tags according to the slider setting. These modified terms are then appended to the current search to refine and redirect the search strategy. The default setting submits the search terms to the ESearch utility as entered in the text box, without any modifications. The last slider controls the number of citations to be displayed in the results list. It does not affect the search query; it merely provides users the option to set the number of retrieved citations displayed (10, 20, 40, 60, 80 or 100). To limit the search by subject or language, a dropdown menu below the sliders contains options for human studies, English language or both. To reload the form or reset the sliders to the default settings, users can click on the links found below the dropdown menu. Users can opt to hide the slider bars: a text link at the top right of the search form, beside the search button, allows users to hide the search limits. This is most advantageous when viewing the search results because it maximizes the display area of the browser page [Figures 2 and 3]. Scripting made use of dynamic HTML and DOM tree manipulation. An interactive feature of the search form is the ability to preview the results count without submitting the form or reloading the page [Figure 1].
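As a server-side illustration of that preview round trip, consider the sketch below. PubMed Interact implements this step in PHP; the Python helper here is our own illustration, not the application's code. The ESearch endpoint and its db, term and retmax parameters belong to the standard Entrez E-Utilities interface, and retmax=0 asks for the hit count only.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def preview_count(term: str) -> int:
    # retmax=0 returns the Count element without any citation IDs,
    # keeping the preview request as small and fast as possible.
    url = ESEARCH + "?db=pubmed&retmax=0&term=" + urllib.parse.quote(term)
    with urllib.request.urlopen(url) as response:
        root = ET.fromstring(response.read())
    return int(root.findtext("Count"))

# e.g. gauge a query before running the full search:
print(preview_count("hypertension AND telemedicine"))
```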
The preview function uses Ajax to fetch data from the server and DOM tree manipulation to display the resulting number of citations. After typing in the search terms and setting the limits, users can click on the 'Preview Count' button and the number of citations is displayed. This process can be repeated with different search terms and slider settings. This feature allows the user to quickly gauge the effectiveness of the keywords and search parameters before submitting the form. Results interface PubMed Interact adapts two features from the results interface of SLIM: the information box and the ability to toggle the display of abstracts. An important distinction of PubMed Interact is the facility with which users can manipulate the search results. The information box is displayed only after the search form is submitted [Figure 2]. Abstracts can be displayed or hidden from view [Figures 2 and 3]. It is possible to hide or display all abstracts as a group using links above the main results list. A link below the citation details toggles the display for individual abstracts. An added feature in PubMed Interact is the ability to display structured abstracts. A simple PHP output function uses regular expressions to detect and display abstracts of a specific structure [Figure 3]. This facilitates reading and scanning of specific abstracts. Removal of single citations from the search results is seldom found in Web-based medical search applications. In PubMed Interact, users can delete individual citations from the main list by clicking on a link below the citation details [Figure 2]. When a citation is deleted, it is highlighted with a light red background for a few seconds before it disappears from view and is removed from the main HTML source code. The visual effect is achieved by DHTML, while removal of the citation from the HTML document is done through DOM tree manipulation. This delete function enables users to keep only the citations relevant to their search. The 'Auto-Append Article' feature, also called A3, is linked with citation deletion. If active, the A3 function automatically retrieves the next citation in the results and appends it at the bottom of the list when a citation is deleted. The new citation data is retrieved from the local PMI domain server using Ajax scripting methods, while the action of appending and displaying that citation is done using DOM tree manipulation. All A3 processes are asynchronous and achieved without reloading the page. The appended article acquires the functionality of the original citations on the list. This feature is deactivated by default and can be activated using a checkbox at the top of the list. PubMed Interact implements two relevance lists: high and low. These relevance lists are user-dependent and color-coded. Users can label specific citations according to relevance to the original search. Citations tagged with high relevance will have a light green background, while those with low relevance will have a light yellow background [Figure 2]. Citations without any labels will have the default white background. The relevance lists can be viewed separately using links found at the top of the main results list. An advanced interactive feature of PubMed Interact is the ability to retrieve the related articles of each citation within the same page. In the current PubMed Interact implementation, only the top 10 related articles are retrieved and displayed where the abstract is positioned [Figure 4].
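Under the hood, a related-articles list of this kind can be obtained from the ELink E-Utility. The sketch below is again Python rather than the application's PHP; the pubmed_pubmed link name is the real Entrez identifier for PubMed's related-articles neighbour set, while the helper function itself is ours.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ELINK = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"

def related_pmids(pmid: str, limit: int = 10) -> list:
    # linkname=pubmed_pubmed selects the 'related articles' neighbours
    # of the given citation within PubMed itself.
    query = urllib.parse.urlencode({
        "dbfrom": "pubmed", "db": "pubmed",
        "id": pmid, "linkname": "pubmed_pubmed",
    })
    with urllib.request.urlopen(ELINK + "?" + query) as response:
        root = ET.fromstring(response.read())
    ids = [element.text for element in root.iter("Id")]
    # The response echoes the input PMID back, so filter it out.
    return [i for i in ids if i != pmid][:limit]

print(related_pmids("16221948"))  # any valid PMID works here
```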
Clicking on the title of a related article generates a floating HTML division element (DIV) with the citation details and an excerpt of the abstract [Figure 5]. The related article can be added to the main results list by clicking on the 'Add to Main List' link. This feature allows the user to change the composition of the main results list 'on the fly' using related articles. The new citation is inserted immediately below the 'parent article' and acquires all the functionalities of the original citations on the list. This feature depends heavily on Ajax scripting methods, DOM tree manipulation, DHTML, JavaScript loops and server-side PHP functions. A large part of script development adopted the object-oriented programming (OOP) approach. A custom set of PHP classes connects to the Entrez Programming Utilities, specifically the ESearch, EFetch and ELink tools [6]. These PHP classes are modified XML parsers that send queries to the E-Utilities and parse the retrieved XML files. The OOP approach allows developers to reuse sets of code for different functions, drastically reducing the amount of code maintained and opening possibilities for expanding code functions. In PubMed Interact, the code used to get the citation details for the search is the same code used to get the details for the related articles. System development and implementation The retrieved XML files are processed and stored in a local MySQL database to minimize the load on the E-Utilities servers. Instead of several remote queries to E-Utilities, the PHP scripts that retrieve data for the search results send one query and store the top 200 citation records, regardless of the number of citations to be displayed. Thus the A3 feature, which appends new articles after a citation is deleted, retrieves data from the local domain server and not from E-Utilities. The same process is used for the related articles of a citation: the details of all 10 related articles are stored on the local server and retrieved without reconnecting to the E-Utilities server. JavaScript functions were essential in the development of client-side interactivity. DOM tree manipulation captured information within the page and passed the data to the Ajax script engine. The core of Ajax scripting is the XMLHttpRequest object of JavaScript, which performs HTTP client functions. However, the XMLHttpRequest object, by design, can only connect to the local domain server and not to remote servers such as E-Utilities. Thus, custom PHP scripts were written to receive data from the Ajax engine, connect to the E-Utilities server or to the local MySQL database as needed, and deliver an output in HTML format. JavaScript then displays the output in the browser page using both the Ajax engine and DOM tree manipulation. This is all done asynchronously, without reloading the page or interrupting user activity. Results and discussion PubMed Interact is an experiment in user-computer interface design. It is part of an ongoing project to make use of modern Web technologies in the development and improvement of Web-based medical search applications. The growing trend of using the Web as a platform to deliver services opens opportunities for alternative solutions in medical literature research. Web-based applications that function like traditional software, combined with a rich user interface and improved user control of data, contribute to the indispensable nature of online information storage and retrieval systems for health resources.
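The cache-first retrieval pattern described under 'System development and implementation' can be sketched as follows. This is an illustrative Python/SQLite analogue of the application's PHP/MySQL logic; the table layout and function names are invented for the example.

```python
import sqlite3

db = sqlite3.connect("pmi_cache.db")  # stand-in for the local MySQL store
db.execute("CREATE TABLE IF NOT EXISTS citations (pmid TEXT PRIMARY KEY, xml TEXT)")

def get_citation_xml(pmid: str, fetch_remote) -> str:
    # Serve from the local cache when possible, so that A3 appends and
    # related-article lookups never re-query the E-Utilities servers.
    row = db.execute("SELECT xml FROM citations WHERE pmid = ?", (pmid,)).fetchone()
    if row:
        return row[0]
    xml = fetch_remote(pmid)  # one remote EFetch call on a cache miss
    db.execute("INSERT OR REPLACE INTO citations VALUES (?, ?)", (pmid, xml))
    db.commit()
    return xml
```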
Two important components of this trend are DOM tree manipulation and Ajax. By integrating both technologies, PubMed Interact bridges an effective search strategy with a highly interactive interface. Users not only have the ability to modify searches by setting parameters, they can also label, delete and add citations from within the existing list. Access to related articles on the same page also provides an additional resource for relevant citations not found in the original search results. The specialized search filters of PubMed, namely the clinical study categories (also known as PubMed Clinical Queries) and the systematic reviews subset, are made available to both novice and seasoned users through the methodology filter slider. In Entrez PubMed, the MeSH terms and subheadings of a search are viewed from the Details tab. In PubMed Interact, the MeSH details mapped from the keywords are presented in the information box, which can then be used as a guide for the MeSH mapping slider. Several features for future integration may include adding publication types, language options and subsets, and searching in the Journals and MeSH databases. These efforts are consistent with the long-term aim of developing a user-computer interface for medical research that empowers novice users with interactive tools for search parameters and provides expert users with easy access to advanced search filters. Technical evaluation The application is available online without restrictions. The alpha version and the beta version went live in late November 2005 and February 2006, respectively. The local MySQL database of the beta version contains over 29,900 records of citations in XML format and uses 54 megabytes of disk space. A scheduled maintenance script can be implemented in the future to delete old XML records from the database and keep the storage allocation manageable. This plan is deferred until the implementation is moved out of the beta phase, in order to record benchmarks for MySQL usage. Browser compatibility evaluation showed full functionality in the Windows versions of Mozilla Firefox 1.5+, Internet Explorer 5.5+ and Opera 8.5+ and in the Linux version of Mozilla Firefox 1.5+. Some formatting inconsistencies were observed in the Mac OS X versions of Mozilla Firefox 1.5+ and Safari 2.0+, but no functionality problems were noted. The search form and citation list of the application were tested using the W3C Markup Validation Service [7]. An unsupported element attribute for the Document Type Declaration used was reported for each slider. The validation report of the citation list noted one recurring error for each citation: the use of numerical strings as id attributes for the citation divisions. Despite being reported as markup validation errors, these 'invalid codes' proved important for user-friendly functionality. They were also supported by the different browsers used for testing. Thus, they were noted for reference but retained for use; removing these 'invalid codes' degraded the functionality of the application. Limitations This paper is limited to the development, implementation and technical evaluation of PubMed Interact. It does not provide empirical evidence of increased efficiency in searching or of better precision and recall for results. A formal user evaluation of the application is needed to validate the usability and benefits of an alternative PubMed search interface.
The technical evaluation of PubMed Interact employed commonly accepted procedures for Web applications, such as functionality testing, measurement of storage space used, markup validation and browser compatibility testing. It was not evaluated against any formal framework or standard criteria for software development. Future activities User evaluation is valuable to the continued development of PubMed Interact. The researchers plan to conduct comparative studies between PubMed Interact and Entrez PubMed. Users with various levels of searching skill will perform structured and unstructured tasks. Through user interviews, online questionnaires and direct observation, the research team will assess the effectiveness of PubMed Interact as compared to Entrez PubMed in usability, performance and search outcomes. The educational impact, the speed and stability of the system, and the effect on searching attitudes and strategies will also be studied. The projected study will be an opportunity to gather more information on how medical researchers interact with alternative search interfaces and to obtain data on usability and functionality. User feedback will determine which features need to be improved or abandoned, and whether new functionalities should be added. As Web technologies continue to progress, better platforms and methods will become available for further innovations in search interfaces for medical literature. Conclusion PubMed Interact is a Web-based MEDLINE/PubMed search application that explores recent trends in Web development technologies such as DOM tree manipulation and Ajax scripting. Users can control search parameters, refocus search strategies and modify search results easily. Many of the enhanced, interactive features run client-side and give instant feedback without reloading or refreshing the page. PubMed Interact is a novel approach to the development of online tools for medical information research.
4,183.2
0001-01-01T00:00:00.000
[ "Computer Science" ]
Particleboard Manufactured from Tauari (Couratari oblongifolia) Wood Waste Using Castor Oil Based Polyurethane Resin. Postgraduate Program in Materials Engineering (PPGEM), Federal Institute of Education, Science and Technology of Maranhão (IFMA), Av. Getúlio Vargas, 04, Monte Castelo, CEP 65030-005, São Luis, MA, Brazil; Department of Design (DDE), IFMA, São Luis, MA, Brazil; Department of Civil Construction (DCC), IFMA, São Luis, MA, Brazil; Department of Physics (DEFIS), IFMA, São Luis, MA, Brazil; Department of Chemistry (DAQ), IFMA, São Luis, MA, Brazil. Introduction Community forest management is a sustainable practice that involves the rational exploitation of forest resources for the preservation of forests and ecosystems. The use of management as a tool for conservation has increased considerably in recent years in Amazonia 1 . This practice currently focuses on the reduction of logging, which is encouraged by NGOs and local governments through incentives and subsidies 1 . Selective logging and slash-and-burn deforestation cause drastic environmental impacts, particularly the reduction of exploited species, affecting the natural regeneration of trees 2,3 . An alternative that can contribute to reducing wasteful deforestation is to use the wastes (leaves, branches, twigs, chips, bark, sawdust or wood shavings) for the manufacture of composite materials called particleboards [4][5][6][7] as a way to reduce costs and increase revenue in rural settlements 8 , since these wastes represent 50.7% of all log production 9 . Tauari (Couratari oblongifolia) is a tree belonging to the family Lecythidaceae, which occurs throughout Amazonia, mainly in the states of Pará, Amazonas, Acre, Rondônia and Maranhão, and in neighboring countries such as French Guiana, Suriname, Peru and Venezuela 10 . Tauari wood has the following characteristics: moderately heavy (620 kg/m³) and easy to cut; heartwood and sapwood of undifferentiated color, pinkish tending to straw white; medium texture; straight grain; slightly glossy and smooth surface; and unnoticeable smell and taste 11 . However, records about the use of this wood in the manufacture of particleboard are almost nonexistent in the literature. According to Maloney 12 , the processing temperature is one of the most significant parameters in the manufacture of agglomerated particleboards. Moreover, the following factors must be considered: the wood species, particle size and geometry, compaction pressure, type of resin and/or adhesive, and their mixing time. Dias and Lahr 26 used castor oil-based polyurethane resin (COPR) as an alternative adhesive for the production of plywood panels with layers of the Eucalyptus grandis wood species. The physical and mechanical tests indicated that the properties of plywood manufactured with COPR at low temperature (60 °C) were superior to those of commercial panels fabricated with Brazilian tropical woods using traditional adhesives at low temperature. Campos et al.
27 , who produced and characterized medium density fiberboard (MDF) from alternative raw materials (Eucalyptus fibers) and COPR, showed that MDF produced with eucalyptus fiber and castor oil-based polyurethane resin presents very satisfactory results when compared with standard Euro Class MDF boards. Iwakiri et al. 28 evaluated the influence of density on the mechanical properties of particleboards with nominal densities of 0.60, 0.70, 0.80 and 0.90 g/cm³, using Pinus spp. particles collected from a particleboard manufacturing plant and urea-formaldehyde resin. Their results indicated a correlation between particleboard density and mechanical properties, and demonstrated the possibility of predicting these properties based on board density. Based on these results, they concluded that particleboards can be manufactured with an average density above 0.80 g/cm³ for specific applications that require high mechanical strength. In another study, Fiorelli et al. 29 investigated the production and properties of particleboards made of sugarcane bagasse and castor oil mono-component and bi-component resin. The characterized materials, which presented an average density of 0.93 g/cm³, can be classified as high density material recommended for industrial use, showing that castor oil based resin was efficient as a polymer matrix for the production of composite boards made of sugarcane bagasse. Paes et al. 30 evaluated the combined effect of pressure (2.0, 3.0 and 3.5 MPa) and temperature (50, 60 and 90 °C) applied to Pinus elliottii wood and COPR particleboard on the response variables D_AP, TS and WA (0-2 h, 2-24 h, 0-24 h), MOR, SP and IB. They concluded that the combinations of 3.0 MPa and 90 °C and of 3.5 MPa and 60 °C produced the best results, and that the temperature at which pressure is applied is the most important variable in particleboard quality. The use of coconut fiber as raw material to produce particleboards, using COPR adhesive and urea-formaldehyde (UF) with two different densities (0.8 g/cm³ and 1.0 g/cm³), was investigated by Fiorelli et al. 31 . Their results indicated a decrease in TS and an increase in MOR of coconut fiber panels with polyurethane resin when compared to those of coconut fiber panels manufactured with urea-formaldehyde resin. These observations were explained based on scanning electron microscopy (SEM) micrographs, which indicated that castor oil-based polyurethane adhesive occupies the gaps between the particles, thus contributing to improve the physical and mechanical properties of the panels. Iwakiri et al. 32 used sawmill waste from nine tropical wood species from Amazonia, including Couratari oblongifolia (Tauari), to evaluate the quality of particleboards using urea-formaldehyde resin as adhesive (8% of solids based on oven-dried wood particles), and applying a pressure of 40 kgf/cm², a temperature of 160 °C and a pressing time of 8 min. The characterization tests indicated that the best physical and mechanical properties were achieved with Ecclinusa guianensis (Caucho) wastes. Bertolini et al. 33 demonstrated that high density wood particles from urban tree pruning, including the bark, can be used to produce medium density particleboards (MDP) using COPR (prepolymer and polyol bi-component) at a ratio of 16% (based on wood mass). Silva et al.
34 examined the behavior of boards made of castor oil-based polyurethane resin with coconut and sisal as plain weaves, using unidirectional short fibers (10 mm in length) and unidirectional long fibers. Their results revealed that the properties of sisal were superior to those of coconut fibers and that increasing the volume fraction of fiber improved the tensile strength, stiffness and WA of the boards but decreased their flexural strength. Silva et al. 35 investigated the physical properties of particleboards manufactured with castor oil bi-component polyurethane resins and Cambará, Canelinha and Cedrinho wood fiber, using a 2² full factorial design. The panels were produced with a particle moisture content of 5%, a nominal density of 0.80 g/cm³, a resin content of 15%, a pressure cycle of 10 min, and a pressure of 5 MPa applied at 100 °C. The resulting materials, which showed better mechanical and physical properties than those stipulated by the Brazilian NBR 14810:2002 standard, can be classified as high density particleboards. Particleboards from leucena (Leucaena leucocephala) wood particles and COPR were also investigated by Silva et al. 36 . The particleboards were manufactured by hot-pressing under 4 MPa and 90 °C, using wood particles with a moisture content of 5% and 10% of mono-component and bi-component COPR. The bi-component COPR improved the physical properties (MOR and density) when compared to those recommended by the standard. In this paper, we report on a study of the feasibility of producing particleboard made from Tauari wood waste agglomerated with castor oil-based polyurethane resin, and of the influence of the processing temperature on the particleboard's physical and mechanical properties. Material Tauari (Couratari oblongifolia) wastes in the form of chips and flakes, supplied by the furniture industry of the municipality of João Lisboa (Maranhão, Brazil), were received in the laboratory and dried to a constant moisture content of 5% (dry basis). The wood wastes were milled in a vertical milling machine with fixed and movable blades (MARCONI model MA 680) to homogenize the sample. After milling, the particles were sifted through 14-18 mesh sieves, and the material retained in the 18 mesh sieve (1 mm, ABNT) was used to fabricate the particleboards. The wood particles were agglomerated with bi-component castor oil-based polyurethane resin (COPR) with a density of 0.9 to 1.2 g/cm³, manufactured by KEHL Indústria e Comércio, São Carlos, SP, Brazil, containing 0.1% of free formaldehyde after 24 h.
Preparation of the particleboards The particleboards were manufactured with 16% of COPR (bi-component) adhesive based on the wood mass, using a 1:2 ratio (one part of diisocyanate prepolymer and two parts of polyol). The resin was added to the particles and homogenized in a blender for five min. After homogenization, the mixture was placed in a 400 x 400 x 10 mm mold and compressed in a 50-ton hydraulic hot press (MARCONI model MA-098/50, with heating capability up to 200 °C) for 10 min. Press cycles were performed at 90, 110 and 130 °C, applying a pressure of 5 MPa (the pressure used in the industrial production of medium density panels), to reach a nominal density of 1000 kg/m³. Four particleboards were fabricated for each treatment at the Laboratory of Wood and Timber Structures (LaMEM) of the University of São Paulo (USP) at São Carlos, SP, Brazil. Table 1 describes the experimental conditions employed in the manufacture of the particleboards in the laboratory and the nomenclature used to identify each of the composites. Physical and mechanical characterization of the particleboards The particleboards were allowed to rest at ambient temperature for 48 h. Twelve test specimens per treatment were then cut randomly from these particleboards for each physico-mechanical test, as follows: i) Apparent Density (D_AP) and Perpendicular Tensile Strength (IB) measurements were performed using 50 x 50 x 12 mm test specimens; ii) Thickness Swelling (TS) and Water Absorption (WA) tests were performed on 25 x 25 x 12 mm test specimens after 24 h of immersion in water; iii) static bending tests (MOR and MOE) were carried out on 250 x 50 x 12 mm test specimens; and iv) the Screw Pullout (SP) test was performed with 250 x 50 x 24 mm test specimens. All the experiments were performed as recommended by the ABNT NBR 14810-3 standard 37 . Statistical analysis In the analysis of the tests, the variables (WA, D_AP, M, MOR, MOE, TS and SP) were expressed as mean values and were analyzed by the Shapiro-Wilk normality test at a 5% level of significance. An analysis of variance (one-way ANOVA) followed by a Tukey post-hoc test was used to detect differences between the three treatments of the Tauari particleboards for each variable with a normal distribution. The variables SP, WA and MOR were compared by the Kruskal-Wallis test at a 5% level of significance. Physical characterization of the particleboards The determination of physical and mechanical properties such as internal bond, static bending, screw pullout strength, density, water absorption and thickness swelling serves to indicate the quality of particleboards 21 . Figure 1 illustrates the densities of the particleboards. As can be seen, the average densities of the particleboards at 90, 110 and 130 °C correspond to 930.3, 932.2 and 941.8 kg/m³, respectively, with coefficients of variation not exceeding 4%. The analysis of variance revealed no significant difference (F = 0.4813, p > 0.05), indicating that the densities of the particleboards were statistically similar. These values fall below the pre-established nominal density of 1000 kg/m³, but are higher than those reported by Dias and Lahr 26 and similar to those obtained by Fiorelli et al. 31 , Bertolini et al. 33 , Silva et al. 35,36 , and Sartori et al. 38 , who studied particleboards made of bi-component castor oil based polyurethane and several types of wood wastes. In every case, the compression ratio (CR) was about 1.5, which is higher than the CR of 1.3 recommended by Moslemi 39 and Maloney 12 .
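The statistical sequence just described (Shapiro-Wilk per group, one-way ANOVA with a Tukey post-hoc test for the normally distributed variables, Kruskal-Wallis for SP, WA and MOR) can be sketched in Python with SciPy. The density replicates below are placeholders chosen only to mirror the reported group means, not the study's measurements.

```python
from scipy import stats

# Hypothetical density replicates for the 90, 110 and 130 °C treatments.
d90 = [928.1, 935.4, 922.7, 934.9]
d110 = [930.2, 938.6, 927.1, 932.9]
d130 = [944.0, 939.5, 940.1, 943.6]

# 1) Shapiro-Wilk normality check per group at the 5% significance level.
normal = all(stats.shapiro(group).pvalue > 0.05 for group in (d90, d110, d130))

if normal:
    # 2) One-way ANOVA across the three treatments...
    F, p = stats.f_oneway(d90, d110, d130)
    print(f"ANOVA: F = {F:.4f}, p = {p:.3f}")
    # 3) ...followed by Tukey's HSD to locate pairwise differences.
    print(stats.tukey_hsd(d90, d110, d130))
else:
    # 4) Non-normal variables (SP, WA, MOR) go to Kruskal-Wallis instead.
    H, p = stats.kruskal(d90, d110, d130)
    print(f"Kruskal-Wallis: H = {H:.4f}, p = {p:.3f}")
```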
The densities attained allow the particleboards to be classified as high-density, according to the ANSI A208.1-1999 standard 40 . On the other hand, the difference between nominal and average density of particleboards has already been reported by other authors 41,42 , and has been attributed to the loss of raw material (wood particles and resin) during the manual mixing and pressing process. The average moisture content of the particleboards varied from 5.51 to 8.29% (Figure 2), which falls within the range of 5 to 11% recommended by the NBR 14810-2 standard 43 , and is lower than the minimum value of 8 ± 2% for dry particles recommended by Deppe & Ernst, cited by Moslemi 39 . The treatments at 110 and 130 °C resulted in significantly lower moisture contents than in the particleboards treated at 90 °C (F = 109.8, p < 0.001). This difference may be explained by the increase in compression temperature, responsible for the evaporation of water adhered to the particle surfaces during processing, which also causes the resin to cure with better densification of the particleboard 44 . The initial particle moisture content of 5% did not affect the interaction with COPR or the homogeneity of the mixture, as was also observed by Silva et al. 35 . Figure 3 illustrates the water absorption (WA) test results after 24 h of immersion of the particleboards in water. Note that the average WA of the particleboards compressed at 90 °C exceeded 30%. Increasing the processing temperature from 90 °C to 110 °C and 130 °C led to a significant decrease in WA (from 22.49 to 19.87%), representing a statistically significant difference (F = 56.6, p < 0.001). The difference in the average values of WA of the particleboards is ascribed to the decrease in resin viscosity: increasing the processing temperature enhances the impregnation of the particles with resin, which reduces the thickness of the particleboards during compression, thereby increasing the degree of polymerization of the resin. Low WA values are associated with high particleboard densities and a high compression ratio (1.61:1). It should be noted that the Brazilian NBR 14810-2 standard 43 does not specify WA requirements for particleboard. The average WA values of particleboards treated at 110 °C and 130 °C indicate that these temperatures are suitable for obtaining high density Tauari wood particleboards 40 . These results are consistent with those obtained by Fiorelli et al. 45 for particleboards made with COPR and sugarcane bagasse fiber and Pinus sp., and by Bertolini et al. 46 and Iwakiri et al. 32 for boards made of Tauari wood using urea-formaldehyde resin as adhesive. Figure 4 depicts the thickness swelling (TS) values after 24 h of immersion in water. It should be kept in mind that the NBR 14810-2 standard 43 does not establish TS values for particleboard. However, this test allows one to observe more clearly the differences between the treatments, as well as the bonding and strength conditions of the particleboard particles after 24 h of immersion in water. It can be observed (Figure 4) that the mean values of TS after 24 hours are lower than 15%, which is consistent with the values obtained by Iwakiri et al.
32 for Tauari wood using urea-formaldehyde resin. The particleboards treated at 130 °C showed a significantly lower TS than those treated at 90 and 110 °C (F = 6.47, p < 0.05), indicating that the compression temperature was a predominant variable in curing the adhesive. Moreover, the mean TS values obtained for all the particleboards were lower than those reported by Sartori et al. 38 and Fiorelli et al. 29 . Mechanical characterization of the particleboards Tables 2 and 3 list the results of the mechanical tests of static bending strength (MOR and MOE), perpendicular tensile strength (IB) and screw pullout (SP), and their respective coefficients of variation. Table 2 indicates that the MOR increased with increasing processing temperature. At 130 °C, the average MOR (19.57 MPa) was similar to that obtained by Iwakiri et al. 32 and exceeded the value recommended by the NBR 14810-2 standard 43 , showing a statistically significant difference from the first two MOR values. These values are also higher than the 13 MPa recommended by the EN 312:2010 standard 47 . The results indicate that particleboards processed at 130 °C using two-component castor oil based resin and Tauari wood particles have a promising potential to be classified as high density particleboards indicated for commercial and industrial applications 28 . The mean values of modulus of elasticity (MOE) of the particleboards (Table 2) did not reach the minimum value (2750 MPa) recommended by the ANSI A208.1-1999 standard 44 in any of the treatments, and the MOE value obtained at 130 °C (2378.5 MPa) was 15.62% lower than that recommended by the aforementioned standard; however, all the values exceeded those recommended by the EN 312:2010 standard. Several researchers have reported low MOE values 16,26,30,41 , attributing this behavior to poor distribution of the adhesive during the compression of particleboards. The static bending tests showed statistical differences between the moduli of rupture (MOR) and of elasticity (MOE) as a function of the temperature. As for thickness swelling, the IB test results shown in Table 2 demonstrate that, regardless of the processing temperature, the mean values of this property varied from 1.55 to 1.70 MPa, which are higher than those recommended by the NBR 14810-2, ANSI A208.1-1999 and EN 312:2010 standards 43,44,47 . The mean values of IB showed no statistically significant differences at a 5% level of probability and are in agreement with those reported by Iwakiri et al. 32 for Tauari wood. The mean screw pullout (SP) values of the particleboards produced in this study at all the temperatures were higher than the minimum of 1020 N at the surface and 800 N at the top established by the NBR 14810-2 standard. The treatments at 90 and 110 °C showed no statistically significant differences at the 5% level. It is important to note that the coefficients of variation of the properties MOR, MOE and IB were below the recommended 20%, and these values ensure the consistency of the manufacturing process. Conclusions Tauari wood particleboards agglomerated with bi-component castor oil based polyurethane resin, with an average density ranging from 930 to 940 kg/m³, can be produced in the laboratory with physical and mechanical properties suitable for commercial and industrial applications. The compaction pressure of 5 MPa and a compaction ratio above 1.5 were suitable for the compression of Tauari particleboards with densities exceeding 900 kg/m³ and a thickness of 10 mm, at all the processing temperatures.
The use of 16% (based on wood mass) of bi-component castor oil-based polyurethane resin results in IB values exceeding those recommended by the Brazilian NBR 14810-2 standard. The particleboards compressed at 130 °C showed better physical and mechanical properties than those compressed at 90 and 110 °C, indicating that this is the best compression temperature. Regardless of the processing temperature, the particleboards showed higher values of tensile strength perpendicular to fibers (internal bond) and screw pullout than those required by the Brazilian and American standards.
[Table captions: Table 1. Treatments proposed for the manufacture of particleboards. Table 2. Mean values of the mechanical properties of the particleboards; the same letters indicate that the treatments did not differ statistically after the one-way ANOVA followed by the Tukey post-hoc test (p < 0.05). Table 3. Coefficients of variation of the mechanical properties of the particleboards.]
4,372.2
2014-02-18T00:00:00.000
[ "Materials Science" ]
Waste Polyethylene Terephthalate as an Aggregate in Concrete This paper reports the strength behaviour of concrete containing three types of recycled polyethylene terephthalate (PET) aggregate. Results are also analysed to determine the PET-aggregate's effect on the relationship between the flexural and splitting tensile strengths and the compressive strength, and to establish whether the relationships between compressive strength and other strength characteristics given in European design codes are applicable to concrete made with PET-aggregates. The compressive strength development of concrete containing all types of PET-aggregate behaves as in conventional concrete, though the incorporation of any type of PET-aggregate significantly lowers the compressive strength of the resulting concrete. PET-aggregate incorporation improves the toughness behaviour of the resulting concrete. This behaviour depends on the PET-aggregate's shape and is maximised for concrete containing coarse, flaky PET-aggregate. The splitting tensile and flexural strength characteristics are proportional to the loss in compressive strength of concrete containing plastic aggregates. Introduction The consumption of plastic has grown substantially all over the world in recent years and this has created huge quantities of plastic-based waste. Plastic waste is now a serious environmental threat to the modern way of living. In Portugal, post-consumer packaging accounts for almost 40% of total domestic waste and it is therefore an important source for the recycled materials market 1 . In a typical Portuguese municipality about 10-14% of all generated waste is plastic 1 . Plastic waste cannot be dumped in landfills because of its bulk and slow degradation rate. Recycling plastic waste to produce new materials, such as aggregate in concrete, could be one of the best solutions for disposing of it, given its economic and ecological advantages. The European aggregates demand is 3 billion tons per year, representing a turnover of around €20 billion. Some 90% of all aggregates are produced from natural resources. The other 10% come from recycled aggregates (6%) and marine and manufactured aggregates (2% each). Naturally, the use of waste materials as aggregate in concrete production will reduce the pressure on the exploitation of natural resources. The incorporation of plastic aggregate (PA) can significantly improve some properties of concrete because plastic has high toughness, good abrasion behaviour, low thermal conductivity and high heat capacity [23][24][25] . PA is significantly lighter than natural aggregate (NA) and therefore its incorporation lowers the density of the resulting concrete 22,26 . This property can be used to develop lightweight concrete. The use of shredded waste PA in concrete can reduce the dead weight of concrete, thus lowering the earthquake risk of a building, and it could be helpful in the design of an earthquake-resistant building 6 .
However, the incorporation of PA in concrete has several negative effects, such as poor workability and deterioration of mechanical behaviour 22,26 . The strength properties and modulus of elasticity of concrete containing various types of PA are always lower than those of the corresponding reference concrete containing NA only. The decrease in bond strength between PA and cement paste, as well as the inhibition of cement hydration due to the hydrophobic nature of plastic, are the reasons for the poor mechanical properties of concrete containing plastic. Treating plastic chemically and coating plastics with slag and sand powders can improve the mechanical performance of concrete by improving the interaction between cement paste and PA 14,27,28 . The prolonged curing of PET fibre in simulated cement pore-fluid can initiate the alkaline hydrolysis of PET and form some organic compounds, which may increase the interaction between plastic aggregate and cement hydration products 29 . However, the information available on the use of plastic waste as aggregate in concrete is not always adequate. For example, the workability behaviour of concrete containing similar types of PA is reported contradictorily in different references 22,26 . The shape and size of the aggregate have a significant influence on both fresh and hardened concrete properties. No thorough study is available on the effect of the shape of PA on the properties of the resulting concrete. Further research to evaluate plastic waste as an aggregate in concrete production is therefore required. This is the background to the work reported here, in which three types of recycled polyethylene terephthalate aggregate (PET-aggregate) of differing sizes and shapes were considered, so as to understand how size and shape influence the behaviour of the resulting concrete. The development of compressive strength, the most important concrete property, is analysed along with the relative tensile and flexural strength, with reference to compressive strength. The results are then analysed using the present Eurocode 2 and the European EN 206 standard specifications. Material and Methods The plastic waste used as aggregate was collected from a plastic recycling plant in Portalegre, Portugal. The plant mainly recycles post-consumer PET bottles collected as compressed bales (Figure 1) that come from urban and industrial collection sites. The bales of PET-waste mostly consist of dirty PET-bottles, which are usually contaminated with other materials and with some non-PET containers, such as PVC, HDPE and polypropylene bottles. The composition of a typical waste plastic raw material is presented in Table 1. In this plastic waste treatment plant, several steps are adopted to recycle waste plastic. The coarse flakes and fine fractions were obtained after mechanical grinding of PET wastes followed by cleaning and separation by physico-chemical methods. The plastic pellet is produced from plastic flakes. This material consists of predefined and even-sized PET-grains, free of contamination at the microscopic level.
For production of pellets, the flakes of PET are dosed to a reactor through a system capable of maintaining a vacuum in the reactor by using a dosing screw, according to predetermined conditions. The vacuum obtained is less than 10 mbar. The reactor is equipped with an agitation system that, by friction, promotes heating of the material to the drying temperature. The agitation system has three floors, which ensure uniform and gradual warming of the material to extrude. Feeding of the extruder is made through a window with a slider that controls the amount of material allowed. The heated material is extruded through an extruder spindle, with a polymer filter and a spinneret with holes. The heating and melting of the material is performed in vacuum, which allows the extraction of volatile contaminants. The extrusion process is relatively short, which limits the occurrence of secondary reactions during the melting stage. After passing through a spinneret, the melt is collected in a cooling bath that solidifies the polymer before being granulated in a rotary cutter in water. The mixture of water and grains of polymer is subjected to a vibratory separator and then the grains of polymer are centrifuged to remove excess water. The plastic pellets are then pneumatically transported to the weighing system and then to a packaging station. The coarse flakes (PC), fine fraction (PF) and plastic pellets (PP) are used as plastic aggregate in the preparation of structural concrete and are depicted in Figure 2. No further crushing of the PET-aggregates was done in the laboratory. The sieve analysis of the PET-aggregates was carried out according to method NP EN 933-2 and is presented in Table 2. CEM II A-L 42.5 R type cement was used in this work. Calcareous natural coarse aggregates of three different size ranges and quartzite natural fine aggregates of two different size ranges were used throughout. The experimental methods used to determine various aggregate properties, and the results, are presented in Table 3. The concrete mixes were prepared by the same method, which requires using exactly the same aggregate grading curve and concrete composition in terms of cement content, coarse and fine aggregate quantities and slump value. The differences between the various mixes are thus reduced solely to the coarse aggregates' nature. The Faury aggregate grading curve presented in Figure 3 was used in this work; the figure also shows the grading size distribution of the natural aggregates (NA), determined using NP EN 933-2. All types of aggregate were therefore separated into different size fractions by mechanical sieving. A total of nine concrete mixes containing three types of PET-aggregate, plus one reference concrete (exclusively with NA) mix, were prepared for a constant slump range of 120-135 mm (Table 4). Three sub-classes of concrete mixes were prepared by replacing 5%, 10% and 15% volume of NA by equal volumes of each type of PET-aggregate. The preparation of concrete mixes, their casting and the evaluation of their properties followed standard procedures. The different test methods used to determine fresh and hardened state concrete properties are presented in Table 5. Results for all mechanical properties are the average of three specimens.
Mechanical behaviour The development of compressive strength of the reference concrete and of those containing the three types of PET-aggregates in varying amounts is presented in Figure 4. The 7, 28 and 91-day compressive strengths (f_cm), standard and average deviations (S_dev and A_dev, respectively) of concrete with plastic aggregate as a substitution of 0 (reference), 5%, 10% and 15% of natural aggregate are given in Table 6. The figure indicates that the development of compressive strength of concrete containing all types of PET-aggregate follows a similar behaviour to conventional concrete, although the incorporation of any type of PET-aggregate significantly lowers the compressive strength of the resulting concrete. The increase of compressive strength in the initial curing period (0 to 28 days) is substantially higher than that for later curing periods. However, a significant proportion of the reduction in strength of the PC and PF mixes compared to the control is due to the increased water to cement ratio necessary to maintain slump (Table 4). The 28-day compressive strength is near the figures for 91 days in most cases, since it is known that concrete almost reaches its full strength during the first 28 days of curing. Albano et al. and Frigione et al. report similar observations for concrete containing PET-aggregates 5,7 . Figure 5 shows the relative strength of all types of concrete with respect to the strength of the 91-day concrete for different percentages of substitution. It appears that the early strength gain trend of concrete prepared with 15% volume replacement of NA by PC (PC15), with respect to 91-day strength, is slightly different from the equivalent trend for concrete made with 5% and 10% replacement (PC5 and PC10). The trends followed by PP and PF for all substitution levels are almost identical, however. The 7-day compressive strength of PC15 is low compared to its 91-day strength. After 28 days of curing it gains substantial strength but is still lower than the other types of concrete. The 7-day relative compressive strength of concrete containing PP and PF at all substitution levels is considerably higher than that for the control concrete specimens, and it is highest for concrete containing 15% PP-aggregate (PP15). The possible reason for the early strength gain for most of the concretes containing PET-aggregate is the low thermal conductivity of PET-aggregate. The low thermal conductivity may reduce the heat loss and therefore increase the temperature rise during hydration of the cement paste, which ultimately increases the strength of the concrete specimens. Kan and Demirboga also observed a substantial strength gain in concrete containing modified expanded polystyrene aggregate (MEPS) in the early curing periods 18 . The authors stated that the lower specific thermal capacity of MEPS-aggregate resulted in a reduced heat loss from the concrete, thereby increasing the heat of hydration. However, the reason for the lower early strength gain of PC15 is unknown, though the very high w/c ratios (Table 4) might have some effect. Substantial reductions in other strength properties (splitting tensile strength, TS, and flexural strength, FS) were also observed for all substitution patterns as the percentage of PET-aggregate incorporated increased. The reason is basically similar to that given in almost all studies related to concrete incorporating plastic aggregate: the weak interfacial binding between the plastic aggregate and the cement paste.
Relationship of compressive strength with other properties The ratio between the tensile and compressive strength can give information on the toughness behaviour of a concrete specimen 30 . Concrete of higher toughness exhibits higher values of this ratio. The tensile/compressive strength and flexural/compressive strength ratios are therefore determined and presented in Table 7. The ratios between the tensile and compressive strengths observed for all PET-aggregate containing specimens are higher than that for conventional concrete, and the value increases with PET content. Thus the incorporation of PET-aggregate in concrete mixes increases the toughness behaviour. For a particular amount of PET-aggregate addition, this order can be arranged as: PC > PF > PP, which indicates that the large-flake PET-aggregates can have more effect on improving the toughness behaviour of the resulting concrete than the other two fractions. The ratio between flexural and compressive strength behaves like the ratio between tensile and compressive strength. Figure 6 shows the specimens after failure during the tensile strength determination of various concrete specimens. The presence of PF at the 10% and 15% substitution levels, and of PC at all substitution levels, prevented the specimens from suddenly separating into two pieces, as was generally observed in the reference concrete, in the specimens containing PP at all substitution levels, and in PF at 5%. Thus concrete specimens with PET-aggregate are able to withstand additional loading after they crack. This is perceptibly more pronounced for concrete containing flaky PET-aggregates, where the specimens do not physically separate into two pieces under loading, possibly due to the bridging of cracks by PET-particles. Concrete containing flaky PET-aggregate may be able to do this better than that containing PET-pellets because of the differences in their load transfer ability. Once debonded from the concrete matrix, pellets are too short to transfer the applied load through interfacial frictional force, whereas flakes are longer and can transfer the applied load 31 . The percentage reduction of compressive strength was also compared with the percentage reduction of flexural strength with respect to the reference concrete, as presented in Figure 7. As in the Hannawi et al.
study (2010), the reduction in compressive strength with respect to the reference concrete was greater than the reduction observed in the flexural strength 8 . This difference is more pronounced for concrete with PC, which is coarser and flakier than the other two PET-aggregates. Incorporating PET-aggregate in concrete thus improves the relative flexural strength behaviour. The observed results also suggest that the flexural behaviour is dependent on the size and shape of the PET-aggregate. Regardless of the type of plastic, the correlations between the 28-day splitting tensile and flexural strengths (X-axis) and the 28-day compressive strength (Y-axis) can be represented by linear relationships (Equations 1 and 2, respectively). Figure 8 shows the relationship between the 28-day compressive strengths (Y-axis) and the corresponding dry densities (X-axis) of the concrete specimens. It can be seen that a decrease in the dry density of the concrete specimens is associated with a decrease in compressive strength. It should also be mentioned that increasing the content of all types of PET-aggregate in concrete lowers its dry density. Regardless of the type of plastic, the following linear relationship can be proposed to correlate dry density with compressive strength: Y = 0.1091X - 219.71; R² = 0.983 (3). Water absorption behaviour The 28-day water absorption capacities of concrete specimens containing the different PET-aggregates at various replacement levels are presented in Figure 9. The results reveal that the incorporation of PP-aggregate at all replacement levels, and of PF-aggregate at the 5 and 10% replacement levels, has little influence on the water absorption behaviour of the resulting concrete. In fact, incorporating PP-aggregate up to the 10% replacement level lowers the water absorption capacity of the concrete specimen. But the water absorption capacity of concretes containing 15% PF-aggregate replacement, and PC-aggregate at all replacement levels, is higher than that of normal concrete. For PC-aggregate, water absorption increases with higher replacement levels. The differences in percentage of compressive strength and water absorption capacity of concrete containing the various types of PET-aggregates from the control mix are presented in Figure 10. The positive and negative values on the Y-axis respectively indicate the increasing and decreasing percentage amounts of these parameters with respect to the amount for the reference concrete. The decreasing percentage of compressive strength for concretes containing 15% PF replacement, and for the concrete containing PC-aggregate at all replacement levels, is due to the higher porosity of these concretes, as higher water absorption generally indicates higher porosity. But the reduction in the compressive strength of concretes containing PP-aggregate at all replacement levels, as well as the concrete containing PF-aggregate at the 5 and 10% replacement levels, cannot be related to the water absorption capacity of these concrete compositions, as the latter values are nearly the same as, or better than, those of the reference. This indicates that another factor, such as PET-aggregate to cement paste binding, is also responsible for the strength reduction of concrete containing PET-aggregate.
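A correlation of the form of Equation 3 is an ordinary least-squares fit of strength on dry density. The sketch below shows how the coefficients and R² are obtained; the data pairs are illustrative placeholders lying near the reported line, not the measured values from this study.

```python
import numpy as np

density = np.array([2100.0, 2150.0, 2200.0, 2250.0, 2300.0])  # X: dry density, kg/m3
strength = np.array([9.6, 14.5, 20.8, 25.5, 31.0])            # Y: 28-day strength, MPa

slope, intercept = np.polyfit(density, strength, 1)  # least-squares line Y = aX + b
predicted = slope * density + intercept
r2 = 1.0 - np.sum((strength - predicted) ** 2) / np.sum((strength - strength.mean()) ** 2)
print(f"Y = {slope:.4f}X {intercept:+.2f}; R^2 = {r2:.3f}")
```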
Analysis of results using Eurocode 2 In spite of limitations such as the restricted amount of experimental data and the minimal variation of experimental conditions, the experimental results are analysed using the existing Eurocode 2 (EC 2004), as explained below 32 . According to Eurocode 2 (EC2), the relationship between mean cubic compressive strength (f_cm) and characteristic cylindrical compressive strength (f_ck) can be expressed by 33 :
f_ck = f_cm/1.25 (4)
According to this relationship, the 28-day cylindrical compressive strength of the reference concrete should be around 34.46 MPa. Again from EC2, the splitting tensile strength (f_ctm,sp) can be related to f_ck by the following expression:
f_ctm,sp (MPa) = [0.3 x (f_ck)^(2/3)]/0.9, where f_ck is in MPa (5)
Thus the predicted splitting tensile strength for the reference concrete should be around 3.53 MPa. The observed tensile strength (3.47 MPa) is almost the same as the predicted value for the 28-day splitting tensile strength of the reference concrete. A plot of the cubic compressive strength versus tensile strength for different concrete specimens is presented in Figure 11a. The solid line is extrapolated from data obtained using the EC2 expression for concrete with various types and amounts of PET-aggregate. From the figure, it can be concluded that the tensile strength of the reference concrete and of the concretes incorporating PP and 5% and 10% of PF is almost the same as, or slightly lower than, the value predicted by EC2. On the other hand, the tensile strength of concrete containing PC at all substitution levels, and of concrete containing 15% PF, is considerably lower than the value obtained from EC2. The deviations of the experimental results from the results established in EC2 are possibly due to the higher w/c ratios, the poorer workability and the greater porosity of this type of concrete compared with the other concrete mixes. According to EC2, the relationship between mean cubic compressive strength (f_cm) and flexural strength (f_ct,fl) of concrete below class C50/60 is expressed by 33 :
f_ct,fl (MPa) = 1.5 x [0.3 x (f_cm/1.25)^(2/3)], where f_cm is in MPa (6)
Thus, based on this relationship, concrete with a cubic compressive strength of 43 MPa should have a flexural strength of 4.76 MPa. The flexural strength observed for the reference concrete, i.e. 4.74 MPa, is almost equal to the value predicted by EC2. A plot of the cubic compressive strength versus flexural strength for different concrete specimens is presented in Figure 11b. The solid line is extrapolated from data obtained using the EC2 expression, calculated from the experimental compressive strength, for concrete with various types and amounts of PET-aggregate. Unlike the splitting tensile strength, the flexural strength of the reference concrete and of the concrete incorporating plastic behaves according to EC2.
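The EC2 arithmetic above is easy to verify. The short sketch below reproduces the quoted predictions for the reference concrete (cubic f_cm = 43 MPa), using Equations 4 to 6 as given in the text.

```python
f_cm_cube = 43.0                                  # mean cubic compressive strength, MPa

f_ck = f_cm_cube / 1.25                           # Eq. (4): cylindrical strength -> 34.4 MPa
f_ctm = 0.30 * f_ck ** (2.0 / 3.0)                # EC2 mean axial tensile strength
f_ctm_sp = f_ctm / 0.9                            # Eq. (5): splitting tensile -> about 3.53 MPa
f_ct_fl = 1.5 * 0.30 * (f_cm_cube / 1.25) ** (2.0 / 3.0)  # Eq. (6): flexural -> about 4.76 MPa

print(round(f_ck, 2), round(f_ctm_sp, 2), round(f_ct_fl, 2))
```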
Analysis of results using European standard EN 206

The European Standard EN 206 defines classes of concrete according to various environmental conditions and recommends relevant technical limits for concrete composition and strength class 34,35. Each class has various subclasses. The definitions of the various classes are presented in Table 8. Table 9 lists some relevant subclasses, plus the technical limits required for durable concrete in terms of maximum water-cement ratio (w/c), minimum 28-day characteristic compressive strength (strength class), and minimum air volume, if any, along with the properties of the concrete prepared in this research. Although more investigation is necessary, from Table 9 it can be concluded that concrete mixes prepared with 5% substitution of NA by PC, 5% and 10% substitution of NA by PF and 15% substitution by PP meet the requirements of concrete subclasses XC1, XC2 and possibly XF2. Furthermore, concrete containing 15% PF meets the specifications for XC1. Concrete prepared by substituting 5% and 10% of NA by PP meets all the subclass requirements indicated in Table 9 except subclass XD3, due to insufficient strength, and some other classes due to a high water-to-cement ratio. However, concrete mixes with 10% and 15% PC do not conform to any of the classes mentioned in this table. This is mainly because of the very poor workability of these concrete mixes and their high w/c values. In this sense, the addition of a superplasticizer to improve the workability of concrete containing plastic aggregate may be an interesting option to improve the properties, and these types of concrete would then probably meet various subclasses defined in Table 9.

Conclusions

The results of this investigation can be summarised as follows:
• The development of compressive strength of concrete containing all types of PET-aggregate is similar to that of conventional concrete, though the incorporation significantly lowers the compressive strength of the resulting concrete;
• The early compressive strength gain (0 to 7 days), relative to the strength determined after 91 days of curing, is higher for most of the concretes containing PET-aggregates than for conventional concrete;
• The incorporation of PET-aggregate in concrete increases the toughness behaviour. For a given amount of PET addition, the order is PC > PF > PP, which indicates that adding large-flake PET-aggregate has more effect on the improvement of the toughness behaviour of the resulting concrete than the other two fractions; and
• The splitting tensile and flexural strengths of concrete containing any type of PET-aggregate are proportional to its loss of compressive strength. This preliminary study has thus shown that the accepted and assumed relationships between engineering properties and compressive strength, as used in European design codes, can be applied to concrete containing PET-aggregate.
Figure 3. Faury grading curve (with markers) and the grading size distribution curves of NA.
Figure 4. Development of compressive strength of various concretes with increasing curing time.
Figure 5. Percentage of 7- and 28-day compressive strength of concrete containing PET aggregates and the reference concrete with respect to the 91-day strength, for different percentages of substitution.
Figure 7. Percentage reduction of flexural strength (FS) with compressive strength (CS) reduction in the various other concretes with respect to the reference concrete.
Figure 8. Relationship between 28-day dry density and compressive strength.
Figure 9. Water absorption capacity of concrete containing various amounts of PET aggregate.
Figure 10. Percentage reduction of water absorption (WA) and compressive strength (CS) of the various other concretes with respect to the reference concrete.
Figure 11. Cubic compressive strength versus (a) tensile strength; (b) flexural strength (the solid line is obtained using the EC2 expression).
Table 1. Composition of typical waste plastic raw materials.
Table 2. Sieve analysis of various PET-aggregates.
Table 3. Properties of the aggregates.
Table 5. Experimental methods used to evaluate concrete.
Table 6. Compressive strength of concrete with various percentages of replacement of natural aggregates (NA) by plastic aggregates.
Table 8 (fragment). Exposure classes: XC, corrosion of the reinforcement induced by carbonation (4 subclasses); XD (4 subclasses); XA, chemical attack (3 subclasses).
Table 9. Some concrete subclasses and comparison of the relevant properties of these classes with the concrete prepared in this investigation. *Cylinder/cube concrete strength class (N/mm²) based on cement of strength class 32.5.
5,411
2013-04-01T00:00:00.000
[ "Engineering", "Environmental Science", "Materials Science" ]
Improving Tumor-Treating Fields with Skull Remodeling Surgery, Surgery Planning, and Treatment Evaluation with Finite Element Methods

Tumor-treating fields (TTFields) are alternating electric fields (200 kHz) used to treat glioblastoma (GBM), one of the deadliest of all cancers. Glioblastoma is a type of malignant brain cancer which causes significant neurological deterioration and reduced quality of life, and for which there is currently no curative treatment. TTFields were recently introduced as a novel treatment modality in addition to surgery, radiation therapy, and chemotherapy. The fields are induced noninvasively using two pairs of electrode arrays placed on the scalp. Due to the low electrical conductivity of the skull, significant currents are shielded from the intracranial space, potentially compromising treatment efficacy. Recently, skull remodeling surgery (SR-surgery) was proposed to address this issue. SR-surgery comprises the formation of skull defects or thinning of the skull over the tumor to redirect currents toward the pathology and focally enhance the field intensity. The safety and feasibility of this concept were validated in a clinical phase 1 trial (OptimalTTF-1), which also indicated promising survival benefits. This chapter describes the FE methods used in the OptimalTTF-1 trial to plan SR-surgery and assess treatment efficacy. We will not present detailed modeling results from the trial but rather general concepts of model development and field calculations. Readers are kindly referred to Wenger et al. [1] for a more general overview of the clinical implications and applications of TTFields modeling.

Glioblastoma

GBM is the most common and one of the most aggressive primary malignant tumors of the central nervous system [2]. GBM is a WHO grade IV glial tumor characterized by invasive growth and significant anaplasia. The age-standardized incidence rate of GBM in Denmark is 6.3/100,000 person-years for males and 3.9/100,000 person-years for females, with a median age of 66 years and a median overall survival of 11.2 months [3], which corresponds well with survival estimates from other Western countries [4]. Today, standard therapy consists of maximal surgical resection followed by radiotherapy with concomitant and adjuvant temozolomide chemotherapy [5].

Tumor Treating Fields

In the search for new treatment options for GBM, TTFields have recently been introduced as a fourth and supplementary treatment modality applied in parallel with adjuvant temozolomide. TTFields are alternating electric fields of low intensity (100-500 V/m) and intermediate frequency (200 kHz) that are transmitted through the head and brain between electrodes placed noninvasively in an individualized pattern on the patient's scalp (Fig. 1). The electric fields affect dividing cells in particular and thereby primarily cancer cells. The therapeutic effect of TTFields is explained by two physical principles, dielectrophoresis and dipole alignment. In combination, the two principles disrupt the normal movement of charged and polarizable structures, including septin and tubulin, which are critical for successful mitosis. The disruption of these mechanisms thus leads to cell death [1].
In patients with newly diagnosed GBM, TTField therapy in combination with chemotherapy has been proven to have a significant effect on median overall survival (OS) and median progression-free survival (PFS) compared to chemotherapy alone [6]. A recent meta-analysis of studies on TTField treatment of GBM patients further concludes that TTFields are an efficient and safe treatment modality [7]. These positive effects recently led the National Comprehensive Cancer Network in the USA to introduce TTFields as a category 1 recommendation for a selected population of patients with newly diagnosed GBM [8]. In regard to the practical use of TTFields, patients are recommended to wear the active device as much as possible, designated as the level of compliance. A compliance threshold above 50% correlates positively with improved outcome, but the maximal effect on survival rates is attained with a compliance of >90% [9], and therefore continuous treatment is recommended whenever possible.

TTFields Dosimetry

In recent years, finite element (FE) methods have been used to estimate the distribution of TTFields intensity in the patient's head and tumor with the objective of improving technology design and treatment implementation. The rationale behind this approach is that high field intensities correlate positively with longer overall survival [11] and increased tumor kill rate in vitro [12,13], so field estimation can be considered an approach to TTField dosimetry, with potential applications for individual treatment planning as well as identification of expected responders to therapy and prediction of the expected treatment prognosis and topographical patterns of recurrence in the brain. Although previous studies have established that field intensity is a highly relevant surrogate dose parameter, it is well known that other factors such as field frequency, treatment duration, and spatial correlation also affect the efficacy of TTFields [14-16]. Ongoing work is being conducted to refine the dosimetry methods and establish a gold standard with a strong correlation to clinical outcome.

Skull Remodeling Surgery and the Utility of FE Modeling

As an example of the utility of FE modeling, we recently demonstrated that the high resistivity of the skull causes significant amounts of current to be shielded from the intracranial regions of interest, which may compromise treatment efficacy. To overcome this obstacle, we proposed a surgical skull remodeling procedure (SR-surgery) aiming to introduce localized skull defects (with reduced skull resistivity) and thereby redirect the tumor-inhibiting currents toward the underlying regions of interest (Fig. 2) [17]. SR-surgery encompasses thinning of the skull or formation of burr holes or larger skull defects (craniectomies) over the tumor region, which causes the intensity of the field (i.e., the treatment dose) to increase in these regions (Fig. 3) and further reduces the amount of wasted electrical energy deposited in the skin (Fig. 2b). In search of a feasible approach for clinical implementation, we previously explored a number of different craniectomy configurations and found that the field intensity in the underlying tumor increases with craniectomy diameter, until the skull defect is approximately the same size as the underlying region of interest.
When the defect area exceeds the size of the underlying pathology, currents are shunted around the intended target, and the excess area therefore does not contribute to further dose enhancement in the desired region (Fig. 4). In addition, we found that it was more effective to use multiple smaller burr holes distributed over the region of interest rather than a single craniectomy. With this approach it was possible to achieve higher field enhancement per unit of skull defect area, which made the approach favorable from a clinical safety perspective. Recently, we demonstrated the safety and feasibility of the SR-surgery concept in a clinical phase 1 trial (OptimalTTF-1, clinicaltrials.gov ID: NCT02893137). We found that SR-surgery combined with TTFields was not associated with serious adverse events related to the intervention, and the adverse events observed could be attributed to medical therapy or TTField treatment alone. In addition, the trial indicated a promising treatment efficacy, with prolonged overall survival and progression-free survival compared to historical data from comparable patient cohorts [18].

The Aim and Motivation of Field Modeling in SR-Surgery Planning and Evaluation

In the OptimalTTF-1 trial, we used field modeling for a number of purposes. The most important motivation was the need for a method to ensure that enrolled patients would gain an expected benefit from participation in the trial. Since all enrolled patients underwent SR-surgery, and thereby had to accept the potential risks of the surgery itself in addition to the risks associated with reduced skull protection in the operated region, we required the expected dose enhancement to be considerable for ethical reasons. Therefore, we set the threshold to an average expected field enhancement of >25% in the region of pathology, i.e., the remnant tumor or the peritumoral border zone. This was assessed using a reasonably quick and flexible modeling approach, in which a tumor mimicking the actual patient case was introduced virtually into a preexisting computational head model based on MRI data from a healthy individual (see below). The reason for adopting this approach was that we needed a technique for quick evaluation and exploration of the SR-surgery benefit in various configurations. In Denmark, there is a legal requirement to initiate treatment of cancer patients (i.e., to operate, in this case) within 2 weeks of suspected tumor diagnosis or establishment of disease progression. Therefore, it was not possible to construct detailed and personalized head models for each enrolled patient prior to surgery, as this procedure is very time-consuming. Instead, we used the flexible approach, with which model creation and surgery planning could be completed within approximately 2 days. The computations were initiated immediately upon patient enrollment. We used the model to explore different SR-surgery configurations and identify the configuration with the highest possible field gain for each patient. This configuration was then used to guide the surgery. As a predefined rule, the total skull defect area had to be <30 cm². In addition to validating treatment benefit, an important motivation was to be able to correlate topographical patterns of disease recurrence on MRI with detailed individual assessments of the TTField distribution in treated patients. This work is exploratory in nature and requires accurate computational models based on MRI data from individual patients.
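A minimal sketch of the enrollment criterion described above, assuming hypothetical array and mask names: the expected enhancement is the relative change in the mean field intensity over the region of interest between the pre- and post-surgery solutions, and a configuration is accepted when it exceeds 25%.

import numpy as np

def mean_roi_field(field: np.ndarray, roi_mask: np.ndarray) -> float:
    """Average field intensity (V/m) over a region-of-interest mask."""
    return float(field[roi_mask].mean())

def enhancement(field_before: np.ndarray, field_after: np.ndarray,
                roi_mask: np.ndarray) -> float:
    """Relative field enhancement in the ROI caused by a virtual SR-surgery."""
    before = mean_roi_field(field_before, roi_mask)
    after = mean_roi_field(field_after, roi_mask)
    return (after - before) / before

# Hypothetical usage: E_pre and E_post are field-intensity arrays from the
# solver, tumor_mask a boolean ROI mask.
# plan_is_acceptable = enhancement(E_pre, E_post, tumor_mask) > 0.25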
Moreover, these more accurate models would serve to validate the estimates obtained in the preliminary preoperative simulations. This work is still ongoing and beyond the scope of the present paper, but the concept illustrates how FE modeling may be used to address and explore many clinically relevant aspects of TTField therapy. The following sections will focus on describing the basic framework of the quick and flexible modeling technique that was used for the assessment of treatment benefit upon patient enrollment.

Physical Basis of the Field Calculations

Before we continue to discuss the construction of the head models, we will briefly present the physical framework assumed for the calculations. Given the dielectric properties of biological tissues, the low to intermediate frequency of TTFields (200 kHz), and the small width of the head (approximately 20 cm) [19], we can assume TTFields to behave in a quasi-stationary fashion. Therefore, the electric potential φ can be approximated with Laplace's equation

∇ · (σ∇φ) = 0,

where ∇· is the divergence operator and σ is the real-valued conductivity [20]. In our calculations, we used the FE approach to obtain an approximate numerical solution to Laplace's equation for the electrostatic potential. The field distribution was then derived by taking the gradient of the potential distribution (E = −∇φ), and the current density was subsequently obtained from Ohm's law, using the derived field and the scalar conductivity assigned to each element. All distributions were calculated separately for each of the electrode pairs, as they are activated sequentially in the real treatment scenario. In addition, calculations were performed both before and after introducing a virtually planned SR-surgery procedure into the model. This allowed us to calculate the absolute and relative changes in the average field intensity in the respective regions of interest, including the tumor and peritumoral border zone, and thereby to quantify the expected field enhancement caused by the intervention.

Creating the Head Models

The head models used for the computations were constructed from the dataset "almi5," which was created using SimNIBS [21] and is available from simnibs.org. The model was initially composed of five volumes, namely skin, skull, cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM). To incorporate the tumor, necrotic regions, and resection cavities, we post-processed the surface mesh STL files of the model for every patient. The post-processing was based on morphological measurements of the pathology regions on preoperative MRI images of the patient, including gadolinium-enhanced T1 sequences. The tumor was incorporated into the GM volume, the necrotic region into the tumor interior, and the resection cavity into the CSF volume. The edited surface meshes were "cleaned" of self-intersections and triangle degenerations using MeshFix. Subsequently, all volumes encapsulated by neighboring surfaces were tessellated with Gmsh (gmsh.info) to construct a tetrahedral computational mesh. The skull defects, i.e., the virtual SR-surgeries, were initially outlined in MeshMixer by producing closed (often spherical or cylindrical) compact surface files traversing the exterior and interior boundaries of the skull in the desired geometrical configuration and location. These volumes were then used to define binary volume masks used to select the elements to be contained in the surgical skull defects.
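To make the quasi-static formulation above concrete, the following is a minimal 2-D finite-difference analogue of ∇·(σ∇φ) = 0 with Dirichlet electrode potentials. The trial itself solved this equation with FE on tetrahedral head meshes, so the grid, geometry, and conductivity contrast here are illustrative only.

import numpy as np

n = 64
sigma = np.full((n, n), 0.25)              # skin-like background conductivity (S/m)
sigma[24:40, 24:40] = 1.654                # CSF-like, better-conducting inclusion

phi = np.zeros((n, n))
phi[:, 0], phi[:, -1] = 1.0, -1.0          # Dirichlet electrode potentials (V)

# Face conductivities (arithmetic means of neighbouring cells, for brevity).
w_e = 0.5 * (sigma[1:-1, 1:-1] + sigma[1:-1, 2:])
w_w = 0.5 * (sigma[1:-1, 1:-1] + sigma[1:-1, :-2])
w_n = 0.5 * (sigma[1:-1, 1:-1] + sigma[:-2, 1:-1])
w_s = 0.5 * (sigma[1:-1, 1:-1] + sigma[2:, 1:-1])

for _ in range(4000):                      # plain Jacobi sweeps suffice for a sketch
    num = (w_e * phi[1:-1, 2:] + w_w * phi[1:-1, :-2]
           + w_n * phi[:-2, 1:-1] + w_s * phi[2:, 1:-1])
    phi[1:-1, 1:-1] = num / (w_e + w_w + w_n + w_s)
    phi[0, :] = phi[1, :]                  # insulating top/bottom edges
    phi[-1, :] = phi[-2, :]
    phi[:, 0], phi[:, -1] = 1.0, -1.0      # re-impose electrode potentials

Ey, Ex = np.gradient(-phi)                 # E = -grad(phi)
E_mag = np.hypot(Ex, Ey)
J_mag = sigma * E_mag                      # |J| = sigma * |E| (Ohm's law)
print(f"mean |E| in the inclusion: {E_mag[24:40, 24:40].mean():.4f} V/cell")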
These elements were then assigned a uniform isotropic conductivity equal to that of skin, based on the assumption that the removed skull tissue would be replaced with better-conducting skin tissue. The holes in the skull were typically placed directly above the tumor and resection border. A number of configurations were then tested in a trial-and-error fashion, and the model selected for SR-surgery was visualized using Gmsh and used as a guiding framework for surgery in combination with neuronavigation technologies (Fig. 5).

Placement of TTField Transducer Arrays

The 3 × 3 TTField electrode arrays were positioned to maximize the TTField intensity for each patient and to portray the clinical treatment scenario planned for the individual patient. In a normal clinical setting, the array layout is determined using the NovoTAL® software (Novocure™). NovoTAL® uses individual measurements of the head size and tumor size/position to design a layout for each treated individual that maximizes the field intensity in the tumor. However, the alteration and redistribution of the current density and electric field caused by SR-surgery arguably invalidate this approach, and we therefore planned the array layouts using the guiding principles of optimized and individualized array placement outlined in Korshoej et al. [22,23] as well as generalized principles determining the distribution of TTFields [24,25]. Basically, the arrays were placed so that a row of edge transducers from one array in each pair overlaid the tumor (Fig. 6) and the remodeled region of the skull, while the other array in the same pair was placed on the opposite side of the skull, ensuring that currents would flow through the holes in the skull and toward the opposite side of the head and thereby induce high fields in the tumor. This approach is based on the observation that stronger fields are induced in tissues underlying the periphery of the electrode arrays (the "edge effect"). Hence, it is not desirable to have the skull holes located under the central parts of the array or at a far distance from the array, as this would reduce the amount of current likely to pass through the holes. The virtual placement of electrodes was performed using the SimNIBS GUI and a custom Matlab script (Mathworks, Inc.). For further details, see [23].

Boundary Conditions and Tissue Conductivities

Computations were conducted using Dirichlet boundary conditions defined by the anatomical boundaries of the head and fixed electrical potentials at the top of the array transducers. Specifically, the potential was set to 1 V in the transducers of one array in a pair, while the potential in the electrodes of the other array was set to −1 V. The numerical approximation was obtained using a conjugate gradient solver with a tolerance of 1e−9. All potentials, fields, and current densities were then rescaled to obtain a total current of 1.8 A through the arrays, equivalent to the amount of current delivered by the Optune™ device. This allowed us to model the actual scenario in which all electrodes in an array are connected to the same electrical source. In all calculations, a uniform isotropic scalar conductivity value σ was assigned to all nodes in a volume based on previous measurements from in vitro and in vivo studies (skin 0.25 S/m, bone 0.010 S/m, CSF 1.654 S/m, tumor 0.24 S/m, and necrosis 1.00 S/m [23]). All transducers were modeled with an underlying layer of conductive gel of 0.5 mm thickness and 1.0 S/m conductivity.
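Because Laplace's equation is linear in φ, the unit-voltage solution can simply be scaled to match the device current. A sketch of that rescaling is shown below, with the tissue conductivities listed above; I_simulated, a name introduced here for illustration, denotes the total current integrated over the transducer surfaces in the ±1 V solution.

# Conductivities used in the trial models (S/m), as listed in the text:
SIGMA = {"skin": 0.25, "bone": 0.010, "CSF": 1.654,
         "tumor": 0.24, "necrosis": 1.00, "gel": 1.0}

def rescale_to_device_current(phi, E, J, I_simulated, I_device=1.8):
    """Linearly rescale a +/-1 V Dirichlet solution so that the total array
    current matches the 1.8 A delivered by the Optune device.

    Laplace's equation is linear in phi, so scaling the boundary potentials
    scales phi, E = -grad(phi), and J = sigma*E by the same factor."""
    k = I_device / I_simulated
    return k * phi, k * E, k * J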
SR-Surgery in the OptimalTTF-1 Trial

In the OptimalTTF-1 trial, a total of 15 subjects were enrolled. The tumors were located in the temporal (N = 5), parietal (N = 2), frontal (N = 2), occipital (N = 1), frontoparietal (N = 3), and parietooccipital (N = 2) regions, and a field enhancement >25% could be obtained for all patients (median 37%, range 25-67%). The applied skull defects had a mean area of 10.5 cm² (range 7-24 cm²), and the mean absolute field values in the region of interest were in the range 100-200 V/m. Ten patients had 4-6 burr holes (15-18 mm diameter), and two had total craniectomies (elliptic with semiaxis diameters of approximately 60 × 50 mm and 85 × 65 mm, respectively). One had five 15 mm burr holes and one 25 mm mini-craniectomy, while the remaining two patients had seven and eight 20 mm burr holes, respectively. Figure 7 shows examples of two different configurations of SR-surgery, while a third example is given in Fig. 5f. The remodeled regions were placed above the resection cavity/border and residual tumor. Skull thinning was performed if possible and if the resection cavity extended to regions where the overlying skull had an estimated thickness above 3 mm. Skull thinning in areas below this limit was considered less significant because the relative gain in conductivity would be too small in these cases. For patients with temporal tumors, the squamous area of the temporal bone was therefore only perforated by burr holes, and bone bridges were left to support the overlying temporal muscle and maintain cosmetic integrity. All surgeries were conducted by trained neurosurgeons, and the operation was technically feasible and easy to perform.

Conclusion

In this chapter, we have introduced the general concept of TTFields as well as background information on the main indication for this treatment, i.e., glioblastoma. We have illustrated the technical framework and rationale for implementing FE modeling dosimetry as a method to plan and evaluate skull remodeling surgery in combination with tumor-treating field therapy of GBM. We have illustrated how SR-surgery can be used to increase the TTField dose in GBM tumors and the techniques used to quantify this enhancement. The presented framework was adopted in a phase 1 clinical trial to validate the expected efficacy for patients enrolled in the trial and further to calculate the field enhancement achieved for each patient. The trial, which has now concluded, showed that the SR-surgery approach was safe, feasible, and potentially improved survival in patients with first recurrence of GBM [18]. Two different modeling approaches were adopted: a fast but less accurate approach, in which a representative tumor or resection cavity was introduced virtually into a computational model based on a healthy individual, and one based on the individual patient's MRI data, which was more accurate but also too time-consuming to be used for quick preoperative calculations. Here we have mainly focused on describing the principles and workflow of the simplified framework. Although we considered this approach sufficient for the given purpose, future work is needed to improve the FE pipeline for better time-efficiency and preparation of patient-specific models, as exemplified in [17]. Such models would both improve anatomical accuracy and allow for individualized anisotropic conductivity estimation, giving a more accurate and realistic basis for the calculations.
In the OptimalTTF-1 trial, we conducted the necessary MRI scans for individualized modeling preoperatively, postoperatively, and at disease recurrence for most patients. Based on these data, we aim to conduct individualized and refined post hoc simulations to accurately reproduce the actual skull remodeling configurations, including skull thinning, and thereby provide more accurate estimates of the beneficial effect of SR-surgery. This will be highly valuable when exploring the dose-response relationship and the effects of craniectomy enhancement of TTFields in further detail. Furthermore, efforts are being made to streamline and automate the simulation pipeline to enable quick and accurate dose estimation and treatment planning before SR-surgery. Such procedures would ideally also use automated optimization procedures, as opposed to the current exploratory approach, to ensure maximal dose enhancement. Finally, we are finalizing the analysis of the OptimalTTF-1 trial, which will shed important light on the clinical significance of the concept. A future clinical phase 2 trial is being planned to test treatment efficacy. Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
4,809
2020-08-06T00:00:00.000
[ "Engineering", "Medicine" ]
Ficus thonningii Stem Bark Extracts Prevent High Fructose Diet Induced Increased Plasma Triglyceride Concentration, Hepatic Steatosis and Inflammation in Growing Sprague-Dawley Rats

BACKGROUND: Ficus thonningii extracts exhibit hypoglycaemic, hypolipidaemic and antioxidant activities. We investigated the potential of methanolic F. thonningii stem-bark extracts (MEFT) to protect growing Sprague-Dawley (SD) rats against high-fructose diet-induced metabolic derangements (MD) in a model mimicking children fed obesogenic diets. METHODS: Eighty (40 male; 40 female) 21-day-old SD rat pups were randomly allocated to and administered, for 8 weeks, one of five treatment regimens: (1) standard rat chow (SC) + water (PW); (2) SC + 20% (w/v) fructose solution (FS); (3) SC + FS + fenofibrate at 100 mg/kg bwt/day; (4) SC + FS + low-dose MEFT (LD; 50 mg/kg bwt/day); and (5) SC + FS + high-dose MEFT (HD; 500 mg/kg bwt/day). Body weight, glucose load tolerance, fasting blood glucose and triglyceride concentrations, plasma insulin concentration, sensitivity to insulin, liver mass and fat content, steatosis and inflammation were determined. RESULTS: Fructose had no effect on the rats' growth, glucose and insulin concentrations, glucose tolerance or insulin sensitivity (P>0.05), but increased triglycerides in females; it induced hepatic microsteatosis and inflammation in both sexes but macrosteatosis in females (P<0.05). In females, MEFT prevented the fructose-induced plasma triglyceride increase. Low-dose MEFT increased liver lipid content in females (P<0.05). The MEFT protected the rats against hepatic steatosis and inflammation, whereas fenofibrate protected against hepatic microsteatosis. CONCLUSION: MEFT can be used as prophylaxis against dietary fructose-induced elements of MD, but caution must be taken as low-dose MEFT increases hepatic lipid accretion in females, predisposing to fatty liver disease.

INTRODUCTION

Obesity is a global public health concern, with 13% of the global human adult population and 340 million children obese 1. In sub-Saharan Africa (SSA) 10.6% of children are obese 2, and in South Africa 13% of them are obese 3. Epigenetics contributes to obesity, but lack of or inadequate exercise and the intake of obesogenic diets increase the development of obesity 4. The risk of developing dyslipidaemia 5, insulin resistance, non-alcoholic fatty liver disease 6, metabolic syndrome 7 and type II diabetes 6 is increased in obese individuals. Metformin is used to manage type II diabetes mellitus, and fenofibrate is used to manage the dyslipidaemia associated with metabolic syndrome 8,9. These conventional pharmacological agents are monotherapeutic, relatively expensive, inaccessible to the majority of the global population and elicit side effects 10; hence the dire need for less costly, more accessible and less toxic alternatives. The majority of the global population makes use of plant-derived ethnomedicines 11. Eighty percent of the SSA population 1 and 27 million South Africans depend on plant-derived ethnomedicines for health care 12. Research on the efficacy and safety of these alternative medicines is critical to increasing access to global primary health care. Ficus thonningii is an ethnomedicine used to treat a number of conditions 13.
Its parts and extracts contain tannins, saponins and flavonoids 14 with antiobesity, antioxidant and antidiabetic activities 15, making it a potential prophylactic agent against diet-induced metabolic derangements (MDs). We evaluated the prophylactic potential of crude methanolic F. thonningii stem-bark extracts to protect against dietary fructose-induced MDs in growing Sprague-Dawley rats, mimicking children fed obesogenic diets.

Plant collection, identification and extract preparation

Fresh F. thonningii stem bark was collected at a farm (GPS: longitude 20° 13' 47" and latitude 28° 45' 9") in Bulawayo, Zimbabwe. The stem bark and samples of the tree's small branches were transported overnight to the University of the Witwatersrand, South Africa, where John Burrows, a nature conservationist, identified and authenticated the plant. Cut strips of F. thonningii stem bark were dried in an oven at 40°C for 24 hours and then milled into a fine powder. The stem bark extract was prepared as described by Musabayane et al 16. Briefly, 25 g of the powder was macerated in 100 mL of 80% methanol (Merck Chemicals, Johannesburg, South Africa) for 24 hours with continuous stirring. Immediately thereafter the mixture was filtered using filter paper (Whatman®, No 1, size 185 mm, pore size 7-11 µm). The filtrate was concentrated in a rotary evaporator at 60°C and then dried in an oven at 40°C for 12 hours. The dried extract was stored at 4°C in sealed glass bottles until use.

Study site and ethical clearance

The study, approved by the Animal Ethical Screening Committee of Wits University (AESC number: 2016/05/24/C), was conducted within the Wits Animal Research Facility and the School of Physiology of Wits University. Handling of and procedures on the rats followed international guidelines on animal use in research.

Rat management

The eighty 21-day-old SD rat pups were given a 2-day habituation period to familiarise them with handling and the experimental environment. Each rat was individually housed in an acrylic cage with a feeding trough and a drinker. Bedding of clean wood shavings was changed twice weekly. Room temperature was maintained at 24±2°C. A 12-hour light/dark cycle was maintained, with lights on from 0700 to 1900 hours. Standard rat chow (Epol RCL Food, Centurion, South Africa) and drinking fluid (tap water or 20% (w/v) fructose solution, depending on treatment) were provided ad libitum.

Oral glucose tolerance test

Following 54 days on the treatments (post-natal day 77), the rats were subjected to an oral glucose tolerance test (OGTT) following an overnight fast but with ad libitum access to drinking water. A Contour Plus glucometer was used to determine the fasting blood glucose concentration with blood from a pin-prick of each rat's tail vein 18. Immediately thereafter, each rat was gavaged with 2 g/kg body weight of sterile 50% (w/v) D-glucose (Sigma, Johannesburg, South Africa) solution. Post-gavage blood glucose concentrations were measured at 15, 30, 60 and 120 minutes.

Terminal procedures and measurements

After a 48-hour recovery from the OGTT on their respective treatments, the rats were again fasted for 12 hours. Fasting blood glucose and triglyceride concentrations were measured using a calibrated Contour Plus® glucometer and an Accutrend GCT meter. Each rat was then euthanised by intraperitoneal injection of 200 mg/kg bwt sodium pentobarbitone (Euthanaze, Centaur Labs, Johannesburg, South Africa).
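For illustration, OGTT curves sampled at the time points above are often summarised by a trapezoidal area under the curve (AUC). Note that this is only a common convention and not the analysis used in this study, which analysed the OGTT data with a repeated-measures ANOVA (see the statistical analysis section below); the glucose values in the sketch are hypothetical.

import numpy as np

# OGTT sampling schedule from the protocol: fasting (0) and 15, 30, 60,
# 120 min after the 2 g/kg glucose gavage.
t = np.array([0, 15, 30, 60, 120])              # minutes
glucose = np.array([4.8, 8.9, 7.6, 6.2, 5.1])   # mmol/L, hypothetical rat

total_auc = np.trapz(glucose, t)                # trapezoidal AUC (mmol/L*min)
incr_auc = np.trapz(glucose - glucose[0], t)    # AUC above the fasting baseline
print(f"total AUC = {total_auc:.0f}, incremental AUC = {incr_auc:.0f} mmol/L*min")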
Each rat's blood, collected into heparinised blood collection tubes via cardiac puncture, was then centrifuged for 10 min at 5000 × g. Plasma was decanted into microtubes and stored at −20°C pending determination of plasma insulin concentration. Livers were dissected out and weighed; a sample was preserved in 10% phosphate-buffered formalin and the remainder was frozen at −20°C for liver lipid content determination.

Insulin determination and estimation of insulin resistance

An ELISA kit (ElabScience Biotechnology, Texas, USA) with monoclonal insulin antibodies specific for rat insulin was used to determine plasma insulin concentration. Absorbances were read at 450 nm on a plate reader (Multiskan Ascent, Lab System Model 354, Helsinki, Finland). Insulin concentrations were determined from the constructed standard curve. Fasting blood glucose and plasma insulin data were used to compute fasting whole-body insulin sensitivity and β-cell function using the homeostasis model assessment of insulin resistance as follows: HOMA-IR = [fasting plasma glucose (mg/dL) × fasting plasma insulin (µU/mL)] / 405 19.

Liver lipid content and histology

Liver lipid content was determined as described by the Association of Analytical Chemists 20 using a Tecator Soxtec apparatus. Assays were done in triplicate. The formalin-preserved liver samples were processed in an automatic tissue processor (Microm STP 120, Thermo Scientific, Massachusetts, USA), embedded in paraffin wax, rotary microtome-sectioned (RM 2125 RT, Leica Biosystems, Germany) at 3 µm, mounted on glass slides and then haematoxylin and eosin stained. A Leica ICC50 HD video camera linked to a Leica DM 500 microscope (Leica, Wetzlar, Germany) captured photomicrographs of the stained sections, which were analysed using ImageJ software. Stained liver sections were scored semi-quantitatively for macro-/micro-steatosis and inflammation according to Liang et al 21. Hepatocellular vesicular micro-/macro-steatosis was analysed based on the total area of the liver parenchyma affected per camera field (×20) and scored according to the criteria: grade 0 = <5%; grade 1 = 5-33%; grade 2 = 33-66%; and grade 3 = >66%. The number of inflammatory cell aggregates in the liver parenchyma was counted per camera field (×100) and scored as follows: grade 0 = no foci per camera field; grade 1 = 0.5-1.0 foci per camera field; grade 2 = 1-2 foci per camera field; grade 3 = ≥2 foci per camera field.

Statistical analysis

Parametric data are presented as mean ± SD and non-parametric data as medians and interquartile ranges. GraphPad Prism 6.0 (GraphPad Software, San Diego, California, USA) was used to analyse the data. A repeated measures ANOVA was used to analyse the OGTT data. Other parametric data were analysed using a one-way ANOVA, with mean comparisons done via the Bonferroni post hoc test. The Kruskal-Wallis test was used to analyse the scores for macro-/micro-steatosis and inflammation. Medians of non-parametric data were compared using Dunn's post hoc test. Significance was set at P < 0.05.

Growth performance and tolerance of an oral glucose load

Figures 1A and 1B show the induction and terminal body masses of the male and female rats, respectively. The induction body weights of the male and female rats (Figures 1A and 1B) were similar. The treatment regimens had no effect (P>0.05) on the rats' terminal body masses, but the rats grew significantly during the trial (P<0.05). (Figure 1 legend: induction mass and terminal mass; Ficus thonningii extract, 500 mg/kg body weight/day.)
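A small Python sketch of the quantitative definitions above, i.e., the HOMA-IR formula and the semi-quantitative histology scores. The handling of the boundary values at 33% and 66% (and at 1 and 2 foci) is an assumption, since the published cut-offs overlap at the edges; the example input values are hypothetical.

def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """HOMA-IR as used in the study: (glucose [mg/dL] x insulin [uU/mL]) / 405."""
    return (glucose_mg_dl * insulin_uU_ml) / 405.0

def steatosis_grade(affected_percent: float) -> int:
    """Steatosis score (Liang et al.): % of parenchyma affected per x20 field."""
    if affected_percent < 5:
        return 0
    if affected_percent <= 33:
        return 1
    if affected_percent <= 66:
        return 2
    return 3

def inflammation_grade(foci_per_field: float) -> int:
    """Inflammatory-cell aggregate score per x100 camera field."""
    if foci_per_field == 0:
        return 0
    if foci_per_field <= 1:
        return 1
    if foci_per_field <= 2:
        return 2
    return 3

print(homa_ir(90.0, 12.0))       # ~2.67 for hypothetical fasting values
print(steatosis_grade(40.0))     # -> grade 2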
Data presented as mean ± SD; n = 7-8.

Circulating metabolite concentrations and insulin sensitivity

The effects of the methanolic F. thonningii stem bark extracts on blood glucose, blood triglyceride and plasma insulin concentrations and on the HOMA-IR of the male and female rats are shown in Tables 1A and 1B. Dietary fructose induced hepatic micro-steatosis and inflammation in male rats (Figure 4A, Table 2A). Dietary fructose also induced macro- and micro-steatosis and hepatic inflammation (P<0.05) in female rats, which were prevented by both MEFT doses but not by fenofibrate (Figure 4B, Table 2B). (Figure 4 legend: data presented as mean ± SD; n = 7-8; PV = portal vein; white, grey and black arrows indicate inflammatory cell aggregates, micro- and macro-steatosis, respectively; scale bar = 50 μm.)

DISCUSSION

Dietary fructose, alone or with fenofibrate or crude MEFT as interventions, had no effect on the rats' growth performance, suggesting that these interventions did not compromise growth. Our findings disagree with those of Pektaş et al 22 and Toop and Gentili 23, who observed increases in the body weight of fructose-fed rats. We contend that the difference between our findings and those of Pektaş et al 22 and Toop and Gentili 23 was due to differences in the age of the rats. We used weanling, growing rats, which are known to channel "extra" fructose calories to support growth and development, unlike adult rats, which accrete "excess" calories as adipose tissue. The lack of effect of dietary fructose on body weight that we report is consistent with the observations of Grau et al 24 of similar terminal body weights in adolescent SD rats fed a 60% fructose solution as drinking fluid. Badiora et al 25 showed that orally administered F. thonningii stem-bark extracts increased the body weight of rats. In our study the crude MEFT neither compromised nor promoted rat growth; thus they can be used without the risk of compromising animal and/or child growth.

Dietary fructose-induced hyperglycaemia, deranged lipid profiles and insulin resistance are well documented 26. Our findings show that the high-fructose diet, alone or with fenofibrate or crude MEFT as interventions, did not alter the rats' glucose and insulin concentrations or HOMA-IR indices, but chronic consumption of dietary fructose increased the female rats' blood triglyceride concentration. Thus we infer that consumption of a high-fructose diet for 8 weeks induced hypertriglyceridaemia in female rats only, and did not elicit hyperglycaemia or insulin resistance in growing male and female rats. Crude MEFT and fenofibrate did not elicit dysregulation of blood glucose and insulin concentrations. Grau et al 24 contend that fructose consumption stimulates de novo hepatic lipogenesis, which increases blood triglycerides in rodents. We report, in female rats, a significant increase in plasma triglycerides with chronic fructose consumption compared to control counterparts, and contend that increased de novo hepatic lipogenesis generates triglycerides that are exported to the systemic circulation, hence the increase in plasma triglycerides. We showed similarities between the plasma triglyceride concentration of rats administered the control regimen and that of counterparts fed the high-fructose diet with fenofibrate and/or crude MEFT. This demonstrates that both orally administered low- and high-dose MEFT and fenofibrate prevented the dietary fructose-mediated increase in plasma triglyceride concentration in female rats. Crude MEFT can therefore be used as prophylaxis against fructose-induced hypertriglyceridaemia in growing female rats and possibly girl-children.
Mapfumo et al 27 and Lê et al 28 reported that fructose-rich diets caused hepatic lipid accretion in growing rats' livers, but we show that dietary fructose did not affect the rats' liver lipid content across treatment regimens, suggesting that dietary fructose per se, or with either orally administered high-dose MEFT or fenofibrate, did not alter the liver lipid storage of the rats. We also show that female rats fed a high-fructose diet with the low-dose MEFT had the highest liver lipid content. This suggests that the low-dose MEFT may stimulate excessive hepatic lipid accretion and thereby predispose female rats to a higher risk of developing fatty liver disease. Therefore, despite its prophylactic potential against elements of dietary fructose-induced MD, the use of low-dose MEFT must be approached with caution in growing females.

In growing rats we show that chronic dietary fructose intake elicited hepatic inflammation in both rat sexes, micro- and macro-steatosis in females, and micro-steatosis in males. In male rats, crude MEFT and fenofibrate prevented the dietary fructose-induced hepatic microsteatosis, but hepatic inflammation was prevented only by the MEFT. The low- and high-dose MEFT and fenofibrate mitigated the dietary fructose-induced hepatic steatosis and inflammation in female rats. The crude MEFT appear more efficacious in protecting against dietary fructose-induced steatosis and inflammation compared to fenofibrate, which did not attenuate the dietary fructose-induced hepatic inflammation in growing male rats. We speculate that the multitherapeutic effects of the phytochemicals in crude MEFT make them better prophylactic agents compared to the monotherapeutic fenofibrate.

CONCLUSIONS

Fructose elicited hypertriglyceridaemia in a sexually dimorphic manner and caused hepatic inflammation and steatosis in both rat sexes. Crude low- and high-dose MEFT prevented the dietary fructose-induced hypertriglyceridaemia, hepatic inflammation and steatosis; hence they can be used as prophylaxis against elements of diet-induced MD in growing SD rats and maybe in children. Caution must be taken, as low-dose MEFT can predispose females to an increased risk of developing fatty liver disease.
3,420.2
2021-11-24T00:00:00.000
[ "Medicine", "Environmental Science", "Biology" ]
Infrared Thermographic Evaluation of Temperature Modifications Induced during Implant Site Preparation with Steel vs. Zirconia Implant Drill

Background: The heat produced during implant site osteotomy can potentially interfere with and influence the osseointegration process of a dental implant. The objective of this in vitro investigation was to measure the temperature changes during simulated osteotomies in bovine rib bone. The measurements were made at the apical area of osteotomies prepared with steel implant drills compared to zirconia implant drills. Methods: Steel cylindrical drills (2 mm) and zirconia cylindrical drills (2 mm) were evaluated in vitro using bovine rib bone, for a total of five groups based on the number of osteotomies performed with each drill: 10, 20, 40, 90, or 120 osteotomies. Bone and apical drill temperatures were measured by means of infrared thermography. The drilling time was measured for each preparation. Results: Statistically significant differences were found in the temperature measurements in the bone and the apical portion of the drills between the study groups (p < 0.05). A statistically significant difference was also observed in the drilling time between the steel cylindrical drills (2 mm) and the zirconia cylindrical drills (2 mm) (p < 0.01). Conclusions: The drill material has an impact on the temperature changes that occur at its apical portion during bone preparation for implant placement.

Introduction

Oral implant rehabilitation is a highly predictable procedure characterized by 10-year success rates of over 97% [1-3]. Bone healing around the implant surface is influenced by different factors such as heat generation during implant site preparation [4,5], insertion torque, micro and macro implant surface characteristics, and quality of bone [6,7]. Bone healing around fixtures is a biological phenomenon involving the proliferation and differentiation of pre-osteoblasts into osteoblasts and the production and mineralization of osteoid matrix, followed by the organization of the bone-implant interface [8]. These complex biological phenomena allow the dental implant to achieve osseointegration [8]. The implant bed preparation is very important and can negatively influence the bone healing process [1]. During implant site preparation, the amount of heat generated and transferred to the bone can be limited by different strategies, such as sequential drilling with increasing drill diameter [21]. Additionally, different drill materials have been proposed, such as steel, zirconia, and titanium nitride. Zirconium dioxide, or zirconia, is a material widely used in implantology for its biocompatibility as well as its physical and aesthetic properties [25,26]. In clinical practice, zirconia is used for implant abutments and superstructures because of its durability, strength, corrosion resistance, and response to disinfection and sterilizing agents [25]. The aim of this study was to compare the temperature changes during implant bed preparation using a steel vs. a zirconia implant drill of the same cylindrical shape.

Materials and Methods

Steel and zirconia implant drills were evaluated in bovine rib bone. Twenty-four bovine ribs were cleaned and freed of all soft tissue residues, then immersed in physiological saline to simulate body temperature. The inferior half of each bone was submerged in a temperature-controlled saline bath (37.0 °C). Care was taken to select samples where the bone was as homogeneous as possible and the cortical layer was of a similar thickness for all implant sites.
Each bovine rib was then secured to the aluminum base plate with adjustable clamps. Site preparation began when the internal temperature of the bone, as measured by infrared thermography, reached the bath temperature of 37.0 ± 0.1 °C. Saline solution at room temperature was used to irrigate the site continuously throughout drilling at a rate of 40 mL/min. Thermal measurements were performed in a climate-controlled room (temperature: 23-24 °C; relative humidity: 50% ± 5%; no direct ventilation on the bone). The steel and zirconia drills evaluated were cylindrical (2 mm) with a double-twist system. Twenty sets of new steel drills (Sweden Martina, Padova, Italy) and twenty zirconia drills (SAFE Implant, Malaysia) were evaluated for each system (Figures 1 and 2A,B). The drills were used sequentially for up to 120 osteotomies, and the experimental data were grouped according to the number of osteotomies performed, for a total of five wear groups: Group 1, 10 osteotomies; Group 2, 20 osteotomies; Group 3, 40 osteotomies; Group 4, 90 osteotomies; and Group 5, 120 osteotomies. All osteotomies were prepared to a depth of 10 mm at a speed of 800 rev/min under abundant external irrigation with saline solution. The rotational speed of 800 rpm was used for easy comparison with previous work [27]. A 20:1 implant handpiece with a physio-dispenser (Vario-Surgery NSK, Tochigi, Japan) was mounted on a universal testing machine so that there was a constant drill load (Figure 3). Continuous drilling was performed with a Lloyd 30K universal testing machine (Lloyd Instruments Ltd., Segensworth, UK), with a constant load of 2 kg applied during the entire implant site preparation and a constant torque of 40 N/cm. Moreover, the drilling depth of 10 mm was electronically set for both drill groups using the Lloyd 30K universal testing machine to ensure the reliability and repeatability of the experiment. During implant preparation, the bone rib was always kept in a thermostat-controlled saline bath, leaving 3 mm of bone emerged out of the solution. The drills were not sterilized or disinfected, only cleaned. The time taken to perform each osteotomy was recorded and expressed in seconds.

Thermal image series during implant site preparation were obtained using a 14-bit digital infrared camera (FLIR SC3000 QWIP, FLIR Systems, Danderyd, Sweden). The acquisition parameters were: 320 × 240 focal plane array; 8-9 µm spectral range; 0.02 K noise equivalent temperature difference (NETD); 50 Hz sampling rate; optics: germanium lens, f 20, f/1.5. Images were acquired at a rate of 10 images per second and subsequently re-aligned using an edge-detection-based method implemented with in-house software. A video was recorded, and the photos were extrapolated via dedicated software (FLIR Reporter, Danderyd, Sweden). The infrared thermographic system was positioned at a focal distance of 1 m from the specimens. The implant bed was positioned so that it was perpendicular to the surface from which the thermal imaging system measured any observed temperature change. To avoid the interference of water with the infrared radiation emitted from the specimens, a plastic screen was applied that protected the flat bone surface of interest from the irrigant. Temperature changes in cortical bone during implant bed preparation were determined using these images (Figures 4 and 5). The temperature changes in the apical portion of the drill were determined from thermal images taken after finishing the preparation of the implant bed and removing the drill from the bone (Figures 4 and 5).
Statistical Evaluation

A power analysis was performed using clinical software to determine the number of drills needed to achieve statistical significance for the quantitative analyses of temperature. A calculation model was adopted for dichotomous variables (yes/no effect), setting the expected effect incidence at 85% for the zirconia drill and 20% for the steel drill, with alpha = 0.05 and power = 90%. The optimal number of samples for analysis was 20 drills per group. The data were analyzed with the Shapiro-Wilk test of normality and a t-test for the zirconia and steel drill samples. The differences in temperature between the five osteotomy groups were analyzed using Welch-corrected ANOVA followed by the Games-Howell post hoc test. Differences were considered statistically significant at p < 0.05. (Figure 5: infrared thermography temperature evaluation of the steel cylindrical drill (2 mm).)
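A sketch of the statistical pipeline named above (Shapiro-Wilk, Welch-corrected ANOVA, Games-Howell), assuming the pingouin package for the latter two tests; the temperature values are simulated placeholders, not the study data.

import numpy as np
import pandas as pd
import pingouin as pg       # assumed available; provides Welch ANOVA and Games-Howell
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical bone temperatures (deg C) for the five osteotomy-wear groups.
df = pd.DataFrame({
    "group": np.repeat(["G1", "G2", "G3", "G4", "G5"], 20),
    "temp":  np.concatenate([rng.normal(m, s, 20) for m, s in
                             [(38.2, 0.4), (39.0, 0.6), (39.8, 0.8),
                              (41.0, 1.2), (42.4, 1.7)]]),
})

# Normality per group (Shapiro-Wilk), as in the paper.
for g, sub in df.groupby("group"):
    print(g, stats.shapiro(sub["temp"]).pvalue)

# Welch-corrected one-way ANOVA followed by the Games-Howell post hoc test.
print(pg.welch_anova(dv="temp", between="group", data=df))
print(pg.pairwise_gameshowell(dv="temp", between="group", data=df))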
Results

The mean temperatures produced in the cortical bone during implant preparation are shown in Table 1. The rise in temperature was statistically higher when more than 20 osteotomies were made, for both groups (p < 0.05). No statistical difference was detected in Group 1 (p = 0.54). The zirconia group showed statistically lower bone temperatures compared to the steel drills in Group 2, Group 3, and Group 4. After 120 osteotomies, the steel group showed a bone temperature of 42.45 ± 1.70 °C, compared to average values of 40.80 ± 0.85 °C for the zirconia drills (Table 1, Figure 6). At 120 osteotomies, the mean temperature produced in the apical portion of the drill during implant preparation was 42.15 ± 1.14 °C for the steel drill and 40.62 ± 1.00 °C for the zirconia drill (Table 2). A statistical difference in the apical temperature of the drill was present in all groups (p < 0.05), and the difference between the groups increased as the number of osteotomies increased (p < 0.01) (Table 2). A statistical difference was also detected in the time necessary to perform the osteotomy in all groups (Table 3).

Discussion

The most interesting finding of the present study is that there was a statistically significant increase in temperature and drilling time in the implant bed sites prepared with steel drills. The temperature difference between the steel and the zirconia drill was 1.5 °C. Within the limitations of this study, the recorded temperature differences are not critical to the health of the peri-implant bone. However, this difference has no clinical relevance if interpreted as an absolute value. In fact, these results are influenced by the force applied to the drill and by the feed rate. Inappropriate pressure during drilling may cause higher bone temperatures, which can in turn affect the health of the peri-implant bone [28]. Moreover, implant bed preparation [29] is complex, and the amount of pressure is influenced by multiple factors such as rotation speed [29,30] and feed rate [31,32]. In clinical practice, it is impossible to fully control the pressure and feed rate of the drill. For these reasons, it can be hypothesized that in clinical practice the temperature is higher than that observed in the present study. In fact, many factors that can influence heat generation during implant bed preparation, including drilling speed [33,34], drilling depth [35], drill geometry [36,37], sharpness of the cutting tool [38], use of internal or external irrigation [39], use of graduated versus one-step drilling [40], intermittent versus continuous drilling, and drill material [41], are controllable in clinical practice, while the pressure applied to the drill [34] is not. In the zirconia group, the outline of the implant bed was well defined even after 120 osteotomies. The smaller temperature increase observed in the implant bed sites prepared with the zirconia drills was probably due to their great resistance to wear. Furthermore, zirconia is also known to be a good thermal insulator. The use of zirconia is interesting because its conductive behaviour in bone tissue is almost equivalent to that of titanium implants. Moreover, zirconia drills induce less damage during implant bed preparation and are advantageous for bone healing: when used for implant bed preparation, they positively influence bone healing compared to stainless steel drills [42]. We found that the generation of frictional heat during osteotomies for implant preparation is influenced by the drill material, especially when implant sites are prepared in dense cortical bone.
In the present research, we chose bovine ribs, which are similar to the human mandibular bone in terms of density and the ratio between cancellous and cortical bone [14]; this model has been used by many authors. A study showed that stainless steel and zirconia drills could be used up to 50 times without showing severe signs of wear and deformation [43]. Some studies have demonstrated that heat generation during implant bed preparation plays a significant role in implant failure [17,44]. In fact, heating of the bone induces bone devascularization, loss of vitality of the periosteum, and denaturation of alkaline phosphatase [45]. Previous research by the current authors used a thermocouple to measure the temperature change induced during implant site preparation in a bovine rib model [20]. A testing model was subsequently developed to visualize the temperature changes during implant site preparation under saline irrigation. A study that used external irrigation during drilling of bovine bone showed that the temperature increases measured with the thermocouple were significantly higher in the cortical bone and increased with the number of times the drills were used [20]. In other studies, the authors used thermocouples, which provided information only about thermal changes in the area close to the drill [46-48]. These studies concluded that irrigation is more critical to the control of temperature elevation than the material of the drill. Furthermore, a recent study concluded that cooler irrigating solutions can confer benefits in the preparation of the implant bed by eliminating several factors that may cause bone overheating [49]. The thermocouple is fixed to the bone and therefore has the disadvantage of not being able to capture the changes in temperature in the rotating drill itself. In the present research, we used infrared thermography (IRT) because this method of measuring heat provides information about the changes in temperature in the rotating drill itself. IRT has the advantage of measuring the temperature of the drill during implant bed preparation, but it does not provide information about the temperature changes deep in the implant bed; its disadvantage is that only surface temperatures can be evaluated. IRT is a well-known technique that measures the infrared energy emitted from an object, converts it into a radiometric thermal image, and displays the surface temperature distribution. This technique is extensively used in other medical fields for evaluating the thermal distribution of a body without any contact between the body and the sensors. It is used for evaluating cutaneous temperature distribution and cutaneous blood perfusion [50], and for detecting varicocele [51], diabetic neuropathy [52], brain activity (thermoencephaloscopy) [53], and breast cancer [54]. This technique was first used for evaluating the temperature of bone during implant bed preparation in 2011 [46] and is now also used in the dental implantology field for measuring bone thermal changes [27,55]. The study model used in this work allowed us to evaluate the temperature in the cortical bone and in the apical portion of the drills and to demonstrate that these temperature modifications were correlated with the drill geometry. The results of the present study demonstrate that the material of the drill is also an important factor in heat generation during implant site preparation.
In the present study, no consideration was given to the influence of disinfection and sterilization or to the extent of drill use. Although many factors may play a role in drill cutting efficiency and bone temperature, it is their net effect that has clinical relevance. A review on bone drilling has investigated methods for reducing thermal osteonecrosis [55]. In fact, the implant failure rate for osseointegration is influenced by many factors, one of which is thermal damage in the bone tissue; this damage is in turn influenced by drilling speed, feed rate, cooling, drill diameter, drill point angle, drill material and wear, drilling depth, pre-drilling, drill geometry, and cortical bone thickness [56]. To reduce heat generation during bone drilling, drill design, drilling parameters, and coolant delivery and temperature have been studied. These issues have not yet been clarified, because it is difficult to define the variable most responsible for bone heating during drilling. It is also difficult to measure the bone temperature during drilling, because bone is a composite of organic and inorganic components and has anisotropic behavior [57]. Moreover, the medullary cavity is a gelatinous structure that contributes to thermal dissipation. For these reasons, it can be hypothesized that in clinical practice the temperature is higher than that observed in the present study. An in vitro study is a simple way to test some hypotheses, and the methods used in the present study could provide valuable information for implantology, but they represent a simplification of the clinical reality. The outcomes of the present study are therefore insufficient for precise and conclusive results, and different variables lead to experimental errors. In fact, bone is a complex, anisotropic, mineralized connective tissue with organic and inorganic components. Moreover, there are great individual differences, and the ribs used in this study were inhomogeneous in cortical bone thickness, even though the specimens were drilled in the same position. In this study, we used an in vitro model very different from vital bone, while in clinical practice drilling is performed in bone with a blood flow response to the surgical trauma [58]. Finally, the drill shapes used were very similar, but not identical. This aspect can be considered negligible given the low friction forces related to the reduced diameter and the high penetrating capability of the drills investigated in this study, but it could be critical in the case of larger drill diameters.

Conclusions

In conclusion, drill material plays an important role in thermal changes during implant bed preparation. Implant site preparation with zirconia drills could represent a useful tool for heat control during bone osteotomy in clinical practice.

Conflicts of Interest: The authors declare no conflict of interest.
Ways of Being of Equipment: A Heideggerian Enquiry into Design Process

The paper lays out an ontological enquiry into the ways of "being of equipment" as analysed by Heidegger and its role in understanding the design process. Equipments are things that make up our world. It is hard to imagine living without things because our existence is thingly textured. Heidegger's analytics of equipment far exceeds the ontic sense of things. The argument is that there is a danger when designers limit themselves to the ontic understanding of equipment. Such an understanding coaxes us to believe in a half-baked truth about equipment as an isolated instance of a piece of artifact and leaves us ignorant of the equipment's character as part of an equipment structure. An ontological reflection on equipment brings forth its relational nature and can be rewarding in several ways in improving the design process.

Introduction

Traditional ontology was interested in finding out the different kinds of entities in the world. For example, Aristotle in his Categories was interested in classifying the beings (onta) there are and wanted to identify the beings that are most fundamental and real within that classification (Shields, 2007). Instead of asking what kinds of entities there are, in Being and Time Heidegger asked the question differently: "what are the kinds of ways that entities can be in the world?" (Riemer & Johnston, 2014, p. 276). The Heideggerian method is not focussed on giving a list of different kinds of entities, as Aristotle did, but looks at the way they are. Our world is constituted of many different kinds of beings: humans, artefacts, plants, animals, water, and so on. According to Heidegger, it is possible for these entities to have multiple ways of "being" in the world. He discusses the different ways of Dasein's being-in-the-world in Being and Time.1 Heidegger takes effort in Being and Time to make his readers understand the unitary relationship that exists between the different components of Dasein's being-in-the-world. They are (1) the notion of "being-in"; (2) the concept of "the world" in which existence is located; and (3) that which is in the world, namely "the who" or "the self". According to Heidegger, the best way to study Dasein's existential constitution is by turning to what is "ontically closest" to us, that is, our everyday situations (Heidegger, 1962, p. 69). We must try to understand ourselves in the act of everyday existence, in Heideggerian terms, in our everydayness (p. 69). But this is a challenging task because the moment we look at ourselves, we are likely to misinterpret ourselves. A common error is viewing oneself as essentially detached; Heidegger considers this a consequence of a long, ontologically misleading Cartesian tradition that has been accepted over a period of time without questioning.
The first job Heidegger takes up in Being and Time is to deconstruct this idea and establish that humans are not spectators but engaged actors in everyday situations, which is best expressed through the expression "being-in-the-world". This implies that humans are engaged beings immersed in a living context which comprises things, people, other living beings, culture, and so on. The content may vary from person to person as well as from context to context. Nevertheless, a defining characteristic of human existence is that humans are at all times actively engaged with their world.2 It entails that if Dasein is being-in-the-world, then the world itself is a part of the essential constitution of our existence. Further on, "equipment" is a constitutive element of Dasein's world. To consider "equipment" as mere free-floating entities is a contradiction to its very nature. Rather, equipment is Dasein's way of being. However, Dasein's mode of existence is different from the kind of existence of equipment in the world, characterised by spatiotemporal features (Heidegger, 1962, p. 121 & Heidegger, 1988, p. 28). Heidegger makes this distinction very explicit when he says that only Dasein can touch, while a chair cannot touch even though it touches the wall. The wall and the chair cannot engage each other in the way Dasein encounters the chair (Heidegger, 1962, p. 155). It is Dasein who can encounter entities within the world. If we follow the Heideggerian logic, the chairs, walls, computers, cars, hammers, and tables are being encountered in Dasein's dealings with the world. They come into the picture only on account of Dasein's engaged practices, though they may exist as inert and independent of Dasein's practices.3

Ready-to-Hand and Present-at-Hand

Heidegger argues that artefacts can be in the world in two ways, on the basis of how they are encountered by Dasein in the course of his dealings with the world. Dasein usually encounters entities in its everyday world as ready-to-hand (Zuhandenheit), while at other times as present-at-hand (Vorhandenheit). In our everyday context, we encounter things mostly as ready-to-hand. The ready-to-hand describes our practical relation to things that are handy or useful; entities are encountered as a continuous whole of interconnected relationships. This is our primordial relationship with the world, our lived experience.

In the Heideggerian analysis, when entities are encountered as "tools, objects of use, cultural products, things of value and significance" (McDaniel, 2013, p. 332), they are called equipment. Heidegger writes, "we shall call those entities which we encounter in concern 'equipment'. In our dealings, we come across equipment for writing, sewing, working, transportation, measurement" (Heidegger, 1962, p. 97). Equipment is a technical term in the Heideggerian framework and is understood not in an ontic but in an ontological sense; its meaning is not limited to physical tools alone, although Heidegger uses a hammer to elaborate this concept. Equipment then could be non-physical: a service mechanism, a sign, or an environment. He writes that equipment taken in this ontological sense is not only equipment for writing or sewing, but "it includes everything we make use of domestically or in public life. In this broad ontological sense streets, street lamps are also items of equipment" (Heidegger, 1988, p. 292).

Imagine a carpenter who is engaged in his work of hammering and encounters a hammer as equipment.
The carpenter, in his familiarity with the hammer as well as with the work in which he is involved, namely hammering, encounters it as a means, as an in-order-to (Heidegger, 1962, p. 97 & Riemer, 2014, p. 276). This "in-order-to" describes a relationship where one uses equipment to achieve a particular task; in the case of a carpenter, it is for driving nails in, removing nails, preening, or shaping. While in action, hardly does one encounter a hammer as a wooden shaft with a metallic spherical blob. The hammer as a tool conceals itself and its properties when it is ready-to-hand. In the everyday practical use of hammering, the hammer withdraws from our direct attention and remains inconspicuous. It is its absence that makes it ready-to-hand. Through the example of a hammer, Heidegger wants to demonstrate that in our everyday interactions we encounter equipment not in a theoretical way but as an in-order-to.

It is also possible that the same entity may appear to Dasein as present-at-hand when it is approached in a disengaged reflection. One may approach the entity out of mere curiosity, as an object of first encounter, as an object of design, as an object of scientific enquiry, and so on. In all these instances, the entity is present-at-hand for Dasein and the focus of attention shifts from the practical activity to the object itself through its properties. As present-at-hand, an entity is viewed by us as lying inert, determinate, and isolable (McDaniel, 2013, p. 334).

Heidegger's point is that the being of the equipment is not fully revealed in one particular mode of its existence. The hammer has the being of Zuhandenheit, disclosed through its performance, and at the same time its being of Vorhandenheit is disclosed through disengaged observation or analysis. Nevertheless, they are not two different kinds of entities but the same entity encountered either in ready-to-hand or present-at-hand modes of being (p. 334). In Boedeker's (2005) words:

Presence-to-hand is neither a super-property nor a formal structure common to everything existent. Instead, it is one of the several ways in which we can encounter entities. It is to be contrasted, for example, with "readiness-at-hand" (Zuhandenheit), in which we encounter entities in terms of their usefulness (or uselessness) to our practical projects. Crucially, because "presence-to-hand" and "readiness-at-hand" are just different ways of encountering what Heidegger calls "intraworldly entities" - a term coextensive with "physical objects" - they are not different kinds of entities. (p. 159)

The error of modern science, Heidegger complains, is that it placed the "present-at-hand" mode of being as fundamental and superior while in reality it is only one of the several ways in which we can encounter entities. Heidegger (1962) contrasts the practical understanding with a disinterested theoretical understanding:

If we look at things just 'theoretically', we can get along without understanding "readiness-to-hand". But when we deal with them by using them and manipulating them, this activity is not a blind one; it has its own kind of sight, by which our manipulation is guided… Dealings with equipment subordinate themselves to the manifold assignments of the "in-order-to". (p. 98)

Heidegger calls such sight circumspection (Umsicht); the original seeing is always in the context of our projects and uses, as function, as something in-order-to do something, and as something that points beyond itself according to the task at hand. Every entity that is first grasped by our
circumspection is "ready-to-hand" (Zuhandenheit) before the possibility of its being perceived or intuited as present-at-hand (Vorhandenheit) (Inwood, 1999, p. 129 & Riemer, 2011, p. 8).When the entities are looked at only as presentat-hand, it loses its intrinsic meaning and warmth of relationship with other beings.It becomes a de-contextualised material, counted as one among the batch having only numerical representations.Such representations miss the richness and complexity of life to which the equipment is associated. The Paradox of Invisibility Heidegger points out that "present-at-hand mode" of being is not what defines our everyday lives.We are not beings who spend most of our everyday life in detached contemplation.Rather, we are beings often absorbed in certain practices.When we are engaged in such practices "the world and all its contents, including things, artifacts, our body and others, are both invisible and subordinate to our practices" (Riemer, 2014, p. 277).In other words, an entity withdraws into invisibility while being in use but becomes visible when there is a failure in its performance or when there is a detached observation. 4 Take the example of the world of a medical doctor who listens to a patient"s heartbeats through a stethoscope.The stethoscope is so much part of her that it appears to her not as an object made of a certain definable substance extended over a geographical location in space and time but as something which is useful for carrying out her work.The stethoscope shapes her view of reality.In the words of Don Ihde, she develops an embodiment relation (Ihde, 2009. pp.42-44).That is, when she listens to a patient"s heartbeat through a stethoscope, the stethoscope has an influence on the way she perceives the patient"s medical condition.Partially, the stethoscope disappears from her awareness, as the hammer did in Heidegger"s example.We can state this relationship as (doctor -stethoscope) world.The parentheses indicate that the stethoscope has become a part of herself in her viewing of the outside world.The stethoscope withdraws from her awareness and becomes embodied in her. It is also possible to have a hermeneutic relation (Ihde, 2009, p. 42-44).When a doctor wants to have information about what happens inside the patient"s body, she reads the X-ray and MRI reports.It is then taken for granted that what is read is closely connected to the patient"s body.In this case, the medical reports as tools for observing do not become a part of the doctor"s body but of the world she observes.The relationship can then be presented asdoctor (medical report-world).Heidegger (1988) writes: We do not always and continually have explicit perception of the things surrounding us in a familiar environment, certainly not in such a way that we would be aware of them expressly as handy.It is precisely because an explicit awareness and assurance of their being at hand does not occur that we have them around us in a peculiar way, just as they are in themselves.(p. 309) This invisibility of equipment in a familiar context or when the equipment functions well is the paradox of equipment.The equipment hides or remains inconspicuous when in use.Heidegger says the equipment withdraws as we concern ourselves in the work (Heidegger, 1962, p. 
When the doctor is fully absorbed in listening to the heartbeat of the patient, she hardly notices the stethoscope because she is preoccupied with her act of listening or caring. This invisibility is exactly the paradoxical nature of a thing encountered by us as ready-to-hand. Invisibility is not presented here as anything less desirable but as a positive feature of equipment. An equipment's readiness-to-hand is revealed not when we look at it or when we study it reflectively but only when we use it. As Kai Riemer puts it, "equipment is truly encountered as what it is only when it is not experienced at all" (Riemer, 2014, p. 277). Only a broken, malfunctioning piece of equipment announces its presence. A broken hammer, a hung computer, a missing tool, misplaced spectacles, a failed system of a service network, and so on loudly announce their presence.

As long as the stethoscope functions normally, it allows the doctor to be fully engaged in her ministry of healing. Good equipment always withdraws and remains unobtrusive. In such an engagement, "the distinction between self and external world (including others) fades; we are absorbed with the task at hand in such a way that we 'lose ourselves' in what we do" (p. 277). Heidegger writes, "the self must forget itself, if lost in the world of equipment, it's able to actually go to work and manipulate something" (Heidegger, 1962, p. 405).

Equipment as a System

Dasein encounters equipment in his everyday world not as a separate, isolable, determinate object or collection of objects. It is always encountered as a system with an "in-order-to" structure, as Heidegger calls it. Heidegger (1962) says:

Taken strictly, there "is" no such thing as an equipment. To the Being of any equipment there always belongs a totality of equipment, in which it can be this equipment that it is. Equipment is essentially "something in-order-to...". A totality of equipment is constituted by various ways of the 'in-order-to', such as serviceability, conduciveness, usability, manipulability. (p. 97)

This in-order-to is an essential structure of equipment that is present within the world, which Heidegger calls an assignment. Dasein discovers what an equipment is in its "in-order-to", that is, in its "ready-to-hand" mode. In that sense, a stethoscope is encountered not as metal wires and a diaphragm but as a "to-listen-to-heartbeat". A smart mobile phone appears as that which allows me to make calls and send text messages. A car, even if it is an old clunker, is encountered first of all as a medium of transportation and then as freedom, empowerment, independence, status, and so on. No matter how fashionable a car is, if it does not function as an automobile that helps transportation, it cannot be called a car. This is because one encounters a car primordially in everyday dealings as equipment "in-order-to" transport. This reveals to us a fundamental truth about the ways of being of equipment. Equipment is often encountered by Dasein not on the basis of its scientific or metaphysical properties but by its use in a particular situation constituted by other equipment and human practices.

Every single piece of equipment is part of a system of equipment and is therefore constituted by the members of the system. Just like Dasein is always "being-in-the-world", equipment is "being-in-totality" of equipment.
Heidegger brings forth a different understanding of equipment: that there is no such "thing" as a piece of equipment (p. 97). Equipment, when viewed in material terms, appears as a mere "thing". The ontic understanding of equipment ignores completely that the being of any equipment is constituted by a totality of equipment (Heidegger, 1962, p. 98 & Munday, 2006). In fact, every individual item of equipment is understood as belonging to another equipment.

Consider the example of encountering a room. My room is a piece of equipment, and it is also beyond a single piece in the sense that it is a collection of other equipment that comes together to constitute a room. It is not just encountered as a geometrical space 'between four walls' but as equipment for residing, constituted by other equipments and human practices such as paper, pen, table, lamp, furniture, computer, printer, router, wires, windows, doors, and so on. It also includes certain accepted behaviours in personal rooms. That an 'individual' item of equipment shows itself in a totality of equipment has already been discovered (Heidegger, 1962, p. 98 & Munday, 2009).

Each tool in the totality of equipment occupies a specific position in the system. The system, or the totality of equipment, is similar to the world of Heidegger. While the ontic understanding of the world is a collection of entities, with each entity having a predetermined structure, the ontological understanding of the world is relational in nature: the world is not fixed or absolute but emerges as a result of a heterogeneous network of entities rather than an assemblage. The individual tools (computers, wires, mobiles, printers, tables, electricity) are part of this world, but these artifacts are not considered equipment in the Heideggerian sense. Even the sum of all the individual items of tools does not make up the totality of equipment. The equipment as a network of related entities, which is also constituted by the assignment for which the individual piece of equipment stands, remains concealed when the focus is only on an artefact. For example, the hammer as a tool by itself is a mere constituent of hammering. Harman (2002) summarises Heidegger's point very well:

All possibility of independent objects existing in a vacuum outside the world of relations, functions, significations. For him, the tool in the reality of its labor belongs to a world-system, one that has swallowed up all individual components into a single world-effect. It is only from out of this system that specific beings can ever emerge. (p. 24)

Heidegger calls this systemic feature of the equipment totality. The metal wires, diaphragm, sound transducer, audio codec electronics, speakers, binaural tubes, batteries, and so on in a stethoscope, if taken alone, do not mean the same; it is in combination with many other minutely engineered pieces that they do. A stethoscope always bears what it is on the other equipment with which it is constituted. A stethoscope is a tool "to-listen-to" heartbeats only in and through its relation to various other tool-pieces. So, the first insight is that no individual item of equipment stands alone; each is drawn into a system of tool-pieces making up an equipment. The task of stethoscoping cannot come about without this totality of equipment; nevertheless, in our ontic consideration we are hardly conscious of this totality. Equipment, in fact, functions "by vanishing in favor of the visible reality that it brings about" (Harman, 2002, p. 25):
metal wires and diaphragm in favor of the stethoscope, and the carpenter's tools in favor of the visible house. In every such withdrawal, equipment "allows the ultimate reference to swallow all of its component forces into an invisible system or network lying silently beneath it" (p. 25). Present-at-hand is what is visible of equipment, and what is behind the visible reality is ready-to-hand. Behind every equipment there is an anonymous labour, as Harman calls it (p. 26).

The second insight is that equipment gains its identity and meaning only in our concernful dealings and in the context of its use. Equipment always draws its particular "in-order-to" from its place in the referential whole (Heidegger, 1962, p. 99 & Riemer, 2011, p. 7). Heidegger says "the structure of the Being of what is ready-to-hand as equipment is determined by references or assignments (Verweisung)" (Heidegger, 1962, p. 105). These references or assignments have an "in-order-to" structure (p. 97). By referential totality, Heidegger implies that an individual item of equipment appears as referring to other entities within a totality of equipment (Sinclair, 2006, p. 57). The stethoscope is a part of a doctor's everyday tools; it refers to the user. It has a purpose, an "in-order-to". It also refers to the various material things of which it is made. As Mark Sinclair puts it: "These references are not the 'things' themselves but rather constitute the horizon in which they can appear, a horizon of meaning or sense by virtue of which items of equipment can be encountered as referring to one another." This web of references or assignments is not itself explicitly noticed; "they are rather 'there' when we concernfully submit ourselves to them…". It becomes explicit only when the "assignment has been disturbed - when something is unusable for some purpose" (Heidegger, 1962, p. 105). Heidegger writes that on such occasions "the context of equipment is lit up… With this totality, however, the world announces itself" (p. 105). The structure of this referential totality is an a priori transcendental horizon, which Heidegger calls "worldhood". "'Worldhood' is an ontological concept and stands for the structure of one of the constitutive items of Being-in-the-world… moreover, that assignments and referential totalities could in some sense become constitutive for worldhood itself" (pp. 92 & 107). This horizon, according to Heidegger, is a system of relations which is the constitutive structure of the equipment's way of being (p. 121). We often consider equipment as a mere thing and forget about the totality of the equipment. However, the important thing, as Harman points out, is not our finding that "equipment is always found in conjunction with related items… but (sic) what is essential is that at the level of readiness-to-hand, the idea of a single tool reposing in its solitary effect is shown to be untenable. Instead, individual equipment is already dissolved into a global tool-empire" (p. 22).

Contextuality of Equipment

Equipment is always encountered against the background of some "specific familiarities" and "competencies for dealing with things and others" (Hall, 1993, p. 131).
For example, a stethoscope becomes equipment, as "ready-to-hand", for someone only if that person is familiar with the practical environment specific to health care and the "network of practical relations" associated with it. To a person not acquainted with medical practices, the stethoscope is only present-at-hand. Along with the specific familiarities and coping skills associated with healthcare activities and practical settings, the user still needs a broader range of familiarities which are more basic and fundamental for dealing with any tools (p. 132). Dreyfus calls this the suitability and appropriateness of equipment (Dreyfus, 2007); that is, if an artefact is to become a piece of equipment, it first needs to be suitable for a project. The suitability comes only when it has all the required material properties enabling it to do the project. But this kind of suitability alone is not sufficient, though it is a necessary condition for something to be equipment (Riemer, 2014, p. 279).

The appropriateness of equipment depends upon its relation to the totality of other equipment, the shared practices involved in it, user competencies, and other broader social orthodoxies which can be meaningful only in specific contexts (Riemer, 2014, p. 280 & Dreyfus, 1980, pp. 7-9). Dreyfus calls these practical holism, a broader horizon which is a prerequisite for interpreting what it means to be a human being, a tool, dining at a party, participating in a Eucharistic celebration, a citizen, a student, a doctor, an employee, and so on. One acquires these social background practices by being brought up in a specific context and not by forming beliefs and learning rules. Heidegger calls it "Befindlichkeit", translated as "attunement": a state one finds oneself in without any deliberate doing, finding oneself in a context before one settles into it, "the state in which one may be found" (Liberman, 2012, p. 53 & Heidegger, 1962, p. 172). This background cannot be made explicit in a theoretical form through a detached analysis (Dreyfus, 1980, p. 8). One may contest that a certain amount of rule learning is required even for basic skills like body movements or language speaking, let alone encountering equipment. This is accepted; at the same time, once the user becomes proficient, "such rules, (sic) are left behind and a single unified, flexible, purposive pattern of behaviour is all that remains;" and it is a futile effort to formalise these procedures (p. 8). Heidegger (1962) writes:

The context of assignments or references, which, as significance, is constitutive for worldhood, can be taken formally in the sense of a system of Relations… The phenomenal content of these 'Relations' and 'Relata'… resist any sort of mathematical functionalization; nor are they merely something thought, first posited in an 'act of thinking.' They are rather relationships in which concernful circumspection as such already dwells. (p. 122)

An entity in its explicit form is discovered only against the background of a network of relations, familiarity and expertise, which are often non-representable (Hall, 1993, pp. 131-32). Heidegger calls it primordial truth or primordial understanding. These different background practices enable the user to encounter the equipment differently. This is the reason why, in our concernful dealing with the equipment, certain features become relevant or irrelevant (p. 134).
No equipment is self-evident. Rather, everything is caught up in supplementarity. Therefore, the appropriateness of an individual piece of equipment is subject to the shared background practices, the system of relations. Individual tools like screws, bolts, spans, decks, girders, rails, pile footings, and so on gain identity and meaning by being swallowed into the larger system of the bridge. Thus, the meaning of an individual tool is discovered in its use, and the meaning of equipment is discovered in the wider context of what it is being used for, in its larger equipmental way of being in the world. So there is no terminal point to its ultimate finality. It can rather be said that it is circular in nature (Munday, 2006 & Heidegger, 1962, p. 107).

The same artefacts need not be considered appropriate in another context. For instance, a cell phone is generally used by Gen X'ers or Baby Boomers only to make calls or send texts, but the same smart phone may appear to a millennial as that which allows gaming, web browsing, photography, and creating and maintaining virtual communities; or the automobile signals become meaningful in the context of vehicles and traffic regulations (Heidegger, 1962, p. 109). Harman puts it rightly: "for Heidegger equipment is its context" (Harman, 2002, p. 23). A piece of equipment always remains opaque outside of its proper context (Munday, 2006).

The insight we must draw from this discussion is not merely that an individual item of equipment gains its meaning and value depending upon the context; the key insight is that every tool is drawn into a certain system of relations which defines and determines its ways of being. Thus each tool occupies a certain unique position in the system of relations which is constitutive of the equipmental structure. This totality of equipment is not just a sum total of ontic entities or a place where tool-pieces are situated, but a unitary phenomenon in which the entire individual realm is already dissolved while in act (Harman, 2002, p. 22). Heidegger calls it an equipmental way of being (Heidegger, 1962, p. 146 & Munday, 2006).

Seeing Beyond the Present

This section re-engages with the previous themes by placing them in relation to design. Design in terms of functional performance, ergonomic comfort, and aesthetic value is close and familiar to us, while the ontological conditions behind design remain far removed from us. There is a danger in limiting ourselves to the ontic understanding of equipment because it hides from us the true nature of equipment's character and makes us believe that it is an isolated instance of a piece of artefact helping us to do some function. What this paper tries to point out is that current design concerns should extend far beyond the physical and measurable ontic features to include the forgotten ontological sphere which organises and structures our thinking and experiences. Overlooking the ontological basis of equipment is a krisis situation of the design practice of our times5 (Buckley, 1992, p. 9).
The enigma of the design field today is the forgetfulness of its original unity between ontic and ontological design, ignoring the relational dimension of design. This has come about because we have been trapped in a particular metaphysical tradition often referred to under various nomenclatures such as "rationalistic," "Cartesian," and "objectivist," and often associated with related terms such as "mechanistic" (worldview), "reductionistic" (science), "positivistic" (epistemology) and, more recently, "computationalist" (Escobar, p. 16). Heidegger would call it machination or, at other times in his later writings, Gestell6 (Joronen, 2012, p. 373). Machination is the emergence of manipulative power as a possessive and coercive force of ordering, and Gestell is the technological enframing of things into standing-in-reserve. It is the outgrowth of a long western metaphysical tradition called the metaphysics of presence, globally expanding its willful orderings manifested in our everyday lived experience, market mechanisms, and business rationalities, including design practices (p. 373).

Any attempt to reduce equipment only to its immediate utility or physical appearance is a fallen state. Heidegger calls it the forgetfulness of the real nature of being. The being of artefacts withdraws and will therefore always be more than whatever we see or say about it. It is elusive and not directly available to us, and therefore needs interpretation. Trying to know artefacts only by what is present is a kind of reductionism. In Being and Time, Heidegger criticises this interpretation of being that has come about since the time of the Greeks "without any explicit knowledge of the clues which function here, without any acquaintance with the fundamental ontological function of time or even any understanding of it and without any insight into the reason why this function is possible" (Heidegger, 1962, p. 48). Heidegger cautions us about the consequence of the metaphysics of presence when he says, "entities are grasped in their Being as 'presence'; this means that they are understood with regard to a definite mode of time; the 'Present'" (p. 47). The equipment appears to me in my temporality and gains its meaning as the totality of my existential possibilities. Equipment, therefore, must be defined by more than "what is present". It gains its identity by belonging to an "equipmental totality" that is shaped by its ways to be in time. So, the equipment is more than what it appears to us.

In a certain sense, Heidegger spent the whole of his philosophic career clarifying this insight that being is not presence. Being is not presence because being is time, as Heidegger writes: "being is understandable only by way of time. If we are to think being and speak of being, and do it properly without confusing being with any beings, then we have to think and speak of it in temporal concepts and terms." (Heidegger, 1988, p. xxv)
The primordial ontological basis of human existentiality is time or temporality. Time is a unitary phenomenon continuously extended into the past and future and cannot be limited to the present. Heidegger prefers to call this the ecstatic nature of time, in the sense of reaching out beyond itself. This ecstatical nature of time is foundational to the human way of being. We stand out into our future possibilities, into a past heritage, and into a present world. The krisis in the design practice of our time is that we have forgotten this basic ecstatical nature and confined ourselves to only the present, while the future is the primary dimension of our existence. So the krisis lies in our failure to see our own existential possibilities (Rojcewicz, 2006, pp. 141-43). Humans may progress in perfecting scientific seeing and yet be blind to the self-condition that keeps us fixated in the ecstasy of the present.

At first glance, "being is not presence" seems to be technical jargon, but a closer look tells us that it is something that happens in our everyday world. We normally consider a thing to be what appears to us in terms of how useful it is to us or its physical body. This is very obvious in the case of, say, the mobile phone or the fluorescent lamps that we often use. But Heidegger would say that to describe a mobile phone or fluorescent lamp by referring only to its usage, outer appearance, or concepts is a misrepresentation, because there is always something more to it than whatever we see or say. The being of things such as mobile phones and fluorescent lamps is not fully present before us. Heidegger calls it "ways to be", in other words, being (Heidegger, 1962, pp. 172 & 418).

We use a fluorescent lamp or a mobile phone without noticing it. Whenever I switch on the lamp in my room, my focus is only on the light that helps me see things in my room. My attention turns to the lamp only when it fails to provide me with enough light. The same is the case with the mobile phone: we notice it only when it breaks down. The true being of things is actually a kind of absence. Since things can never be directly or completely present to us, we are always interpreting more than seeing.

But absence does not engulf the fluorescent lamp or the mobile phone. It is only one side of it. Had it not been so, we would not have seen anything. Thus, there are many visible aspects of the mobile phone or the lamp which I see and relate to. These visible aspects vary depending on who encounters them. Every time I encounter a thing, certain aspects of its being remain hidden from me, for example, its past, while others come present to me as having features to be "interpreted as tools, weapons, or items of entertainment." The presence of every object is a dynamic interplay of presence and absence. The description of mobile phones and fluorescent lamps can be extrapolated to the being of all entities, including human beings. Being discloses itself in this play of presence and absence. Heidegger calls this experience "event" (Ereignis), by which he means happening, occurrence, becoming visible (Inwood, 1999, pp. 54-57). We understand this experience in our concern for the world; identifying it only with cognitive experience and describing it in terms of a subject-object relation is a misunderstanding of what being is.
By saying this, Heidegger does not mean that presence is insignificant. Rather, his contention is that though presence is rich and complex, it does not exhaust the meaning of Being. Prioritising a certain mode of temporality, that is, understanding the "being of equipment" only in terms of the presencing of things (Anwesen)7 in the present, has devastating consequences. For example, understanding equipment as tools or machines is concrete and easily graspable because it is present before us, while equipment defined as a system is more complex and accommodates many elements as its constituents, such as inventors, operators, recyclers, consumers, user knowledge, marketers, advertisers, government administrators, and so forth. Krippendorff (2006) points out how designers are blinded towards the unintended uses and users of equipment:

Before a product reaches its intended user group, it passes through the hands of many who use it for a variety of reasons: to solve an engineering problem, to keep jobs in a factory, to profit from increased sales, or to supply supporting gadgets. After its intended use, it may become of interest to repair shops, benefit recycling companies, and become an ecological nightmare for communities that live near dumps. (p. 64)

Limiting design concerns only to the producer's profit or the experience of the end-user, at the expense of all the others who are touched by the equipment, presents a krisis situation. The so-called end users are only one point of contact in the vast network of the equipmental totality, whose members need to co-operate to bring a design to presence. Designers are surrounded by many parties which have an interest in the outcome of a design process: clients, engineers, the labour force, financiers, sales representatives, recyclers, the environment, other living beings, researchers, and so on. Design practices say little about them, while much is written and argued about the end users (p. 63). Accommodating these factors into the design process of a tool or equipment helps design it better and makes the process more democratic and inclusive.

The world we are living in is facing unprecedented challenges which call for new approaches in design beyond "business as usual" (Wahl, 2016, p. 9). But our design practices are caught up with providing quick-fix solutions. Our universities, industries, infrastructures, energy systems, water management, health systems, agriculture - all need a new form of knowledge that can guide us to a new way of being in this world, a way of being that is concerned beyond the present. The current design practices, as Tony Fry says, take away our future because we do not know how to create conditions for the future.8
This has come about because we are living in an illusion of permanence, which is an outgrowth of the metaphysics of presence. We are glued to the present as if it were permanent. An obsession with the present deeply influences our thinking and actions and becomes a hindrance to a collaborative, relational culture of design. We design systems with a win-lose mindset which works on the assumption that the other has to be dominated, won over, and subdued. This process is always progressing in gigantic proportions in the world, and the most characteristic feature of this designing, or way of coming-to-presence, is the transformation of the world into a totalised network of resources. In this age, it means that beings are given to us configured as standing-in-reserve, as disposables, as stocks; everything around us in the world is seen as something there for us to consume. The entire world becomes a Bestand, a stock, existing in a manner which makes it ready for our use.

This has come about, according to Heidegger, because we are being unconsciously trapped in the dehumanising process of everyday normalisation. We are being normalised in a particular ontological tradition which encourages us to pose as masters of the earth, centers of the universe, and yet leaves us blind to the self-condition that makes us slaves.9 What we need today is to create conditions for the future by developing systems which necessitate win-win situations for every stakeholder and also ensure the benefit of nature (pp. 8-9).

Conclusion

Fostering an inclusive design approach to equipment needs the questioning of an ontological tradition that valorizes presence over absence. It calls for revisioning a relational ontology which claims that "the relations between entities are ontologically more fundamental than the entities themselves" (Wildman, 2010, p. 55). If this relational ontology is to be operationalised in design practices, a significant amount of reconstruction of the current design paradigms is required. This paper is only a discussion of the philosophical phase of this reconstructive process. The bias against relational ways of being is operative in the market-based design practices of our time. The notion that a tool exists as a separate entity having its own predetermined structure continues to be one of the "most enduring, naturalized, and deleterious fictions" of our cultures these days (Escobar, p. 19 & Dreyfus, 2011, pp. 241-242).

One decisive step towards this is to encourage more serious ontological discussions in design practices. Heidegger is the contemporary philosopher who made ontological questioning more central to his thought than any other philosopher of our time. An ontological understanding of equipment brings to the fore the relational nature of the equipment's way of being in the world, and this interconnectedness of beings has tremendous implications for understanding design agency, the design process, and the design object. Design practices, then, need to broaden their focus beyond the work of what we might call proximate designers: those professionals closest to the design process, such as engineers, architects, draftsmen, graphical artists, and so on, who exercise direct control over the details of design (Feng, 2004, p. 105).
Little attention has been paid to the ways in which cultural assumptions and values about the product, the future unintended uses of the product, the various stakeholders of the product, ethical issues, the meanings that product forms have for their users, and so forth have the potential to shape the design process. Limiting design agency only to proximate designers is autocratic, one-dimensional, and exclusive, and adds to the krisis situation.

Our future depends on creating systems that are interdependent. This is possible only if we have a relational ontology at the back of our mind to inspire our actions. Integrating ontological insights with contemporary popular design methodologies, which are market-centric, may help us seek out the possibilities of how alternative values can be brought into the design process, so that designs are sustainable, humane, ethical, liberating, and eco-efficient rather than oppressing, controlling, and exploitative. It will also help design methodologists translate philosophical insights into conceptual design tools to improve the quality of designs.
Research on Application of Mobile Agent Based on Immune Algorithms in Ad Hoc Network

An Ad Hoc network is a multi-hop, self-organizing wireless network without a center. Each node in the network can act as a host as well as a router, and the nodes can form an arbitrary network topology through wireless connections. Because of these characteristics, many new services and application fields have appeared; at the same time, the network also faces many new security threats. Immune agents can perceive changes in network nodes and take corresponding decisions, find the misbehaving nodes in the network as early as possible, reduce network attacks, and improve the immune competence of the network.

Introduction

A mobile Ad Hoc network [1] is a collection of mobile nodes that can communicate with each other without a fixed infrastructure, and its network topology can change arbitrarily. As a special kind of wireless network, the nodes in the network generally consist of mobile terminals. Because the Ad Hoc network is characterized by open media, dynamic topology, lack of a central authentication mechanism, and limited node capability, it has more security problems than traditional networks. Security problems have become a bottleneck hindering the development of Ad Hoc networks. Attacks on Ad Hoc networks can be divided into "active attacks" and "passive attacks". Active attacks can be detected easily, but passive attacks are highly concealed and not easy to discover [2]. Mobile agents based on the immune system, however, can monitor each node in the network well, find its misconduct in time, and use the immune ability of the network to restore the health of network nodes or eliminate them from the entire network. This article focuses on how to apply immune algorithms and mobile agents to improve the security performance of Ad Hoc networks.

Immune System

The immune system is a highly parallel, distributed, adaptive information-processing and learning system [3]. The system can recognize self and non-self materials and can eliminate and defend against foreign invasive viral substances or molecules. In order to recognize self and non-self materials, biological systems usually have T cells that specifically detect antigens and B cells that produce antibodies. There are two types of T cells. One kind is the helper T cell (Th), which is in the majority and is responsible for promoting B cells to secrete antibodies and for strengthening the immune function of T cells and macrophages; the other kind is the suppressor T cell (Ts), which is activated under the action of cytokines to form effector cells and eliminate bad cells.

The immune response can be divided into three phases: the immune recognition phase, the antibody production stage, and the antigen elimination stage.

1) Immune recognition phase: when an external antigen enters the system, the T cell first performs immunological recognition. If it has recognized a similar antigen before and retained the information, the memory part of the T cell combines with the antigen to realize immunological recognition.

2) Antibody production stage: if the T cell has recognized the antigen, the T cell is activated and stimulates B cells to secrete large amounts of antibodies.

3) Antigen elimination stage: the antibodies produced by B cells combine with the antigen, which destroys the activity of the antigen and excretes it out of the body. The antigen disappears, and the immune system returns to normal.
Immune Algorithms

The immune algorithm is a learning algorithm based on the immune system [4], and it has four basic elements: antigen recognition, antibody production, immune selection, and immune memory. In the algorithm, the objective function of the optimization problem, including the problem to be solved and its various constraint conditions, is regarded as the antigen, and a candidate solution x with its fitness value is regarded as an antibody. The search direction and the distribution of the solution population are adaptively modulated according to the antibody concentration, and immune selection picks antibodies with a probability that rewards high fitness while penalizing high concentration. Immune memory refers to preserving the solution of a specific problem, together with the characteristics and parameters of the problem, as initial solutions for solving similar problems later, so as to accelerate the solving speed and improve the result. The immune algorithm is realized as follows (a minimal code sketch of this loop is given at the end of this section):

1) Initialization: randomly generate the initial B-cell population P_0.

2) Calculate the fitness f(x_k) of each B cell to generate antibodies.

3) Save the B cell with the optimal antibody as the immune memory cell x_max.

4) Calculate the antibody concentrations and generate the B-cell population P_k according to the immune selection probability.

5) P_k forms the next generation P_(k+1) of B cells through cloning and differentiation.

6) If the immune memory cell has not completely recognized the antigen, return to step 2).

The Working Mechanism of Mobile Agent

Agent technology derives from the artificial intelligence domain [5]; an agent can simulate human behavior and relationships and is a program that can run independently and provide a specific service. It can sense changes in its environment, make judgments and inferences about those changes, and form decisions that control its corresponding behavior to complete a task. Compared with the traditional distributed computing model, mobile agent technology has many advantages, such as saving network bandwidth, supporting off-line computation, platform independence, and balancing the computational load. In the network environment, under its own control, a mobile agent can move from one computer to another, suspend itself at any point, and continue execution from the suspension point after moving to a new machine. Under normal circumstances, a mobile agent system (MAS) includes the mobile agent (MA) and the mobile agent platform (MAE). The mobile agent platform is responsible for establishing a safe and correct operating environment for the mobile agent, providing basic services, and enforcing restriction mechanisms, fault-tolerance strategies, and security controls for the MA. The mobility of the agent and its problem-solving ability depend largely on the services provided by the mobile agent platform.
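As a concrete illustration of steps 1) to 6), the sketch below implements the loop for a toy binary-coded problem. It is a schematic reading of the description above, not code from the paper: the fitness function, the Hamming-distance-based concentration measure, and all numeric parameters are assumptions.

```python
# Minimal sketch of the immune algorithm loop (steps 1-6 above).
# Fitness function, concentration measure, and parameters are assumptions.
import random

L, POP, GENS = 16, 30, 100      # antibody length, population size, max iterations
CLONES, MUT = 3, 0.05           # clones per antibody, per-bit mutation probability

def fitness(a):                 # affinity to the antigen; toy objective: all ones
    return sum(a)

def concentration(a, pop):      # fraction of antibodies similar to a (Hamming)
    near = sum(1 for b in pop if sum(x != y for x, y in zip(a, b)) < L // 4)
    return near / len(pop)

def immune_select(pop):
    # Step 4: favour high fitness but penalize high concentration,
    # which keeps the population diverse.
    w = [max(1e-9, fitness(a) - 0.5 * L * concentration(a, pop)) for a in pop]
    return random.choices(pop, weights=w, k=POP)

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]   # step 1
memory = max(pop, key=fitness)                                         # steps 2-3
for _ in range(GENS):
    if fitness(memory) == L:    # step 6: antigen fully recognized -> stop
        break
    parents = immune_select(pop)
    offspring = []
    for a in parents:           # step 5: clone and differentiate (mutate)
        for _ in range(CLONES):
            offspring.append([bit ^ (random.random() < MUT) for bit in a])
    offspring.sort(key=fitness, reverse=True)
    pop = offspring[:POP]       # compression: keep the POP best clones
    memory = max([memory] + pop, key=fitness)                          # step 3
print(fitness(memory), memory)
```

The concentration penalty is what distinguishes immune selection from plain fitness-proportional selection: antibodies that crowd one region of the search space are suppressed even if they are individually fit.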
The Application of Immune Agents in Ad Hoc Network

Because the topology of an Ad Hoc network is changeable, the way connections are established depends entirely on the location of each node and the coverage area of its transceiver. Each node in the network acts as a router, forwarding information packets and participating in route discovery and maintenance. Ad Hoc networks bring convenience to people but face serious security threats. Attacks on the network can be divided into passive attacks and active attacks. In the passive attack mode, attackers do not interfere with the communication of network nodes; their purpose is to obtain sensitive information in the network. In an active attack, the attackers interfere with or destroy the communication of network nodes so that the nodes cannot work normally; the flooding DDoS attack is a typical example. In order to find misbehaving nodes early and maintain normal communication, this article combines the immune algorithm with mobile agents and applies them to the Ad Hoc network, so that network nodes can dynamically monitor the state changes of other nodes, remove or restrict misbehaving nodes in time, and realize autonomous management of network nodes without human intervention.

Working Model of Immune Agent

By analyzing the division of labor among lymphocytes, T cells, and B cells and the working mechanism of mobile agents, it is easy to find the correspondence between the immune system and the Ad Hoc network. The security architecture of the Ad Hoc network corresponds to the entire immune system; misbehaving nodes correspond to antigens; normal nodes correspond to healthy cells; mobile agents correspond to lymphocytes and are responsible for transporting other agents with specific functions. In view of the functions of T cells and B cells, three types of agents are set in the system: the monitoring agent, the coordinating agent, and the blocking agent. The monitoring agent corresponds to the T cell, is distributed over all nodes of the system, and is responsible for collecting behavior information about neighbor nodes and regularly reporting it to the coordinating agent. The coordinating agent is distributed over different areas of the network, corresponds to the B cell, and is responsible for organizing the information about the nodes under its charge and storing it in an information memory bank. When the coordinating agent receives a new report, it compares the report with the information in the memory bank and the strategy library. If it is normal information, the agent ignores it; if it is misbehavior information (for example, a node receives routing information and does not forward it), the agent checks the time elapsed since the last bad report for the same node: if the interval is short, the bad-report count of this node is incremented; otherwise, the count is reset. If the count exceeds a threshold, the blocking agent is activated (a sketch of this logic is given below). The blocking agent corresponds to the antibody and can clone itself; its task is to eliminate the misbehaving nodes [6]. The coordinating agent needs a certain tolerance and can accept a small amount of bad behavior from individual nodes, since some bad behavior may be caused by network interference.
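The report-handling rule of the coordinating agent described above can be sketched as follows; the time window, the activation threshold, and the data structures are illustrative assumptions rather than values given in the paper.

```python
import time

REPORT_WINDOW = 30.0   # assumed: seconds within which bad reports accumulate
BLOCK_THRESHOLD = 5    # assumed: bad reports needed to activate blocking

class CoordinatingAgent:
    """Accumulates misbehavior reports from monitoring agents and
    activates a blocking agent when a node exceeds the threshold."""
    def __init__(self):
        self.bad_counts = {}   # node_id -> (count, time of last bad report)

    def handle_report(self, node_id, is_misbehavior, now=None):
        now = time.time() if now is None else now
        if not is_misbehavior:
            return                      # normal information is ignored
        count, last = self.bad_counts.get(node_id, (0, now))
        if now - last <= REPORT_WINDOW:
            count += 1                  # interval is short: accumulate
        else:
            count = 1                   # interval too long: reset the count
        self.bad_counts[node_id] = (count, now)
        if count >= BLOCK_THRESHOLD:
            self.activate_blocking_agent(node_id)

    def activate_blocking_agent(self, node_id):
        # In the paper, the blocking agent clones itself near the target node
        # and isolates its communication; here we only log the activation.
        print(f"blocking agent activated for node {node_id}")

agent = CoordinatingAgent()
for t in range(6):                      # six bad reports in quick succession
    agent.handle_report("node-17", is_misbehavior=True, now=float(t))
```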
The safety verification of an antibody moving from host h_0 to h_{i+1} is described as follows. When the antibody begins to roam, H_0 produces a random number r_0 and a value o_0, then calculates h_0 and O_0 and sends O_0 to H_i. When the antibody moves to a position near the goal node, it clones, mutates, and compresses [7], producing a large number of antibodies that can kill the "virus". These antibodies isolate the communication between the misbehaving nodes and the other nodes in the network, achieving the purpose of eliminating harmful nodes. When the misbehaving nodes disappear, the blocking agent vanishes by itself.

The mathematical description of the antibody clone is as follows. For binary coding, an antibody a is an element of the space formed by all the binary strings of length l, and the clone operator maps a to an antibody cluster consisting of copies of a, which subsequently mutate. The main effect of compression selection is to choose the n antibodies with the highest affinity in the intermediate population A″ to form the new generation of the antibody group A(t + 1). The immune system is evaluated in the form of a weighted sum, with the evaluation function Sl = Mat + K_1 · Sim − K_2 · Sup, where K_1 and K_2 are the weighting coefficients of the stimulating effect and the inhibiting activity. A small numerical sketch of the clone-and-evaluate step is given below.

Result Analysis

It can be seen from Figure 1 that when there is no attack in the network, the message delay stays at a relatively low level and network communication is normal (shown in the figure as "•"). When the network is attacked, the message delay rises rapidly; as time goes on, network performance becomes worse and worse, and the worst result is network paralysis (shown as "▲"). After applying the immune agent method in the network, the message delay of the system drops sharply (shown as "■"); although it is higher than the delay when there is no attack, it remains at a low and acceptable level.
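As an illustration of the cloning and evaluation steps, the following sketch clones a binary antibody with bit-flip mutation and scores candidates with the weighted sum Sl = Mat + K_1·Sim − K_2·Sup; the matching, stimulation, and suppression measures and all constants are assumptions chosen for demonstration.

```python
import random

K1, K2 = 0.6, 0.4          # assumed weighting coefficients

def clone_and_mutate(antibody, n_clones=8, p_flip=0.05):
    """Clone a binary antibody and apply bit-flip mutation to each copy."""
    return [[b ^ (random.random() < p_flip) for b in antibody]
            for _ in range(n_clones)]

def evaluate(antibody, antigen, population):
    """Weighted-sum evaluation Sl = Mat + K1*Sim - K2*Sup.
    Mat: match with the antigen; Sim: stimulation by similar antibodies;
    Sup: suppression from crowding (all assumed Hamming-based here)."""
    def match(x, y):
        return sum(a == b for a, b in zip(x, y)) / len(x)
    mat = match(antibody, antigen)
    sims = [match(antibody, other) for other in population]
    sim = max(sims)                                       # closest antibody
    sup = sum(s > 0.9 for s in sims) / len(population)    # crowding penalty
    return mat + K1 * sim - K2 * sup

antigen = [1, 0, 1, 1, 0, 0, 1, 0]
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
clones = clone_and_mutate(pop[0])
best = max(clones, key=lambda c: evaluate(c, antigen, pop))
```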
2,557.8
2015-10-14T00:00:00.000
[ "Computer Science", "Engineering" ]
Hydrodynamics of a Droplet in Space The phenomena related to the flow of fluids are generally complex and difficult to quantify. New approaches, considering points of view still not explored, may introduce useful tools in the study of hydrodynamics and the related transport phenomena. The details of the flows and the properties of the fluids must be considered on a very small scale. Consequently, new concepts and tools are generated to better describe the fluids and their properties. This volume presents conclusions about advanced topics of calculated and observed flows. It contains eighteen chapters, organized in five sections: 1) Mathematical Models in Fluid Mechanics, 2) Biological Applications and Biohydrodynamics, 3) Detailed Experimental Analyses of Fluids and Flows, 4) Radiation-, Electro-, Magnetohydrodynamics, and Magnetorheology, 5) Special Topics on Simulations and Experimental Data. These chapters present new points of view about methods and tools used in hydrodynamics.

Introduction

1.1 Droplet in space

It is considered that our solar system 4.6 billion years ago was composed of a proto-sun and a circum-sun gas disk. In the gas disk, originally micron-sized fine dust particles grew through mutual collisions into 1000-km-sized objects such as planets. Therefore, to understand planet formation, we have to know the evolution of the dust particles in the early solar gas disk. One of the key materials is a millimeter-sized, spherical grain termed a "chondrule", observed in chondritic meteorites. Chondrules are considered to have formed from molten droplets about 4.6 billion years ago in the solar gas disk (Amelin et al., 2002; Amelin & Krot, 2007). Fig. 1 is a schematic of the formation process of chondrules. In the early solar gas disk, aggregation of the micron-sized dust particles took place before planet formation (Nakagawa et al., 1986). When the dust aggregates grew to about 1 mm in size (precursor), some astrophysical process heated them to the melting point of about 1600 − 2100 K (Hewins & Radomsky, 1990). The molten dust aggregate became a sphere by surface tension (droplet), and then cooled again to solidify in a short period of time (chondrule). The formation conditions of chondrules, such as heating duration, maximum temperature, cooling rate, and so forth, have been investigated experimentally by many authors (Blander et al., 1976; Fredriksson & Ringwood, 1963; Connolly & Hewins, 1995; Jones & Lofgren, 1993; Lofgren & Russell, 1986; Nagashima et al., 2006; Nelson et al., 1972; Radomsky & Hewins, 1990; Srivastava et al., 2010; Tsuchiyama & Nagahara, 1981; Tsuchiyama et al., 1980; 2004; Tsukamoto et al., 1999). However, it has remained controversial what kind of astronomical event could have produced chondrules in the early solar system. Chondrule formation is one of the most serious unsolved problems in planetary science.

The most plausible model for chondrule formation is the shock-wave heating model, which has been tested by many theoreticians (Ciesla & Hood, 2002; Ciesla et al., 2004; Desch & Jr., 2002; Hood, 1998; Hood & Horanyi, 1991; 1993; Iida et al., 2001; Miura & Nakamoto, 2006; Miura et al., 2002; Morris & Desch, 2010; Morris et al., 2009; Ruzmaikina & Ip, 1994; Wood, 1984).
Fig. 2 is a schematic of the dust heating mechanism in the shock-wave heating model. Initially, the chondrule precursors float in the gas disk without any large relative velocity against the ambient gas (panel (a)). When a shock wave is generated in the gas disk, the gas behind the shock front is accelerated suddenly. On the other hand, the chondrule precursors remain un-accelerated because of their inertia. Therefore, after passage of the shock front, a large relative velocity arises between the gas and the dust particles (panel (b)). The relative velocity can be as fast as about 10 km s⁻¹ (Iida et al., 2001). When gas molecules collide with the surface of the chondrule precursors at such a large velocity, their kinetic energy thermalizes at the surface and heats the precursors; this is termed gas drag heating. The peak temperature of the precursor is determined by the balance between the gas drag heating and the radiative cooling at the precursor surface (Iida et al., 2001). The gas drag heating is capable of heating the chondrule precursors up to the melting point if we consider a standard model of the early solar gas disk (Iida et al., 2001).

Fig. 1. Schematic of the formation process of a chondrule. The precursor of a chondrule is an aggregate of µm-sized cosmic dusts. The precursor is heated and melted by some mechanism, becomes a sphere by the surface tension, then cools to solidify in a short period of time.

Physical properties of chondrules

The chondrule formation models, including the shock-wave heating model, are required not only to heat the chondrule precursors up to the melting point but also to reproduce other physical and chemical properties of chondrules recognized by observations and experiments. These properties are summarized as observational constraints (Jones et al., 2000); the reference lists 14 constraints for chondrule formation. To date, there is no chondrule formation model that can account for all of these constraints. Here, we review two physical properties of chondrules: the size distribution and the three-dimensional shape. The latter was not listed among the observational constraints in the literature (Jones et al., 2000); however, we would like to include it as an important constraint for chondrule formation. As discussed in this chapter, these two properties relate strongly to the hydrodynamics of molten chondrule precursors in the gas flow behind the shock front.

Size distribution
Fig. 3 shows the size distribution of chondrules compiled from measurement data in the literature (Nelson & Rubin, 2002; Rubin, 1989; Rubin & Grossman, 1987; Rubin & Keil, 1984). The horizontal axis is the diameter D and the vertical axis is the cumulative fraction of chondrules smaller than D in diameter. Table 1 shows the mean diameter and the standard deviation of each measurement. It is found that chondrule sizes vary according to chondrite type. The mean diameters of chondrules in ordinary chondrites (LL3 and L3) are from 600 µm to 1000 µm. In contrast, those in the enstatite chondrite (EH3) and the carbonaceous chondrite (CO3) are from 100 µm to 200 µm.

It should be noted that the true chondrule diameters are slightly larger than the data shown in Fig. 3 and Table 1, for the following reason. These data were obtained from thin sections of chondritic meteorites. The chondrule diameter on a thin section is not necessarily the same as the true one because the thin section does not always intersect the center of the chondrule. Statistically, the mean and median diameters measured on a thin section are, respectively, √(2/3) and √(3/4) of the true diameters (Hughes, 1978). However, we do not concern ourselves with the difference between true and measured diameters because it is not a substantial issue in this chapter.

It is considered that in the early solar gas disk the dust aggregates had a size distribution ranging from ≈ µm (initial fine dust particles) to a few thousand km (planets). In spite of this wide size range of solid materials, chondrule sizes are distributed in the very narrow range of about 100 − 1000 µm. Two possibilities for the origin of the chondrule size distribution can be considered: (i) size-sorting prior to chondrule formation, and (ii) size selection during chondrule formation. In case (i), we need some mechanism of size-sorting in the early solar gas disk (Teitler et al., 2010, and references therein). In case (ii), the chondrule formation model must account for the chondrule size distribution. The latter possibility is what we investigate in this chapter.

Deformation from a perfect sphere

It is considered that spherical chondrule shapes were due to surface tension when the chondrules melted. However, their shapes deviate from a perfect sphere, and the deviation is an important clue to identifying the formation mechanism. Tsuchiyama et al. (Tsuchiyama et al., 2003) measured the three-dimensional shapes of chondrules using X-ray microtomography. They selected 20 chondrules with perfect shapes and smooth surfaces from 47 for further analysis. Their external shapes were approximated as three-axial ellipsoids with axial radii a, b, and c (a ≥ b ≥ c), respectively.
Fig. 4 shows the results of the measurement. It is considered that the deviation from a perfect sphere results from the deformation of a molten chondrule before solidification. For example, if the molten chondrule rotates rapidly, the shape becomes oblate due to the centrifugal force (Chandrasekhar, 1965). However, the shapes of chondrules in group B are prolate rather than oblate. Tsuchiyama et al. (Tsuchiyama et al., 2003) proposed that the prolate chondrules in group B can be explained by split droplets resulting from the shape instability at high-speed rotation. However, it is not clear whether such a transient process accounts for the observed range of axial ratios of group-B chondrules.

Hydrodynamics of molten chondrule precursors

If chondrules were melted behind the shock front, the molten droplets ought to have been exposed to the high-velocity gas flow. The gas flow causes many hydrodynamic phenomena on a molten chondrule droplet: (i) deformation, as the ram pressure deforms the droplet shape away from a sphere; (ii) internal flow, as the shearing stress at the droplet surface drives fluid flow inside the droplet; and (iii) fragmentation, as a strong gas flow breaks the droplet into many tiny fragments. The hydrodynamics of the droplet in a high-velocity gas flow relates strongly to the physical properties of chondrules. However, these hydrodynamic behaviors have not been investigated in the framework of chondrule formation, except for a few studies that neglected the non-linear effects of hydrodynamics (Kato et al., 2006; Sekiya et al., 2003; Uesugi et al., 2003; 2005).

To investigate the hydrodynamics of a molten chondrule droplet in the high-velocity gas flow, we performed computational fluid dynamics (CFD) simulations based on the cubic-interpolated propagation/constrained interpolation profile (CIP) method. The CIP method is one of the high-accuracy numerical methods for solving the advection equation (Yabe & Aoki, 1991; Yabe et al., 2001). It can treat both compressible and incompressible fluids with large density ratios simultaneously in one program (Yabe & Wang, 1991). The latter advantage is important for our purpose because the droplet density (≈ 3 g cm⁻³) differs from that of the gas disk (≈ 10⁻⁸ g cm⁻³ or smaller) by many orders of magnitude.

In addition, we should pay special attention to how we model the ram pressure of the gas flow. The gas around the droplet is so rarefied that the mean free path of the gas molecules is of the order of 100 cm in a standard gas disk model. The mean free path is much larger than the typical size of chondrules. This means that the gas flow around the droplet is a free molecular flow, so it does not follow the hydrodynamical equations. Therefore, in our model, the ram pressure acting on the droplet surface per unit area is given explicitly in the equation of motion for the droplet by adopting the momentum flux method described in section 3.2.2.
Aim of this chapter

The hydrodynamic behaviors of molten chondrules in a high-velocity gas flow are important for elucidating the origin of the physical properties of chondrules. However, it is difficult for experimental studies to reproduce the high-velocity gas flow in the early solar gas disk, where the gas is so rarefied that the flow around droplets does not follow the hydrodynamic equations. We therefore developed a numerical code to simulate a droplet in a high-velocity rarefied gas flow. In this chapter, we describe the details of our hydrodynamics code and the results, and we propose new possibilities for the origins of the size distribution and the three-dimensional shapes of chondrules based on the hydrodynamics simulations.

We describe the governing equations in section 2 and the numerical procedures in section 3. In section 4, we present the results regarding the deformation of molten chondrules in the high-velocity rarefied gas flow and discuss the origin of rugby-ball-shaped chondrules. In section 5, we present the results regarding the fragmentation of molten chondrules and consider the relation to the size distribution of chondrules. We conclude in section 6.

Governing equations

The governing equations are the equation of continuity and the Navier-Stokes equation,

∂ρ/∂t + ∇·(ρu) = 0, (1)

∂u/∂t + (u·∇)u = −∇p/ρ + µ∇²u/ρ + (F_g + F_s)/ρ + g, (2)

where ρ is the density of the fluid, u is the velocity, p is the pressure, and µ is the viscosity. The ram pressure of the high-velocity gas flow, F_g, is exerted on the surface of the droplet and is given by (Sekiya et al., 2003)

F_g = −p_fm (n_i · n_g) n_g δ(r − r_i) for n_i · n_g < 0, and F_g = 0 otherwise, (3)

where p_fm is the momentum flux of the gas flow, n_i is the unit normal vector of the surface of the droplet, n_g is the unit vector pointing in the direction in which the gas flows, and r_i is the position of the liquid-gas interface. The delta function δ(r − r_i) means that the ram pressure works only at the interface, and the ram pressure vanishes for n_i · n_g > 0 because that part of the surface does not face the molecular flow. The ram pressure decelerates the center of mass of the droplet; in our coordinate system co-moving with the center of mass, the apparent gravitational acceleration g therefore appears in the equation of motion. The surface tension, F_s, is given by (Brackbill et al., 1992)

F_s = γ_s κ ∇φ, (4)

where γ_s is the fluid surface tension, κ is the local surface curvature, and φ is the color function introduced below. Finally, we consider the equation of state

p = c_s² ρ, (5)

where c_s is the sound speed.

Numerical methods in hydrodynamics

To solve the equation of continuity (Eq. (1)) numerically, we introduce a color function φ that changes from 0 to 1 continuously. For two incompressible fluids, the density in each fluid is uniform and has a sharp discontinuity at the interface if the densities of the two fluids differ. Using the color function, we can distinguish the two fluids: φ = 1 for fluid 1, φ = 0 for fluid 2, and 0 < φ < 1 for the interface region. The density of a fluid element is given by

ρ = φ ρ_1 + (1 − φ) ρ_2, (6)

where ρ_1 and ρ_2 are the inherent densities of fluid 1 and fluid 2, respectively. The governing equation for φ is

∂φ/∂t + ∇·(uφ) = 0. (7)

The conservation equation for φ (Eq. (7)) is approximately equivalent to the original one (Eq. (1)) through the relationship between ρ and φ given by Eq. (6) (Miura & Nakamoto, 2007). Therefore, the problem of solving Eq. (1) reduces to solving Eq. (7), which we do with the R-CIP-CSL2 method and an anti-diffusion technique (sections 3.1.2 and 3.1.3).
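As a small illustration of Eq. (6), the following sketch builds a one-dimensional density field from a color function; the grid, the tanh interface profile, and the interface width are illustrative assumptions.

```python
import numpy as np

rho_droplet = 3.0        # inherent density of fluid 1 (melt), g cm^-3
rho_ambient = 1.0e-6     # inherent density of fluid 2 (gas), g cm^-3 (Table 2)

# 1D grid across a droplet of radius r0 centered at x = 0 (values assumed)
x = np.linspace(-0.1, 0.1, 201)          # cm
r0 = 0.05                                # cm (500 µm, Table 2)

# Color function: 1 inside the droplet, 0 outside, smooth at the interface
phi = 0.5 * (1.0 - np.tanh((np.abs(x) - r0) / 0.005))

# Eq. (6): density of each fluid element from the color function
rho = phi * rho_droplet + (1.0 - phi) * rho_ambient
```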
In this study, fluid 1 is the molten chondrule and fluid 2 is the disk gas around the chondrule. The inherent densities are ρ_1 = ρ_d and ρ_2 = ρ_a, where the subscripts "d" and "a" denote the droplet and the ambient gas, respectively. The other physical properties of a fluid element (viscosity µ and sound speed c_s) are given in the same manner as the density ρ. The Navier-Stokes equation (Eq. (2)) and the equation of state (Eq. (5)) are separated into two phases, the advection phase and the non-advection phase. The advection phases are written as

∂u/∂t + (u·∇)u = 0, ∂p/∂t + (u·∇)p = 0. (8)

We solve these equations using the R-CIP method, an oscillation-preventing method for the advection equation (section 3.1.1). The non-advection phases can be written as

∂u/∂t = (−∇p + Q)/ρ, ∂p/∂t = −ρ c_s² ∇·u, (9)

where Q is the sum of the forces other than the pressure gradient. The problem intrinsic to incompressible fluids lies in the high sound speed appearing in the pressure equation. Yabe and Wang (Yabe & Wang, 1991) introduced an excellent approach, called the C-CUP method, to avoid this problem (section 3.2.1). The numerical methods used to calculate the ram pressure of the gas flow and the surface tension of the droplet in Q are described in sections 3.2.2 and 3.2.3, respectively. The input parameters adopted in this chapter are listed in Table 2.

Table 2. Canonical input physical parameters for simulations of molten chondrules exposed to the high-velocity rarefied gas flow. We use these parameters unless otherwise specified.

Parameter | Sign | Value
Momentum flux of gas flow | p_fm | 4000 dyn cm⁻²
Surface tension | γ_s | 400 dyn cm⁻¹
Viscosity of droplet | µ_d | 1.3 poise
Density of ambient gas | ρ_a | 10⁻⁶ g cm⁻³
Sound speed of ambient gas | c_s,a | 10⁻⁵ cm s⁻¹
Viscosity of ambient gas | µ_a | 10⁻² g cm⁻¹ s⁻¹
Droplet radius | r_0 | 500 µm

CIP method

The CIP method is one of the high-accuracy numerical methods for solving the advection equation (Yabe & Aoki, 1991; Yabe et al., 2001). In one dimension, the advection equation is written as

∂f/∂t + u ∂f/∂x = 0, (10)

where f is a scalar variable of the fluid (e.g., the density), u is the fluid velocity in the x-direction, and t is the time. When the velocity u is constant, the exact solution of Eq. (10) is given by

f(x, t + Δt) = f(x − uΔt, t), (11)

which indicates a simple translational motion of the spatial profile of f with the constant velocity u. Let us consider that the values of f on the computational grid points x_{i−1}, x_i, and x_{i+1} are given at time step n and denoted by f^n_{i−1}, f^n_i, and f^n_{i+1}, respectively; in Fig. 5, f^n is shown by filled circles. From Eq. (11), we can obtain the value of f_i at the next time step n + 1 simply by evaluating f^n at the upstream point x = x_i − uΔt, where Δt is the time interval between t^n and t^{n+1}. If the upstream point is not exactly on a grid point, which is the usual case, we have to interpolate f^n with an appropriate mathematical function composed of f^n_{i−1}, f^n_i, and so forth. Numerical solvers differ in the choice of the interpolation function F_i(x). One of them is the first-order upwind method, which interpolates f^n with a linear function satisfying the two constraints F_i(x_i) = f^n_i and F_i(x_{i−1}) = f^n_{i−1} (here we assume u > 0, so the upstream point for f^n_i lies on the left side of x_i). Another variation is the Lax-Wendroff method, which uses a quadratic polynomial satisfying the three constraints F_i(x_{i−1}) = f^n_{i−1}, F_i(x_i) = f^n_i, and F_i(x_{i+1}) = f^n_{i+1}. We show these interpolation functions in Fig. 5.
On the contrary, the CIP method interpolates using a cubic polynomial, which satisfies the four constraints F_i(x_i) = f^n_i, F_i(x_{i−1}) = f^n_{i−1}, F′_i(x_i) = f^n_{x,i}, and F′_i(x_{i−1}) = f^n_{x,i−1}, where f_x ≡ ∂f/∂x is the spatial gradient of f. The interpolation function is given by

F_i(x) = a_i (x − x_i)³ + b_i (x − x_i)² + c_i (x − x_i) + d_i, (12)

where a_i, b_i, c_i, and d_i are coefficients determined from the four constraints; their expressions are shown in (Yabe & Aoki, 1991). We show the profile of F_i(x) in Fig. 5 with f^n_{x,i−1} = f^n_{x,i} = 0. In the CIP method, therefore, we need the values of f^n_x in addition to f^n for solving the advection phase. f_x is treated as an independent variable and is updated independently of f as follows. Differentiating Eq. (10) with respect to x, we obtain

∂f_x/∂t + u ∂f_x/∂x = −f_x ∂u/∂x, (13)

where the second term on the left-hand side is the advection term and the right-hand side is the non-advection term. The interpolation function for the advection of f_x is given by ∂F_i/∂x, and the non-advection term can be solved analytically by considering ∂u/∂x to be constant.

Additionally, there is an oscillation-preventing method within the concept of the CIP method, in which a rational function is used as the interpolation function (Xiao et al., 1996); the rational interpolation contains coefficients α_i and β_i, whose expressions are shown in (Xiao et al., 1996). Usually, we adopt α_i = 1 to prevent oscillation. This method is called the R-CIP method; the choice α_i = 0 corresponds to the normal CIP method.

CIP-CSL2 method

The CIP-CSL2 method is one of the numerical methods for solving the conservative equation. In one dimension, the conservative equation is written as

∂f/∂t + ∂(uf)/∂x = 0. (15)

Integrating Eq. (15) over x from x_i to x_{i+1}, we obtain

dσ_{i+1/2}/dt + [uf] evaluated from x_i to x_{i+1} = 0, (16)

where σ_{i+1/2} ≡ ∫ f dx over the cell [x_i, x_{i+1}]. For f being the density, σ_{i+1/2} corresponds to the mass contained in the computational cell between i and i + 1, so it should be conserved during the time integration. Since the physical meaning of uf in the second term on the left-hand side is the flux of σ per unit area and per unit time, the time evolution of σ is determined by the time-integrated fluxes through the cell boundaries, where ∫ uf dt, taken from t^n to t^{n+1}, is the value of σ transported from the region x < x_i to the region x > x_i within Δt. The CIP-CSL2 method uses the integrated function D_i(x) ≡ ∫ f dx′ from x_{i−1} to x as the interpolated quantity. Moreover, since Eq. (15) can be rewritten in the same form as Eq. (13), we can obtain the updated value f^{n+1}, as well as f^{n+1}_x, as in the CIP method.

Additionally, there is an oscillation-preventing method within the concept of the CIP-CSL2 method, in which a rational function is used for the function D_i(x) (Nakamura et al., 2001). This method is called the R-CIP-CSL2 method.

Anti-diffusion

To keep the sharp discontinuity in the profile of φ, we explicitly add a diffusion term with a negative diffusion coefficient α (anti-diffusion) to the CIP-CSL2 method (Miura & Nakamoto, 2007). In our model, we have an additional diffusion equation for σ,

∂σ/∂t = ∂/∂x (α ∂σ/∂x). (18)
Eq. (18) can be separated into the two equations

J′ = −α ∂σ/∂x, ∂σ/∂t = −∂J′/∂x, (19)

where J′ is the anti-diffusion flux per unit area and per unit time. Using the finite difference method, we obtain

σ**_{i+1/2} = σ*_{i+1/2} − (Ĵ_{i+1} − Ĵ_i), (20)

where Ĵ ≡ J′/(Δx/Δt) is the mass flux, which has the same dimension as σ, and α̂ ≡ α/(Δx²/Δt) is the dimensionless diffusion coefficient. The superscripts * and ** indicate the time steps just before and after the anti-diffusion. The flux is limited with the minimum modulus function (minmod), which is often used as a flux limiter and has the non-zero value sign(a) min(|a|, |b|, |c|) only when a, b, and c have the same sign. The value of the diffusion coefficient α̂ is also important; basically, we take α̂ = −0.1 for the anti-diffusion. Here, it should be noted that σ is bounded, 0 ≤ σ ≤ σ_m, where σ_m is the initial value inside the droplet; undershoot (σ < 0) and overshoot (σ > σ_m) are physically incorrect solutions. To avoid them, we set α̂_i = 0.1 only when σ_{i−1/2} or σ_{i+1/2} is out of the appropriate range. We apply the anti-diffusion calculation after the CIP-CSL2 step is completed.

Test calculation

In order to demonstrate the advantage of the CIP method, we carried out one-dimensional advection calculations with various numerical methods. Fig. 6 shows the spatial profiles of f in the test calculations. The horizontal axis is the spatial coordinate x. The initial profile, given by the solid line, is a rectangular wave. We set the fluid velocity u = 1, the grid interval Δx = 1, and the time step Δt = 0.2. These conditions give the CFL number ν ≡ uΔt/Δx = 0.2, which indicates that the profile of f moves 0.2 grid intervals per time step. Therefore, the right side of the rectangular wave will reach x = 80 after 300 time steps, and the dashed line indicates the exact solution. The filled circles indicate the numerical results after 300 time steps.

The upwind method does not keep the rectangular shape after 300 time steps, and the profile is smoothed out by numerical diffusion (panel a). In the Lax-Wendroff method, numerical oscillation appears behind the real wave (panel b). Compared with these two methods, the CIP method gives a better solution; however, some undershoots (f < 0) or overshoots (f > 1) are observed in the numerical result (panel c). In the R-CIP method, although faint numerical diffusion still remains, we obtain an excellent solution compared with the above methods (panel d).

We also show the numerical results for the one-dimensional conservative equation, using the same conditions as for the advection equation. Note that Eq. (15) corresponds to Eq. (10) when the velocity u is constant. Panel (e) shows the result of the R-CIP-CSL2 method, which is similar to that of the R-CIP method. In panel (f), we find that the combination of the R-CIP-CSL2 method and the anti-diffusion technique gives an excellent solution in which numerical diffusion is effectively prevented. A runnable one-dimensional sketch of this comparison is given at the end of this section.

C-CUP method

Applying the finite difference method to Eq. (9), we obtain (Yabe & Wang, 1991)

(u** − u*)/Δt = −∇p**/ρ*, (p** − p*)/Δt = −ρ* c_s² ∇·u**, (23)

where the superscripts * and ** indicate the times before and after calculating the non-advection phase, respectively. Since the sound speed is very large in an incompressible fluid, the term related to the pressure should be solved implicitly. In order to obtain the implicit equation for p**, we take the divergence of the left equation and substitute u** into the right equation. We then obtain

∇·(∇p**/ρ*) = (p** − p*)/(ρ* c_s² Δt²) + (∇·u*)/Δt. (24)

The problem of solving Eq. (24) reduces to solving a set of linear algebraic equations whose coefficients form an asymmetric sparse matrix. After p** is solved, we can calculate u** by solving the left equation in Eq. (23).
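The rectangular-wave test above can be reproduced in a few dozen lines. The following is a minimal sketch comparing the first-order upwind scheme with a basic (non-rational) CIP scheme under the stated conditions (u = 1, Δx = 1, Δt = 0.2, 300 steps); it is an illustration of the schemes, not the authors' production code.

```python
import numpy as np

nx, dx, dt, nsteps, u = 200, 1.0, 0.2, 300, 1.0   # test conditions from the text

def initial():
    f = np.zeros(nx)
    f[10:30] = 1.0          # rectangular wave
    return f

# First-order upwind (u > 0): strong numerical diffusion expected (panel a)
f_up = initial()
for _ in range(nsteps):
    f_up[1:] = f_up[1:] - u * dt / dx * (f_up[1:] - f_up[:-1])

# Basic CIP: advect f and its gradient f_x with a cubic interpolation
f, fx = initial(), np.zeros(nx)
xi = -u * dt                 # displacement to the upstream point (u > 0)
for _ in range(nsteps):
    fm, fxm = np.roll(f, 1), np.roll(fx, 1)      # upstream (i-1) values
    D = -dx                                      # x_{i-1} - x_i
    a = (fx + fxm) / D**2 + 2.0 * (f - fm) / D**3
    b = 3.0 * (fm - f) / D**2 - (2.0 * fx + fxm) / D
    f_new = a * xi**3 + b * xi**2 + fx * xi + f
    fx_new = 3.0 * a * xi**2 + 2.0 * b * xi + fx
    f, fx = f_new, fx_new

# f_up is visibly smeared, while f keeps a much sharper discontinuity
# (with small over/undershoots, as described for the non-rational CIP).
```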
Ram pressure of free molecular flow

The ram pressure of the gas flow acts on the droplet surface exposed to the high-velocity gas flow. It should be noted that the gas flow around a mm-sized droplet does not follow the hydrodynamical equations because the nebula gas is too rarefied. The mean free path of the nebula gas can be estimated by l = 1/(n s), where s is the collisional cross section of the gas molecules and n is the number density of the nebula gas. Typically, we adopt n ≈ 10¹⁴ cm⁻³ based on the standard model of the early solar system at a distance of one astronomical unit from the sun (Hayashi et al., 1985). Substituting s ≈ 10⁻¹⁶ cm² for the hydrogen molecule (Hollenbach & McKee, 1979), we obtain l ≈ 100 cm. On the other hand, the typical size of chondrules is a few hundred µm (see Fig. 3). Since the object that disturbs the gas flow is much smaller than the mean free path of the gas, the free-stream velocity field is not disturbed except by direct collision with the droplet (free molecular flow).

Consider a molecular gas flow in the positive direction of the x-axis. The x-component of the ram pressure, F_g,x, is given by

F_g,x = M δ(x − x_i), (25)

where x_i is the position of the droplet surface. This equation can be separated into the two equations

F_g,x = −∂M/∂x, ∂M/∂x = −M δ(x − x_i), (26)

where M is the momentum flux of the molecular gas flow. The right equation in Eq. (26) means that the momentum flux terminates at the droplet surface; the left equation means that the decrease of the momentum flux per unit length corresponds to the ram pressure per unit area. Applying the finite difference method to the right equation in Eq. (26) gives the discretized momentum flux (Eq. (27)), in which φ̄ is the smoothed profile of φ (see section 3.2.4) and M_{i+1} = M_i for φ̄_{i+1} < φ̄_i, because the momentum flux does not increase when the molecular flow goes outward from the inside of the droplet. Similarly, the discretized ram pressure (Eq. (28)) follows from the left equation in Eq. (26). The momentum flux at the upstream boundary is M_0 = p_fm. First, we solve Eq. (27) to obtain the spatial distribution of the molecular gas flow in the whole computational domain; then we calculate the ram pressure from Eq. (28). We compute the momentum flux M and the ram pressure F_g at every time step in the numerical simulations, so these spatial distributions are affected by the droplet deformation.

Surface tension

The surface tension is the normal force per unit interfacial area. Brackbill et al. (Brackbill et al., 1992) introduced a method to treat the surface tension as a volume force by replacing the discontinuous interface with a transition region of finite width. According to them, the surface tension is expressed as

F_s = γ_s κ ∇φ / [φ], (29)

where [φ] is the jump in the color function at the interface between the droplet and the ambient gas; in our definition, [φ] = 1. The curvature is given by

κ = −∇ · n̂, (30)

where

n̂ = ∇φ / |∇φ|. (31)

The finite difference form of Eq. (31) is shown in (Brackbill et al., 1992). When we calculate the surface tension, we use the smoothed profile of φ (see section 3.2.4).
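The free-molecular-flow estimate above can be checked in a few lines, using the numbers quoted in the text:

```python
# Mean free path of the nebula gas, l = 1/(n*s), and the Knudsen number
n = 1.0e14          # number density of the nebula gas, cm^-3 (Hayashi et al.)
s = 1.0e-16         # collisional cross section of H2, cm^2
l = 1.0 / (n * s)   # mean free path: 100 cm

r_chondrule = 0.03  # typical chondrule radius, cm (a few hundred µm, assumed)
knudsen = l / r_chondrule
print(l, knudsen)   # l = 100 cm, Kn ~ 3000 >> 1: free molecular flow
```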
Smoothing

We can obtain numerical results that keep a sharp interface between the droplet and the ambient region. However, a smooth interface is preferable for calculating a smooth surface tension. We therefore use a smoothed profile of φ only when calculating the surface tension and the ram pressure acting on the droplet surface. In this study, the smoothed color function φ̄ at the grid point (i, j, k) is calculated as a weighted average over that point and its neighbors, with fixed coefficients for each neighbor class, where L_1, L_2, and L_3 indicate the grid indexes of the nearest, second-nearest, and third-nearest neighbors of the grid point (i, j, k); for example, L_1 = (i + 1, j, k), L_2 = (i + 1, j + 1, k), L_3 = (i + 1, j + 1, k + 1), and so forth. In the three-dimensional Cartesian coordinate system there are six L_1, twelve L_2, and eight L_3 neighbors. We iterate the smoothing five times, obtaining a smooth transition region about two grid intervals wide. The original profile φ with the sharp interface is kept unchanged.

Deformation of droplet by gas flow

4.1 Vibrational motion

We assume that the gas flow suddenly acts on the initially spherical droplet. Fig. 8 shows the time sequence of the droplet shape and internal velocity. The horizontal and vertical axes are the x- and y-axes, respectively. The solid line is the section of the droplet surface in the xy-plane; arrows show the velocity field inside the droplet. The gas flow comes from the left side of each panel. Panel (a) shows the initial condition of the calculation. Panel (b) shows a snapshot at t = 0.55 msec: the droplet begins to deform due to the gas ram pressure. The fluid elements in the surface layer directly facing the gas flow are blown downstream. In contrast, the velocity at the center of the droplet turns toward the upstream of the gas flow, because the apparent gravitational acceleration appears in our coordinate system. The droplet continues to deform, and after t = 1.0 msec the degree of deformation reaches its maximum (panel (c)). After that, the droplet begins to recover its spherical shape due to the surface tension; the recovery is almost, though not fully, complete in panel (d). The droplet repeats the deformation by the ram pressure and the recovery by the surface tension until the viscosity dissipates the internal motion of the droplet.

Fig. 9 shows the time variation of the axial ratio c/b of the droplet. Each curve shows the calculation result for a different value of the ram pressure p_fm. The droplet is compressed unidirectionally by the gas flow, so the length of the minor axis c corresponds to the half-length of the droplet axis in the direction of the gas flow. The axial ratio c/b is unity at the beginning because the initial droplet shape is a perfect sphere. The axial ratio decreases with time because of the compression. After about 1 msec, c/b reaches its minimum and then increases due to the surface tension. After this, the axial ratio vibrates with a constant frequency, and finally the vibrational motion damps due to viscous dissipation. The calculated period of the vibrational motion is about 2 msec, independent of p_fm. This is consistent with the period of capillary oscillations of a spherical droplet given by Landau & Lifshitz (1987); a short numerical check is given below.

Overdamping

Fig. 10 shows the time variation of the axial ratio c/b when the viscosity is 100 times larger than in Fig. 9. The axial ratio converges to its steady-state value without any vibrational motion. This is overdamping due to the strong viscous dissipation.
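The ≈ 2 msec vibration period can be checked against the capillary-oscillation formula of Landau & Lifshitz (1987): for the fundamental l = 2 mode of a free droplet, ω² = l(l − 1)(l + 2) γ_s/(ρ_d r_0³). Using the parameters of Table 2 and ρ_d = 3 g cm⁻³:

```python
import math

gamma_s = 400.0   # surface tension, dyn cm^-1 (Table 2)
rho_d = 3.0       # droplet density, g cm^-3
r0 = 0.05         # droplet radius, cm (500 µm)

l = 2             # fundamental capillary mode
omega = math.sqrt(l * (l - 1) * (l + 2) * gamma_s / (rho_d * r0**3))
period = 2.0 * math.pi / omega
print(period * 1e3)   # ~2.2 msec, matching the simulated vibration period
```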
Effect of droplet rotation

We carried out the hydrodynamics simulations of a non-rotating molten droplet in the previous sections. However, the rotation of the droplet should be taken into consideration, for the following reason. A chondrule before melting is an aggregate of numerous fine particles, so its shape is in general irregular. The irregular shape produces a net torque in a uniform gas flow. Therefore, it is naturally expected that the molten chondrule rotates at a certain angular velocity.

The angular velocity ω_f can be roughly estimated from Iω_f ≈ NΔt, where I is the moment of inertia of the chondrule and Δt is the duration over which it receives the net torque N. Assuming that a small fraction f of the cross-section of the precursor contributes to producing the net torque N, we obtain N ≈ fπr_0³p_fm. We can set Δt ≈ π/ω_f (a half-rotation period) because the sign of N would change after half a rotation. Substituting I = (8/15)πρ_d r_0⁵, the moment of inertia of a sphere with uniform density ρ_d, we obtain the angular velocity (Miura, Nakamoto & Doi, 2008)

ω_f ≈ [ (15π/8) f p_fm / (ρ_d r_0²) ]^(1/2). (34)

Therefore, in the shock-wave heating model, the droplet should rotate rapidly if most of the angular momentum is retained during melting. In addition, it should be noted that the rotation axis is likely to be perpendicular to the direction of the gas flow unless the chondrule before melting has a peculiar, windmill-like shape.

Fig. 11 shows the deformation of a rotating droplet in the gas flow in a three-dimensional view. The rotation axis is set perpendicular to the direction of the gas flow, and we use µ_d = 10³ poise, p_fm = 10⁴ dyn cm⁻², ω = 100 rad s⁻¹, and r_0 = 1 mm.

Fig. 11. Three-dimensional view of a rotating molten droplet exposed to a high-velocity gas flow. The object shows the external shape of the droplet (iso-surface of the color function φ = 0.5). The gas flow comes from the left side (arrow). The rotation axis of the droplet is perpendicular to the direction of the gas flow. After t = 1.0 sec, the droplet shape becomes prolate. We use µ_d = 10³ poise, p_fm = 10⁴ dyn cm⁻², ω = 100 rad s⁻¹, and r_0 = 1 mm.

It is found that the droplet elongates in the direction of the rotation axis as time goes by. Fig. 12 shows the time variation of the axial ratios b/a (solid) and c/b (dashed). The major axis a corresponds to the droplet radius in the direction of the rotation axis, so the decrease of b/a means droplet elongation. The axial ratio b/a reaches a steady value of 0.76 after 1 sec, while c/b is kept at a constant value of ≈ 0.95 during the calculation, which means that the two droplet radii perpendicular to the rotation axis are almost equal. The droplet shape at the steady state is prolate, in other words, a rugby-ball-like shape.

Origin of prolate chondrules

Why did the droplet shape become prolate? The reason, of course, is the droplet rotation. If the droplet does not rotate, its shape is affected only by the gas coming from a fixed direction (see Fig. 13a). In this case, the droplet takes a disk-like (oblate) shape because only one axis, the one along the direction of the gas flow, becomes shorter than the other two (Sekiya et al., 2003). In contrast, consider the case in which the droplet rotates. If the rotation period is much shorter than the viscous deformation timescale, the gas flow averaged over one rotation period can be considered axisymmetric about the rotation axis (see Fig. 13b).
Therefore, the droplet shrinks along the directions perpendicular to the rotation axis under the axisymmetric gas flow and becomes prolate if the averaged gas ram pressure is strong enough to overcome the centrifugal force.

Doi (Doi, 2011) derived the analytic solution for the deformation of a rotating droplet in a gas flow in the case where the gas flow can be approximated as axisymmetric around the rotation axis, as shown in Fig. 13(b). He considered the droplet radius to be given by r(θ) = r_0 + r_1(θ), where r_0 is the unperturbed droplet radius, r_1 is the deviation from a perfect sphere, and θ is the angle between the position vector (with origin at the center of the droplet) and the rotation axis. According to his solution, the droplet deformation r_1(θ) is given by an expansion in the Legendre polynomials P_l(cos θ) (Eq. (35)), whose amplitude is set by the Weber number W_e, the ratio of the ram pressure of the gas flow to the surface tension of the droplet,

W_e = p_fm r_0 / γ_s, (36)

and by the normalized centrifugal force R, the ratio of the centrifugal force to the ram pressure,

R = ρ_d ω² r_0² / p_fm, (37)

where ω is the angular velocity of the rotation. This solution is applicable under the assumption r_1 ≪ r_0. Eq. (35) shows that the droplet radius becomes maximum at θ = 0 and minimum at θ = π/2. R = 19/20 is the critical value separating the prolate (R < 19/20) and oblate (R > 19/20) droplet shapes; the droplet is a sphere when R = 19/20 because the ram pressure balances the centrifugal force.

Fig. 14 shows the droplet shape as a function of the Weber number W_e and the normalized centrifugal force R, using Eq. (35). R = 19/20 (vertical dashed line) is the critical value for the droplet shape to be prolate (R < 19/20) or oblate (R > 19/20). In the prolate region, the axial ratio b/a is less than unity for W_e > 0, as shown by the contours, while c/b = 1; in the oblate region, c/b is less than unity for W_e > 0, while b/a = 1. As W_e increases, the degree of deformation increases, as seen in the decrease of the axial ratio b/a or c/b. The blue and red regions show the ranges of the axial ratios of group-A spherical chondrules and group-B prolate chondrules, respectively. We carried out hydrodynamics simulations for a wide range of parameters and display them on this diagram with symbols. The hydrodynamics simulation results show good agreement with the analytic solution over a wide range of W_e and R.

Let us consider the chondrule shape expected from the shock-wave heating model. Adopting a ram pressure of p_fm = 10⁴ dyn cm⁻² and a chondrule radius of r_0 = 1 mm, we obtain W_e = 2.5 for γ_s = 400 erg cm⁻². According to Eq. (34), we evaluate R = 0.06 for f = 0.01. This value of R is smaller than the critical value 19/20, so the expected droplet shape is prolate. In addition, the axial ratio b/a falls within the range of group-B prolate chondrules (see Fig. 14). This suggests that the origin of group-B prolate chondrules can be explained by the shock-wave heating model. Of course, it should be noted that the shock-wave heating model does not reproduce group-B prolate chondrules under arbitrary conditions, because W_e and R depend on many factors, e.g., p_fm, r_0, and f. Namely, different shock conditions may produce different chondrule shapes, even outside the ranges of groups A and B. Conversely, this fact indicates that the chondrule shapes constrain the shock conditions suitable for the formation of these chondrules.
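The numbers quoted above follow directly from Eqs. (34), (36), and (37):

```python
import math

# Canonical shock-wave heating values quoted in the text
p_fm, r0 = 1.0e4, 0.1          # ram pressure (dyn cm^-2), radius (cm; 1 mm)
gamma_s, rho_d, f = 400.0, 3.0, 0.01

We = p_fm * r0 / gamma_s                     # Eq. (36): Weber number -> 2.5
omega_f = math.sqrt(15.0 * math.pi / 8.0 * f * p_fm / (rho_d * r0**2))  # Eq. (34)
R = rho_d * omega_f**2 * r0**2 / p_fm        # Eq. (37): equals (15*pi/8)*f
print(We, omega_f, R)   # 2.5, ~140 rad/s, R ~ 0.06 < 19/20: prolate regime
```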
The data on the three-dimensional shapes of chondrules measured by Tsuchiyama et al. (2003) are definitely valuable; however, the number of samples is only twenty. We need more data to constrain the chondrule formation mechanism from their three-dimensional shapes.

Direct fragmentation

When the droplet size is too large for the surface tension to hold the droplet together against the gas ram pressure, fragmentation occurs. Fig. 15 shows three-dimensional views of the break-up of a droplet suddenly exposed to the gas flow; droplets fragment for W_e ≳ 6 (Bronshten, 1983, p. 96). This translates into fragmentation of droplets with r_0 ≳ 6 mm under our calculation conditions, p_fm = 4000 dyn cm⁻² and γ_s = 400 dyn cm⁻¹. Our hydrodynamics simulations agree with this fragmentation criterion.

Fragmentation via cavitation

Fig. 16 shows the internal pressure inside the droplet for various droplet sizes: r_0 = 3, 4, and 5 mm in panels (a) to (c). We use µ_d = 1.3 poise and p_fm = 4000 dyn cm⁻². These droplets reach steady states, so their hydrodynamics do not change significantly after these panels. We find a high-pressure region at the front of the droplet and low-pressure regions at the centers of eddies in all cases. The high pressure is due to the ram pressure of the gas flow. The low pressure in an eddy is clearly a non-linear effect caused by the advection term in Eq. (2). Surprisingly, the pressure in the eddy decreases to almost zero in panels (b) and (c). In such a "zero"-pressure region, vaporization (or boiling) of the liquid would take place because the vapor pressure of the liquid exceeds the internal pressure. This phenomenon is well known as cavitation. We did not take cavitation into account in our simulations, so no vaporization occurred in the calculation. If cavitation were taken into consideration, the eddies would no longer be maintained, and this would cause fragmentation of the droplet.

Miura & Nakamoto (2007) proposed a condition for the "zero"-pressure region to appear by considering the balance between the centrifugal force and the pressure gradient force around eddies, ρ_d v_circ²/r_eddy ≈ p/r_eddy, where v_circ is the fluid velocity around the eddy, r_eddy is the radius of the eddy, and p is the pressure inside the droplet. Substituting p = 2γ_s/r_0 from the Young-Laplace equation and v_circ ≈ v_max = 0.112 p_fm r_0/µ_d (Sekiya et al., 2003), we obtain

r_0,cav = [ 2 γ_s µ_d² / (ρ_d (0.112 p_fm)²) ]^(1/3). (38)

This equation gives the critical droplet radius above which cavitation takes place at the center of the eddy. We obtain r_0,cav = 1.3 mm for our calculation conditions. In our hydrodynamic simulations, however, we observed the "zero"-pressure region only for r_0 = 4 mm or larger. The inconsistency in the cavitation criterion between the hydrodynamics simulations and Eq. (38) might come from the fact that we substituted the linear solution for v_circ: Sekiya's solution does not take the non-linear term of the Navier-Stokes equation into account, whereas the cavitation is caused by the non-linear effect. The substitution of a linear solution into a non-linear phenomenon might therefore be a source of the inconsistency. Nevertheless, Eq. (38) provides a qualitative insight into the cavitation criterion.

Comparison with chondrule properties
It was found from the chondrule size distribution (see Fig. 3) that chondrules larger than a few mm in radius are very rare. The origin of the chondrule size distribution has usually been attributed to some size-sorting process prior to chondrule formation in the early solar gas disk (Teitler et al., 2010, and references therein). On the other hand, in the framework of the shock-wave heating model, the upper limit of chondrule sizes can be explained by the fragmentation of a molten chondrule in the high-velocity gas flow. The criterion for fragmentation is W_e = p_fm r_0/γ_s ≈ 6. Since the ram pressure of the gas flow is typically p_fm ≈ 10³ − 10⁵ dyn cm⁻², we obtain the upper limit of chondrule sizes as r_max ≈ 0.2 − 20 mm. This is consistent with the fact that chondrules larger than a few mm in radius are very rare; a short numerical check is given below.

In addition, our hydrodynamics simulations reveal a new pathway to fragmentation via cavitation. The cavitation takes place for W_e < 6 if the viscosity of the molten chondrule is small. The viscosity decreases rapidly as the temperature of the droplet increases. This suggests the following tendency: chondrules that experienced a higher maximum temperature during melting should have smaller sizes than those that experienced a lower maximum temperature. On the other hand, the data obtained by Nelson & Rubin (2002) show the opposite tendency from our prediction. They considered that the difference in mean sizes among chondrule textural types is due mainly to parent-body chondrule-fragmentation events and not to chondrule-formation processes in the solar nebula. Therefore, to date, there is no evidence regarding the dependence of chondrule sizes on the maximum temperature; the relation between chondrule sizes and maximum temperature should be investigated in the future.

How about the distribution of sizes smaller than the maximum one? Kadono and his colleagues carried out aerodynamic liquid dispersion experiments using a shock tube (Kadono & Arakawa, 2005; Kadono et al., 2008). They showed that the size distributions of the dispersed droplets have an exponential form similar to that of chondrules. In their experimental setup, the gas pressure is too high to approximate the gas flow around the droplet as a free molecular flow. We carried out hydrodynamics simulations of droplet dispersion and showed that the size distribution of the dispersed droplets is similar to that of Kadono's experiments (Yasuda et al., 2009). These results suggest that the shock-wave heating model accounts not only for the maximum size of chondrules but also for their size distribution below the maximum size.
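Both size limits quoted in this section follow from the formulas in the text (the W_e ≈ 6 fragmentation criterion and the cavitation radius of Eq. (38)):

```python
# Upper size limit from the fragmentation criterion We = p_fm * r0 / gamma_s ~ 6
gamma_s = 400.0                      # dyn cm^-1
for p_fm in (1.0e3, 1.0e5):          # typical range of ram pressure, dyn cm^-2
    r_max = 6.0 * gamma_s / p_fm     # cm
    print(p_fm, r_max * 10.0)        # -> 24 mm and 0.24 mm (~0.2-20 mm range)

# Critical radius for cavitation, Eq. (38)
mu_d, rho_d, p_fm = 1.3, 3.0, 4000.0
r_cav = (2.0 * gamma_s * mu_d**2 / (rho_d * (0.112 * p_fm)**2)) ** (1.0 / 3.0)
print(r_cav * 10.0)                  # ~1.3 mm, as quoted in the text
```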
In addition, we recognized a new interesting phenomenon related to chondrule formation: the droplets dispersed from the parent droplet collide with each other. A set of droplets will fuse into one droplet after a collision if their viscosities are low. On the contrary, if the set of droplets solidifies before complete fusion, it will have a strange morphology composed of two or more chondrules adhering together. This is known as a compound chondrule and has actually been observed in chondritic meteorites. The abundance of compound chondrules relative to single chondrules is at most a few percent (Akaki & Nakamura, 2005; Gooding & Keil, 1981; Wasson et al., 1995). This abundance sounds rare; however, it is much higher than the collision probability of chondrules in the early solar gas disk, where the number density of chondrules is quite low (Gooding & Keil, 1981; Sekiya & Nakamura, 1996). In the case of collisions among dispersed droplets, a high collision probability is expected because the local number density behind the parent droplet is high enough (Miura, Yasuda & Nakamoto, 2008; Yasuda et al., 2009). The fragmentation of a droplet in the shock-wave heating model might thus account for the origin of compound chondrules.

Conclusion

To conclude, the hydrodynamic behavior of a droplet in the space environment is a key process for understanding the formation of primitive materials in meteorites. We modeled its three-dimensional hydrodynamics in a hypervelocity gas flow. Our numerical code based on the CIP method properly simulated the deformation, internal flow, and fragmentation of the droplet. We found that these hydrodynamics results account for many physical properties of chondrules.

Fig. 2. Schematic of the shock-wave heating model for chondrule formation. (a) The precursors of chondrules are in a gas disk around the proto-sun 4.6 billion years ago. The gas and precursors rotate around the proto-sun with almost the same angular velocity, so there is almost no relative velocity between the gas and the precursors. (b) If a shock wave is generated in the gas disk by some mechanism, the gas behind the shock front is suddenly accelerated. In contrast, the precursor is not accelerated because of its large inertia. The difference in their behavior at the shock front causes a large relative velocity between them. The precursors are heated by the gas friction in the high-velocity gas flow.

Fig. 4. Three-dimensional shapes of chondrules (Tsuchiyama et al., 2003, and their unpublished data). a, b, and c are the axial radii of chondrules when their shapes are approximated as three-axial ellipsoids (a ≥ b ≥ c), respectively. The textures of these chondrules are 16 porphyritic (open circles), 3 barred-olivine (filled circles), and 1 crypto-crystalline (filled square). The radius of each symbol is proportional to the effective radius of each chondrule, r* ≡ (abc)^(1/3); the largest circle corresponds to r* = 1129 µm. For the crypto-crystalline chondrule, r* = 231 µm. Chondrule shapes are classified into two groups: group A shows relatively small deformation from a perfect sphere, and group B is prolate with an axial ratio of b/a ≈ 0.7 − 0.8.
Fig. 7. Spatial distributions of the momentum flux M (a) and the ram pressure F_g (b) of the free molecular gas flow around a spherical droplet in the xy-plane. The dashed circles are sections of the droplet surfaces in the xy-plane. The units of the gray scales are p_fm for panel (a) and dyn cm⁻³ for panel (b), respectively. We adopt p_fm = 5000 dyn cm⁻² in this figure.

Fig. 7(a) shows the distribution of the momentum flux M around two droplets in the xy-plane. The dashed circles are the external shapes of the large and small droplets. The gray scale is normalized by p_fm, so unity (white region) means undisturbed molecular flow and zero (dark region) means no flux, because the free molecular flow is obstructed by the droplet. It is found that the gas flow is obstructed only behind the droplets. Fig. 7(b) shows the distribution of the ram pressure F_g,x calculated from the momentum flux distribution. The ram pressure acts at the droplet surface, where M changes steeply. Note that no ram pressure acts on the bottom half of the smaller droplet because the molecular flow is obstructed by the larger one. As shown in Fig. 7, this model of the ram pressure reproduces the properties of the free molecular flow well.

Fig. 8. Time evolution of a molten droplet exposed to the gas flow. The gas flow comes from the left side of the panels. We use p_fm = 10⁴ dyn cm⁻², r_0 = 500 µm, and µ_d = 1.3 poise for the calculations.

Fig. 9. Vibrational motions of a molten droplet: the deformation by the ram pressure and the recovery motion by the surface tension. The horizontal axis is the time since the ram pressure began to act on the droplet, and the vertical axis is the axial ratio c/b of the droplet. Each curve shows the calculation result for a different value of the ram pressure p_fm. We use r_0 = 500 µm and µ_d = 1.3 poise for the calculations.

Fig. 12. Time evolutions of the axial ratios b/a and c/b in the case of Fig. 11.

Fig. 13. Illustration of the reason why a rotating droplet exposed to the gas flow is deformed into a prolate shape. (a) If the droplet does not rotate, it is deformed only by the gas ram pressure. (b) If the droplet rotates much faster than it deforms in the gas flow, the time-averaged gas flow can be approximated as axisymmetric around the rotation axis.

Fig. 14. Shapes of rotating droplets in the gas flow. The horizontal axis is the centrifugal force normalized by the ram pressure of the gas flow, R.
The vertical axis is the Weber number W_e. R = 19/20 (vertical dashed line) is the critical value for the droplet shape to be prolate (R < 19/20) or oblate (R > 19/20). Solid lines are contours of the axial ratios b/a (R < 19/20) or c/b (R > 19/20). The ranges of the axial ratios of chondrules are shown by the colored regions: group-A spherical chondrules (blue) and group-B prolate chondrules (red). Symbols are the results of the hydrodynamics simulations (see the legends in the figure). The grayed region shows the conditions under which the droplet is fragmented by rapid rotation.

Fig. 15. Three-dimensional views of the break-up of a droplet. The droplet radius is r_0 = 2 cm, which corresponds to W_e = 20. The gas flow comes from the left side of the view along the x-axis. It is found that the droplet shape deforms as time goes by (panels (a) and (b)) and leads to fragmentation (panel (c)): the parent droplet breaks up into many smaller pieces.

Fig. 16. Internal pressure inside the droplet for different droplet radii r_0: (a) 3 mm, (b) 4 mm, and (c) 5 mm. The pressure in the region surrounded by a white line decreases to almost zero in the eddy. We use µ_d = 1.3 poise and p_fm = 4000 dyn cm⁻².
12,378.8
2011-12-22T00:00:00.000
[ "Physics" ]
The Jump Size Distribution of the Commodity Spot Price and Its Effect on Futures and Option Prices

Introduction

In the literature, the commodity price usually follows a diffusion process with continuous paths when pricing commodity derivatives. Although this assumption is very attractive because of its computational convenience, theoretical derivation, and statistical properties [1-4], others have found significant evidence of the presence of jumps in commodity prices.

In traditional jump-diffusion commodity models, the functions of the stochastic processes and the market prices of risk are usually specified as simple parametric functions, purely for tractability and simplicity. Furthermore, the functions of the models are usually chosen to provide an affine model which has a known closed-form solution. For example, [5] considers a three-factor model where the spot price follows a jump-diffusion stochastic process. In [6], existing commodity valuation models were extended to allow for stochastic volatility and simultaneous jumps in the spot price and volatility. The standard geometric Brownian motion augmented by jumps was used by [7] to describe the underlying spot, with mean-reverting diffusion processes for the interest rate and convenience yield, in gold and copper price models. In [8], a seasonal mean-reverting model with jumps and Heston-type stochastic volatility is analyzed.

We consider, in this paper, a two-factor jump-diffusion commodity model, where one of the factors is the commodity spot price and the other is the convenience yield. These factors are often used in the commodity literature. For example, [9, 10] propose affine models with these two factors, though they do not consider jumps. In the affine setting, all the functions can be easily estimated and the commodity derivatives priced. However, there is no empirical evidence or consensus that affine models are the best models to price commodity futures. Furthermore, the market prices of risk are not observed in the markets. If we considered other, more realistic functions for the state variables or the market prices of risk, or even a nonparametric approach, then the model would no longer be affine, a closed-form solution could not be obtained, and, therefore, the estimation of the market prices of risk would not be possible. However, [11] shows a new approach to estimate all of the model's functions even though a closed-form solution is not known. They also apply it to a jump-diffusion model where the jump size follows a Normal distribution. Finally, they estimate the functions with a nonparametric technique in order to avoid imposing arbitrary functional forms on the model.
The Valuation Model

In this section, we introduce a commodity model with two state variables, the spot price and the convenience yield, for pricing commodity derivatives; see also [11, 17]. We assume that the spot price follows a jump-diffusion process, because commodity prices usually suffer abrupt changes in the markets; see [1]. However, we assume that the convenience yield is a diffusion process, because its behaviour is not affected by extreme changes; see, for example, [6].

Define (Ω, F, {F_t}_{t≥0}, P) as a complete filtered probability space satisfying the usual conditions, where {F_t}_{t≥0} is a filtration; see [18-20]. Let S be the spot price and δ the instantaneous convenience yield. We assume that these factors follow the joint jump-diffusion stochastic process

dS(t) = μ_S(S, δ) dt + σ_S(S, δ) dW_S(t) + dJ(t),
dδ(t) = μ_δ(S, δ) dt + σ_δ(S, δ) dW_δ(t),    (1)

where μ_S and μ_δ are the drifts and σ_S and σ_δ the volatilities. Moreover, W_S and W_δ are Wiener processes, and the impact of the jump is given by the compound Poisson process J(t) = Σ_{i=1}^{N(t)} Y_i, with jump times (τ_i)_{i≥1}, where N(t) represents a Poisson process with intensity λ(S, δ) and Y_1, Y_2, ... is a sequence of identically distributed random variables with probability distribution Π. We assume that N and the Y_i are independent of the Wiener processes, while the standard Brownian motions are correlated, with dW_S(t) dW_δ(t) = ρ dt. We also suppose that the jump magnitudes and the jump arrival times are uncorrelated with the diffusion parts of the processes. We assume that the functions μ_S, μ_δ, σ_S, σ_δ, λ, and Π satisfy suitable regularity conditions; see [20, 21].

Under the above assumptions, a commodity futures price at time t with maturity at time T, t ≤ T, can be expressed as F(t, S, δ; T), and at maturity it verifies F(T, S, δ; T) = S. We assume that the market is arbitrage-free. Then, there exists an equivalent martingale measure, the Q-measure, known as the risk-neutral measure; see the extended Girsanov-type measure transformation in [22]. The state variables of the model (1) under the risk-neutral measure follow

dS(t) = (μ_S − λ_S σ_S)(S, δ) dt + σ_S(S, δ) dW_S^Q(t) + dJ^Q(t),
dδ(t) = (μ_δ − λ_δ σ_δ)(S, δ) dt + σ_δ(S, δ) dW_δ^Q(t),

where W_S^Q and W_δ^Q are the Wiener processes under the risk-neutral measure. The market prices of risk associated with the W_S and W_δ Wiener processes are λ_S(S, δ) and λ_δ(S, δ), respectively. Finally, J^Q is the compensated compound Poisson process under the Q-measure, the intensity of the Poisson process N^Q(t) is λ^Q(S, δ), and E^Q denotes the expectation under the Q-measure. Then, the futures price can be expressed as the risk-neutral expectation of the spot at maturity,

F(t, S, δ; T) = E^Q[S(T) | S(t) = S, δ(t) = δ].    (4)

Let V(t, S, δ, T2; T1) be the price of a European call option that matures at T1, written on a futures contract that expires at T2, T1 ≤ T2, with strike price K. Then, analogously to (4), a European commodity futures option is priced as the expected discounted payoff under the Q-measure; see [6, 22]:

V(t, S, δ, T2; T1) = E^Q[e^{−∫_t^{T1} r(u) du} · max(F(T1, S(T1), δ(T1); T2) − K, 0) | S(t) = S, δ(t) = δ],    (5)

where r denotes the instantaneous risk-free interest rate, which is assumed to be constant. Moreover, τ1 = T1 − t and τ2 = T2 − T1 are the maturities of the option contract and futures contract, respectively.

Valuation of Commodity Futures with NYMEX Data

In this section, by means of an empirical application with natural gas NYMEX data, we illustrate the advantages and disadvantages of modelling the spot price with a jump-diffusion process with an Exponential distribution and a Normal distribution. In all cases, we use the approach, the nonparametric techniques, and the in-sample data (January 2004-December 2014) as in [11] to estimate the risk-neutral functions. However, we extend the out-of-sample period in which we price the natural gas derivatives from January to July 2015.
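Because the drift, volatility, and intensity functions above are estimated nonparametrically, there is no closed-form futures price; a Monte Carlo sketch of the model (1) under the Q-measure is shown below, using (4) to approximate F. The functional forms, parameter values, correlation, and the Normal jump size used here are placeholders for illustration, not the estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_paths(S0, d0, T, n_steps, n_paths,
                   mu_S, mu_d, sig_S, sig_d, lam, jump_rng, rho=-0.3):
    """Euler scheme for the two-factor jump-diffusion:
    dS = mu_S dt + sig_S dW_S + dJ,  d(delta) = mu_d dt + sig_d dW_d,
    with corr(dW_S, dW_d) = rho and J a compound Poisson process."""
    dt = T / n_steps
    S = np.full(n_paths, S0, float)
    d = np.full(n_paths, d0, float)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        n_jumps = rng.poisson(lam(S, d) * dt)
        jumps = np.array([jump_rng(k).sum() for k in n_jumps])
        S = S + mu_S(S, d) * dt + sig_S(S, d) * np.sqrt(dt) * z1 + jumps
        d = d + mu_d(S, d) * dt + sig_d(S, d) * np.sqrt(dt) * z2
    return S, d

# Placeholder mean-reverting specifications (illustrative only)
F_hat = simulate_paths(
    S0=3.0, d0=0.05, T=1.0, n_steps=250, n_paths=5000,
    mu_S=lambda S, d: (0.02 - d) * S, mu_d=lambda S, d: 1.5 * (0.04 - d),
    sig_S=lambda S, d: 0.4 * S, sig_d=lambda S, d: 0.25,
    lam=lambda S, d: np.full_like(S, 2.0),
    jump_rng=lambda k: rng.normal(0.0, 0.15, k))[0].mean()
print("risk-neutral futures approximation E_Q[S(T)]:", F_hat)
```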
In this empirical application, we use the model stated in Section 2, where the factors are the commodity spot price and the convenience yield. For simplicity and tractability, and as is usual in the literature, we also assume that the distribution of the jump size under the Q-measure is known and equal to the distribution under the P-measure. This means that all risk premium related to the jump is artificially absorbed by the change in the intensity of the jump, from λ under the physical measure to λ^Q under the risk-neutral measure; see [8, 11, 23]. Moreover, we assume the jump size follows a Normal distribution N(0, σ_J²) (see [11]) or an Exponential distribution Exp(η) (see [6, 24, 25]), among others.

In order to price natural gas futures, we use daily natural gas data from the NYMEX obtained through the Quandl platform. Natural gas spot prices were obtained from the U.S. Energy Information Administration (EIA). The sample period covers January 2004 to July 2015. More precisely, we use data from January 2004 to December 2014 to estimate the risk-neutral functions as in [11], and we then keep the data from January to July 2015 for our out-of-sample analysis of the futures prices.

As is well known in the literature, the convenience yield is not observed in the markets. Then, following [9], we approximate it by δ_{1,2} = r_{1,2} − ln(F_2/F_1)/(T_2 − T_1), where r_{n−1,n} denotes the forward interest rate between T_{n−1} and T_n and F_n is the futures price maturing at T_n. We obtain this forward interest rate from two daily T-Bill rates with maturities as close as possible to those of the futures contracts, in order to compute δ_{1,2}, the one-month-ahead annualized convenience yield. The latter is identified with the instantaneous convenience yield δ_{0,1}; see [9, 11] for more details.

In order to estimate the risk-neutral functions of the jump-diffusion models, we follow the same approach as [11]. Note that similar techniques have been proposed for interest rate derivatives; see [26, 27].

Firstly, we obtain the compensated risk-neutral drift of the spot price by means of the equality that relates the slope of the futures curve at the origin to the drift of the spot in the stochastic process under the Q-measure, (∂F(t, S, δ; T)/∂T)|_{T=t} = μ_S^Q(S, δ) + λ^Q(S, δ) E^Q[Y_1]; see [11] for more detail. We approximate the partial derivative by means of numerical differentiation with futures prices with maturities equal to 1, 2, 3, and 4 months. Then, we estimate it by means of the Nadaraya-Watson estimator; see [28] for more details on this estimation technique.
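The Nadaraya-Watson estimator used throughout is a kernel-weighted local average; a minimal sketch with a Gaussian kernel follows. The bandwidth, the simulated data, and the variable names are illustrative, not the paper's choices.

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate of E[y | x = x0]
    with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x[:, None] - np.atleast_1d(x0)[None, :]) / h) ** 2)
    return (w * y[:, None]).sum(axis=0) / w.sum(axis=0)

# Illustrative use: smooth a futures-slope proxy against the spot price
rng = np.random.default_rng(1)
spot = rng.uniform(2.0, 6.0, 500)
slope_proxy = 0.1 * (4.0 - spot) + rng.normal(0.0, 0.05, 500)
grid = np.linspace(2.5, 5.5, 7)
print(nadaraya_watson(grid, spot, slope_proxy, h=0.3))
```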
Secondly, for the risk-neutral jump intensity, we use a result proposed in [11] which relates the slope of the futures curve at the origin to the spot price, the spot price volatility, and the parameters of the jump size distribution under the Q-measure. Initially, [11] assumed that the jump size followed a Normal distribution, Y_1 ~ N(0, σ_J²); then E^Q[Y_1] = 0 and the jump term does not contribute to the compensated drift. In this paper, we also assume that the jump size follows an Exponential distribution, Y_1 ~ Exp(η); then E^Q[Y_1] equals the mean of the Exponential distribution and is strictly positive. This jump size distribution has also been considered by [29] for the volatility and by [30] for interest rates. This assumption could be useful for pricing during periods in which positive jumps are expected to dominate negative jumps, for example, coming out of an economic crisis (see [30]) or in certain economic regimes (see [31]). With both distributions, the parameters of the jump size distribution and the spot price volatility σ_S are estimated by means of a system of moment equations of a jump-diffusion process; see [11, 32, 33]. More precisely, we use the moment conditions of [11].

In order to estimate the correlation ρ, we use the corresponding cross-moment condition and the Nadaraya-Watson estimator; see [36] for more details. Later, we replace the estimated covariance and the numerical approximations of the futures-curve derivatives in (13), and we estimate the risk-neutral drift of the convenience yield by means of the Nadaraya-Watson estimator.

Finally, the volatility of the convenience yield under the P-measure is equal to the volatility under the Q-measure. Hence, we estimate σ_δ by means of the second-order moment of a diffusion process, E[(δ(t+Δt) − δ(t))² | F_t] ≈ σ_δ²(S, δ) Δt, and the Nadaraya-Watson estimator, with spot and convenience yield data.

Up to this point, we have focused on the estimation of the risk-neutral functions of jump-diffusion processes. If we assume instead that the spot price follows a diffusion stochastic process, the factors of the model follow the joint diffusion stochastic process under the Q-measure obtained from model (1) by removing the jump component, with the corresponding drift adjustments given by the market prices of risk.

The estimation of these functions is made by means of the approach in [37] and the Nadaraya-Watson estimator, with the same natural gas data and numerical differentiation approximation as for the jump-diffusion model.

To analyze the effect of the jumps on natural gas futures prices, we price natural gas futures with a diffusion model (DM) as well as with jump-diffusion models with a Normal jump size distribution (JDMN) and an Exponential distribution (JDMExp). In order to price natural gas futures, it is necessary to solve a partial integro-differential equation or, equivalently, by means of the Feynman-Kac theorem, to compute the expectation in (4). As we use nonparametric methods, a closed-form solution cannot be found. Recently, several numerical methods have been developed to solve this kind of problem; see [38, 39].
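The exact moment system is the one in [11]; as a hedged illustration of the idea, the sketch below recovers the diffusion volatility, the jump intensity, and a Normal jump-size variance from the leading-order conditional moments of the increments. The formulas hold only to first order in Δt, and the simulated parameter values are synthetic.

```python
import numpy as np

def moment_estimates_normal_jumps(dX, dt):
    """Leading-order method-of-moments sketch for increments of
    dX = mu dt + sigma dW + dJ with Normal(0, s_J^2) jumps:
      E[dX^2] ~ (sigma^2 + lam*s_J^2) dt
      E[dX^4] ~ 3*lam*s_J^4 dt   (jump part dominates at order dt)
      E[dX^6] ~ 15*lam*s_J^6 dt
    so s_J^2 = m6/(5*m4), lam = m4/(3*s_J^4*dt), sigma^2 = m2/dt - lam*s_J^2."""
    m2, m4, m6 = (np.mean(dX**k) for k in (2, 4, 6))
    s_J2 = m6 / (5.0 * m4)
    lam = m4 / (3.0 * s_J2**2 * dt)
    sigma2 = m2 / dt - lam * s_J2
    return sigma2, lam, s_J2

# Quick synthetic check: sigma = 0.3, lam = 5, s_J = 0.2
rng = np.random.default_rng(2)
dt, n = 1 / 250, 200_000
jumps = np.array([rng.normal(0, 0.2, k).sum() for k in rng.poisson(5 * dt, n)])
dX = 0.3 * np.sqrt(dt) * rng.standard_normal(n) + jumps
print(moment_estimates_normal_jumps(dX, dt))  # ~ (0.09, 5, 0.04)
```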
In this paper, we use the Monte Carlo simulation approach because it is widely used by practitioners in the markets, especially for multi-factor models, owing to its simplicity and efficiency [40]. More precisely, we consider 5000 simulations and a daily time step, Δt = 1/250. We price natural gas futures with maturities from 1 to 44 months and compare them with those traded at NYMEX along the out-of-sample period (January-July 2015). As measures of error, we use the root mean square error (RMSE) and the percentage root mean square error (PRMSE) for the out-of-sample period:

RMSE = sqrt( (1/n) Σ_{i=1}^{n} (F_i − F̂_i)² ),    PRMSE = sqrt( (1/n) Σ_{i=1}^{n} ((F_i − F̂_i)/F_i)² ) × 100%,

where n is the number of observations, F_i is the futures price traded at NYMEX, and F̂_i is the futures price predicted with the different models. Table 1 shows a summary of the RMSE and PRMSE of the different models for the out-of-sample period and for several maturities. F1 is the futures price with a maturity of 1 month, F6 with six months, and so on. The table shows that for short maturities the RMSE values are usually lower than for long maturities. Besides, for very short maturities the diffusion model sometimes prices natural gas futures quite accurately, as for F6. However, for F1 and for maturities greater than or equal to 9 months, the jump-diffusion models provide lower errors than the diffusion model, as in [11]. Moreover, for maturities shorter than 18 months the JDMN is more accurate than the JDMExp, but for long maturities (greater than or equal to 18 months) the results change and the JDMExp displays lower errors than the JDMN. Therefore, depending on the maturity of the futures to be priced, some models are more accurate than others. As far as the PRMSE is concerned, we reach the same conclusion, but for maturities of 36 months or longer the differences between the relative errors of the JDMN and the JDMExp are larger.

We now turn our attention to the absolute errors along the out-of-sample period for some maturities. Figure 1 plots the absolute errors of the considered models for the maturities 6, 18, and 44 months. We show only these maturities because the behaviour of the rest is analogous. For example, for a maturity of 6 months, we observe that the errors of the DM are the lowest during the first months of the out-of-sample period, although this changes in the last months. For longer maturities, for example 18 months, the JDMExp model provides the lowest errors for a large number of months, followed by the JDMN. Finally, when we consider the longest available maturity, the JDMExp model is clearly the most accurate.

If we analyze the price behaviour along the out-of-sample period, we observe large changes for short maturities, which decrease as the maturity increases. That is, the longer the maturity, the smaller the price variations over time. In order to illustrate this result, in Figure 2 we plot the futures prices traded at NYMEX and those priced with the different models considered in this paper (DM, JDMN, and JDMExp). As we can see in this figure, the largest variations are for F6 and the smallest for F44. Focusing on the estimated prices, we observe that, in general, the DM provides the lowest prices and the JDMExp the highest prices for each maturity over time. We observe that the NYMEX and estimated futures prices usually rise when the maturity increases, but the rate of increase of the market prices is higher than that of the prices estimated with the different models.
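A minimal sketch of the two error metrics follows; the percentage normalization (relative to the traded price, times 100) is the standard reading of PRMSE and is an assumption where the original formula was lost in extraction.

```python
import numpy as np

def rmse(F_mkt, F_model):
    """Root mean square error between NYMEX and model futures prices."""
    return np.sqrt(np.mean((np.asarray(F_mkt) - np.asarray(F_model)) ** 2))

def prmse(F_mkt, F_model):
    """Percentage RMSE: errors relative to the traded price, in percent."""
    F_mkt, F_model = np.asarray(F_mkt), np.asarray(F_model)
    return 100.0 * np.sqrt(np.mean(((F_mkt - F_model) / F_mkt) ** 2))

# Illustrative values only
print(rmse([3.0, 3.2], [2.9, 3.3]), prmse([3.0, 3.2], [2.9, 3.3]))
```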
We also see that the estimated models overprice the NYMEX F6 futures in several months. However, in most cases, the JDMN and the DM underprice the NYMEX futures for a maturity of 18 months. Finally, for a maturity of 44 months, all the estimated models underprice the NYMEX futures. Thus, the longer the maturity, the higher the possibility that natural gas futures are underpriced by the different models, especially by the DM.

In conclusion, as in [11], the jump-diffusion models provide lower errors than the diffusion model, apart from a few prices. Moreover, the conclusions do not change if we consider other days of the out-of-sample period for the valuation.

As we do not have observations of European natural gas option prices for different maturities, we compare the prices when the Normal and Exponential jump sizes are considered. In Table 2, we show some ratios between the JDMN and JDMExp prices for different strike prices and maturities on January 3, 2015. As we can see, for options and underlying futures with short maturities (3 months), the ratios are higher than 90%. The main reason is that the futures prices with short maturities are quite similar for both distributions, although the futures prices with the Normal distribution are slightly lower. However, as we increase the maturities, especially that of the futures, the ratios decrease considerably, down to 19%. This fact is consistent with the large differences between the futures prices under the two distributions when the maturity increases. Moreover, these differences are even larger because the futures price is the underlying of the option. Therefore, we conjecture that, in order to price futures options accurately, other stochastic variables should be considered in the model, such as the volatility or interest rates. This result can be very interesting for practitioners, because they should take into account the fact that the Exponential jump size distribution overprices options with respect to the Normal distribution, which is consistent with the results obtained in the previous section for jump-diffusion futures prices. Finally, we see that the higher the strike price, the lower the ratio. Therefore, the largest price differences are found for out-of-the-money options.

Futures Risk Premium

The futures risk premium provides a link between natural gas futures and expected spot prices, and it is a key measure in risk management. In particular, the term structure of commodity risk premia supplies additional information about the role of the net hedging pressure. It is therefore an important factor in understanding these markets and deserves great attention.

In the literature, the risk premium is defined as the difference between the expected future spot price and the futures price, π(t, T) = E_P[S(T) | F_t] − F(t, S, δ; T); see [25, 41]. Therefore, the risk premium is the reward for holding a risky position rather than a risk-free investment; see [41]. In energy markets, the sign of the risk premium usually changes over time, with the maturity of the futures, and even with the market and the commodity; see, for example, [42].
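A small sketch of this definition applied maturity by maturity follows; the input numbers are illustrative, not NYMEX estimates.

```python
import numpy as np

def risk_premium_term_structure(expected_spot, futures):
    """Risk premium pi(t, T) = E_P[S(T)] - F(t, T) per maturity:
    positive under normal backwardation, negative under contango."""
    return np.asarray(expected_spot) - np.asarray(futures)

# Illustrative values only
print(risk_premium_term_structure([3.1, 3.2, 3.3], [3.0, 3.25, 3.5]))
# -> [ 0.1  -0.05 -0.2 ]: backwardated short end, contango long end
```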
On the one hand, commodity consumers may enter into a long position in futures contracts because they want to insure against future increases in the spot price, so they accept prices above the expected spot price. On the other hand, commodity producers may enter into a short position in futures contracts because they wish to hedge their revenue risk. Since this decision is taken in advance, they accept prices below the expected spot price. Then, if the activity of consumers is greater than that of producers, there will be an excess of commercial participants looking to enter a long position. In this case, the net hedging pressure theory establishes that the futures price will be higher than the expected future spot price, to induce speculators to balance the market by taking a short position. In contrast, if the hedging activity of producers is greater than that of consumers, there will be an excess of commercial participants looking to enter a short position. Then, the expected future spot price will be higher than the futures price, to induce speculators to balance the market by taking a long position. Therefore, the commodity futures risk premium (in absolute value) can be seen as the return that speculators expect to receive to compensate the market; see [42].

In this section, we obtain the natural gas futures risk premia for the out-of-sample period (January-July 2015). We use the natural gas futures prices traded at NYMEX for maturities between 1 and 24 months, but we also need to calculate E_P[S(T) | S(t) = S, δ(t) = δ]. In this case, the functions of the stochastic processes (1) are estimated directly from the moment conditions for the different jump distributions; see, for example, [34] for the Normal distribution and [35] for the Exponential distribution. The market prices of risk are not involved, because no change from the physical measure to the risk-neutral measure is required here.

Figure 3 shows the term structure of natural gas risk premia with the Normal and the Exponential jump size distributions, hereafter RPNormal and RPExp, respectively. We calculate these values as the mean of the risk premia, for each maturity, over the out-of-sample period. In this figure, RPNormal and RPExp have, in general, the same behaviour, although the risk premium under the Normal jump size distribution is always higher than the risk premium under the Exponential distribution. This fact is consistent with the means of the distributions considered in each case. Furthermore, as can be seen in Figure 3, the risk premium is positive for short maturities (approximately up to 7 or 8 months for the Exponential and the Normal distributions, respectively). Following the net hedging pressure theory, for these short maturities the activity of the producers is higher than that of the consumers, and the risk premium is the average return that speculators would receive by entering a long position in the natural gas futures markets and holding the futures to expiration. This means that the futures prices are below the expected spot prices, and the futures curve is said to be normally backwardated; see [43]. However, for maturities longer than 7 or 8 months, the risk premium starts to be negative. In this case, the futures prices are above the expected spot prices, and the curve is said to be in normal contango; see [43]. Following the net hedging pressure theory, consumers have to offer an incentive to induce speculators to enter a short position, and the absolute value of the risk premium is the return that speculators expect to receive for balancing the market. More precisely, in general,
the longer the maturity, the more negative the risk premium, and hence speculators expect to receive a higher compensation to balance the market.

In Figures 4 and 5, we plot the estimated risk premium as a function of time when the jump size follows a Normal and an Exponential distribution, respectively. These figures show that there is mixed evidence on the sign of the risk premium and, moreover, that the risk premia are strongly time-varying. Hence, the activity of speculators is also time-varying. In Figure 3 we saw that the risk premium for very short maturities was positive; however, in Figure 4 we see that it is not always positive, but it is on average. Therefore, in general, the futures price is a downward-biased predictor of the expected spot price for short maturities. For longer maturities, the risk premium is usually negative; indeed, for maturities longer than 12 months for the Exponential distribution and longer than 24 months for the Normal distribution, it is always negative. Then, for maturities longer than 6 months, the futures price is, on the whole, an upward-biased predictor of the expected spot price.

Conclusions

In this paper, we make mainly two contributions. Firstly, we apply the approach in [11] to price natural gas futures, but we assume that the jump size follows an Exponential distribution. We use the data and nonparametric techniques to estimate all the risk-neutral functions of the model as in [11]. Then, considering a longer out-of-sample period, we show that a jump-diffusion model provides lower errors than a diffusion model when pricing futures. Furthermore, we also show that the Normal distribution is the best assumption for pricing short-maturity futures, whereas the Exponential distribution provides lower errors when pricing long-maturity futures.

The second contribution comes through the use of the approach and data of [11] to price natural gas options and risk premia. We find that, in general, the model with the Exponential distribution overprices options with respect to the Normal distribution. We think that, in order to price options more accurately, other state variables should be taken into account.

As far as the risk premia are concerned, we find that the premium is negative more often with the Exponential distribution than with the Normal distribution. These facts should be taken into account when a jump-diffusion model is applied to price commodity futures or options.

Figure 1: Absolute errors of the futures prices for the out-of-sample period (January-July 2015) with maturities of 6, 18, and 44 months. The absolute error for the DM is the red dotted line, for the JDMN the blue dashed line, and for the JDMExp the black solid line.

Figure 2: Natural gas futures prices (January-July 2015) with maturities of 6, 18, and 44 months. The NYMEX futures prices are the red solid line, the DM is the green dash-dotted line, the JDMN is the blue dashed line, and the JDMExp is the black dotted line.

Figure 3: The risk premium as a function of time to maturity for the JDMN and JDMExp models.

Figure 4: The risk premium for the JDMN model along the out-of-sample period, for maturities of 1, 9, 12, and 24 months.
5,597.2
2017-01-01T00:00:00.000
[ "Economics" ]
Stochastic Resonance with Parameter Estimation for Enhancing Unknown Compound Fault Detection of Bearings Although stochastic resonance (SR) has been widely used to enhance weak fault signatures in machinery and has obtained remarkable achievements in engineering application, the parameter optimization of the existing SR-based methods requires quantification indicators that depend on prior knowledge of the defects to be detected; for example, the widely used signal-to-noise ratio easily results in a false SR and further degrades the detection performance of SR. Such indicators are not suitable for real-world fault diagnosis of machinery, where the structural parameters are unknown or cannot be obtained. Therefore, it is necessary to design an SR method with parameter estimation, one that can estimate the SR parameters adaptively from the signals to be processed or detected, in place of prior knowledge of the machinery. In such a method, the condition for triggering SR in second-order nonlinear systems and the synergetic relationship among weak periodic signals, background noise, and the nonlinear system are used to determine the parameter estimates for enhancing unknown weak fault characteristics of machinery. Bearing fault experiments were performed to demonstrate the feasibility of the proposed method. The experimental results indicate that the proposed method is able to enhance weak fault characteristics and diagnose weak compound faults of bearings at an early stage without prior knowledge or any quantification indicators, and it delivers the same detection performance as SR methods based on prior knowledge. Furthermore, the proposed method is simpler and less time-consuming than other SR methods based on prior knowledge, in which a large number of parameters need to be optimized. Moreover, the proposed method is superior to the fast kurtogram method for early fault detection of bearings.

Introduction

Rolling bearings are key components of rotating machinery and provide reliable and stable support. However, during operation, defects in rolling bearings, such as wear, cracking, and pitting caused by insufficient lubrication, contact fatigue, and so on, are inevitable. Therefore, it is very important to achieve early fault detection and fault diagnosis of bearings to avoid serious accidents. To date, there have been two well-known categories of bearing fault diagnosis: model updating [1,2], and direct feature extraction and analysis using machine learning [3,4] or signal processing [5,6]. In addition, some scholars focus on the prediction of remaining useful life [7,8]. Among these, because direct feature extraction and analysis based on signal processing does not require building complex mathematical models or complete fault samples, it has attracted sustained attention in fault detection. However, most signal processing methods attempt to cancel the noise embedded in signals to detect weak fault features, while stochastic resonance is able to harvest the energy of noise to enhance weak useful signals; it has therefore been widely used in weak fault detection and fault diagnosis of machinery. In 2019, Lu et al. [9] and Qiao et al. [10] reviewed the literature on SR-based fault diagnosis of machinery and pointed out potential development directions.
These two review articles have inspired researchers to explore the potential of SR and to develop advanced research in this field. To date, many scholars have investigated the application of SR to mechanical fault diagnosis. Among them, some paid attention to designing novel indicators to quantify SR. For example, López et al. [11] proposed a hidden Markov model and Box-Cox sparse measures-based SR method for bearing fault diagnosis, in which the designed indicator does not depend on prior knowledge of the fault signature to be detected, but the parameter selection of the hidden Markov models and Box-Cox sparse measures complicates the proposed method. Lin et al. [12] designed a novel indicator to quantify SR for bearing fault diagnosis, in which the cross-correlation coefficient, impulse index, and signal-to-noise ratio (SNR) are fused. Li et al. [13] presented a multi-parameter constrained potential underdamped SR method and applied it to weak fault diagnosis, where the SNR is used to evaluate the SR. Lai et al. [14] defined the input SNR and output SNR as the signal-to-noise ratios at the SR system input and output to quantify SR for mechanical fault diagnosis. Zhang et al. [15] used the opposite of the SNR as the fitness function of a salp swarm algorithm to optimize the parameters of their proposed SR method. Wang et al. [16] applied order tracking to address time-varying signals and then designed a tristable SR method to enhance weak fault characteristics, with the SNR as the fitness function. Zhou et al. [17] used generative adversarial networks to fuse 10 statistical parameters into a fitness function for optimizing their proposed SR method. Shi et al. [18] proposed a novel adaptive multi-parameter unsaturated bistable SR method in which the output SNR is selected as the objective function. Qiao et al. [19] proposed a second-order SR method enhanced by a fractional-order derivative for mechanical fault detection, where the SNR is used to optimize its parameters, and did so even in coupled neurons [20] and in the proposed nonlinear resonant decomposition [21]. Li et al. [22] presented a frequency-shift multiscale noise tuning SR method for fault diagnosis of generator bearings in wind turbines and also presented a coupled bistable SR method [23], in which the parameters of the proposed method are optimized through a modified SNR and genetic algorithms. Fu et al. [24] studied SR in Duffing oscillators and proposed a moment-method-based bearing fault diagnosis algorithm in which the output SNR serves as the objective function of the optimization algorithm. Xu et al. [25] studied the SR behaviors in a high-order-degradation bistable system, and the ratio of the amplitudes around the target orders to those of the interference orders was used as the objective function of the optimization algorithm. In summary, the objective functions of most of the proposed SR methods depend on prior knowledge of the fault characteristics to be detected, which can result in false SR and further weaken the detection performance of SR. They are therefore ill-suited to enhancing and detecting unknown weak fault characteristics, especially in real-world equipment. In addition, a large number of tuning parameters in the existing SR methods need to be optimized using artificial intelligence, which is very time-consuming and complex, and is not suitable for engineering application. Triggering SR has clear mathematical conditions, but among the variables in these conditions, the noise intensity is unknown.
If we could estimate the intensity of the noise in a signal, a parameter-matching equation could be built. Motivated by this idea, this paper attempts to design a parameter-matched second-order SR method for enhancing unknown weak fault characteristics of bearings, since second-order SR has a band-pass filtering property.

The remainder of this paper is organized as follows. Section 2 gives clear mathematical conditions for parameter-matched second-order SR and then proposes the corresponding SR method for bearing fault enhancement. In Section 3, a bearing fault experiment is performed to demonstrate the effectiveness of the proposed method, and a comparison is made. Finally, conclusions are drawn in Section 4.

A Parameter-Matched Second-Order SR Method

A second-order SR system can be described as [26]

x''(t) = −γ x'(t) − dV(x)/dx + s(t) + n(t),    (1)

where γ is the damping factor with 0 < γ ≤ 2√(2a) [27], s(t) is the periodic signal to be detected, written as A cos(2π f₀ t + ϕ) with amplitude A, driving frequency f₀, and initial phase ϕ, and n(t) is noise with ⟨n(t), n(t + τ)⟩ = 2Dδ(τ), where D is the noise intensity. V(x) is the bistable potential

V(x) = −(a/2) x² + (b/4) x⁴,    (2)

whose two stable states and one unstable state are located at ±x_m = ±√(a/b) and x₀ = 0, respectively. Moreover, the barrier height is ΔV = a²/(4b). According to two-state theory, the matched condition for triggering the SR induced by a periodic signal involves the Kramers rate r_K [28], given by [29]

r_K = (ω₀ ω_b / 2π) exp(−ΔV/D),    (4)

in which ω_b = √(2a) and ω₀ = √a. For convenience, we rewrite the matched condition in Equation (3) as F(a, b, D, γ, f₀) [30]; it can be noticed from Equation (5) that SR can be induced when F = 1.

In general, the input and output SNR of Equation (1) for a periodic signal plus additive noise can be written as in [31], and the SNR gain (signal-to-noise ratio gain, SNRG) is the ratio of the output SNR to the input SNR. It can be seen from Equation (8) that the SNRG is controlled by the bistable potential parameters and the noise intensity. Thus, the objective function for the signal to be detected is the optimization problem of maximizing the SNRG; solving Equation (8) with respect to D yields the optimal condition, and therefore the optimal parameter-matched condition for weak fault enhancement.

However, Equation (1) is suitable for processing weak signals subject to the small-parameter limitation. In the real world, mechanical signals generally have large parameters. To solve this issue, a normalized scale transformation is applied to Equation (1). Letting z = x√(b/a) and τ = at, Equation (1) can be rewritten in normalized form, in which the amplitude is scaled by a factor of √(b/a) and the frequency by 1/a. By comparison, it is found that large-parameter signals can thereby be transformed into small-parameter ones. By substituting the parameters of Equation (12) into Equation (11), the parameter-matched conditions for triggering SR for weak large-parameter signal detection can be obtained (Equation (14)). It can be seen from Equation (14) that the damping factor γ can be adjusted to enhance the weak fault characteristics of bearings. According to the condition 0 < γ_h ≤ 2√(2a_h), we obtain the admissible range of the damping factor (Equation (15)). Therefore, the parameter-matched conditions are given mathematically by Equations (14) and (15) for weak large-parameter signal detection. According to the above parameter-matched conditions, we can design an SR method with parameter estimation to enhance the weak large-parameter fault signature of bearings.
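To make the dynamics in Equation (1) concrete, here is a minimal Euler-Maruyama sketch of the second-order bistable system driven by a weak cosine plus white noise. The integration scheme, sampling choices, and parameter values are illustrative, not the paper's implementation.

```python
import numpy as np

def second_order_sr(a, b, gamma, A, f0, D, fs, n_samples, seed=0):
    """Euler-Maruyama integration of the second-order bistable system
      x'' = -gamma*x' + a*x - b*x**3 + A*cos(2*pi*f0*t) + n(t),
    with <n(t)n(t+tau)> = 2*D*delta(tau).  Returns (t, x)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / fs
    t = np.arange(n_samples) * dt
    x = np.empty(n_samples)
    xi, vi = np.sqrt(a / b), 0.0          # start in one potential well
    for k in range(n_samples):
        x[k] = xi
        force = -gamma * vi + a * xi - b * xi**3 + A * np.cos(2 * np.pi * f0 * t[k])
        vi += force * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        xi += vi * dt
    return t, x

# Illustrative run: weak drive, moderate noise, inter-well hopping expected
t, x = second_order_sr(a=1.0, b=1.0, gamma=0.5, A=0.1, f0=0.05,
                       D=0.2, fs=20.0, n_samples=20000)
```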
The proposed method is shown in Figure 1, and its detailed steps are given below.

1. Signal pre-processing. In general, the bearing vibration signal v(t) is a large-parameter signal in which the bearing fault characteristics are modulated by natural vibrations of the machinery itself. Hence, signal demodulation is applied to the raw signal. Here, Hilbert envelope demodulation is used to pre-process the raw vibration signal of the tested bearing, and the corresponding envelope signal is denoted as ṽ(t).

2. Noise intensity estimation. In Equation (14), an important parameter, the noise intensity D, needs to be estimated from the envelope signal ṽ(t) of the raw signal of the tested bearing. Here, we use the principle of maximum likelihood estimation (MLE) [32] to achieve this and obtain D = MLE(ṽ(t)). The MLE algorithm can be downloaded from the following website: http://www.biomecardio.com/matlab/evar_doc.html (accessed on 5 December 2022).

3. Damping factor initialization. The damping factor needs to be tuned to obtain the optimal detection result. Here, according to its range in Equation (15), we initialize the damping factor γ.

4. Output signal calculation and evaluation. By substituting a_match = 2√2 π f₀ γ e, b_match = a⁴/(4D), and the corresponding damping factor into Equation (16), we obtain the output signal x(t). Then, we calculate the corresponding SNRG as the objective function, γ_opt = argmax_γ SNRG, for optimizing the damping factor. Finally, the optimal γ_opt is substituted into Equation (17) to solve for the optimal x(t).

5. Signal post-processing. Output the optimal x(t) corresponding to the maximum SNRG as the detected signal. Here, frequency spectral analysis is applied to the optimal x(t) to observe the spectral peaks at the fault characteristic frequencies of the bearing.

Bearing Fault Experimental Verification

The bearing fault experimental setup is shown in Figure 2; a bearing accelerated degradation test was performed throughout the whole operating life of the bearing, and the corresponding vibration signals were acquired using two sensors [33]. The two sensors were placed on the housing of the tested bearings and positioned at 90 degrees to each other, i.e., one was placed on the vertical axis and the other on the horizontal axis. The tested bearing's parameters are given in Table 1. The sampling frequency was 25.6 kHz, and 32,768 samples (i.e., 1.28 s) were recorded every 1 min.
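A sketch of steps 1-5 in Python follows. The reading of the matched parameters as a = 2√2·π·f₀·γ·e (with e Euler's number) and b = a⁴/(4D), the externally supplied noise intensity D, and the crude spectral SNR score are all assumptions made for illustration; the paper uses the MLE noise-intensity estimator linked above and its own SNRG definition.

```python
import numpy as np
from scipy.signal import hilbert

def bistable_filter(u, fs, a, b, gamma):
    """Integrate x'' = -gamma*x' + a*x - b*x**3 + u(t), driven by the
    sampled input u (the demodulated bearing signal)."""
    dt, x, v = 1.0 / fs, 0.0, 0.0
    out = np.empty_like(u, dtype=float)
    for k, uk in enumerate(u):
        v += (-gamma * v + a * x - b * x**3 + uk) * dt
        x += v * dt
        out[k] = x
    return out

def snr_score(x, fs, f0, bw=2.0):
    """Crude SR-quality score: peak spectral power near f0 over the
    median background power (stands in for the paper's SNRG)."""
    X = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    near = np.abs(f - f0) <= bw
    return X[near].max() / np.median(X[~near])

def parameter_matched_sr(raw, fs, f0, gammas, D):
    """Steps 1-5: envelope demodulation, matched parameters, damping scan."""
    env = np.abs(hilbert(raw))              # step 1: Hilbert envelope
    env = env - env.mean()
    best_g, best_x, best_s = None, None, -np.inf
    for g in gammas:                        # step 3: damping factor scan
        a = 2.0 * np.sqrt(2.0) * np.pi * f0 * g * np.e  # step 4 (assumed reading)
        b = a ** 4 / (4.0 * D)
        x = bistable_filter(env, fs, a, b, g)
        s = snr_score(x, fs, f0)
        if s > best_s:
            best_g, best_x, best_s = g, x, s
    return best_g, best_x                   # step 5: inspect spectrum of best_x
```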
In addition, the rotating speed was 2100 rpm, and the radial force was 12 kN.

In the process of the bearing accelerated degradation test, inner race and outer race wear occurred in the tested bearing, as shown in Figure 3. Figure 4 depicts the horizontal and vertical root mean square (RMS) indicators of the tested bearing throughout its whole operating life. It can be seen from Figure 4 that the horizontal and vertical RMS indicators initially remained unchanged and then rose rapidly after 30 min, suggesting that early wear may have occurred in the tested bearing. According to the horizontal and vertical RMS indicators, we can deduce that early compound wear may have begun in the inner race and outer race of the tested bearing at the 24th minute. Therefore, the horizontal and vertical raw vibration signals and their spectra are shown in Figures 5 and 6, respectively. In the figures, we mark the inner race, outer race, roller, and cage fault characteristic frequencies in the envelope spectrum using different colors. According to the tested bearing's parameters, we can calculate the theoretical inner race, outer race, roller, and cage fault characteristic frequencies as 172.09, 107.91, 72.33, and 13.49 Hz, respectively, using the standard kinematic relations

f_inner = (N/2) f_r (1 + (d/D) cos θ),
f_outer = (N/2) f_r (1 − (d/D) cos θ),
f_roller = (D/2d) f_r (1 − (d/D)² cos² θ),
f_cage = (f_r/2) (1 − (d/D) cos θ),

where N is the number of balls, f_r is the rotating frequency, d is the ball diameter, D is the mean diameter, and θ is the contact angle.
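A quick check of these relations in Python follows; the geometry below is illustrative, chosen so that d·cosθ/D ≈ 0.229 reproduces the paper's quoted frequencies at f_r = 35 Hz (2100 rpm) with N = 8 rolling elements. The actual bearing dimensions are those in the paper's Table 1.

```python
import numpy as np

def bearing_fault_frequencies(N, fr, d, D, theta=0.0):
    """Standard kinematic fault frequencies of a rolling bearing
    (inner race, outer race, roller, cage); theta in radians."""
    r = (d / D) * np.cos(theta)
    f_inner = 0.5 * N * fr * (1.0 + r)
    f_outer = 0.5 * N * fr * (1.0 - r)
    f_roller = 0.5 * (D / d) * fr * (1.0 - r**2)
    f_cage = 0.5 * fr * (1.0 - r)
    return f_inner, f_outer, f_roller, f_cage

# 2100 rpm -> fr = 35 Hz; d/D chosen to match the quoted ratio
print(bearing_fault_frequencies(N=8, fr=35.0, d=0.2291, D=1.0))
# -> approximately (172.1, 107.9, 72.4, 13.5) Hz, as quoted in the text
```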
Comparing with the information in the envelope spectra in Figures 5 and 6, it is found that there is a clear spectral peak at 109.4 Hz, which is close to the theoretical value of the outer race fault characteristic frequency of 107.91 Hz, but the spectral peaks of its harmonics cannot be recognized. Obviously, this signature demonstrates that early outer race wear occurs, which is consistent with the real experimental results. In addition, we can see from Figure 5 that there is a clear spectral peak at 68.75 Hz, which is not a roller fault characteristic frequency. Other information cannot be observed from Figures 5 and 6. As a result, we can deduce from the raw signal and its spectrum that early outer race wear occurs in the tested bearing. Although we can observe the weak spectral peak at the outer race fault characteristic frequency, the fault information excited by inner race wear cannot be recognized there.

Therefore, the proposed method is used to process the raw vibration signal of the tested bearing. Inner race and outer race fault information can be enhanced by the proposed method, as shown in Figures 7 and 8. It is found from Figure 7b that there is a clear spectral peak at 171.9 Hz, which is close to the theoretical value of the inner race fault characteristic frequency of 172.09 Hz, suggesting that early inner race wear occurs in the tested bearing. In fact, inner race wear had formed, as shown in Figure 3c. Meanwhile, we can also observe a clear spectral peak at 109.4 Hz in Figure 8b, which is close to the theoretical value of the outer race fault characteristic frequency of 107.91 Hz, indicating that early outer race wear occurs. The period of the enhanced signal in Figures 7a and 8a is equal to the inner race or outer race fault period, respectively.

For comparison, the fast kurtogram [34,35], a widely used method in mechanical fault diagnosis, is applied to process the raw vibration signal of the tested bearing. The kurtogram shown in Figure 9 selects the optimal filtering frequency band as the optimal carrier frequency of 3900 Hz with a bandwidth of 200 Hz to filter out the weak fault signature excited by the outer race and inner race wear in the tested bearing. The corresponding filtered result is shown in Figure 10. It can be seen from the amplitude spectrum of the squared envelope of the filtered signal in Figure 10 that there are clear spectral peaks at 34.68 Hz and even at 75.81 Hz.
Obviously, these are not fault signatures excited by the inner race and outer race wear of the tested bearing. Therefore, we deduce that the optimal filtering frequency band may instead be that around the carrier frequency of 7866 Hz, as shown in Figure 9. The corresponding filtered signal is depicted in Figure 11. We also cannot observe clear spectral peaks at the inner race and outer race fault characteristic frequencies in the amplitude spectrum of the squared envelope of the filtered signal in Figure 11. As a result, the fast kurtogram fails to detect the weak fault characteristics of the tested bearing in this experiment. The comparison demonstrates the feasibility and superiority of the proposed method.

Figure 11. The filtered signal with the optimal carrier frequency of 7866 Hz.

Conclusions

Stochastic resonance (SR) has been widely used in mechanical fault diagnosis, but most SR methods require designing quantification indicators by virtue of prior knowledge of the fault signature to be detected, resulting in false SR and further impaired detection performance. Therefore, it is necessary to study parameter-matched SR methods based on the mathematical condition for triggering SR. For this purpose, we propose an SR method with parameter estimation for bearing fault diagnosis. Compared with other SR methods, the tuning parameters of the proposed method can be estimated from the raw signal of the bearings and do not depend on prior knowledge of the defects to be detected. Moreover, the proposed method considers the conditions for triggering SR, avoiding false SR and further improving the detection performance of SR. This advantage makes the proposed method more suitable for engineering application. In addition, the proposed method has a single tuning parameter, a damping factor with a clear mathematical limitation, making it simpler and less time-consuming than other SR methods that require a large amount of parameter optimization. A bearing fault experiment was performed to verify this.
The experimental results indicate that the proposed method can enhance the weak compound fault characteristics of bearings at an early stage and is superior to the widely used fast kurtogram method.
5,636.4
2023-04-01T00:00:00.000
[ "Engineering" ]
Mass distribution in the Galactic Center based on interferometric astrometry of multiple stellar orbits Stars orbiting the compact radio source Sgr A* in the Galactic Center serve as precision probes of the gravitational field around the closest massive black hole. In addition to adaptive optics-assisted astrometry (with NACO / VLT) and spectroscopy (with SINFONI / VLT, NIRC2 / Keck and GNIRS / Gemini) over three decades, we have obtained 30-100 µas astrometry since 2017 with the four-telescope interferometric beam combiner GRAVITY / VLTI, capable of reaching a sensitivity of m_K = 20 when combining data from one night. We present the simultaneous detection of several stars within the diffraction limit of a single telescope, illustrating the power of interferometry in the field. The new data for the stars S2, S29, S38, and S55 yield significant accelerations between March and July 2021, as these stars pass the pericenters of their orbits between 2018 and 2023. This allows for a high-precision determination of the gravitational potential around Sgr A*. Our data are in excellent agreement with general relativity orbits around a single central point mass, M• = 4.30 × 10⁶ M⊙, with a precision of about ±0.25%. We improve the significance of our detection of the Schwarzschild precession in the S2 orbit to 7σ. Assuming plausible density profiles, the extended mass component inside the S2 apocenter (≈ 0.23″ or 2.4 × 10⁴ R_S) must be ≲ 3000 M⊙ (1σ), or ≲ 0.1% of M•. Adding the enclosed mass determinations from 13 stars orbiting Sgr A* at larger radii, the innermost radius at which the excess mass beyond Sgr A* is tentatively seen is r ≈ 2.5″, ≥ 10× the apocenter of S2. This is in full harmony with the stellar mass distribution (including stellar-mass black holes) obtained from the spatially resolved luminosity function.

Introduction

The GRAVITY instrument on the Very Large Telescope Interferometer (VLTI) has made it possible to monitor the positions of stars within 0.1″ of Sgr A*, the massive black hole (MBH) at the Galactic Center (GC), with a precision of ≈ 50 µas (Gravity Coll. 2017). The GRAVITY data taken in 2017-2019, together with the adaptive optics (AO) and speckle data sets obtained since 1992 (at ESO telescopes) and since 1995 at the Keck telescopes, have delivered exquisite coverage of the 16-year, highly elliptical orbit of the star S2, which passed its most recent pericenter in May 2018. Besides the direct determinations of the mass of Sgr A* (M•) and the distance to the GC (R0), these interferometric data have provided strong evidence for general relativistic effects caused by the central MBH on the orbit of S2, namely the gravitational redshift and the prograde relativistic precession (Gravity Coll. 2018b, 2019, 2021; Do et al. 2019). Due to its short period and brightness, S2 is the most prominent star in the GC, but ever higher-quality, high-resolution imaging and spectroscopy of the nuclear star cluster over almost three decades have delivered orbit determinations for some 50 stars (Schödel et al. 2002; Ghez et al. 2003, 2008; Eisenhauer et al. 2005; Gillessen et al. 2009, 2017; Meyer et al. 2012; Boehle et al. 2016).
The motions of these stars show that the gravitational potential is dominated by a compact central mass of ≈ 4.3 × 10⁶ M⊙ that is concentrated within S2's (3D) pericenter distance of 14 mas (or 120 AU), 1400 times the event horizon radius R_S of a Schwarzschild (non-rotating) MBH for a distance of 8.28 kpc (Gravity Coll. 2019, 2021). S2 passes its pericenter with a mildly relativistic orbital speed of 7700 km/s (β = v/c = 0.026). Based on the monitoring of the star's radial velocity and motion on the sky from data taken prior to and up to two months after pericenter, Gravity Coll. (2018b) were able to detect the first post-Newtonian effects of general relativity (GR), namely the gravitational redshift, along with the transverse Doppler effect of special relativity. The combined effect for S2 shows up as a 200 km/s residual centered on the pericenter time, relative to the Keplerian orbit with the same parameters. Gravity Coll. (2019) improved the statistical robustness of the detection of the gravitational redshift to 20σ. Do et al. (2019) confirmed these findings with a second, independent data set, mainly from the Keck telescope. While the redshift occurs solely in wavelength space, the superior astrometry of GRAVITY sets much tighter constraints on the orbital geometry, mass, and distance, all the while decreasing the uncertainty on the redshift parameter more than three times relative to data sets from single telescopes.

The precession due to the Schwarzschild metric is predicted to lead to a prograde rotation of the orbital ellipse in its plane of ∆ω = 12.1′ per revolution for S2, corresponding to a shift of the orbital trace on sky in the milli-arcsecond regime; hence, using interferometry is particularly advantageous in this case. Gravity Coll. (2020) detected the Schwarzschild precession at the 5σ level. The uncertainties on the amount of precession can then be interpreted as limits on how much extended mass (leading to retrograde precession) might be present within the S2 orbit.

Here, we expand our analysis by two more years, up to 2021.6. We combine GRAVITY data from four stars, along with the previous AO data. Section 2 presents the new data and Section 3 describes our analysis. In Section 4, we show the combined fits, improving the accuracy of the measured post-Newtonian parameters of the central black hole and the limits on the extended mass (Section 5). In combination with earlier measurements of stars with larger apocenters, we study the mass distribution out to ≈ 3″. Section 6 summarizes our conclusions.

Observations

Interferometric astrometry with GRAVITY has several distinct advantages over single-telescope AO imaging (Figure 1). First, the higher angular resolution yields an order of magnitude better astrometric precision for isolated sources. Second, for crowded environments, such as the central arcsecond with a surface density of > 100 stars per square arcsecond down to K < 17 (and more for fainter limits; Genzel et al. 2003; Baumgardt et al. 2018; Waisberg et al. 2018), interferometric data are much less affected (by a factor of several hundred) by confusion noise. In the context of GC cluster imaging, this issue was recognized early on (Ghez et al. 2003, 2008; Gillessen et al. 2017; Do et al. 2019; Gravity Coll. 2020). For "orbit crossings" of modest duration for individual brighter stars, this often means that data over a duration of a few years are affected.
The situation is worse at the pericenter passages of S2 (2002, 2018), when the star and the variable source Sgr A* are in the same diffraction beam of an 8-10 m class telescope (Ghez et al. 2008). For 2021/2022, our data show explicitly that, in addition to Sgr A*, several stars are present in the central beam (see also Gravity Coll. 2021), making single-telescope astrometry even more uncertain or unusable. Third, close to Sgr A*, astrometry with interferometry reduces to fitting the phases and visibilities with a multiple point source model in a single pointing of the interferometric fiber, which is straightforward once the optical aberrations across the fiber field of view are corrected for (Gravity Coll. 2020, 2021). Interferometric measurements beyond the fiber field of view require double pointings, which, thanks to GRAVITY's metrology system, can be related astrometrically to each other (Appendix A.1.2). The GRAVITY positions refer directly to Sgr A*, since it is visible in each exposure and since it is one of the point sources in the multi-source model. In contrast, AO astrometry relies on establishing a global reference frame by means of stars visible both at radio wavelengths (where Sgr A* is visible as well) and in the near-infrared. A few such SiO maser stars exist in the GC (Reid et al. 2007), and they reside at larger separations (≈ 20″). Hence, to establish an AO-based reference frame, it is necessary to correct for the distortions of the imaging system (Plewa et al. 2015; Sakai et al. 2019; Jia et al. 2019).

Analysis

For a single-star fit, we typically fit for 14 parameters: six parameters describing the initial osculating Kepler orbit (a, e, i, ω, Ω, t₀), R0 and M•, five coordinates describing the on-sky position and the 3D velocity of the mass (relative to the AO spectroscopic/imaging frame), and a dimensionless parameter encoding the non-Keplerian effect we are testing for. For the gravitational redshift, we used f_gr, which is 0 for Newtonian orbits and 1 for GR orbits. In Gravity Coll. (2018b) we found f_gr = 0.90 ± 0.17, and in Gravity Coll. (2019) we found f_gr = 1.04 ± 0.05. Do et al. (2019) reported f_gr = 0.88 ± 0.17. For the Schwarzschild precession, we use the first-order post-Newtonian expansion for a massless test particle (Will 1993) and add a factor f_SP in the equation of motion in front of the 1PN terms, where f_SP = 0 corresponds to Keplerian motion and f_SP = 1 to GR. In Gravity Coll. (2020), we found f_SP = 1.10 ± 0.19.

Similarly, we parameterize an extended mass distribution by including a parameter f_Pl in the normalization of the profile. Following Gillessen et al. (2017) and Gravity Coll. (2020), we assume a Plummer (1911) profile with scale length a_Pl and total mass f_Pl M•. We use a_Pl = 1.27 a_apo(S2) = 0.3″ (Mouawad et al. 2005). The enclosed mass within R is M(≤R) = f_Pl M• (R/a_Pl)³ (1 + R²/a_Pl²)^(−3/2). We fit for the fraction of M• that is in the extended mass, f_Pl.

Following Gravity Coll. (2018b, 2019, 2020), we find the best-fit values by fitting all parameters simultaneously, including prior constraints. Throughout our study, we used outlier-robust fitting (Sect. 3.2 in Gravity Coll. 2020). The inferred uncertainties are affected and partially dominated by systematics, especially when combining data from three or more measurement techniques.
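The enclosed-mass profile above is straightforward to evaluate; the sketch below reproduces the scale of the 1σ limit quoted in Section 5 (≈ 2400 M⊙ inside the S2 apocenter for σ(f_Pl) = 2.4 × 10⁻³). The apocenter value of 0.236″ follows from a_Pl = 1.27 a_apo = 0.3″.

```python
import numpy as np

def plummer_enclosed_mass(R, f_pl, M_bh, a_pl=0.3):
    """Enclosed extended mass M(<=R) for a Plummer profile with total
    mass f_pl * M_bh and scale length a_pl (R in the same units)."""
    x = R / a_pl
    return f_pl * M_bh * x**3 * (1.0 + x**2) ** (-1.5)

M_bh = 4.30e6            # Msun
a_apo = 0.3 / 1.27       # arcsec, S2 apocenter
print(plummer_enclosed_mass(a_apo, 2.4e-3, M_bh))  # ~2.4e3 Msun (1-sigma)
```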
Our parameterization keeps correlations between f_SP or f_Pl and M_• or R_0 small, but the two parameters of interest show some degeneracy with the coordinate system offsets. To check the formal fit errors, we carried out a (Metropolis-Hastings) Markov-chain Monte Carlo (MCMC) analysis. Using 100 000 realizations, we found the distributions and parameter correlations of the respective dimensionless parameter, f_SP or f_Pl, with the other parameters, and tested whether they are well described by Gaussian distributions. For more details, see Gravity Coll. (2018b, 2019, 2020, 2021) and Appendix A. In Gillessen et al. (2009, 2017), we consistently found, based on the AO data, that the basic parameters describing the gravitational potential (M_• and R_0) and the extended mass (e.g., f_Pl for an assumed Plummer distribution) are best constrained by the S2 data. Including other stars only moderately improved the fitting quality and uncertainties. This is because of the superior number and quality of the S2 data compared to those of the other stars. Since the higher-resolution GRAVITY astrometry became available, S2 has completely dominated our knowledge of the central potential. Another reason is that it is only for S2 that we have data at or near pericenter, which are the component of the data most sensitive to the mass distribution, as the explicit analysis in Gillessen et al. (2017) and Heißel et al. (2021) shows. This situation changes with the data set used here. We now have GRAVITY data of four stars with comparable pericenter distances: 12 mas (S29), 14 mas (S2), 26 mas (S38), and 29 mas (S55). Naturally, we need to fit for 4 × 6 orbital parameters (a_i, e_i, i_i, ω_i, Ω_i, t_0,i) in addition to the NACO/SINFONI zero points (x_0, y_0, vx_0, vy_0, vz_0), M_• and R_0, as well as f_SP and/or f_Pl. However, the inclusion of near-pericenter GRAVITY data of S29, S38, and S55 lessens the parameter correlations and uncertainties (Figure 2); this is also because the orbits are oriented almost perpendicular to each other in at least one of the Euler angles. Furthermore, the orbits of the four stars probe the precession over a wider parameter range in semi-major axis and eccentricity. A comparison of the results obtained with the different fitting schemes and codes of the consortium (of MPE, Univ. Cologne, and LESIA) underlines the importance of two aspects of the data analysis: First, AO-based astrometry of stars near pericenter is subject to confusion, with the emission from Sgr A* and other neighboring stars contributing to the centroid. These data can now be discarded in favor of better-resolution interferometric data. Second, the NACO-frame zero point and drift on the one hand, and the pro- or retrograde precession on the other hand, are degenerate. To reduce this degeneracy, we can use three sources of additional information: (i) NACO astrometry of flares, (ii) the prior from the construction of the AO reference frame (Plewa et al. 2015), or (iii) the orbits of further stars that have sufficient phase coverage to constrain the zero points. In Gravity Coll. (2020), we combined (i) and (ii). Avoiding the potentially confused flare positions (i), here we use the combination of (ii) and the orbits of 13 further stars (iii). We derive x_0 = 0.57 ± 0.15 mas, y_0 = −0.06 ± 0.25 mas (both epoch 2010.35), vx_0 = 63 ± 7 µas/yr, vy_0 = 33 ± 2 µas/yr, which are consistent with the earlier estimates, but with smaller uncertainties.
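The Metropolis-Hastings sampling mentioned above can be sketched as follows. The Gaussian toy posterior stands in for the real orbit likelihood (astrometry plus radial velocities over the full parameter vector), so everything except the accept/reject logic is a placeholder, not the consortium's fitting code.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):
    # Placeholder log-posterior; the real one evaluates the orbit model
    # for the full parameter vector, including the prior constraints.
    return -0.5 * np.sum(theta**2)

def metropolis_hastings(theta0, n_steps=100_000, step=0.5):
    """Random-walk Metropolis-Hastings chain over the posterior."""
    chain = np.empty((n_steps, theta0.size))
    theta, lp = theta0, log_post(theta0)
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis_hastings(np.zeros(2))
print(chain.mean(axis=0), chain.std(axis=0))  # ~[0, 0] and ~[1, 1]
```

From such a chain one reads off marginal distributions and parameter correlations, e.g., in the form of the corner plot shown in Figure B.1.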
Schwarzschild precession for S2
Repeating the analysis of Gravity Coll. (2020) (S2 alone, but with the updated zero points) and solving for the Schwarzschild precession parameter, we find f_SP = 0.85 ± 0.16 (χ²_r = 1.11). This is naturally very similar to our previous results¹, but the new data have decreased the 1σ uncertainty from ±0.19 to ±0.16. [Footnote 1: Strictly following Gravity Coll. (2020), i.e., using the S2 data plus the flare positions of Sgr A* with the zero-point priors of Plewa et al. (2015), yields f_SP = 1.23 ± 0.14 (χ²_r = 1.70).] Next, we fit the four-star (S2, S29, S38, S55) data and find f_SP = 0.997 ± 0.144, with χ²_r = 2.17 (Figure 2). Figure 3 shows the residuals of the best fit relative to the corresponding Newtonian (f_SP = 0) orbit. The combination of the near-pericenter GRAVITY data of four stars improves the constraints on the common parameters. The contributions raising χ²_r > 1 come from the NACO data of S29, S38, and S55, which cover the outer parts of their orbits and are more affected by confusion. Applying the MCMC analysis, we find most likely values of f_SP = 0.85 ± 0.18 (S2 only) and f_SP = 0.99 ± 0.15 (S2, S29, S38, S55). Figure B.1 shows the full set of parameter correlations, including the well-known degeneracy between M_• and R_0 (Ghez et al. 2008; Boehle et al. 2016; Gillessen et al. 2009, 2017). All of the 32 parameters of the four-star fit are well constrained. As discussed by Gravity Coll. (2020), the impact of the high eccentricity of the S2 orbit (e = 0.88) is that most of the precession happens in a short time-frame around pericenter. Due to the geometry of the orbit, most of the precession shows up in the RA coordinate, and the change in ω after pericenter appears as a kink in the RA residuals. The data are obviously in excellent agreement with GR. Compared to Gravity Coll. (2020), the significance of this agreement has improved from 5σ to 7σ, from the combination of adding two more years of GRAVITY data to the S2 data set and the expansion to a four-star fit. Table B.1 gives the best-fit orbital parameters, zero points, M_•, and R_0. As of 2021, S2 is sufficiently far away from pericenter that the Schwarzschild precession can now be seen as a ≈ 0.6 mas shift in RA (and less so in Dec) between the data sets of two consecutive passages of the star on the apocenter side of the orbit. This effect is obvious when comparing the 2021 GRAVITY data to the 2005 NACO data, exactly one period prior (Figure 3, right). This comparison illustrates that the Schwarzschild precession dominates the entire orbit and that there is no detectable retrograde (Newtonian) precession due to an extended mass component (see Heißel et al. 2021).
Limits on extended mass
In the following, we fix f_SP = 1 at its GR value and now allow for an extended mass component parameterized by f_Pl. We find f_Pl = (2.7 ± 3.5) × 10⁻³ from a single S2 fit and f_Pl = (−3.8 ± 2.4) × 10⁻³ for the four-star fit, in accordance with the findings of Heißel et al. (2021). The latter 1σ error is consistent with, albeit three to four times smaller than, that of Gillessen et al. (2017) and 1.7 times smaller than that of Gravity Coll. (2020), corresponding to 2400 M⊙ within the apocenter of S2. In Figure 4, we include the 3σ uncertainty as a conservative upper limit, indicating that the extended mass cannot exceed 7500 M⊙. As in Gillessen et al. (2017) and Gravity Coll. (2020), we find again that varying a_Pl or replacing the Plummer distribution by a suitable power law changes this result only trivially.
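As a plausibility check on the precession discussed above, the first-order post-Newtonian apsidal advance per revolution is Δω = f_SP · 6πGM_•/(c²a(1−e²)). The sketch below evaluates it for S2; the semi-major axis (≈ 0.125″ at R_0 = 8.28 kpc, i.e., ≈ 1035 AU) is an approximate value assumed here purely for illustration.

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
AU = 1.496e11          # m

def schwarzschild_precession_arcmin(m_bh_msun, a_au, e, f_sp=1.0):
    """Apsidal advance per revolution: f_SP * 6*pi*G*M / (c^2 a (1 - e^2))."""
    d_omega = (f_sp * 6 * math.pi * G * m_bh_msun * M_SUN
               / (c**2 * a_au * AU * (1 - e**2)))
    return math.degrees(d_omega) * 60.0  # radians -> arcminutes

# S2: semi-major axis ~0.125" at 8.28 kpc -> ~1035 AU (assumed), e = 0.88
print(schwarzschild_precession_arcmin(4.30e6, 1035, 0.88))  # ~ 12 arcmin
```

The result of ≈ 12′ per revolution matches the ∆ω = 12.1′ quoted in the text.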
Furthermore, we get a weaker limit by a factor of ≈ 2 when omitting the NACO astrometry and using only GRAVITY & SINFONI data. The impact of an extended mass is naturally largest near the apocenter of an orbit (e.g., Heißel et al. 2021). Figure C.2 shows the result of adding various amounts of extended mass on top of the best-fit residuals with a point mass only. Our data are marginally compatible with an additional f_Pl = 0.25% of M_•, but a larger mass is excluded by both the near-pericenter and near-apocenter data. The apparent sensitivity of the near-pericenter data in Figure C.2 is the result of referring the residuals to the osculating Keplerian orbit at apocenter in 2010.35, such that the accumulating retrograde precession enters the near-pericenter data.
[Figure 3 caption: Left: residuals in RA (top) and Dec (bottom) between the GRAVITY (cyan filled circles, with 1σ uncertainties) and NACO (blue filled circles) data and the best GR fit (red curve: f_SP = 1, Rømer effect, plus special relativity, plus gravitational redshift and Schwarzschild precession), relative to the same orbit for f_SP = 0 (Kepler/Newton, plus Rømer effect, plus special relativity, plus redshift). The orbital elements for non-Keplerian orbits (i.e., with f_SP ≠ 0) are interpreted as osculating parameters at apocenter time, 2010.35. NACO and GRAVITY data are averages of several epochs. The grey bars denote averages over all NACO data near apocenter (2004-2013). Top right: the same for the residual orbital angle on the sky, δφ = φ(f_SP = 1) − φ(f_SP = 0). Bottom right: zoom into the 2005/2021 part of the orbit, plotted in the mass rest frame. The earlier orbital trace does not coincide with the current one due to the Schwarzschild precession.]
A second, independent measurement of the dynamically inferred mass distribution comes from fitting for the central mass using 13 individual stellar orbits with a = 0.1″ to 3.8″ (Gillessen et al. 2017), with R_0 and zero points fixed to the best-fitting values of the four-star fit (Figure C.1). We then averaged the results in four groups: four stars with 0.11″ < a < 0.22″, five stars with 0.27″ < a < 0.4″, three stars with 0.55″ < a < 1.6″, and two stars with 1.6″ < a < 3.8″. Most of the stars in the first two groups are classical "S-stars" (mostly early-type B stars) with typically large eccentricities (e.g., Ghez et al. 2008; Gillessen et al. 2017), while most of the stars in the outer groups are O and B stars in the clockwise disk (Paumard et al. 2006; Bartko et al. 2009; Lu et al. 2009), with modest eccentricities. The stars in the first two groups indicate that the mass is consistent with M_• to within 0.3-0.6%. There is no indication of an extended mass larger than ≈ 25,000 M⊙ within 2 a_apo(S2) ≈ 0.5″. The outer stars suggest an extended mass of 15,000 M⊙ (and conservatively a 3σ limit of 50,000 M⊙) within 5−10 a_apo(S2). Figure 4 summarizes the mass distribution within 5″ (≈ 20× the apocenter of S2). These estimates and limits are in excellent agreement with the distribution of stars (and stellar-mass black holes and neutron stars) contained in this inner region around Sgr A*, as estimated from models and simulations (Alexander 2017; Baumgardt et al. 2018), or from observations of faint stars and diffuse stellar light (Figure 4; Genzel et al. 2010; Gallego-Cano et al. 2018; Schödel et al. 2018; Habibi et al. 2019). In summary, several precise (O(0.1-0.3%), 1σ) determinations show that the mass distribution in the GC within 5″ ≈ 5 × 10⁵ R_S of Sgr A* is dominated by a central, compact mass.
This mass is definitely enclosed within the pericenter of S29 (12 mas, ≈ 1200 R_S). Taking the gas motions at ≈ 3−5 R_S (Gravity Coll. 2018a) and the mm-size of Sgr A* (Doeleman et al. 2008; Johnson et al. 2017; Issaoun et al. 2019) into account, the data are in excellent agreement with the MBH paradigm.
Conclusions
Here, we present GRAVITY data obtained at the VLTI in 2021. Within the central 20 mas, we observe the motions of four stars between March and July, illustrating the power of the high spatial resolution of interferometry. Using the novel astrometry of the stars S2, S29, S38, and S55, along with new radial velocities obtained with GNIRS, we update our orbital analysis. The star S2 has now returned to the part of its 16-year orbit for which good NACO AO-assisted positions were obtained during its previous passage. A direct comparison of the positions confirms that the orientation of the orbital ellipse has indeed shifted in its plane by the 12.1′ expected from the prograde Schwarzschild precession induced by the gravitational field of the MBH, as reported in Gravity Coll. (2020). At K = 14.1, S2 is comparatively bright. With its increased distance from Sgr A* in 2021, we are now able to map with GRAVITY the immediate vicinity of the MBH down to significantly fainter objects. This provided accurate positions for S29, S38, and S55. These stars have previously measured NACO positions from when they were farther away from Sgr A*. Combining these with the GRAVITY data improves the orbital parameters of the three stars substantially. In particular, S29 is on a deeply plunging (e = 0.97) orbit with a period of ≈ 90 years and pericenter passage in 2021.41, with a space velocity of ≈ 8740 km/s at only 100 AU from Sgr A*. S2, S29, S38, and S55 orbit in the same gravitational potential, and combining their astrometry and radial velocity data improves the accuracy of the determination of the properties of the central MBH. This leads to a 14% measurement precision for the Schwarzschild precession, which is in full agreement with the prediction of GR. The best fit further yields R_0 = (8277 ± 9) pc and M_• = (4.297 ± 0.012) × 10⁶ M⊙ (statistical errors; see Gravity Coll. 2021 for a discussion of the systematics, which are ≈ 30 pc for R_0 and ≈ 40,000 M⊙ for M_•). Any smooth extended mass distribution would lead to a retrograde precession of the S2 orbit relative to the relativistically precessing one, and we can thus place a limit on a hypothetical mass distribution. The measurement errors leave room for at most 3000 M⊙ of extended mass out to 230 mas. We included 13 stars farther out with earlier measurements to trace the mass as a function of radius. The data are fully consistent with a single point mass, and only at r ≳ 2.5″ does the enclosed mass tentatively exceed M_•, which is consistent with the theoretically expected stellar mass distribution. Inside the 100 AU pericenter of S29, the orbits of Sgr A* flares (Gravity Coll. 2018a), together with the coincidence of the mass location and light centroid (Plewa et al. 2015; Reid & Brunthaler 2020), constrain the mass distribution further, excluding, for example, dark matter spikes, as proposed by Becerra-Vergara et al. (2020, 2021), as well as the presence of an intermediate-mass black hole (Gravity Coll. 2020). Our multi-epoch GRAVITY data also confirm that at any time, there are likely a few stars that are sufficiently close to Sgr A* on the sky to systematically influence its position derived with AO-assisted imaging on single telescopes.
In addition, in 2022, two stars will pass the pericenters of their orbits at less than 100 mas distance (S38 and S42). The upgrade of GRAVITY to GRAVITY+ (Gravity+ Coll. 2021) will push the sensitivity limit to K > 20, which may reveal more stars with even smaller orbits. The 39 m ELT equipped with MICADO (Davies et al. 2021) and HARMONI (Thatte et al. 2021) might be the prime choice for obtaining radial velocities of such stars. Yet, GRAVITY+ will beat the ELT's angular resolution by a factor of three, allowing continued < 50 µas astrometry and going even deeper than what we have demonstrated so far (Gravity Coll. 2021).
Appendix A.1: Interferometric astrometry
The full width at half maximum (FWHM) of the interferometric field of view (IFOV) of GRAVITY is 70 mas. In consequence, not all stars discussed in this paper are observable simultaneously. The star S2 has moved too far away from Sgr A* compared to 2018 to be observable simultaneously with Sgr A*, while the stars S55 and S29 (and others, see Gravity Coll. 2021) are always observed alongside Sgr A*. Depending on the separation, there are two methods for determining the positions of the stars relative to Sgr A*: single-beam and dual-beam astrometry. Single-beam positions are extracted from pointings in which more than one source is present in the IFOV. The distances between the stars are extracted by fitting a multi-source model to the visibility amplitudes and closure phases, each of which is measured in ≈ 10 spectral channels on six baselines. This yields positions of the sources with respect to each other. Since Sgr A* is visible in all our central frames, for those pointings the relative positions are also the absolute ones, that is, with respect to the mass center. If the stars are not observable in a single IFOV, we need to observe them separately and apply the dual-beam technique. For the case of two isolated stars, one interferometrically calibrates the first source with the second. The first source serves as a phase reference relative to which offsets of the second source can be measured.
Appendix A.1.1: Single-beam astrometry
If a star is in the same IFOV as Sgr A* (of particular interest here in 2021 are S29 and S55), we determine the relative separation between the star and Sgr A* by interferometric model fitting to the visibility amplitudes and closure phases in the Sgr A* pointing. This methodology is unchanged with respect to the way separations were determined in Gravity Coll. (2021). We thus take into account the effect of phase aberrations as well as bandwidth smearing (Gravity Coll. 2020).
Appendix A.1.2: Dual-beam astrometry
If the stars are separated by more than the IFOV of GRAVITY, we measure the separation between the two sources by using one of them as the phase reference for the other target, namely, by calibrating the complex visibilities of the object of interest with those of the phase reference. We use S2 as the phase reference, which in its interferometric observables is consistent with a single point source. The separation between any star and S2 is determined by two vectors: (i) the vector by which the IFOV was moved between S2 and the star, measured by the metrology system of GRAVITY, which monitors the internal optical path differences; and (ii) the phase-center offset in the S2-calibrated star observation, which is determined by fitting the visibility phase. Also for the dual-beam analysis, we account for the effects of phase aberrations and bandwidth smearing when calculating the model visibility phase.
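A schematic of the multi-point-source model that underlies both astrometry modes is the normalized complex visibility of K point sources at offsets (x_k, y_k) with fluxes f_k. The real pipeline additionally fits closure phases and corrects for aberrations and bandwidth smearing, all omitted here; the fluxes and offsets in the example are invented.

```python
import numpy as np

MAS = np.pi / 180 / 3600 / 1000  # one milliarcsecond in radians

def model_visibility(u_m, v_m, wavelength_m, flux, x_mas, y_mas):
    """Normalized complex visibility of a set of point sources.

    u_m, v_m       : baseline coordinates in meters
    flux           : source fluxes (arbitrary units)
    x_mas, y_mas   : source offsets from the phase center in mas
    """
    flux = np.asarray(flux, dtype=float)
    phase = (-2j * np.pi / wavelength_m
             * (u_m * np.asarray(x_mas) + v_m * np.asarray(y_mas)) * MAS)
    coherent = np.sum(flux * np.exp(phase))
    return coherent / flux.sum()   # |V| <= 1 for an incoherent total flux

# Example: Sgr A* at the phase center plus a star 20 mas east,
# on a 100 m baseline at 2.2 microns (all values illustrative).
V = model_visibility(100.0, 0.0, 2.2e-6, [1.0, 0.5], [0.0, 20.0], [0.0, 0.0])
print(abs(V), np.angle(V, deg=True))
```

Fitting this model to the measured visibility amplitudes and (closure) phases yields the source separations described in the text.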
The separation is affected by inaccuracies and systematic uncertainties of the metrology. Such telescope-based errors are inherent to the dual-beam part of the measurement. Typically, we find more stars than just one in the IFOV. We thus need to take into account the signatures induced by the additional stars in the dual-beam measurement. This occurs, for example, for the Sgr A* pointings (where S29, S55, S62, and S300 are present), but also for the S38 pointing (with S60 and S63 being present in the IFOV). We thus fit the interstellar separations and the phase-center offset simultaneously in order to take into account their degeneracies. However, the separation vectors are mostly sensitive to the visibility amplitudes and the closure phases, while the phase-center offset mostly acts as an additional term in the visibility phase. In this way, we can relate all positions to our calibrator source, S2. Hence, we can also relate the positions to Sgr A* by subtracting the star-to-S2 and S2-to-Sgr A* separations. Telescope-based errors cancel out in the closure phases, and therefore the relative positions of the sources are not affected by phase errors, but the visibility phases carry the information of how the different IFOVs are located with respect to one another. We find that by fitting the closure phases and the visibility phases with equal weights, we minimize the effect of the telescope-based errors, while still being sensitive to the phase information. In order to average out the phase errors, we calibrate all N frames of a given pointing with all M available S2 frames individually. For each of the N × M resulting data sets, we determine the phase-center position and average the resulting phase-center locations. This calibration adds a systematic uncertainty of 60 µas, divided by the square root of the number of available calibrations. We further improve the accuracy of our phase-center measurement by determining the best-fit fringe-tracker and science-target separation by fitting the S2 observations with a drifting point-source model. This takes into account our imperfect knowledge of the separation prior to the observation. Here, we follow the concepts set out in Gravity Coll. (2020).
Appendix A.2: GRAVITY: Deep imaging
To obtain deep, high-resolution images of the GC, we developed a new imaging code called GRAVITY-RESOLVE (GR), which is drawn from RESOLVE (Arras et al. 2021), a Bayesian imaging algorithm formulated in the framework of information field theory (Enßlin 2019), and is custom-tailored to GRAVITY observations of the GC. Here, we briefly outline the main ideas (see Gravity Coll. 2021 for a detailed description). With a Bayesian forward-modeling approach, we can address data sparsity and account for various instrumental effects that render the relation between image and measurement more complicated than the simple Fourier transform of the van Cittert-Zernike theorem. To this end, the algorithm formulates a prior model that permits drawing random samples, processes them with the instrumental response function, and evaluates the likelihood to compare the predicted visibilities with the actual measurement. This approach can handle the non-invertible measurement equation and allows us to work with non-linear quantities such as closure phases.
The exploration of the posterior distribution is done with metric Gaussian variational inference (Knollmüller & Enßlin 2019), which infers the mean image with respect to the posterior jointly with an uncertainty estimate. There are already some imaging tools available for optical/near-IR interferometry that implement a forward-modeling approach, such as MIRA (Thiébaut 2008) or SQUEEZE (Baron et al. 2010). Our code differs from those with regard to the details of the measurement equation, the prior model, and how the maximization and exploration of the posterior are performed. In the measurement equation, we implemented all instrumental effects relevant for GRAVITY: coupling efficiency, aberration corrections (Gravity Coll. 2021), averaging over finite-sized wavelength channels (also known as bandwidth smearing), and the practice in optical and near-IR interferometry of constructing the complex visibility as the coherent flux over a baseline divided by the total flux of each of the two telescopes. The latter signifies that the visibility amplitude can be unity at most, but coherence loss can degrade the observed visibility from the theoretical expectation. We account for this with a self-calibration approach, in which we infer a time- and baseline-dependent calibration factor jointly with the image. An appropriate prior model is essential to address the data sparsity inherent to optical/near-IR interferometry, and we specifically tailor it to the GC observations. There, we see Sgr A* as a point source in addition to some relatively bright stars whose approximate positions are known from orbit predictions. For those objects, we directly infer the position and brightness, using a Gaussian and a log-normal prior, respectively. The variability and polarization of Sgr A* are accounted for by allowing for an independent flux value in each frame and polarization state observed. In the actual image itself, we expect to see a few faint, as-yet-unknown point sources, and we thus impose that the individual pixels be independent, with their brightness following an inverse-gamma distribution. We note that all sources other than Sgr A*, that is, all non-variable sources, could in principle also be attributed to the image. However, modeling them as additional point sources improves convergence and mitigates pixelization errors.
Appendix A.3: GNIRS: Determining radial velocities
In 2021, we had four successful observations with the long-slit spectrograph GNIRS using the AO system ALTAIR at the Gemini Observatory. We used the long slit in the K-band with the 10.44 l/mm grating. The slit was positioned such that we observed S2 and S29 simultaneously. To calibrate the data, we used the daytime calibration from the day after the observation, which contains a set of dark frames to determine a bad-pixel mask, flat frames, and a wavelength calibration. Additionally, a telluric star was observed right after the observation. To determine the velocity of the stars, we used template fitting with a high-SNR S2 spectrum, in the same way as we extracted the SINFONI velocities (see Gravity Coll. 2018b). We were able to detect a velocity for S2 in all four observing nights. As S29 is significantly fainter than S2, we needed excellent conditions to get a detection, which was possible in only two of the four nights.
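The template-fitting step for the radial velocities can be sketched as a χ² scan over a velocity grid; the Gaussian toy absorption line and the grid spacing below are placeholders, not the SINFONI/GNIRS pipeline.

```python
import numpy as np

C_KMS = 299_792.458  # speed of light in km/s

def rv_by_template(wave, flux, err, twave, tflux, v_grid_kms):
    """Chi-square template fit: return the velocity minimizing the residuals."""
    chi2 = []
    for v in v_grid_kms:
        # Doppler-shift the template and resample onto the observed grid.
        shifted = np.interp(wave, twave * (1 + v / C_KMS), tflux)
        chi2.append(np.sum(((flux - shifted) / err) ** 2))
    return v_grid_kms[int(np.argmin(chi2))]

# Toy example: a Br-gamma-like absorption line shifted by +1500 km/s.
twave = np.linspace(2.15, 2.18, 600)                         # microns
tflux = 1 - 0.3 * np.exp(-0.5 * ((twave - 2.1661) / 4e-4) ** 2)
wave = twave
flux = np.interp(wave, twave * (1 + 1500 / C_KMS), tflux)    # "observed"
v = rv_by_template(wave, flux, np.full_like(flux, 0.01), twave, tflux,
                   np.arange(-4000.0, 4000.0, 10.0))
print(v)  # ~ 1500 km/s
```

In practice the template is the high-SNR S2 spectrum itself, and the uncertainty follows from the curvature of the χ² minimum.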
Appendix B: Fit details
In Table B.1, we give the best-fit parameters of the four-star fit determining f_SP, comparing them also with similar fits from the literature. Figure B.1 gives the full posterior of the four-star fit in the form of a corner plot.
Appendix C: Additional figures
In Figure C.1, we show the orbital data of the additional S-stars that were auxiliary to this work. Figure C.2 illustrates that our S2 data are compatible with at most an extended mass component of around 0.1% of M_• enclosed within the S2 orbit.
Appendix D: List of observations
In Tables D.1, D.2, and D.3, we list the observations used in this work. We note that most of the raw data are available from the observatories' archives, most easily found via the program ID numbers provided here. The derived astrometric and spectroscopic data set can be made available upon personal request and after discussing the terms of a collaboration on a case-by-case basis.
[Table B.1 caption: The line M_• [10⁶ M⊙] at 8277 pc gives the masses rescaled to a common distance of R_0 = 8277 pc, using M ∝ R_0² (Gillessen et al. 2017). The parameters listed are the offsets x_0, y_0, the distance R_0, the velocity offsets vx_0, vy_0, vz_0, and the precession parameter f_SP, followed by four times six orbital elements for each of the four stars used, in the order a, e, i, Ω, ω, t_peri for S2, S29, S38, S55.]
[Figure C.1 caption: Left: orbital data for S9, S13, S18, and S21. Middle: S4, S12, S31, and S42. Right: S66, S67, S87, S96, and S97. These data are complemented by multi-epoch spectroscopy for the orbital fitting.]
[Figure C.2 caption: As in Figure 3, with the dashed red curve denoting the f_SP = 1 GR curve for the best-fitting orbit and mass. In addition, we show orbital models with the same central mass, distance, and orbital parameters, but now adding an extended mass component assumed to have a Plummer shape (Gillessen et al. 2017), showing the impact of adding a Plummer mass M_ext within the 0.25″ apocenter radius of S2. Black, orange, and blue solid curves show the changes expected if this extended Plummer mass is 0.1, 0.3, and 0.6% of M_• (4.4 × 10³, 8.9 × 10³, and 1.78 × 10⁴ M⊙ within the apocenter of S2, R_apo = 0.24″). Formal fitting shows that an extended mass greater than about ≈ 0.1% of M_• is incompatible with the data.]
8,804.6
2021-12-14T00:00:00.000
[ "Physics" ]
Heat Transfer Efficiency of Turbulent Film Boiling on a Horizontal Elliptical Tube with External Flowing Liquid : Film boiling on a horizontal elliptical tube immersed in externally flowing liquid nitrogen is investigated in the present paper. The isothermal wall temperature is high enough to induce turbulent film boiling, so a continuous vapor film runs upward over the surface. The high velocity of the flowing saturated liquid at the boundary-layer edge is determined by potential flow theory. In addition, the present paper proposes a new model to predict the vapor-liquid interfacial shear on an elliptical tube under forced-convection turbulent film boiling. In the results, the film thickness and Nusselt number are obtained for different eccentricities and Froude numbers, and a comparison between the results of the present study and those reported in previous experimental studies is provided. The results show that there is good agreement between the present paper and the experimental data. Introduction The pioneering investigator, Bromley [1], conducted research on film boiling on a horizontal tube. After Bromley's work, many related studies were reported. In 1966, Nishikawa and Ito [2] analyzed the two-phase boundary-layer treatment of free-convection film boiling. That theoretical study investigated film boiling from an isothermal vertical plate and a horizontal cylinder without considering radiative effects. Jordan [3] investigated laminar film boiling and transition boiling, and also discussed the separated region. Sakurai et al. [4] presented theoretical solutions for pool film boiling on a horizontal cylinder. Their analytical heat transfer model was based on laminar boundary-layer theory including radiation effects. Besides, Huang et al. [5] conducted research on forced-convection film boiling. They investigated flow film boiling across a horizontal cylinder with uniform heat flux. The numerical results agreed with experimental data where the wall temperature did not vary much around the heater at high heat fluxes. Laminar film boiling has been widely discussed in the published literature, and so has turbulent film boiling. For example, Sarma et al. [6] presented turbulent film boiling with consideration of thermal radiation for a vertical surface. In that work, the assumption of equal shear at the wall and at the vapor-liquid interface was reasonable. Later, Sarma et al. [7] presented theoretical results for turbulent film boiling on a horizontal isothermal circular cylinder. The analysis compared the theoretical results with previous experimental results and found that they were in good agreement with the experimental data. Hu [8] presented surface-tension effects in boiling heat transfer of cryogenic LN2 on an ellipsoid. However, that study considered only a simple theoretical model for turbulent film boiling heat transfer on an ellipsoid in a quiescent liquid. Furthermore, Hu [9] investigated the influence of interfacial shear in turbulent film boiling on a horizontal tube with external flowing liquid.
Even though there have been many studies of laminar and turbulent film boiling, there were few publications on turbulent film boiling on a horizontal elliptical tube over which a high-velocity liquid flows. Predicting the interfacial shear in a turbulent film boiling system under a high-velocity liquid is not easy. However, the present paper successfully predicts the vapor-liquid interfacial shear by using the Colburn analogy. The present study applies this interfacial shear in the force balance equation and then combines the force balance equation with the energy equation and the thermal energy balance equation. Finally, both the film thickness and the Nusselt number are obtained. The present analysis also includes eddy diffusivity, radiation effects, and the temperature ratio. A comparison between the results of the present study and those reported in previous experimental studies is then provided, and good agreement is found between the two sets of results. Formulations Consider a horizontal elliptical tube immersed in upward-flowing LN2 of high velocity u∞ at the saturation temperature T_s. The wall temperature T_w is assumed high enough to induce turbulent film boiling on the surface of the elliptical tube, so that a continuous film of vapor runs upward over the surface. The physical model and the coordinate system adopted in the present study are shown in Figure 1, where a two-dimensional orthogonal curvilinear coordinate system is used. For the thin-film flow of turbulent film boiling under forced convection, the viscous and buoyancy forces are assumed to be more significant than the inertia force. The force balance equation for the vapor film can then be expressed as Equation (1). It is assumed that the vapor film is much thinner than the diameter of the tube (δ ≪ D_e). It is further assumed that the turbulent conduction term across the vapor layer is more significant than the convective term, and hence the convective term can be neglected. The energy equation can be expressed as Equation (2), with boundary conditions imposed under the isothermal wall condition. For a pure substance, the thermal energy balance equation of the vapor film can be expressed as Equation (4). The differential arc length dx for the ellipse can be expressed in terms of the tube geometry, where D_e is an equivalent diameter based on the equal outside surface area in comparison with a circular tube.
Substituting dx into the thermal energy balance equation, Equation (4) can be modified accordingly. In the turbulent region, the semi-empirical equation that describes heat transfer in flow parallel to a moderately curved surface may also be used to describe the heat transfer in flow parallel to an elliptical surface. Jakob [10] proposed that this situation may be described for any fluid by an expression with a flow-configuration constant C = 0.034. According to the Colburn analogy, the friction factor can be written in terms of this heat transfer relation, from which the mean friction coefficient in the streamwise direction may be calculated, and the local friction can then be obtained. The turbulent boundary layer exerts a friction force on the liquid-vapor boundary. The shear stress is estimated by considering the external liquid flowing across the surface of the tube when there is no vapor film on the surface, which defines the local shear stress. According to potential flow theory, when a uniform liquid flow of velocity u∞ passes the tube, the liquid velocity at the edge of the boundary layer follows Equation (14). Combining Equations (12)-(14), the local shear stress can be expressed as Equation (15). Incorporating the interfacial vapor shear stress τ_δ given by Equation (15) into the elemental force balance equation enables Equation (1) to be rewritten as Equation (17), which yields a dimensionless force balance equation. It is further assumed that the pressure across the boundary layer is constant and that the density variation across the boundary layer follows a prescribed relation. The energy equation, Equation (2), yields the dimensionless energy equation, Equation (19), with the corresponding dimensionless boundary conditions, where the dimensionless absolute viscosity µ⁺ in Equation (19) is evaluated for nitrogen vapor at the saturation temperature corresponding to a system pressure of 1 atm. Besides, the thermal energy balance equation, Equation (8), can be rewritten in the dimensionless form of Equation (22), where the dimensionless thermal conductivity k⁺ in Equation (22) is likewise evaluated for nitrogen vapor at the saturation temperature corresponding to a system pressure of 1 atm. Furthermore, the dimensionless thermal energy balance equation, Equation (22), requires the velocity profile u⁺ in the vapor film, which can be obtained from the corresponding momentum relation together with its boundary condition. The eddy diffusivity distribution presented by Kato et al. [11] is adopted, as given in Equation (26). The heat transfer coefficient of turbulent film boiling is then given by its defining relation, so that the local Nusselt number can be expressed as Equation (28), and the mean Nusselt number for the entire surface of the tube can be written as Equation (29).
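Since the display equations did not survive extraction, the following sketch only illustrates the logic of Equations (12)-(15): a potential-flow edge velocity combined with a Colburn-type friction factor gives the local interfacial shear. The circular-cylinder form u_e = 2 u∞ sin φ and the Re^(−1/5) friction law are stand-ins (assumptions), as are the LN2 property values; the paper's elliptical-tube expressions additionally depend on the eccentricity.

```python
import math

def edge_velocity(u_inf, phi):
    # Potential-flow liquid velocity at the boundary-layer edge.
    # For a circular cylinder u_e = 2*u_inf*sin(phi); the paper's elliptical
    # form (Equation (14)) depends on eccentricity and is assumed similar.
    return 2.0 * u_inf * math.sin(phi)

def local_shear(u_inf, phi, rho_l, nu_l, x, C=0.034):
    # Colburn-type local friction: C_f/2 = C * Re_x**(-0.2) (assumed exponent),
    # then tau = (C_f/2) * rho_l * u_e**2, cf. Equations (12)-(15) in the text.
    u_e = edge_velocity(u_inf, phi)
    re_x = max(u_e * x / nu_l, 1.0)
    return C * re_x ** -0.2 * rho_l * u_e**2

# Saturated LN2 at 1 atm (approximate properties), mid-height of the tube.
print(local_shear(u_inf=1.0, phi=math.pi / 2, rho_l=807.0, nu_l=2.0e-7, x=0.01))
```

The resulting τ_δ is the interfacial shear fed into the dimensionless force balance, Equation (17).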
Numerical Method The dimensionless governing equations, Equations (17), (22)-(26), (28), and (29), subject to the relevant boundary conditions given above, can be used to estimate δ⁺, Re*, and Nu for the vapor film by means of the following procedure, implemented in C++: 1) Suitable dimensionless parameters, such as e, T_r, S, N_R, Fr, and Gr, are specified. 2) The boundary conditions of velocity and temperature are imposed as given above. 3) Since u* at the bottom of the tube (φ = 0, i = 0) is zero, the dimensionless film thickness δ⁺ is also zero there; at the next node, i.e., i = i + 1, the value of φ is advanced by one angular step. 4) Guess an initial value of δ⁺; substitute Equations (21) and (26) into Equation (19) and obtain the dimensionless temperature profile. 5) Substitute Equations (23), (24), and (25) into Equation (22) to obtain Re*, and then substitute Re* into Equation (17). 6) The accuracy of δ⁺ is assessed by the residual of Equation (17), expressed as an inequality criterion; if the calculation converges, proceed to the film thickness at the next angular position, and if it does not converge, guess a new thickness and repeat steps (4)-(6). 7) The process above is repeated at the next node position, i.e., i = i + 1, and subsequently at all nodes within the range 0 ≤ φ ≤ π. 8) The local Nusselt number and mean Nusselt number are then calculated.
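A schematic of steps (1)-(8): march in φ from the bottom of the tube and, at each node, adjust δ⁺ until the force-balance residual vanishes. residual() below is a synthetic placeholder for the coupled Equations (17), (19), and (22); only the bisection-and-march structure mirrors the procedure.

```python
import numpy as np

def residual(delta_plus, phi):
    # Placeholder for the coupled force-balance/energy/mass equations
    # (Equations (17), (19), (22)); here a synthetic monotonic function
    # whose root mimics a film growing from zero thickness at phi = 0.
    return delta_plus**2 - 0.5 * (1 - np.cos(phi))

def solve_film(n_nodes=91, lo=0.0, hi=5.0, tol=1e-10):
    """March from phi = 0 to phi = pi, bisecting for delta+ at each node."""
    phis = np.linspace(0.0, np.pi, n_nodes)
    delta = np.zeros(n_nodes)
    for i, phi in enumerate(phis[1:], start=1):
        a, b = lo, hi
        while b - a > tol:                       # bisection on the residual
            m = 0.5 * (a + b)
            if residual(a, phi) * residual(m, phi) <= 0:
                b = m
            else:
                a = m
        delta[i] = 0.5 * (a + b)
    return phis, delta

phis, delta = solve_film()
print(delta[0], delta[-1])  # thickness grows from bottom (phi=0) to top (phi=pi)
```

Once δ⁺(φ) is known, the local Nusselt number follows at each node and the mean Nusselt number is obtained by averaging over the surface, as in step (8).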
Results and Discussion Figure 2(a) plots the three-dimensional local velocity distribution in the vapor film for Fr = 500. For each angular position φ on the tube surface, it is shown that with an increase in y⁺, u⁺ increases to a maximum value and then decreases slightly. The results also show that the dimensionless velocity increases with increasing angular position on the elliptical surface. Figure 2(b) shows the two-dimensional equi-velocity contours in the vapor film for Fr = 500. The dimensionless velocity at the wall of the elliptical tube is zero because of the no-slip condition, and it increases along the y-direction. This can be understood as the velocity being enhanced by the shear stress at the vapor-liquid interface under the condition of flowing liquid. Figure 3(a) shows the velocity distribution of the vapor film over the entire elliptical tube. Figure 3(b) presents the two-dimensional isothermal lines in the vapor film. For the boundary conditions prescribed in the proposed model, the dimensionless temperatures on the tube surface and on the vapor-liquid interface are unity and zero, respectively. Furthermore, a stagnation flow is assumed at the bottom of the tube. As a result, the temperature variation across the vapor film at φ = 0 is linear. In addition, the interfacial shear with high-velocity liquid and the effects of turbulence are considered in this work. As the angular position increases, the effects of eddy diffusivity get stronger and a non-linear temperature profile of the vapor film appears. Figure 4 displays the variation of the dimensionless vapor film thickness on the elliptical tube along φ. Specifically, the film thickness increases continuously from a minimum value at the bottom of the tube (φ = 0) and reaches its maximum value at the top of the tube (φ = π). Besides, according to potential flow theory, an increase in the eccentricity value leads to a decrease in the liquid velocity, which in turn decreases the heat transfer efficiency and the evaporation rate; this brings about a decrease in the film thickness. The figure also shows the influence of Fr on the film thickness: the film thickness increases as Fr increases, because an increase in Fr raises the interfacial shear stress and hence the evaporation rate, and the increased evaporation rate causes a thicker vapor film. Figure 5 presents the effects of eccentricity on the mean Nusselt number for five different Froude numbers. According to potential flow theory, the larger the eccentricity parameter, the smaller the liquid velocity and the interfacial shear. Consequently, both the vapor velocity and the mean Nusselt number decrease. Besides, under forced-convection film boiling, increasing Fr results in an increase of the mean Nusselt number. Figure 6 presents the relationship between the mean Nusselt number and the Froude number for five values of the Grashof number, showing the results for forced-convection film boiling. A higher Froude number brings an increase in the mean Nusselt number. The Grashof number Gr is also one of the dominant factors, and increasing the Grashof number likewise increases the mean Nusselt number. To validate the present model, a comparison is made between this work and previous studies for different cases. For this purpose, a modified Rayleigh number Ra is introduced, which can be further nondimensionalized. Figure 7 depicts the effects of Ra on the mean Nusselt number on an elliptical tube with e = 0, the special case of a circular tube subject to turbulent film boiling. It shows that the mean Nusselt number of the present study is in good agreement with previous experimental data [12] under the condition of quiescent liquid (i.e., Fr = 0 or u∞ = 0). Besides, Nu_m increases with Ra at a fixed radiation parameter. An increase in the radiation parameter, conceivably, brings an increase in the mean Nusselt number at a given Ra. Figure 8 shows the correlation between the Rayleigh number and the mean Nusselt number for five different Froude numbers. According to the figure, increasing the Rayleigh number increases the mean Nusselt number, and a larger Froude number also increases the mean Nusselt number. Conclusions The following conclusions can be drawn from the results of the present theoretical study: 1) With the help of the Colburn analogy, the present research successfully predicts the shear stress of the vapor-liquid interface in a film boiling system on an elliptical tube under a high-velocity liquid. 2) An increase in the eccentricity parameter of the elliptical tube leads to a decrease in the mean Nusselt number. Besides, for turbulent film boiling under an external high-velocity flowing liquid, increases in both the Froude number and the Grashof number lead to an increase in the mean Nusselt number.
Figure 1. Physical model and coordinate system.
Figure 5. Effects of the eccentricity parameter on mean Nusselt number.
Figure 6. Effects of the Froude number on mean Nusselt number.
Figure 7. Comparison of the present results with previous data.
Figure 8. Effects of Ra on mean Nusselt number.
Nomenclature:
a, b: semimajor and semiminor axes of the ellipse (m)
C_p: specific heat capacity (J/kg·K)
D_e: equivalent circular diameter of the elliptical tube (m)
g: acceleration due to gravity (m/s²)
h: heat transfer coefficient (W/m²·K)
h_fg: latent heat (J/kg)
k: thermal conductivity (W/m·K)
k⁺: dimensionless thermal conductivity
u: velocity parallel to the direction of flow (m/s)
V: acceleration due to the gravitational force (m/s²)
x: peripheral coordinate (m)
y: coordinate measured normal to the tube surface (m)
y⁺: dimensionless distance
δ: vapor film thickness (m)
3,568.6
2016-04-29T00:00:00.000
[ "Physics", "Engineering" ]
Toward tunable quantum transport and novel magnetic states in Eu1−xSrxMn1−zSb2 (z < 0.05) : Magnetic semimetals are very promising for potential applications in novel spintronic devices. Nevertheless, realizing tunable topological states with magnetism in a controllable way is challenging. Here, we report novel magnetic states and the tunability of topological semimetallic states through the control of Eu spin reorientation in Eu1−xSrxMn1−zSb2. Increasing the Sr concentration in this system induces a surprising reorientation of noncollinear Eu spins to the Mn moment direction and topological semimetallic behavior. The Eu spin reorientations to distinct collinear antiferromagnetic orders are also driven by the temperature/magnetic field and are coupled to the transport properties of the relativistic fermions generated by the 2D Sb layers. These results suggest that nonmagnetic element doping at the rare earth element site may be an effective strategy for generating topological electronic states and new magnetic states in layered compounds involving spatially separated rare earth and transition metal layers. A composition phase diagram of Eu1−xSrxMn1−zSb2 covering the structural and magnetic transitions, the Eu–Mn moment angle α, and the nontrivial Berry phase is presented. Doping nonmagnetic Sr on the Eu site breaks the lattice symmetry and induces various Eu spin reorientations that are coupled to the quantum transport properties of the relativistic fermions generated by the 2D Sb layers. Eu1−xSrxMn1−zSb2 is therefore a unique new material platform for exploring Dirac band tuning by magnetism. Introduction Dirac/Weyl semimetals have attracted intense research interest due to their exotic quantum phenomena, as well as their promise for applications in next-generation, more energy-efficient electronic devices [1-3]. Magnetic Dirac/Weyl semimetals are especially attractive, since the coupling of Dirac/Weyl fermions to the additional spin degree of freedom may open up a new avenue for tuning and controlling the resulting quantum transport properties [4-6]. To date, several magnetic semimetals have been reported, and most of them were discovered in stoichiometric compounds, such as SrMnBi2 [7], Mn3Sn [8], RAlGe (R = rare earth) [9], Co3Sn2S2 [10,11], and Co2MnGa [12,13]. Finding a strategy to control a topological state by tuning magnetism is highly desirable and requires a clear understanding of the interplay between the magnetism and the topological electronic state. This goal can be achieved by investigating the coupling between the structural, magnetic, and electronic phase diagrams in tunable magnetic topological materials. The large family of ternary AMnCh2 "112" compounds (A = alkali earth/rare earth elements, Ch = Bi or Sb) [6,7,14-16] is particularly interesting, since a few of them have been reported to be magnetic Dirac semimetals in which the Bi or Sb layers host relativistic fermions. AMnCh2 (A = Ce, Pr, Nd, Eu, Sm; Ch = Bi or Sb) [14,16-18] possesses two magnetic sublattices, formed by the magnetic moments of the rare earth A and Mn, respectively, in contrast with the other compounds of this family, which have only the Mn magnetic sublattice.
The conducting Bi/Sb layers and the insulating magnetic Mn-Bi(Sb) and Eu layers are spatially separated, which makes these compounds good candidates for exploring the possible interplay between Dirac fermions and magnetism. For EuMnBi2, both the Eu and Mn moments point in the out-of-plane direction and generate two AFM lattices in the ground state [16]. Previous studies have also shown that when the Eu AFM order undergoes a spin-flop transition in a moderate field range, the interlayer conduction is strongly suppressed, thus resulting in a stacked quantum Hall effect. Interestingly, EuMnSb2 exhibits properties distinct from EuMnBi2, and conflicting results have been reported [17-19]. The magnetotransport properties reported by Yi et al. [17] are not indicative of a Dirac semimetallic state, while Soh et al. [18] observed linear band dispersion near the Fermi level in angle-resolved photoemission spectroscopy measurements of EuMnSb2 and claimed that it may be a Dirac semimetal. Moreover, the magnetic structure of EuMnSb2 is thought to be distinct from that of EuMnBi2, with controversial reports of the Eu and Mn moments being perpendicular [18] or canted to each other [19]. It is therefore important to resolve the controversy about the magnetic and physical properties of EuMnSb2 and to explore whether EuMnSb2 and its derivatives could host Dirac fermions. Additionally, it is known that in many layered compounds involving spatially separated rare earth and manganese layers, such as RMnAsO (R = Nd or Ce) [21,22] and RMnSbO (R = Pr or Ce) [22,23], the rare earth moments, which order at low temperatures, usually drive a Mn spin reorientation toward their direction. Given that there are two magnetic sublattices of Eu and Mn, with an expected 4f-3d coupling between them in EuMnSb2, the chemical substitution of Eu by nonmagnetic elements may yield interesting magnetic states by tuning the magnetic interactions, which may in turn control the transport and magnetotransport properties. In this article, we report comprehensive studies of a tunable Dirac semimetal system, Eu1−xSrxMn1−zSb2, which exhibits a variety of novel magnetic states tunable by the Eu concentration, temperature, and magnetic field. The evolution of the magnetic states of this system is found to be coupled to the quantum transport properties of the Dirac fermions. Through single-crystal X-ray diffraction, neutron scattering, and magnetic and high-field transport measurements, we established a rich phase diagram of the crystal structure, magnetism, and electronic properties of Eu1−xSrxMn1−zSb2. The increase in Sr concentration in Eu1−xSrxMn1−zSb2 induces not only lattice symmetry breaking and a surprising Eu spin reorientation to the Mn moment direction but also topological semimetallic states for x ≥ 0.5. Furthermore, the quantum transport properties can be tuned by the different Eu spin reorientations to collinear AFM orders induced by temperature and external magnetic field. The in-plane and out-of-plane components of the canted Eu magnetic order are found to influence the intralayer and interlayer conductivities, respectively, of the Dirac fermions generated by the 2D Sb layers. These results establish a unique new material platform for exploring Dirac band tuning by magnetism. Crystal growth The Eu1−xSrxMn1−zSb2 single crystals were grown using a self-flux method.
The starting materials, stoichiometric mixtures of the Eu/Sr, Mn, and Sb elements, i.e., EuMnSb2, Eu0.8Sr0.2MnSb2, Eu0.5Sr0.5MnSb2, and Eu0.2Sr0.8MnSb2, were put into small alumina crucibles and sealed in individual quartz tubes in an argon gas atmosphere. The tubes were heated to 1050 °C for 2 days, followed by subsequent cooling to 650 °C at a rate of 2 °C/h. Plate-like single crystals were obtained. The compositions of all the single crystals were examined using energy-dispersive X-ray spectroscopy. The composition of the x = 0 parent compound was also characterized by fitting to single-crystal X-ray diffraction data. Single-crystal X-ray and neutron diffraction measurements and neutron data analysis A crystal of x = 0 was mounted onto a glass fiber using epoxy, which was then mounted onto the goniometer of a Nonius KappaCCD diffractometer equipped with Mo Kα radiation (λ = 0.71073 Å). After the data collection and subsequent data reduction, SIR97 was employed to provide a starting model, SHELXL97 was used to refine the structural model, and the data were corrected using extinction coefficients and weighting schemes during the final stages of refinement [24,25]. To investigate the crystal and magnetic structures, neutron diffraction measurements were conducted with the four-circle neutron diffractometer (FCD) located at the High Flux Isotope Reactor at Oak Ridge National Laboratory. To further distinguish between tetragonal and orthorhombic structures for x = 0, neutrons with a monochromatic wavelength of 1.003 Å without λ/2 contamination were used via the silicon monochromator (bent Si-331) [26]. For the other Eu1−xSrxMn1−zSb2 (x = 0.2, 0.5, 0.8) crystals, we employed neutrons with a wavelength of 1.542 Å, involving 1.4% λ/2 contamination, from the Si-220 monochromator in its high-resolution mode (bending 150) [26]. The crystal and magnetic structures were investigated in different temperature windows. The order parameters of a few important nuclear and magnetic peaks were measured. Data were recorded over a temperature range of 4 K < T < 340 K using a closed-cycle refrigerator available at the FCD. Because of the highly absorbing europium in the Eu1−xSrxMn1−zSb2 crystals, proper neutron absorption corrections to the integrated intensities of the nuclear/magnetic peaks are indispensable. The dimensions of the faces of each crystal were measured, and a face-indexed absorption correction of the integrated intensities was conducted carefully using the WinGX package [27]. The SARAh representational analysis program [28] and the Bilbao crystallographic server [29] were used to derive the symmetry-allowed magnetic structures and magnetic space groups. The full data sets at different temperatures were analyzed using the refinement program FullProf Suite [30] to obtain the crystal and magnetic structures. Magnetization and magnetotransport measurements The temperature and field dependences of the magnetization were measured in a superconducting quantum interference device magnetometer (Quantum Design) in magnetic fields up to 7 T. The transport measurements at zero magnetic field were performed with a four-probe method using a Physical Property Measurement System (PPMS). The high-field magnetotransport properties were measured in 31 T resistivity magnets at the National High Magnetic Field Laboratory (NHMFL) in Tallahassee. The magnetic fields were applied parallel to the out-of-plane direction to study the in-plane and out-of-plane magnetoresistance.
The ρ_in samples were made into Hall-bar shapes, and the ρ_out samples were in the Corbino disk geometry. The Berry phase was extracted from the Landau fan diagram. The integer Landau levels are assigned to the magnetic field positions of the resistivity minima in the SdH oscillations, which correspond to the minimal density of states. Crystal structures Both single-crystal X-ray and neutron diffraction reveal that the parent compound EuMnSb2 crystallizes in a tetragonal structure with space group P4/nmm (Figs. 1a and S1e) and the nonstoichiometric composition EuMn0.95Sb2. The structural parameters of EuMn0.95Sb2 obtained from the single-crystal X-ray diffraction refinement at 293 K are summarized in Tables SI and SII. Note that the structure of EuMn0.95Sb2 is similar to that of CaMnBi2 [31] but different from the tetragonal I4/mmm structure of EuMnBi2 [16] and the previously reported orthorhombic structure of EuMnSb2 [17,19]. The energy-dispersive X-ray spectroscopy analysis shows that there are also less than 5% Mn deficiencies in the Sr-doped compounds, with z ≈ 0.01, 0.05, and 0.02 for x = 0.2, 0.5, and 0.8, respectively. Interestingly, the Sr-doped Eu1−xSrxMn1−zSb2 (x = 0.2, 0.5, and 0.8) shows a clear lattice distortion and crystallizes in the orthorhombic structure with the space group Pnma, with a doubled unit cell along the out-of-plane direction (Figs. 1b, c and S1f), similar to SrMnSb2 [6]. Thus, the Sr doping at the Eu site in EuMn0.95Sb2 induces symmetry breaking from tetragonal P4/nmm to Pnma. Our systematic studies on Sr-doped EuMn1−zSb2 and comparison with previous reports on the parent compound suggest that the structural difference between our x = 0 sample and the samples reported in the literature [17,19] arises from the nonstoichiometric compositions and/or flux-induced chemical doping. The sample reported in ref. [17] involves Sn doping at the Sb sites due to the use of Sn flux, which yields a composition of Eu0.992Mn1.008Sb1.968Sn0.73. In ref. [19], the composition was reported to be EuMn1.1Sb2, which implies that a significant amount of Mn antisite defects may exist at the Sb sites. In contrast, our parent compound x = 0 is characterized by only a small degree of Mn deficiency. Such composition differences from the previously reported samples explain why our x = 0 sample is tetragonal, whereas the samples reported in the literature are orthorhombic. This also indicates that chemical doping at the Eu, Mn, or Sb sites in EuMnSb2 could induce an orthorhombic distortion. The structural parameters of Eu1−xSrxMn1−zSb2 (x = 0, 0.2, 0.5, and 0.8) at 5 K obtained from the fits to the neutron diffraction data are summarized in Table 1. It can be seen that Sr doping induces a slight decrease in the out-of-plane lattice constant and an increase in the in-plane lattice constants. More details about the determination of the crystal structures of all the Eu1−xSrxMn1−zSb2 compounds can be found in the Supplemental Information. Determination of magnetic structures In general, determining the complicated magnetic structures in Eu-containing compounds is difficult due to the strong neutron absorption of europium, so proper neutron absorption correction of the neutron diffraction data is critical. We employed single-crystal neutron diffraction to solve the complicated magnetic structures of Eu1−xSrxMn1−zSb2 below 340 K.
The refined moments, Mn-Eu canting angles, and reliability factors of the refinements of the neutron data after neutron absorption correction are summarized in Table 2 (see the Supplemental Information for more details). Figure 2a-d shows the temperature dependences of a few representative nuclear and/or magnetic reflections of Eu1−xSrxMn1−zSb2. For the x = 0 parent compound, the presence of the pure magnetic peak at (100)_T below T_1 = 330 K indicates one magnetic transition. The absence of an anomaly at T_1 in the susceptibility measurements (see Fig. 3a) may be ascribed to possible strong spin fluctuations above T_1 that tend to smear out any anomalies in the susceptibility, as in other Mn-based compounds [6,20,22]. For T < T_1, a C-type AFM order of Mn spins (AFM_Mn) with the propagation vector k = (0,0,0)_T and the moment along the c_T axis is determined, without Eu ordering, as illustrated in the left panel of Fig. 1a. Upon cooling below T_2 = 22 K, there is an increase in the magnetic peak intensities, such as (100)_T and (101)_T with k = (0,0,0)_T, and, simultaneously, new magnetic reflections with a propagation vector k = (0,0,1/2)_T from the Eu sublattice appear. Interestingly, we observed strong magnetic peaks (0, 0, L/2)_T (L = odd number) below T_2 (see the inset of Fig. 2a). This excludes the possibility of Eu moments pointing along the out-of-plane axis, as seen in EuMnBi2 [16,32]. The determined magnetic structure for T < T_2, denoted by AFM_Mn,Eu,⊥, is shown in the right panel of Fig. 1a. Whereas Mn preserves a C-type AFM order with an increased moment due to Eu-Mn coupling along the c_T axis, the "+ + − −" Eu spin ordering with the moment along the a_T axis breaks the magnetic symmetry along the c_T axis and leads to the observed magnetic reflections with k = (0,0,1/2)_T. Such a magnetic structure is consistent with the susceptibility measurements in Fig. 3a, where χ_c increases slightly and χ_ab decreases rapidly for T < T_2, suggesting an AFM moment oriented within the a_T-b_T plane. Note that the magnetic structure determined here is different from the "+ − + −" A-type Eu order proposed on the basis of diffraction experiments on a polycrystalline sample of EuMnSb2, for which no k = (0,0,1/2)_T magnetic peaks were observed below T_2. The Eu moment canting proposed in ref. [19] is not found in our crystal for T < T_2 (see the Supplemental Information for a detailed discussion). In the x = 0.2 compound, the temperature dependence of the pure magnetic peak (010)_O in the orthorhombic structure, corresponding to (100)_T in the tetragonal notation, shows a clear magnetic transition at T_1 = 330 K, as shown in Fig. 2b. A similar C-type AFM order (AFM_Mn) with k = (0,0,0)_O was determined and is displayed in the left panel of Fig. 1b. Upon cooling below T_2 = 21 K, new magnetic peaks indexed by (H, K, L) (H = odd integers), for instance (700)_O, corresponding to (0 0 3.5)_T, are observed (see the inset of Fig. 2b). All the magnetic peaks can be described by the AFM order at k = (0,0,0)_O in the orthorhombic notation, due to the doubled unit cell, in contrast to x = 0. Within the temperature range T_3 < T < T_2, we find a canted and noncollinear Eu spin order confined within the a_O-c_O plane, with a "+ + − −" component along the c_O axis and a "+ − + −" component along the a_O axis, coexisting with the C-type Mn AFM order with moments along the a_O axis (denoted by AFM_Mn,Eu,C1; middle panel in Fig. 1b). This is consistent with the susceptibility measurement shown in Fig.
where both χ a and χ bc decrease below T 2 , implying that the Eu spins may form a canted AFM order. Note that such a canted Eu order is not applicable in the corresponding T < T 2 temperature region of the x = 0 parent compound. At 10 K, the canting angle between the Mn and Eu moments is 41(9)°. The susceptibility measurements show that χ a increases but χ bc decreases anomalously below T 3 = 7 K, indicative of another magnetic transition. Interestingly, there is a decrease in the (300) O peak intensity, with a concurrent increase in the intensity of the nuclear peak. When x is increased to 0.5 or 0.8, the Eu sublattice exhibits only a single AFM transition, as revealed from the temperature dependences of the magnetic reflections (see Fig. 2c and Fig. S5a, b). Furthermore, there is an increase in the (300) O peak intensity due to the magnetic contribution but no obvious change in the peak intensities of (200) O or (600) O . These features are similar to those at x = 0.2. We indeed obtain similar magnetic structures in the x = 0.5 sample, as shown in the left panel (AFM Mn ) and middle panel (AFM Mn,Eu,C1 ) of Fig. 1b for T 2 < T < T 1 and T 3 < T < T 2 , respectively. Note that the canting angle between the Eu and Mn moments decreases to 24(8)° at 5 K. As x increases to 0.8, the Mn magnetic transition occurs at T 1 = 330 K, as identified from the intensity of (010) O , and a C-type Mn order AFM Mn is determined (see the left panel of Fig. 1c). Another increase in (010) O is found below T 2 ≈ 7 K. There is no appearance of magnetic scattering at the (300) O , (200) O or (600) O Bragg positions below T 2 (see the inset of Fig. 2d and Fig. S5c in the SI), indicating that the Eu moments may point along the a O axis. We find a coexistence of the C-type Mn AFM order with a "+ − + −" Eu order whose moment is oriented along the same a O axis as the Mn moment (AFM Mn,Eu,∥ , see the right panel of Fig. 1c), consistent with the susceptibility measurements. As shown in Fig. 3d, χ bc keeps increasing, but χ a decreases rapidly upon cooling below 8 K, showing behavior opposite to that of x = 0. This indicates that the Eu moment mainly points along the out-of-plane a O direction at x = 0.8. Electronic transport properties Next, we present the evolution of the electronic transport properties with Sr doping in Eu 1−x Sr x Mn 1−z Sb 2 . As shown in Fig. 3e-h, both the in-plane longitudinal resistivity ρ in and the out-of-plane resistivity ρ out exhibit metallic transport properties. At 2 K, ρ out /ρ in reaches 128, 198 and 322 for x = 0, x = 0.2 and x = 0.8, respectively. Such a rapid increase in the electronic anisotropy indicates that Sr doping reinforces the quasi-2D electronic structure. In the x = 0 sample (see Figs. 3e and S7a), the slopes of ρ out and ρ in decrease below T 2 , indicative of coupling between the emergence of the Eu order and the transport properties and suggesting that the in-plane Eu "+ + − −" order leads to suppressed metallicity. The metallic behavior in our EuMn 0.95 Sb 2 sample is different from the insulating behavior observed in the Sn- or Mn-doped nonstoichiometric samples 17,19 . This indicates that chemical doping at the Sb or Mn sites induces a metal-insulator transition that is distinct from the effect of Sr substitution for Eu. However, the x = 0.2 sample exhibits transport behavior distinct from that of the x = 0 sample. We observe a rapid decrease in ρ out and a slight increase in ρ in below T 2 (see Figs. 3f and S7b),
suggesting that the Eu canting toward the a O axis with the Eu "+ − + −" component significantly increases the interlayer conductivity along the a O direction between the Sb layers but suppresses the intralayer conductivity in the b O c O plane, in contrast with the effect of the purely in-plane Eu order on the transport properties described above. Below T 3 , there are no obvious changes in the out-of-plane resistivity, but an anomalous decrease in the in-plane resistivity is observed. This can be attributed to the SR of Eu from noncollinear to collinear order. Below T 3 , the out-of-plane Eu order remains "+ − + −", which is not expected to influence the interlayer conductivity. In contrast, the switch of the in-plane component from "+ + − −" to "+ − + −" induces an anomalous increase in the intralayer conductivity. When x increases to 0.5, the "+ − + −" component of the Eu order along the a O axis also induces an increase in the interlayer conductivity below T 2 (see Figs. 3g and S7c), but the increase is weaker than that at x = 0.2. Furthermore, the weak decrease in the intralayer conductivity seen at x = 0.2 is hardly observed near T 2 at x = 0.5. Both effects are ascribed to the reduction in the Eu occupancy to ≈ 50% at x = 0.5, which weakens the effect of the Eu order on the transport properties. For x = 0.8, the Eu ordering does not obviously influence the resistivity below T 2 , as shown in Figs. 3h and S7d, which can be ascribed to the low Eu occupancy (≈ 20%). Thus, our results reveal an intimate coupling between the Eu magnetic order and the transport properties in Eu 1−x Sr x Mn 1−z Sb 2 . We also measured the magnetoresistance, Δρ/ρ(0) = [ρ(H) − ρ(0)]/ρ(0), under high magnetic fields applied along the out-of-plane direction. For x = 0, Δρ out /ρ out is negative, whereas the in-plane Δρ in /ρ in is positive. The magnitudes of both Δρ out /ρ out and Δρ in /ρ in are small, and no strong Shubnikov-de Haas (SdH) oscillations are observed. For x = 0.2, weak SdH oscillations are observed in both Δρ out /ρ out and Δρ in /ρ in . As the field increases, there is a sign reversal in Δρ in /ρ in , whereas Δρ out /ρ out remains positive. Remarkably, at 1.8 K, which is below T 3 , a large jump in Δρ out /ρ out of up to 4500% occurs above µ 0 H t ≈ 18 T. The dramatic changes in Δρ out /ρ out near µ 0 H t ≈ 18 T are ascribed to a field-induced metamagnetic transition. Since this phenomenon does not occur in the T > T 2 temperature regime (e.g., at 50 K), the field-induced magnetic transition does not originate from the Mn magnetic sublattice but is related to the Eu magnetic sublattice, which is indicative of the vital role that the Eu magnetic order plays in the magnetotransport properties. The most likely origin of the enhanced Δρ out /ρ out above µ 0 H t ≈ 18 T is a field-induced Eu SR transition from the canted moment direction in the a O c O plane to the c O axis, while the A-type "+ − + −" Eu order remains, thus strongly suppressing the interlayer conductivity, as illustrated in the inset of Fig. 4b. Note that this is different from the field-induced spin-flop transition of the "+ + − −" Eu order from the out-of-plane c O axis to the in-plane direction in EuMnBi 2 16 . Above ∼ 28 T, the rapid decrease in Δρ out /ρ out may indicate the full polarization of the Eu spins along the external field direction, i.e., the a O axis, similar to the scenario seen in EuMnBi 2 16 . Further high-field magnetization measurements are required to confirm these metamagnetic transitions.
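For readers reproducing this analysis, a minimal sketch of how the magnetoresistance ratio defined above can be computed from a field sweep is given below; the field and resistivity values are hypothetical placeholders, not measured data.

```python
import numpy as np

# Sketch: magnetoresistance ratio from a field sweep, as defined in the text,
# delta_rho/rho(0) = [rho(H) - rho(0)] / rho(0), expressed in percent.
# The arrays below are illustrative placeholders, not measured data.

mu0_H = np.linspace(0.0, 31.0, 311)              # applied field, T
rho_out = 1.0 + 0.5 * mu0_H**2 / (1 + mu0_H)     # hypothetical resistivity

def mr_percent(rho, rho_zero_field):
    """Magnetoresistance delta_rho/rho(0) in percent."""
    return 100.0 * (rho - rho_zero_field) / rho_zero_field

print(f"MR at {mu0_H[-1]:.1f} T: {mr_percent(rho_out[-1], rho_out[0]):.0f}%")
```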
Nontrivial Berry phases An increase in the Sr doping level significantly enhances the SdH oscillations in both Δρ out /ρ out and Δρ in /ρ in for x = 0.5 and 0.8, with much higher oscillation amplitudes at high magnetic fields. Δρ out /ρ out reaches ≈ 18,000% at 31.5 T for x = 0.8. We further analyze the Berry phase (BP) ϕ B accumulated along cyclotron orbits and are able to extract ϕ B for x = 0.5 and 0.8. Based on the field dependence of ρ in measured in a 14 T PPMS, which shows well-resolved SdH oscillations in Fig. S8a, we obtain the second derivative of the resistivity, −d 2 ρ in /dB 2 , and the oscillatory component of ρ in after background subtraction. The oscillation peaks and valleys obtained from both analyses are well matched, as shown in Fig. S8b. With six oscillation valleys assigned to integer Landau levels (LLs) and five peaks assigned to half-integer LLs, a Landau index fan diagram can be established, from which a nontrivial Berry phase of 0.8π can be unambiguously extracted, as displayed in the inset of Fig. 4c. As shown in Fig. 4d, we extract a Berry phase of 0.88π for the x = 0.8 compound. The Berry phases in both the x = 0.5 and 0.8 samples are close to the ideal value of π for a quasi-2D system. The nontrivial Berry phase provides evidence that the x = 0.5 and 0.8 samples harbor relativistic Dirac fermions. Our results clearly show that the substitution of Eu by nonmagnetic Sr induces Dirac semimetallic behavior that is closely associated with the controllable Eu magnetic order. Unlike the x = 0.2 sample, the x = 0.5 and 0.8 samples do not show large jumps in Δρ out /ρ out in fields up to 31 T. This indicates the absence of field-induced metamagnetic transitions in both compounds. Therefore, the nontrivial Berry phase may be intrinsic for the x = 0.5 and 0.8 compounds. In addition, compared to SrMnSb 2 , with only an ordered Mn moment, the x = 0.5 and 0.8 samples exhibit distinct Eu orders coexisting with the Mn orders, and the increase in the Eu canting angle is accompanied by stronger quantum oscillations.
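A minimal sketch of the Landau fan analysis described above is shown below, with hypothetical field positions for the resistivity minima rather than the measured ones; note that the relation between the fan intercept and ϕ B depends on dimensionality and sign conventions, so the 2D convention used here is only one common choice.

```python
import numpy as np

# Sketch: Landau fan diagram fit.  Integer Landau indices n are assigned to
# the field positions B_n of the SdH resistivity minima and fit as
# n = F/B + n0.  In one common 2D convention the Berry phase follows from
# the intercept as phi_B = 2*pi*(n0 mod 1); a 3D system carries an extra
# +/-1/8 correction.  The B_n values below are hypothetical, generated
# from F = 45 T and n0 = -0.6 (i.e., phi_B = 0.8*pi).

B_n = np.array([12.50, 9.78, 8.04, 6.82, 5.92, 5.23])  # T, resistivity minima
n = np.arange(3, 9)                                     # assigned integer indices

F, n0 = np.polyfit(1.0 / B_n, n, 1)                     # linear fan fit
phi_B = 2.0 * np.pi * (n0 % 1.0)

print(f"SdH frequency F = {F:.1f} T, intercept n0 = {n0:.2f}")
print(f"Berry phase = {phi_B / np.pi:.2f} pi (2D convention)")
```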
Composition phase diagram From the combination of single-crystal X-ray diffraction, neutron diffraction, magnetization, and magnetotransport measurements, we are able to establish the structural, magnetic, and electronic phase diagram, as illustrated in Fig. 5. All the compounds exhibit metal-like transport properties as a function of temperature, which are also coupled to the Eu order at T 2 and T 3 , and the nontrivial Berry phases indicative of Dirac semimetallic behaviors emerge for x ≥ 0.5. While the x = 0 parent compound with Mn deficiency is tetragonal with the space group P4/nmm, Sr doping induces an orthorhombic distortion. This is consistent with previous reports on the orthorhombic structure in doped nonstoichiometric samples 17,19 . Notably, our EuMn 0.95 Sb 2 sample forms a magnetic structure with perpendicular Mn and Eu moments in the ground state and does not exhibit topological semimetallic behavior, different from previous reports on samples with different compositions [17][18][19] . Sr substitution for Eu in EuMnSb 2 induces a slight decrease in T 1 but suppresses T 2 significantly. Furthermore, an increase in the Sr concentration drives an unusual Eu SR from the in-plane to the out-of-plane direction and simultaneously induces the appearance of Dirac semimetallic behaviors. A higher Eu canting angle, i.e., a smaller Eu-Mn angle α (defined in Fig. 1b), is accompanied by stronger quantum SdH oscillations. Our results show that Eu spin canting can be driven by chemical doping, which could explain the observation of Eu canting in a doped nonstoichiometric sample 19 . Note that no other magnetic transition is observed at T 3 in ref. 19 . For our x = 0.2 compound, a second type of Eu SR, from a noncollinear canted spin order to a collinear A-type spin order, is found at lower temperature (denoted by AFM Mn,Eu,C2 in Fig. 5). Furthermore, the Eu order at the base temperature can easily be tuned by an external magnetic field to yet another type of SR, leading to a canted AFM state with the moments possibly oriented along the c O axis. The established phase diagram for Eu 1−x Sr x Mn 1−z Sb 2 , as well as the comparison with the previous reports made above [17][18][19] , indicates that the structure, magnetic order, and electronic properties of EuMnSb 2 are easily perturbed by chemical doping at any of the Eu, Mn, and Sb sites, implying that the lattice, spin and charge degrees of freedom are strongly coupled in this material. This could account for the conflicting results reported in the literature [17][18][19] regarding the structural, magnetic, and electronic transport properties of EuMnSb 2 , and it implies that the nonstoichiometry must be taken into account to understand the intrinsic crystal and magnetic structure and the magnetotransport properties of EuMnSb 2 . While chemical doping at the Sb or Mn sites 17,19 in nonstoichiometric samples induces a tetragonal-orthorhombic structural transition, as in our Eu 1−x Sr x Mn 1−z Sb 2 (x > 0), such doping also induces a metal-insulator transition yielding insulating behavior. This indicates that doping at the Sb or Mn sites may be detrimental to forming semimetallic behavior in EuMnSb 2 derivatives. In contrast, our phase diagram clearly shows that Sr doping at the Eu site is the driving force of the Dirac semimetallic behavior in Eu 1−x Sr x Mn 1−z Sb 2 , as discussed below. First, Sr doping at the Eu site lowers the lattice symmetry and modifies the structural parameters, as summarized in Table 1, which could in turn change the electronic band structure. Second, the different types of Eu spin reorientations driven by Sr doping, temperature, or magnetic field significantly influence the electronic transport and magnetotransport properties, indicating that the band structure depends sensitively on the magnetism of the Eu sublattice. As such, the phase diagram presented in Fig. 5 offers an excellent opportunity to explore the intimate interplay between relativistic band effects and magnetism. Origin of various Eu spin reorientations Finally, we discuss the origins of the complicated magnetic structures, in particular the Sr-doping- and temperature-induced Eu SR transitions in Eu 1−x Sr x Mn 1−z Sb 2 . A common type of SR in compounds containing rare earth elements occurs when the rare earth spins, once ordered with a preferred in-plane orientation at low temperature, drive the Mn moments parallel to their own direction, as reported for several compounds such as RMnAsO (R = Nd or Ce) 20,21 and RMnSbO (R = Pr or Ce) 22,23 . However, Sr doping in Eu 1−x Sr x Mn 1−z Sb 2 generates a novel Eu SR in which the moment changes from the in-plane direction to the out-of-plane direction while the Mn moment direction remains along the out-of-plane a O axis.
The Mn 2+ moment, which commonly displays very weak single-ion anisotropy as expected for the L = 0 state of Mn 2+ (S = 5/2), favors orientation along the out-of-plane direction [20][21][22] , i.e., the c T axis in the tetragonal structure or the a O axis in the orthorhombic structure, forming the C-type AFM order in the range T 2 < T < T 1 of Eu 1−x Sr x Mn 1−z Sb 2 . The in-plane checkerboard-like AFM structure of the C-type order suggests that the nearest-neighbor (NN) interaction J 1 is dominant, whereas the in-plane next-nearest-neighbor (NNN) interaction J 2 is very weak. In the context of the J 1 -J 2 -J c model 33 , we conclude that J 1 > 0, J 2 < J 1 /2 and the out-of-plane J c < 0, with negligible spin frustration in the Mn sublattice. Upon cooling to T < T 2 , the Eu-Eu coupling starts to come into play and induces Eu ordering with a preferred in-plane orientation of the Eu 2+ (S = 7/2) moments 34,35 , i.e., in either the a T b T plane of the tetragonal structure or the b O c O plane of the orthorhombic structure. Simultaneously, the Eu-Mn coupling also plays an important role by exerting an effective field that tends to influence the Mn/Eu moment directions. The increase in the Sr concentration at the Eu site weakens the Eu-Eu coupling and destabilizes the preferred orientation of the Eu spins. Thus, as x increases to 0.2, the effective field from the Eu-Mn coupling tends to drive the Eu moment toward the Mn moment direction. The competition between the Eu-Eu and Eu-Mn couplings induces spin frustration in the Eu sublattice and leads to a canted Eu order with the moment in the a O c O plane, stabilized in T 3 < T < T 2 . An increase in Sr doping tends to tilt the Eu moment further toward the a O axis because of the weakened Eu-Eu coupling, as shown by the lower Eu-Mn angle for x = 0.5. As the Sr doping increases to 0.8, the Eu-Mn coupling overwhelms the weak Eu-Eu coupling, which leads to an SR of Eu to the same moment direction as the Mn moment. This could account for the unusual Eu SR induced by Sr doping. As the temperature decreases below T 3 for x = 0.2, a temperature-induced SR transition occurs. This may be ascribed to another type of Eu-Eu coupling that comes into play below T 3 . This coupling retains the "+ − + −" out-of-plane component but switches the in-plane component from "+ + − −" to "+ − + −", leading to a collinear A-type AFM order of the Eu spins for T < T 3 . Thus, the striking Eu spin reorientation driven by Sr doping and temperature indicates strong Eu-Mn (4f-3d) couplings and results from their competition with the Eu-Eu couplings. To summarize, we report the composition phase diagram of the crystal and magnetic structures and electronic transport properties of Eu 1−x Sr x Mn 1−z Sb 2 and the realization of tunable topological semimetallic behavior by controlling various spin reorientations through chemical substitution, temperature, and/or an external magnetic field. The structure, magnetic order, and electronic properties of the parent EuMnSb 2 are easily perturbed by chemical doping, and therefore the nonstoichiometry must be taken into account to determine its intrinsic structure and physical properties. While we found that nearly stoichiometric EuMnSb 2 is not a topological semimetal, doping of nonmagnetic Sr on the Eu site induces an intricate coupling between the structure, the various Eu spin reorientations, and the quantum transport properties, indicating that Eu 1−x Sr x Mn 1−z Sb 2 is an excellent platform for studying the interplay between magnetism and the topological properties of the electronic band structure.
The present study may inspire the search for semimetallic states and interesting magnetic states in the large AMnCh 2 (A = rare earth element, such as Ce, Pr, Nd, Sm; Ch = Bi/Sb) family and in other layered compounds involving spatially separated rare earth and transition metal layers, by tuning the competition between the 4f-3d and A-A magnetic couplings.
Dielectric properties of semi-insulating Fe-doped InP in the terahertz spectral region We report the values and the spectral dependence of the real and imaginary parts of the dielectric permittivity of semi-insulating Fe-doped InP crystalline wafers in the 2-700 cm−1 (0.06-21 THz) spectral region at room temperature. The data show a number of absorption bands that are assigned to one- and two-phonon and impurity-related absorption processes. Unlike the previous studies of undoped or low-doped InP material, our data unveil the dielectric properties of InP that are not screened by strong free-carrier absorption and will be useful for designing a wide variety of InP-based electronic and photonic devices operating in the terahertz spectral range. Semi-insulating (SI) iron-doped indium phosphide (InP:Fe) is widely used in electronic and photonic devices operating in the terahertz spectral range (THz range, 0.1-10 THz), including Schottky diode detectors 1 , high-electron-mobility transistors 2 , photomixers 3 , and quantum cascade lasers (QCLs) [4][5][6] . Nominally undoped InP crystals always contain different unintentional impurities due to the growth processes, at concentrations up to 5·10 15 cm −3 , which result in shallow donor or acceptor energy levels within the energy gap. Iron doping provides acceptor levels in the mid-gap region of InP that compensate the residual shallow donors and produce material with virtually no free carriers 7 . High-resistivity semiconductors are highly desired for devices operating in the THz spectral range, as the optical losses at THz frequencies are typically dominated by free-carrier absorption. Given the importance of InP material to photonics and electronics, its optical properties have been extensively studied across the electromagnetic spectrum [8][9][10][11][12][13][14][15][16][17] . However, the THz optical properties of SI InP have not been reported in the literature yet. The dielectric constants of nominally undoped and low-doped InP have been studied in the THz spectral range recently 10,11 ; however, they are strongly affected by a free-carrier plasma that screens the intrinsic characteristics of the SI InP material, especially at frequencies below 5 THz. In this study, we present a detailed investigation of the electrodynamic response of SI InP:Fe in the 2-700 cm −1 (0.06-21 THz) spectral region. The studied samples were obtained from commercial vendors and have resistivities exceeding 5 × 10 6 Ω·cm, which corresponds to a free-carrier concentration below 10 9 cm −3 . As expected for such a low free-carrier concentration, our optical data show no plasma effect in the entire spectral range of interest. Given the importance of SI InP to a variety of applications 1-6 , we believe that this report will be useful to a wide range of groups involved in microwave and THz semiconductor device research and development. Experimental details Materials. Two semi-insulating InP:Fe wafers with nominal thicknesses of 350 ± 25 and 1000 ± 25 μm, obtained from two different vendors (AXT and Wafer Technology Ltd., respectively), were studied. Approximately 1 × 1 cm 2 square sections were cut from the wafers, and the thicknesses of the squares were measured at multiple locations to be 360 ± 2 μm and 991 ± 2 μm. The difference between the dielectric data obtained for the two samples was within the measurement error. Below we present and discuss the data obtained with the thicker sample. Experimental setup and data processing.
The measurements of the dielectric properties in the ν = 0.21-3.00 THz range (7-100 cm −1 ) were performed using a pulsed TeraView TPS-Spectra-3000 terahertz time-domain spectrometer. A spectrometer based on monochromatic, frequency-tunable continuous-wave (CW) backward-wave oscillators 18 was used for the measurements in the 0.06-0.30 THz range (2-10 cm −1 ). Both spectrometers allow deriving the spectra of the dielectric parameters (complex permittivity, dynamic conductivity, refractive index, etc.) directly, by measuring the amplitude and the phase of the electric field of the electromagnetic wave transmitted through a plane-parallel sample. Additionally, we performed high-resolution (Δν < 0.3 cm −1 ) measurements of the transmission and reflection coefficients of our samples in the 0.9-21 THz range (30-700 cm −1 ) using a standard vacuum Fourier-transform infrared (FTIR) spectrometer (Bruker Optics Vertex 80v). For the data shown in the figures, we combined the data obtained with the backward-wave oscillators in the 2-10 cm −1 range with the data obtained by the TeraView system in the 10-100 cm −1 range and the data obtained by the FTIR in the 100-700 cm −1 range. The results obtained by the different measurement techniques in the overlapping spectral regions were the same within the experimental uncertainties. Due to the low absorption, the spectra of the transmission and reflection coefficients of our samples contain pronounced maxima and minima that arise from the interference of the radiation within the plane-parallel slabs (the well-known Fabry-Perot effect 19,20 ). These spectra allow the most precise determination of the dielectric parameters of the samples at the frequencies of the interference maxima, where the interaction of the probing radiation with the material is most effective. This approach was used in the present research: the dielectric parameters of the studied InP:Fe samples were determined by modeling the measured transmission coefficient maxima on the basis of the Fresnel expressions that describe the optical properties of plane-parallel layers. At frequencies where the samples were not transparent due to strong phonon or impurity absorption resonances, the dielectric parameters of the samples were determined by simultaneously processing both the transmission and reflection spectra. To obtain the parameters of each resonance, we modeled them with Lorentzian lineshapes: ε*(ν) = ε ∞ + Σ j f j /(ν j 2 − ν 2 + iγ j ν). (1) Here εʹ(ν) = n 2 (ν) − κ 2 (ν) and ε″(ν) = 2nκ are the real and imaginary parts of the complex dielectric permittivity ε*(ν) = εʹ(ν) + iε″(ν), n is the real and κ the imaginary part of the complex refractive index n* = n + iκ, ε ∞ is the high-frequency permittivity, f j = Δε j ν j 2 is the oscillator strength of the j-th resonance, Δε j is its dielectric contribution, ν j is the resonance frequency, and γ j is the damping factor. Results and Discussion The combined transmission and reflection spectra are shown in Fig. 1 (dots), together with the spectral modeling results (lines). Below 150-200 cm −1 , the interferometric Fabry-Perot oscillations are clearly resolved, with the maxima in the transmission spectrum corresponding to the minima in the reflectivity. The interferometric effect is also seen at frequencies above 500 cm −1 , where the absorption of the crystal is not too strong. A number of absorption bands are observed as pronounced minima in the transmission spectrum above 100 cm −1 , and weaker absorption features are seen at lower frequencies. The origin of these absorption peaks is discussed below.
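As an illustration of the modeling approach just described, the sketch below combines the Lorentzian permittivity of Eq. (1) with the Fresnel expressions for a plane-parallel slab at normal incidence; the oscillator parameters and ε ∞ are hypothetical stand-ins loosely inspired by the features discussed below, not the fitted values of this work.

```python
import numpy as np

def lorentz_eps(nu, eps_inf, oscillators):
    """Complex permittivity as a sum of Lorentzians, cf. Eq. (1):
    eps(nu) = eps_inf + sum_j f_j / (nu_j**2 - nu**2 + 1j*gamma_j*nu),
    with f_j = d_eps_j * nu_j**2 and nu in cm^-1."""
    eps = np.full(nu.shape, eps_inf, dtype=complex)
    for d_eps, nu_j, gamma_j in oscillators:
        eps += d_eps * nu_j**2 / (nu_j**2 - nu**2 + 1j * gamma_j * nu)
    return eps

def slab_transmission(nu, eps, d_cm):
    """Power transmission of a plane-parallel slab in vacuum at normal
    incidence, including Fabry-Perot interference (Fresnel expressions)."""
    N = np.sqrt(eps)                      # complex refractive index n + i*kappa
    r = (1 - N) / (1 + N)                 # vacuum-sample amplitude reflectance
    beta = 2 * np.pi * nu * N * d_cm      # complex phase across the slab
    t = (4 * N / (1 + N)**2) * np.exp(1j * beta) / (1 - r**2 * np.exp(2j * beta))
    return np.abs(t)**2

# Hypothetical example: a strong TO-phonon-like oscillator near 304 cm^-1
# plus a weak impurity band, for a 991 um thick slab (d given in cm).
nu = np.linspace(2.0, 700.0, 4000)
osc = [(2.6, 304.0, 2.0), (0.01, 235.0, 10.0)]   # (d_eps, nu_j, gamma_j)
T = slab_transmission(nu, lorentz_eps(nu, 9.6, osc), 991e-4)
print(f"Transmission at 50 cm^-1: {T[np.argmin(abs(nu - 50))]:.2f}")
```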
The power absorption coefficient is calculated as α = 2ωκ/c, where ω is the circular radiation frequency and c is the speed of light. The dielectric spectra (Fig. 2) reveal a complex structure with a set of absorption lines, the most intense one located at 304 cm −1 (see Table 1). The strongest absorption line, shown separately in the inset, is the well-known transverse optical (TO) phonon 21 . This mode is responsible for the low transmission values between ≈150 cm −1 and ≈500 cm −1 in Fig. 1a and for the corresponding characteristic dispersion in the reflectivity spectrum in Fig. 1b. There might be weaker absorption bands within the frequency interval of the reststrahlen band, 300-350 cm −1 , which are, however, not resolved in the spectra due to the dominant effect of the TO phonon absorption. The less intense absorption bands on the left and right wings of the phonon resonance are assigned to multi-phonon (summation or differential 22 ) processes (indicated in Fig. 2b with vertical arrows) or to electron/hole transitions involving impurities. Nominally pure InP crystals contain different unintentional impurities due to the growth processes, at concentrations up to 5·10 15 cm −3 . The most common unintentional impurities in SI InP are Si, S, Zn and C 15,17 . In order to trap the free carriers produced by these impurities, InP crystals are doped with iron, which creates acceptor levels in the mid-gap region of InP 7 . The activation energy of the Fe impurity is 640 meV 16 (frequencies above 5100 cm −1 ), and it is not expected to affect our measured spectra. The energies of the transitions related to the defects, i.e., indium and phosphorus vacancies and antisites, are higher than 100 meV 16 (frequencies above 800 cm −1 ) and are also not expected to show up in our spectra. However, the ionization energies of the shallow donors (Si, S, Sn, Ge) are about 7.65 meV 23 , and those of the shallow acceptors (Zn, C, Si, Mn, Be, Mg) are about 25-40 meV 13,24 . These transitions are likely to produce the absorption features seen in our spectra. The lowest-frequency absorption band, at about 30 cm −1 in the spectra in Fig. 2, was previously observed in ref. 25. In accordance with the temperature dependence of its oscillator strength, this band was associated with two-phonon absorption; however, the authors of ref. 25 did not assign the line to a specific phonon type and location. Koteles and Datars 26 predicted an absorption line associated with differential LO-TO phonon absorption that should be located either at the L or the "hex" (see below) points of the Brillouin zone and is expected to appear in the region 15-35 cm −1 . The authors of ref. 26 also observed a number of two-phonon summation and differential absorption lines which they attributed to phonons located at the X point, the L point, and a point located somewhere on the (111) hexagonal face of the Brillouin zone boundary, designated as "hex". The next absorption band, at about 64 cm −1 , could be a manifestation of the differential phonon absorption LA-TA2 (hex) 24 and/or an impurity transition. The energy of the residual donor impurities is reported to be about 7.65 meV (61.7 cm −1 ) 23 . The small difference between the observed position of the band and the literature value for the shallow-donor impurity transition can be explained by the dependence of the energy gap of InP on the donor content and by the mutual influence of the several impurities present within the crystal lattice. The key parameters of the absorption lines are presented in Table 1 (cf. Eq. (1)).
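The relations between the complex refractive index and the derived quantities used in this work (ε′ = n² − κ², ε″ = 2nκ, α = 2ωκ/c) can be bundled into a small helper; in wavenumber units the absorption coefficient reduces to α = 4πνκ. The sample values are illustrative only.

```python
import numpy as np

def derived_parameters(nu_cm, n, kappa):
    """eps', eps'', power absorption alpha (cm^-1) and loss tangent from
    n* = n + i*kappa, with nu_cm in cm^-1 (alpha = 2*omega*kappa/c
    = 4*pi*nu*kappa when nu is a wavenumber)."""
    eps1 = n**2 - kappa**2
    eps2 = 2.0 * n * kappa
    alpha = 4.0 * np.pi * nu_cm * kappa
    return eps1, eps2, alpha, eps2 / eps1

print(derived_parameters(100.0, 3.2, 0.01))   # illustrative numbers only
```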
(The inset of panel (a) in Fig. 2 shows the dielectric permittivity near the transverse optical phonon resonance.) The two lines observed at 79 and 88 cm −1 could also be attributed to differential phonon absorption. According to the data in ref. 26, the phonon positions are determined with an uncertainty of ±10 cm −1 , which explains the difference between the peak positions obtained in this study and in the literature (see Table 1). The line at 163 cm −1 apparently arises from some acceptor impurity. It is unlikely that this line is a manifestation of absorption due to a two-phonon or a compensated shallow-donor process, since its dielectric contribution and oscillator strength are about an order of magnitude higher than those of the neighboring lines. According to the literature data 24 , the Ge impurity has an ionization energy of 21 meV and could correspond to the line at about 163 cm −1 . At 235 cm −1 (29.1 meV), a relatively strong absorption band is registered. Due to its high intensity (oscillator strength), it is unlikely to be attributed to a multi-phonon absorption process. This line may be a manifestation of a shallow acceptor. Taking into account the position of the band, the most probable impurities are Mn, Si, Be or Mg. We note that this line may also be produced by Zn contamination. In Table 2 we present the values of the real and imaginary parts of the dielectric permittivity, the conductivity, the refractive index, the extinction coefficient, the loss tangent tan(δ) = ε″/ε′ and the power absorption coefficient α at selected frequencies. The values are listed for various frequencies ν between 2 cm −1 and 100 cm −1 . In Fig. 3 we show the same data for the real and imaginary permittivities, the absorption coefficient and the dynamical conductivity σ(ν) = νε″(ν)/2 (inset) in spectral form. It is seen from Fig. 3 that, at THz frequencies below ≈15 cm −1 , the measured absorption is larger than the absorption expected from impurity- or phonon-related processes (solid lines). The additional absorption at low THz frequencies should be related to the hopping conductivity of residual quasi-free charge carriers, which is described by Mott's dispersion σ(ν) ∝ ν s with s ≤ 1 27 and was previously observed in InP crystals (see, for example, ref. 28). The dotted line in the inset of Fig. 3 shows that the low-frequency contribution to the conductivity is well reproduced by Mott's expression with s = 0.9. Conclusion In conclusion, we have performed room-temperature measurements of the dielectric properties of semi-insulating (SI) InP:Fe crystals in the 0.06-21 THz spectral range. Unlike the previous studies of undoped and low-doped InP material, our data unveil the dielectric properties of intrinsic InP that are not screened by the strong free-carrier absorption. A number of absorption resonances are discovered and their origin is analyzed. The values of the dielectric parameters of SI InP:Fe at frequencies between 2 and 700 cm −1 (0.06 and 21 THz) are presented. The data reported here are expected to be useful in designing and improving the performance of numerous microwave and terahertz semiconductor devices based on SI InP:Fe.
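As a worked illustration of the Mott-law check described above, the sketch below fits σ(ν) ∝ ν^s as a straight line on logarithmic axes; the conductivity points are synthetic stand-ins for the measured low-frequency data, not values from this study.

```python
import numpy as np

# Fit sigma(nu) = A * nu**s (Mott's dispersion) as a line in log-log
# coordinates.  The data below are synthetic, generated with s = 0.9.

nu = np.array([2.0, 3.0, 5.0, 8.0, 12.0])    # cm^-1
sigma = 2.0e-4 * nu**0.9                      # hypothetical conductivity

s, logA = np.polyfit(np.log(nu), np.log(sigma), 1)
print(f"Mott exponent s = {s:.2f}")           # recovers ~0.9
```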
COMPOSITE SOLID FUEL: RESEARCH OF FORMATION PARAMETERS Involving local low-grade fuel resources in the fuel and energy balance is a topical research question. In this paper, the possibility of processing low-grade fuel into a solid fuel composite is considered. The aim of the work is to define the optimal parameters for the formation of the solid composite fuel. The research determined the dextrin content in the binder that yields the solid composite fuel with the highest strength. The drying temperatures for the various fuel forms were determined: 20-80 °C for pellet production and 20-40 °C for briquettes. Introduction One of the most important problems in modern energy is ensuring the energy security of countries in general and of regions in particular. Even in the Russian Federation, which is rich in natural and fuel resources, more than 45 % of regions depend on fuel delivery [1,2]. It is also important that fuel delivery is accompanied by transportation costs, which can increase the fuel cost several times. As a result, the energy supply of such regions is provided at higher electricity tariffs and under constant dependence on the integrity of traffic arteries. The solution of this problem is the involvement of local organic raw materials, which are often low-grade fuels. Such raw materials include peat, low-quality brown coal, biomass and household waste. The main problems of involving low-grade raw materials in the fuel and energy balance are their poor thermotechnical characteristics and low strength (brittleness and crumbliness). These problems lead to difficult extraction in winter and high operational costs for traditional combustion in furnaces with grate firing: the need for raw material drying, a high share of siftings, incomplete combustion and underburning, and, as a result, low boiler efficiency [3][4][5]. One of the most popular directions for recycling low-grade raw materials for energy use is the forming of fuel briquettes or pellets, called solid composite fuel (SCF). The solid composite fuel is prepared from low-grade raw materials by pressing [6], or from the products of their preliminary thermal processing, using a binder and forming equipment [7]. The forming parameters (component ratios and drying temperature) of solid composite fuel made from the thermal processing products of low-grade raw materials according to the technology of [6] are researched in this work. Material and methods The research results on the technology of fuel briquette production from low-grade raw materials, published in [8][9][10], and the aspiration to fully use the thermal recycling products in fuel briquette production lead to the necessity of binder addition at the briquette forming stage. Peat with the following thermal and technical characteristics was studied as the initial low-grade fuel: moisture content (W t r ) of 72.8 %, ash per dry mass (A d ) of 9.1 %, yield of volatile substances per fuel weight (V daf ) of 71.6 %, and lowest calorific value (Q i r ) of 3.1 MJ/kg. The thermal recycling of the peat was performed at a temperature of 400 °C. The weight yields of the thermal recycling products (relative to the dried weight of the raw material) were: carbon residue, 43.4 %; pyrolysis condensate, 26.2 % (including tar, 6.3 %); gas, 30.4 %. It turns out that the tar yield from the studied peat is insufficient to bind all of the carbon residue into solid composite fuel on the basis of its own binding ability, so the addition of an extra binder is necessary.
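The pyrolysis mass balance implied by the quoted yields can be checked with a few lines; the sketch below is simple arithmetic on the percentages given above, per kilogram of dried peat.

```python
# Product yields of peat pyrolysis at 400 C per kg of dried raw material,
# using the mass fractions quoted above: 43.4 % carbon residue,
# 26.2 % pyrolysis condensate (of which 6.3 % of the feed is tar),
# and 30.4 % gas.

dried_peat_kg = 1.0
carbon_residue_kg = 0.434 * dried_peat_kg
condensate_kg = 0.262 * dried_peat_kg
tar_kg = 0.063 * dried_peat_kg          # contained within the condensate
gas_kg = 0.304 * dried_peat_kg

assert abs(carbon_residue_kg + condensate_kg + gas_kg - dried_peat_kg) < 1e-9
print(f"residue {carbon_residue_kg:.3f} kg, condensate {condensate_kg:.3f} kg "
      f"(incl. tar {tar_kg:.3f} kg), gas {gas_kg:.3f} kg")
```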
Dextrin was studied as one of the most accessible binders, because binder costs in production are a very important factor determining the SCF cost, and the binder affects the thermal, technical and strength characteristics of the fuel. Dextrin is a polysaccharide obtained by thermal processing of potato or corn starch. It is used mainly for producing adhesives, as well as in the food and light industries and in foundry engineering. The binder was obtained by adding dextrin to the pyrolysis condensate heated to a temperature of 50-70 °C. Binders with 5 %, 10 %, 15 %, 20 % and 30 % dextrin content (by weight) were studied. The carbon residue was milled to particles of not more than 1 mm and then mixed with the binder. The following ratios of carbon residue to binder were considered: (1 : 4); (1 : 3); (2 : 3); (1 : 1); (3 : 2). It was established that at the ratios (1 : 1) and (3 : 2) the amount of binder is insufficient for forming: the mixture is too dry. The ratios (1 : 4) and (1 : 3) do not allow forming SCF due to a lack of viscosity: the mixture is too liquid because the binder content is too high. The optimal ratio of carbon residue to binder in this case is (2 : 3). Dilution with water is possible at a low yield of pyrolysis condensate. As a result, SCF with the compositions listed in Table 1 was obtained. Using the 5 % solution as a binder allows all of the semi-coke to be involved; at the same time, a homogeneous and sticky mixture was obtained that did not resist forming and retained the received shape. The 10 % and 15 % solutions showed similar results. When more concentrated dextrin solutions were tested, the forming mixture became inhomogeneous, consisting of layers, which negatively affects SCF formation because additional forming efforts are required. Briquette and pellet sizes for SCF production were chosen according to GOST 54248-2010 "Peat briquettes and pellets for heating purposes. Specifications": pellets of 20 mm in diameter and 20 mm in height, briquettes of 50 mm in diameter and 50 mm in height. The drying of the formed fuel (pellets) with different dextrin contents was carried out at a temperature of 20-40 °C. The results of mechanical compression tests of the SCF according to GOST 21289-75 "Coal briquettes. Methods for the determination of mechanical strength" are shown in Figure 1: the highest mechanical compressive strength (P max ) was observed in solid composite fuel with a dextrin content of 5-10 %, while a further increase of the dextrin share reduced the strength characteristics. From the practical and economical point of view [11], it is expedient for SCF production to use a binder with the minimal dextrin concentration, so a binder with 5 % dextrin content was used in the further experiments. Using pellets made with the binder containing 5 % dextrin in pyrolysis condensate as an example, the drying temperature of the SCF was experimentally studied in the range from 20 to 140 °C.
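For reference, the component masses following from the chosen proportions (carbon residue : binder = 2 : 3, with 5 % dextrin in the binder by weight) can be computed as below; this is illustrative arithmetic, not a prescription from the original work.

```python
# Component masses per kilogram of forming mixture for the optimal recipe:
# carbon residue to binder ratio (2 : 3), 5 % dextrin in the binder.

mix_kg = 1.0
residue_kg = mix_kg * 2.0 / 5.0        # 2 parts of 5
binder_kg = mix_kg * 3.0 / 5.0         # 3 parts of 5
dextrin_kg = 0.05 * binder_kg
condensate_kg = binder_kg - dextrin_kg

print(f"carbon residue {residue_kg:.2f} kg, dextrin {dextrin_kg:.3f} kg, "
      f"pyrolysis condensate {condensate_kg:.3f} kg")
```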
The moisture losses during the drying of the produced pellets are shown in Fig. 2a: moisture evaporates from the SCF intensively at high drying temperatures (100-140 °C). This led to the emergence and development of surface porosity. The mechanical tests (Table 2) show that all SCF samples resist fracture on dropping. The results of the compression tests show that pellets dried at 20-80 °C had the highest strength. An increase of the drying temperature reduced the pellet strength due to the emergence of pores on the surface. Noticeable splits appeared at temperatures above 120 °C (Fig. 2b); this indicates a very high drying speed, which sharply reduces the strength characteristics of the pellet (Table 2). However, the increase of the SCF size from pellets to briquettes changed the choice of the drying temperature. Drying of briquettes at 80 °C led to the emergence of visible splits on their surface (Fig. 3). The emergence of the splits is explained by the increase of the briquette diameter (2.5 times larger than that of the pellets), which leads to nonuniform heating. Briquettes are heated unevenly from the outside surface to the centre due to low thermal conduction. The heated outside surface of a briquette solidified; during continued drying, the moisture inside the briquette evaporated, and the vapor escaping through the solidified outer surface was accompanied by pore formation. At a high drying speed, the intense escape of evaporated moisture formed splits on the surface of the fuel briquette. Conclusions The experiments testing dextrin in pyrolysis condensate as a binder solution allow determining the ratio of components for SCF forming and the temperature of the subsequent drying. A dextrin content of 5-10 % in the pyrolysis condensate during forming ensured the maximal compressive strength of the fuel. From the practical and economical point of view, it is expedient to use a binder with the minimal dextrin concentration (5 %). The corresponding ratio of semi-coke to dextrin-based binder was (2 : 3). Dilution with water is necessary at a low yield of pyrolysis condensate. The drying temperature of the SCF was 20-80 °C for pellet production and 20-40 °C for briquettes. Higher drying temperatures led to the formation of pores and splits on the surface, reducing the mechanical strength of the SCF. The established interval for pellet production allows the producer to choose the drying temperature within this range. An increased temperature of up to 80 °C, which reduces the drying time, is preferable at high productivity. The lower temperature (20 °C) does not require additional costs for drying, but it increases the residence time of the fuel at this production stage and requires a larger area for drying the SCF. Fig. 2. The results of pellets drying: a) moisture loss of the pellet (G m ) versus time (τ) at different drying temperatures; b) a pellet dried at 140 °C. Table 1. Content of the produced SCF. Table 2. The mechanical strength of the SCF depending on the drying temperature.
Ensuring minimum duration of transient processes in switched voltage regulators with digital control This paper describes a solution suggested to minimize the finite transient duration of a switched voltage regulator (SVR) for step changes in load current. SVR control laws aimed at minimizing the transient time are synthesized, and the microprocessor-based architecture and operating algorithms of the control system are designed. The prototype of the SVR digital control unit is implemented on the field-programmable gate array integrated circuit Cyclone III EP3C120F780 using the NIOS II soft-processor core. Embedded software is developed to calculate the control pulse duration for the power switches in accordance with the synthesized control laws, taking into account the feedback loop signal. A case study of the prototype shows that it provides a duration of transients caused by a load current step change equal to 3-4 conversion periods at a frequency of 120 kHz. This confirms the suitability of the developed models, algorithms and control laws for ensuring the minimum transient duration. Introduction Electric and radio equipment requires a stable power supply with low voltage deviations in stationary and dynamic operating modes. Requirements for power supply quality, including voltage stability, are regulated by such industry branch standards as the National Aeronautics and Space Administration (NASA), European Space Agency (ESA) and European Cooperation for Space Standardization (ECSS) electrical power supply standards [1]. (This work was co-funded by the Erasmus+ programme of the European Union: Joint project Capacity Building in the field of Higher Education 573545-EPP-1-2016-1-DE-EPPKA2-CBHE-JP "Applied curricula in space exploration and intelligent robotic systems". The European Commission support for the production of this publication does not constitute an endorsement of the contents, which reflect the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein.) Voltage stabilization is carried out by switching voltage regulators (SVRs) [2,3]. Power quality assurance is especially difficult for systems with multiple power sources and/or energy consumers [4]. To enhance voltage stability, efficient SVR control laws are being developed [5][6][7]. These control laws are usually implemented by a pulse-width modulation (PWM)-based controller. Generally, PWM controllers are manufactured in the form of application-specific integrated circuits (ASICs) [8][9][10]. In addition to SVR control and voltage stability assurance, PWM controllers can perform a wide range of service functions, such as temperature control, circuit overload protection, etc. [11,12].
Problem statement Requirements for power supply quality, including voltage stability, are becoming increasingly stringent. One of the most promising ways to meet them is to increase the performance of the SVR, which allows reducing the output voltage deviations in dynamic operating modes. This is possible through developing new, more effective control laws. However, implementing an improved control algorithm with existing series-produced PWM controllers is usually impossible due to the mismatch between the internal controller structure and the new control law. For this purpose, small-scale integrated circuits and discrete components can be used, but this would decrease the reliability and increase the intrinsic power consumption of the SVR control unit, as well as its mass and size. Such negative changes in the characteristics of the SVR are unacceptable. The implementation of the SVR control unit on a microprocessor platform with a statically and dynamically reconfigurable architecture should be considered the most appropriate and up-to-date approach. Such a system can be developed as a "system-on-a-chip" on the basis of an ASIC, or of a partially or wholly field-programmable integrated circuit with an embedded microprocessor core. Reconfigurable microprocessor platforms allow performing both different SVR control algorithms and additional functions such as SVR self-diagnostics, telemetry, etc. In addition, the proposed approach improves the power supply system reliability in critical applications by including more than one SVR, and it allows implementing maximum power point tracking algorithms for renewable energy sources, etc. [13,14]. Of particular interest is a promising method for the synthesis of SVR control laws, described in [15], because it ensures high performance in output voltage stabilization. The proposed method includes:
- representation of the power circuits of an SVR with PWM control in the mode of small deviations by a relevant pulse-amplitude model describing the adjustable components of the process;
- synthesis of a control law using polynomial equations for designing pulse-amplitude modulation (PAM) systems [16];
- implementation of the designed control law in the SVR taking the features of PWM into account.
The proposed method allows designing a control law that provides the minimum transient time in a step-down SVR. The schematic diagram of such a step-down SVR, or buck converter, is shown in Figure 1. It is based on the conventional SVR structure [17]. The difference is that the control unit (CU) is implemented using a digital integrated circuit with an embedded microprocessor core. Control law synthesis It is proposed to use the SVR control law synthesized on the basis of the method described above [15] as optimal in performance, and an SVR implementing this control law will be called a high-performance SVR. By the adjustable components of the process, we mean the deviations of the variable parameters of the SVR from their values in the stationary mode, which are caused by an increment t p.adj of the control pulse duration relative to the stationary duration t p.fix .
Let us consider the selection of an adjustable component using the timing diagrams of the buck converter as an example. Figure 2 presents the variations with time of the load current I load , the inductor current I L and its stationary component I L.fix , the output capacitor current I C and its components (stationary I C.fix and adjustable I C.adj ), the adjustable component of the output capacitor voltage U C.adj , the conversion period T, the pulse width t p and its components (stationary t p.fix and adjustable t p.adj ), and the sawtooth voltage of the PWM U glv . The voltage applied to the inductor contains two components. The first is due to the fixed (stationary) duration of the control pulses t p.fix of the adjusting element. The second is due to the increment of the control pulse duration t p by the amount t p.adj relative to the stationary duration t p.fix . Useful information about the control process is contained only in the adjustable components. In the case of small deviations of the control pulse duration, the following condition is met: t p.adj << T, where T is the commutation cycle duration. In a system utilizing PWM, the adjustable component is given by voltage pulses u L.p (t) with duration t p.adj (t) that affect the inductor from the adjusting element. To pass from PWM to PAM, they are replaced by δ-functions that are equivalent in terms of the volt-second "area". This is acceptable in the case of small deviations. In [18], it is shown that the influence of the load conductance and the inductor internal resistance can be neglected in the pulse-amplitude SVR model when the optimal in performance control law is used. This allows us to represent the pulse-amplitude SVR model as an ideal pulse element connected in series with elements having the transfer functions 1/pL and 1/pC (Figure 3). The adjustable voltage component at the buck converter filter input is represented by bipolar pulses with amplitude u L.adj (t). The pulse width t p.adj (t) is equal to the deviation of the pulse width t p (t) from its stationary value t p.fix . This determines the input signal of the ideal pulse element for the adjustable components of the pulse-amplitude model of the buck converter as s(t) = u L.adj (t) t p.adj (t) δ*(t), (1) which corresponds to the proposed pulse-amplitude buck converter model, where δ*(t) = Σ m δ(t − mT) is a sequence of δ-functions with the period T. PWM regulation for the adjustable components in the domain of lattice-function transforms is determined by the equation U out *(p) = W 0 *(p) S*(p), (2) where U out *(p) and S*(p) are the lattice-function transforms of the adjustable components of the output voltage and the control signal, respectively, and W 0 *(p) is the discrete transfer function determined by the transfer function of the continuous part W cp (p) [15]. Only the adjustable components of the state variables carry useful information about the pulse-width control process. The constant component does not contain such information, and the pulsation of the state variables is an information disturbance. This means that control laws for the SVR should be synthesized on the basis of expression (2), which describes the process of pulse-width control using the adjustable components. Pulsations of the state variables should be taken into account when implementing the control law. The transformation of pulse-width regulation to pulse-amplitude regulation in the neighborhood of a stationary mode allows applying the well-developed mathematical apparatus of PAM [15] to the analysis and synthesis of SVRs with PWM control and to the development of pulse-based control laws. The block diagram of the synthesized pulse system is shown in Figure 3.
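A minimal numerical sketch of this small-deviation model is given below: each commutation cycle, the volt-second impulse of the control increment is integrated through 1/(pL) and then 1/(pC). The impulse amplitude, component values and increments are assumed example numbers, not the prototype's parameters, and losses and load are neglected as in the model.

```python
# Discrete recursion for the adjustable components in the pulse-amplitude
# model of Figure 3: an ideal pulse element followed by 1/(pL) and 1/(pC).
# Each cycle the control increment t_adj injects the volt-second impulse
# u_pulse * t_adj into the inductor; between impulses the lossless,
# unloaded model simply integrates.  All numbers are assumed examples.

T = 1.0 / 120e3        # commutation period, s
L = 150e-6             # filter inductance, H
C = 10e-6              # filter capacitance, F
u_pulse = 50.0         # amplitude of the adjustable voltage pulses, V

def cycle(i_adj, u_adj, t_adj):
    """Advance the adjustable inductor current and capacitor voltage by one T."""
    i_adj += u_pulse * t_adj / L      # impulse through 1/(pL)
    u_adj += i_adj * T / C            # integration through 1/(pC)
    return i_adj, u_adj

i_adj, u_adj = 0.0, 0.0
for t_adj in (0.5e-6, -0.2e-6, 0.0):
    i_adj, u_adj = cycle(i_adj, u_adj, t_adj)
    print(f"i_adj = {i_adj * 1e3:7.2f} mA, u_adj = {u_adj * 1e3:7.2f} mV")
```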
To synthesize the optimal in performance control law for the SVR, we use the third polynomial equation of synthesis [16]. This equation provides for the synthesis of the discrete transfer function W C (p) of a feedforward compensator according to the condition of the minimum finite duration of processes in a closed system under external influences and under deviations of the feedforward compensator parameters from their calculated values. It ensures the practical feasibility of the resulting transfer function (3), where L and C are the inductance and the capacitance of the output filter, respectively. Developed control methods of the switched voltage regulator The control unit for an SVR with PWM control has been designed in accordance with the synthesized transfer function (3), taking the specifics of pulse-width modulation into account. Two solutions are considered [15]. The first solution is control depending on the instantaneous values of the state variables. It implies a preliminary study of the relationship between the stationary pulsation and the increment of the control pulse duration t p.adj (t) formed by the modulator, with this dependence taken into account thereafter (at the next step) [18]. A study of this solution proves that a transient time of 3-4 conversion periods is achievable, which is close to the theoretical limit of 2 periods for a second-order system with pulse-amplitude modulation [18]. At the same time, the study shows a significant static error in the stabilization of the SVR output voltage. This is due to the fact that the gain of the feedback loop, defined by expression (3), is determined by the parameters of the SVR power circuit; therefore, its increase leads to a decrease in performance [19]. In this case, astatism of the output voltage is provided by introducing a second, integrating voltage feedback loop [20]. Thus, the input of the pulse-width modulator is u in.pwm (t) = u d (t) + u int (t). (4) It is the sum of two signals. The first, u d (t), provides the dynamic properties of the SVR. It is produced by the feedforward compensator according to equation (3). The second, u int (t), provides zero offset of the SVR output voltage. It is obtained by integrating the error signal e(t) of the SVR output voltage: u int (t) = K p ∫ e(t) dt, (5) where e(t) = u out (t) − U 0 , u out (t) is the SVR output voltage, U 0 is the reference voltage, and K p << 4K opt /(2r C C + T), where K opt is the feedback gain factor that determines the SVR dynamic properties, r C is the internal resistance of the output filter capacitor, and C is the capacitance of the output filter capacitor.
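The integral-loop gain constraint quoted above is easy to evaluate numerically; the component values and K opt below are hypothetical placeholders, since the prototype's actual parameters are not all given in the text.

```python
# Check of the constraint K_p << 4*K_opt / (2*r_C*C + T) for the
# integrating feedback loop.  All values are assumed examples.

T = 1.0 / 120e3     # commutation period, s
C = 10e-6           # output filter capacitance, F
r_C = 0.05          # internal resistance (ESR) of the filter capacitor, Ohm
K_opt = 2.0         # assumed performance-optimal feedback gain

K_p_bound = 4.0 * K_opt / (2.0 * r_C * C + T)
print(f"K_p must be well below {K_p_bound:.3g}")
```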
The second solution is based on extracting the adjustable components of the SVR state variables at the beginning of a commutation cycle by special sample-and-hold elements. In the case of SVR implementation on a microprocessor platform, this approach is preferable, considering that it requires only a single sampling of the signals at the beginning of each commutation cycle. In comparison, the first solution requires multiple analog-to-digital conversions of the information signals. According to the second solution, the PWM input signal component that describes the dynamic properties of the SVR and its increment over the commutation cycle T are defined by equations (6) and (7), in which U in is the SVR input voltage. The first difference of the SVR output voltage is determined as ΔU C.adj (mT) = U C.adj (mT) − U C.adj ((m − 1)T). (8) The PWM gain factor is described by the equation K M = Δt c.adj /ΔU in.M (mT) = T/U M , (9) where Δt c.adj is the increment of the control pulse duration, ΔU in.M (mT) is the increment of the PWM input signal, and U M is the amplitude of the PWM sawtooth voltage [19]. Existing microprocessor technologies allow implementing the second approach to the SVR design [20], in correspondence with the optimal in performance control law and with the use of an integrating voltage feedback loop under the control of the digital PWM controller. Minimization of transient time For the considered solution, minimization of the transient time is reduced to calculating the input signal of the PWM controller and producing from this signal the actual width-modulated control pulse for the SVR power switch, performed by a microprocessor unit. The PWM input signal is calculated using expression (10), which is obtained from equation (4) taking into account the discrete nature of the signal processing: U in.pwm (mT) = U d (mT) + U int (mT). (10) The signal U d (mT) that determines the SVR dynamic properties is defined by equations (6)-(8). The first difference of the adjustable component of the SVR output voltage, described by equation (8), can be expressed as ΔU C.adj (mT) = E(mT) − E((m − 1)T), (11) where E(mT) is the discrete value of the loop error signal of the SVR output voltage, which is determined, taking (5) into account, as E(mT) = U out (mT) − U 0 . (12) This replacement is acceptable because the SVR output voltage can be presented as the sum of the adjustable component U C.adj (mT), the fixed component U fix (mT) and the ripple component U ripple (mT), the latter two of which do not vary during a commutation cycle T. Therefore, when calculating the first difference ΔU C.adj (mT) using equation (11) and taking into account (12), the fixed component U fix (mT), the output voltage ripple U ripple (mT) and the reference voltage U 0 , offset relative to each other by the commutation cycle T, are mutually subtracted. The signal U int (mT), providing astatism of the SVR, can be calculated by the expression U int (mT) = K p T Σ k E(kT), (13) which differs from expression (5) in the replacement of integration by summation. The information signals necessary for SVR control are the voltages at the input and output of the SVR. Their sampling is carried out at the time instants mT. Digital conversion and further calculations take a time interval t s << T. In the considered case, the SVR power switch control modulates the leading edge of a control pulse. The delay in determining the control pulse width of the SVR power switch does not significantly affect the calculation result, because the calculations are performed in the time interval when the power switch is turned off.
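A compact sketch of the per-cycle computation of Eqs. (10)-(13) is shown below. The exact form of U d (mT) from Eqs. (6)-(7) is not reproduced in this text, so a simple gain on the error first difference stands in for it; all gains are assumed example values, and the sign convention depends on the modulator polarity.

```python
# Per-cycle control computation, cf. Eqs. (10)-(13): a dynamic term driven
# by the first difference of the error, a discrete integrator for zero
# static offset, and conversion to a pulse-width increment via the PWM
# gain K_M.  K_opt stands in for the U_d(mT) law of Eqs. (6)-(7), which is
# not reproduced here; all numbers are assumed examples.

U_0 = 5.0                 # reference voltage, V
K_opt = 2.0               # stand-in dynamic gain
K_p = 50.0                # integral gain, must satisfy the bound on K_p
K_M = 1.0e-6              # PWM gain, seconds per volt of modulator input
T = 1.0 / 120e3           # commutation period, s

e_prev = 0.0
u_int = 0.0

def control_cycle(u_out):
    """Return the control pulse-width increment t_adj for the next cycle."""
    global e_prev, u_int
    e = u_out - U_0                  # error signal E(mT), Eq. (12)
    u_d = K_opt * (e - e_prev)       # stand-in for Eqs. (6)-(8)
    u_int += K_p * T * e             # discrete integrator, Eq. (13)
    e_prev = e
    return K_M * (u_d + u_int)       # Eqs. (10) and (9)

print(f"t_adj = {control_cycle(5.02) * 1e9:.1f} ns")
```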
Figure 4 presents the block diagram of the signal processing sequence performed by the microprocessor control system. Computing unit C3 determines the signal U int (mT), which is responsible for the astatism of the SVR output voltage. Then, the adder calculates U in.pwm (mT) by summing U d (mT) and U int (mT) according to (10). After that, the PWM controller determines the control pulse width t c.adj according to the equation t c.adj = K M U in.pwm defined by (9). The obtained width takes effect during the next commutation period, relative to the time instant mT. The microprocessor control system synchronizes the falling edge of a control pulse with the time instant (m+1)T. Implementation of designed control laws and control algorithms To confirm the theoretical results obtained, a prototype of the buck converter with digital control was implemented. It is designed in accordance with the structure presented in Figure 1. The functional diagram of the buck converter is depicted in Figure 5. The prototype is designed to achieve dynamic characteristics close to the maximum possible, in combination with output voltage astatism in static operating modes. The control unit implements the algorithm for generating control signals for the buck converter power switch at the microprogram level. The algorithm is presented in Figure 7. In the calculations, the average inductance is assumed to be 150 µH. Embedded software for the microprocessor control unit has been developed. At the first stage, a software time delay before the first sample is introduced. This is done in order to let the transients of the digital and analog modules finish and to reach the operating mode of the power supply electronic components. During this delay, the variables are initialized, the control subsystems are set up, and the system constants are defined, such as the initial value of the control signal pulse duration (t c.st ), the pulse width, the modulation period of the signal, the value of the reference voltage U 0 , etc. Hereinafter, the time interval lines are implemented at the hardware-software level using programmable timer-counters. At the end of the software delay, a sync signal is received from the main timer. After receiving the sync signal, the signal for controlling the power switches is generated. In the first program cycle, the width of the power switch control pulse t c.adj (mT) is set to the initial value t c.st , which is necessary for starting the buck converter. Then, the input signals are synchronously converted by ADCs 1 and 2 (Figure 4) into the digital values U in (mT) and U out (mT), respectively. The ADCs operate in parallel; the beginning of the conversion is synchronized. The ADCs perform the conversion during a small offset interval τ < 0.25T, i.e., during the first quarter of the period. There may be some temporary mismatch in the completion of the conversion steps. However, the difference in digitization time is insignificant, of the order of 10 −12 seconds. This does not affect the results of the calculated time intervals, and the data are synchronized when received from the ADCs by the microprocessor unit in direct memory access mode.
At the end of the measurements, the correction values are calculated. Next, the duration of the power switch control pulse for the next program cycle, corresponding to the time instant (m+1)T, is calculated in accordance with formula (10).

Experimental results

To study the buck converter prototype, a test facility was developed. The functional diagram of the installation is depicted in Figure 8. The test facility includes: the buck converter prototype (1); a power supply with a supply voltage adjustable from 40 V to 110 V (2); a current sensor in the form of a resistive shunt R_sh with a resistance of 0.2 Ohm (3); a load unit consisting of permanently connected and periodically switched resistors (4); and a four-channel oscilloscope (5).

The results of studying the buck converter prototype are shown in Figures 9-11. The constant component of the load current is 1.4 A, and the step load current increment is 2.8 A. Figure 9 confirms that the control process meets the zero offset requirement, because the output voltage returns to its initial value after the transient process finishes. Further tests confirm the absence of an output voltage offset even when the buck converter input voltage varies over the range mentioned above. Figure 10 and Figure 11 show the step load current decrease and step load current increase processes on a larger scale, respectively. The transient time is approximately 5-6 commutation cycles for the step load current decrease (Figure 10), which is close to the minimum possible transient time. The transient time is approximately 6-8 commutation cycles for the step load current increase (Figure 11). The increase in transient time relative to its minimum value is explained by the imposed limitation of the maximum control pulse width at 0.75T. The pulse width is limited because additional time is required for demagnetizing the transformer-based inductor current sensor, for analog-to-digital conversion of the information signals, and for calculating the width of the buck converter control pulse.

Conclusion

In this paper, a solution is proposed for minimizing the finite duration of transients in a step-down switched voltage regulator (buck converter) under step changes of the load current. The solution implements a method of control law synthesis which, according to preliminary studies, should provide high performance in stabilizing the SVR output voltage. The proposed method represents the SVR power circuits in the mode of small input voltage deviations by a relevant pulse-amplitude model for describing the adjustable components of the process. The synthesis of the SVR control laws is performed using polynomial equations for designing PAM control, and the designed control law is implemented in the SVR taking PWM features into account. This method allows designing a control law that provides the minimum transient time in a step-down switched voltage regulator. A microprocessor platform with a statically and dynamically reconfigurable architecture is suggested as the most suitable hardware platform for the SVR control unit. The prototype of the buck converter with a digital control unit is implemented using an FPGA circuit with an integrated processor core.
Experimental study of the prototype proves the efficiency of the proposed method. In the case of a step change of the load current, the described solution provides a minimum finite transient duration of 5-6 conversion periods, approaching the lowest theoretically possible limit. The experiments show that single sampling and analog-to-digital conversion of the input signals, and calculation of the correction values of the control pulse within a commutation cycle T, release significant time resources of the microprocessor control unit. The released time can be used to diagnose the buck converter, to distribute the load current between several buck converters operating in parallel on a common load, and for other maintenance functions, which is an additional advantage of the proposed buck converter control method. The use of switching power supplies with digital control units implemented by means of high-performance microprocessor systems provides significant strategic advantages over analog systems.

Figure 1. The schematic diagram of the buck converter.
Figure 2. Timing diagrams of the buck converter: variations with time of the load current I_load; the inductor current I_L and its stationary component I_L.fix; the output capacitor current I_C and its components, stationary I_C.fix and adjustable I_C.adj; the adjustable component of the output capacitor voltage U_C.adj together with the fixed component U_fix(mT) and the ripple component U_ripple(mT), which do not vary during a commutation cycle T; the conversion period T; the pulse width t_p and its components, stationary t_p.fix and adjustable t_p.adj; and the sawtooth voltage of the PWM, U_glv.
Figure 4. The block diagram of the signal processing sequence performed by the microprocessor control system.
Figure 5. The functional diagram of the buck converter prototype.
Figure 7. The algorithm of the embedded software for the buck converter controller.
Figure 8. The functional diagram of the test facility.
Figure 11. Step increase in output current.
Characterising hydrothermal fluid pathways beneath Aluto volcano, Main Ethiopian Rift, using shear wave splitting. Journal of Volcanology and Geothermal Research : Geothermal resources are frequently associated with silicic calderas which show evidence of geologically recent activity. Hence the development of geothermal sites requires an understanding both of the hydrothermal systems of these volcanoes and of the deeper magmatic processes which drive them. Here we use shear wave splitting to investigate the hydrothermal system at the silicic peralkaline volcano Aluto in the Main Ethiopian Rift, which has experienced repeated uplift and subsidence since at least 2004. We make over 370 robust observations of splitting, showing that anisotropy is confined mainly to the top ∼3 km of the volcanic edifice. We find that up to 10% shear wave anisotropy (SWA) is present, with a maximum centred on the geothermal reservoir. Fast shear wave orientations away from the reservoir align NNE-SSW, perpendicular to the present-day minimum compressive stress. Orientations on the edifice, however, are rotated NE-SW in a manner we predict from field observations of faults at the surface, provided that fluid pressures are sufficient to hold two fracture sets open. These fracture sets may be due to the repeated deformation experienced at Aluto and may have been initiated during caldera formation. We therefore attribute the observed anisotropy to aligned cracks held open by over-pressurised gas-rich fluids within and above the reservoir. This study demonstrates that shear wave splitting can be used to map the extent and style of fracturing in volcanic hydrothermal systems. It also lends support to the hypothesis that deformation at Aluto arises from variations of fluid pressures in the hydrothermal system. These constraints will be crucial for the future characterisation of other volcanic and geothermal systems, in rift systems and elsewhere.

Background, regional setting and Aluto

Geothermal resources worldwide are very frequently associated with active or recently-active volcanoes (e.g., Glassley, 2010). In these locations, a magmatic heat and fluid source is present and an accompanying hydrothermal system has developed (e.g., Grant and Bixley, 2011), which permits the extraction of heat for immediate use or for electricity generation. For reasons of public benefit and scientific understanding, therefore, it is vital to understand both how such resources may have developed through time as volcanic systems, and the current and future state of the geothermal system. Locations where recently-active volcanoes can be studied in terms of their geothermal potential and volcanic history include the world's major rift systems, such as Iceland (Arnórsson, 1995) and northern New Zealand.

Figure 1. (b) Regional setting: the white arrow shows the present-day extension direction (Saria et al., 2014); brown lines here and in (c) are faults mapped by Agostini et al. (2011); the location of panel (c) is indicated by a red rectangle. (c) Earthquakes and seismic stations at Aluto: circles are event locations from Wilks et al. (2017) for earthquakes which yielded at least one splitting measurement of quality '2' or better, or a null, coloured by hypocentral depth below sea level (the median elevation in this plot is 1924 m above sea level).
Triangles are seismic stations, coloured by the number of good shear wave splitting measurements made at that station. Large black double-headed arrows show the fast orientations of Keir et al. (2011), plotted at the event-station midpoint, with the location of the respective seismic station shown by a nearby inverted black triangle. The black hatched square shows the approximate location of the Aluto-Langano geothermal power plant, which defines the 'centre' of the study region in this work. Black lines are Aluto faults mapped by Hutchison et al. (2015); note particularly the elliptical caldera fault inferred from fumarole locations. (d) North-south section showing the earthquakes used in this study as circles colour-coded by depth, and the entire catalogue of Wilks et al. (2017) as grey circles. The horizontal axis shows elevation above sea level (asl).

by fluids moving along recent (<2 Ma) faults which accommodate present-day spreading. These NNE-SSW-striking faults were identified by Mohr (1962) as the Wonji Fault Belt (WFB), and have been associated with the emplacement of dykes (see Keir et al., 2015 and references therein). Further signs of Aluto's continuing magmatic activity have come from geodetic observations, which show up to 15 cm of vertical displacement over periods of just 6 months centred on Aluto's edifice (Biggs et al., 2011; Hutchison et al., 2016a). In common with other MER volcanoes, this has been occurring since at least 2004, the earliest point at which remote observations have been made (Biggs et al., 2011). Despite the interest in the role of magmatism in rifting in the MER, and the hazards arising from MER volcanoes (e.g., Vye-Brown et al., 2016), little geophysical monitoring is present. This study uses a network of seismometers deployed as part of the Aluto Regional Geophysical ObservationS (ARGOS) project, which included a magnetotelluric (MT) survey (Samrock et al., 2015), global navigation satellite system geodesy, plus geological, petrological and geochemical studies (Hutchison et al., 2015; Gleeson et al., 2017; Hutchison et al., 2016a,b,c; Braddock et al., 2017), alongside the seismic network (Wilks et al., 2017) used here.

Shear wave splitting

Here, we investigate the structure of the volcano-geothermal system beneath Aluto using shear wave splitting in waves from local volcano-tectonic (VT) earthquakes. Splitting occurs when a shear wave travels through an anisotropic medium and the energy partitions into two orthogonally-polarised waves with different velocities (e.g., Musgrave, 1954). The polarisation of the faster shear wave, φ, and the delay time between the arrivals of the two shear waves, dt, can be measured and provide information about the style of anisotropy in the subsurface. Shear wave splitting is increasingly used to investigate volcanological processes (e.g., Gerst, 2004; Baird et al., 2015, amongst many), often by assuming that temporal changes in the splitting parameters (φ and dt) reflect changes in stress within the volcano. This is on the basis that the subsurface contains microcracks oriented in all directions, and that those microcracks which are oriented favourably open up in the direction of the minimum horizontal compressive stress (e.g., Crampin and Booth, 1985).
In this way, φ will tend to point along the regional maximum horizontal compressive stress, since fractures will be elongated in this direction. Lithological layering or elongated inclusions (e.g., melt pockets on the metre scale) will also induce splitting if they are smaller than the seismic wavelength. In this case, the interpretation of φ may be more closely related to pre-existing structure, and temporal changes may not be visible (Boness and Zoback, 2006). In either case, the amount of splitting along a given ray path is determined by the density and aspect ratio of the cracks or fractures, amongst many other parameters. Because dt depends on the distance travelled by a shear wave, shear wave anisotropy (SWA) is used, which is the amount of splitting normalised by velocity and ray path length.

Previous observations

Anisotropy within the MER has been explored using teleseismic SKS waves (e.g., Ayele et al., 2004; Kendall et al., 2005), but these measurements average out structure over the entire upper mantle and crust. Keir et al. (2011), however, made two measurements of splitting in our study region (Fig. 1) from local events. The northern observation was made at station E77, located near Lake Ziway, from an event located beneath the lake. The southern observation was made at E79 from an event beneath Lake Langano. They found φ to lie along NNE-SSW directions, using events at 8 km depth, with dt = 0.12 s, corresponding to SWA ≈ 6%. They interpreted their results as due to aligned fractures and faults within dykes which have been emplaced from the Aluto volcanic centre during the Quaternary period. However, no surface expression of dyking has been recorded in these specific areas (Hutchison et al., 2015; WoldeGabriel et al., 1990; Kebede et al., 1984), leaving room for alternative explanations of the observed splitting. Here we investigate shear wave splitting and fracture-induced anisotropy, for the first time at Aluto, using a much richer dataset from a dense seismic monitoring network.

Data and event locations

The data used in this study are as described by Wilks et al. (2017). They are recordings from 18 three-component Güralp 6TD broadband seismometers located around the edifices of the Aluto and Corbetti volcanoes, with minimum, mean and maximum station spacings at Aluto of about 1, 10 and 20 km respectively (Fig. 1). The network operated from January 2012 to January 2014. Wilks et al. (2017) located 2162 earthquakes during the network's operation, of which 1361 have been termed 'Aluto events', occurring within 15 km of the centre of the edifice. Events were detected by manual inspection of seismograms and picking of P and S wave arrival times, which were in turn inverted for the events' onset times and locations. We use these recorded S wave arrivals, which number 1454, in this study. Events were located in two main depth groups: between the surface and sea level (depth of 0 km; 56% of events), and from 4 to 10 km below sea level (bsl; 23%). A further 13% of events lie between 0 and 4 km bsl, with a small proportion (7%) deeper than 10 km. Spatially, events cluster around the edifice, which reflects both detection bias and a true propensity for seismicity to occur along the Artu Jawe fault system. These results are supported by more recent joint hypocentre-velocity inversions (Wilks et al., submitted).

Shear wave splitting analysis

We use the 'minimum-eigenvalue' method of Silver and Chan (1991), as modified by Walsh et al. (2013)
and implemented in the SHEBA program (Wuestefeld et al., 2010), to retrieve the splitting parameters which best linearise the particle motion on the horizontal components of the seismograms. Delay times of the slow shear wave relative to the fast of up to 0.4 s were permitted in the grid search. To minimise manual processing of the data, we used the multiple window analysis method of Teanby et al. (2004) to automatically choose time windows around the arrival based on 100 trial analysis windows. Initially, window start times were trialled between 0.22 and 0.04 s before the S wave onset, with end times between 0.15 and 0.27 s after the S wave pick. These values were determined by manual picking of a subset of the data. All results were manually inspected and classified as quality '1' (the best), '2', '3' or 'null' (clearly no splitting present). In some cases, analysis windows were repicked manually where the automatic times did not properly contain the arriving S wave energy, and the results were updated. Quality '1' results satisfied the following criteria: 1σ uncertainty in fast orientation Δφ ≤ 10°; 1σ uncertainty in delay time Δ(dt) ≤ 0.01 s; signal-to-noise ratio SNR ≥ 5; clearly elliptical particle motion before correction with the optimum splitting operator; and clearly linear particle motion thereafter. Quality '2' results satisfied the same criteria, but with Δφ ≤ 20°, Δ(dt) ≤ 0.02 s and SNR ≥ 3, whilst for quality '3' results Δφ ≤ 30° and Δ(dt) ≤ 0.04 s. The latter were retained only where the results were unambiguous and clearly positive, but suffered large uncertainties because of the limited frequency content of the seismic recording in the analysis window (see e.g., Walsh et al., 2013). We recommend that category '3' results be used only in combination with higher-quality results at the same station because of the large stated uncertainties. Null results were classified when particle motion was clearly linear before analysis and SNR ≥ 5.

We limited our analysis to event-station pairs whose straight-line angle of incidence was less than 55° from the vertical. Although this appears to be outside the shear wave window within which shear wave polarisations remain unaffected by interaction with the free surface (e.g., Booth and Crampin, 1985), this angle is an overestimate because we do not perform ray tracing on each path. P- and S-wave velocities in the near surface are believed to be approximately 3.3 and 1.9 km s⁻¹ respectively (Daly et al., 2008; Wilks et al., 2017), and likely lower still in the top few km, so significant upward bending of the ray is expected and this range of values ensures accurate shear wave splitting observations. Manual inspection of a number of seismograms at the largest straight-line incidence angles showed no evidence of a free-surface effect on the shear wave polarisation. This was diagnosed by converting the traces via the free-surface transform (Kennett, 1991) into P, SV and SH components for a variety of assumed subsurface velocities at the predicted ray slownesses, and checking for the removal of any significant P-S coupled energy.

Result quality

Of the 1454 S-waves in the catalogue, 1207 yielded splitting parameters. About 13% were quality '1', 14% quality '2' and the remainder, 73%, quality '3'. Just 26 results were clearly null. The locations of the earthquakes giving reliable (qualities '1', '2' and 'null') results and the entire event catalogue (Wilks et al., 2017) are shown in Fig. 1.
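The numerical thresholds above map naturally onto a small filter. The sketch below combines them with the SNR measure defined in the next subsection; the particle-motion criteria were judged by eye in the study and are not reproduced here, and all function names and the example values are our own.

```python
import numpy as np

def snr(north, east, fs_hz, s_pick, win_start, win_end):
    """Peak horizontal vector amplitude in the analysis window over the
    peak in a 1 s pre-S noise window; picks are sample indices."""
    amp = np.sqrt(np.asarray(north) ** 2 + np.asarray(east) ** 2)
    a_w = amp[win_start:win_end].max()
    a_n = amp[max(0, s_pick - int(fs_hz)):s_pick].max()
    return a_w / a_n

def quality(dphi_deg, ddt_s, snr_val):
    """Quality class from 1-sigma uncertainties and SNR."""
    if dphi_deg <= 10 and ddt_s <= 0.01 and snr_val >= 5:
        return "1"
    if dphi_deg <= 20 and ddt_s <= 0.02 and snr_val >= 3:
        return "2"
    if dphi_deg <= 30 and ddt_s <= 0.04:
        return "3"      # retain only alongside better results at a station
    return "reject"

print(quality(8.0, 0.008, 6.2))    # -> '1'
```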
Fig. 2 shows the distribution of signal-to-noise ratio (SNR) for all acceptable ('1'-'3' and 'null') shear wave splitting measurements made at all stations. We define the SNR as

SNR = A_w / A_n,

where A_w is the peak amplitude on the horizontal components within the shear wave splitting analysis window (A = (A_N² + A_E²)^(1/2) for the two horizontal components, N and E), and A_n is the peak amplitude in the 'noise window' before the S-wave onset. Here we use a noise window of length 1 s. This is a simpler measure than that of Restivo and Helffrich (1999).

Fig. 2. Observed signal-to-noise ratio (SNR) for all S-waves at all stations (blue bars), and the SNR of only those yielding good shear wave splitting measurements (orange), using the SNR measure defined in Section 3.1.

SNR is also shown using the measure of Restivo and Helffrich (1999), SNR_RH, in which R_max is the maximum absolute amplitude of the 'R' component within the shear wave splitting analysis window, σ_T is the standard deviation of the amplitudes on the 'T' component within the window, and both quantities are measured on the traces after correction by the best-fitting splitting operator recovered in the analysis. The R component is parallel to the incoming wave polarisation direction at the station, and T is perpendicular to it. The SNR we use here can be computed without first performing shear wave splitting analysis, which for high-frequency data can be an expensive operation. It also does not depend on the successful retrieval of accurate splitting parameters. We therefore propose our measure as a potential pre-filter before performing the analysis. We note that, using both SNR and SNR_RH, measurements which were manually assigned as 'acceptable' were possible when the SNR was at least 2 to 3 times the mean of the Poisson distribution which fits the observed SNR for all signals. Although outside the scope of this study, this may be of interest for future studies into the automatic determination of useful shear wave splitting measurements.

Strength and depth of anisotropy

The average splitting delay time is dt = (0.11 ± 0.06) s. We compute shear wave anisotropy (SWA) according to Thomas and Kendall (2002), assuming straight-line ray paths between the event and station, and taking an average velocity along the ray from the 1D model used by Wilks et al. (2017). This leads to values of SWA of up to 15%, with a median of 1.2%. (Fig. S1 in the Supplementary information shows the distribution of SWA.) These are likely mild overestimates because the rays will not be straight, though this is less of a problem for the shallower events where less bending will occur. Even for the deepest events with the largest epicentral distances, the overestimation in SWA due to ray bending is only about 1%. The uncertainty in SWA arising from inaccuracy in the velocity model is estimated to be less than 1%, based on bootstrap modelling in which we randomly perturbed the velocity model at each station separately and recalculated SWA. There is no significant trend of dt with either event depth or straight-line distance between the event and station (Fig. 3a-b). SWA also falls off rapidly with event-station distance. Both of these features show that the bulk of the splitting is accrued near the surface, or at least that any coherent anisotropy is present from the surface only to a shallow depth. If coherent splitting occurred throughout the volume sampled by the waves, then SWA would be constant with distance, and dt would increase with distance.
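A minimal sketch of the normalisation used to convert delay times to SWA follows, assuming straight-line paths and an average S velocity along the ray. The exact expression of Thomas and Kendall (2002) is not reproduced in the text, so the standard percentage form is assumed here.

```python
def swa_percent(dt_s: float, path_km: float, vs_km_s: float) -> float:
    """Percent shear wave anisotropy for one event-station pair:
    delay time scaled by average S velocity over path length."""
    return 100.0 * dt_s * vs_km_s / path_km

# e.g. the average dt of 0.11 s over a 6 km path at <v_S> = 1.9 km/s:
print(round(swa_percent(0.11, 6.0, 1.9), 1))   # -> 3.5
```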
The lower limit of the anisotropic region is likely around 3 km bsl. This is constrained by the fact that, for the events with the steepest incidence angles (≤20° from the vertical), dt remains approximately constant with event depth, and the shallowest events giving these observations are at ∼3 km bsl. We observe no clear trend in dt or SWA with ray azimuth or angle relative to the vertical. Under the assumption that all splitting is accrued above 3 km bsl, the mean shear wave anisotropy, SWA, increases to 2.8% for all cases in Table 1. The distribution of SWA values with and without this assumption is shown in Supplementary Fig. S1. SWA in the upper crust is commonly thought to range between 0 and 5% when caused by cracks (e.g., Crampin, 1987), and the majority (97% of the total) of the values we find lie in this range. However, a small fraction (<1% of the total) of our SWA observations exceed 10%. Although large, similar and indeed larger values have been inferred seismically before, for instance in carbonate hydrocarbon settings (e.g., Potters et al., 1999, who find values up to 20%).

With the supposition that anisotropy is concentrated near the surface, we search for evidence of lateral trends in SWA for events shallower than 3 km bsl. By restricting ourselves to the shallower events for this exercise, we minimise the scatter in SWA arising from paths which spend only a portion of their time within the region responsible for most of the splitting we observe. Fig. 4 shows the variation of SWA for these events, whose median value is 4%. We take a grid of points separated by 250 m in the northerly and easterly directions and include all event-station midpoints within a 2 km radius of each grid point. We then average the SWA at that point, rejecting any points where fewer than 3 observation midpoints occur within that bin. We only include ray paths with lengths less than 10 km, and only those defined as 'Aluto' events, whose epicentres are within 15 km of the 'centre' of Aluto (defined by the geothermal power plant; Fig. 1). A very similar picture is produced when using grid sizes between 100 m and 1 km, a search radius between 1 km and 5 km, a minimum number of observations per bin between 2 and 5, or ray paths less than 5, 10 or 20 km in length.

Although caution is needed in interpreting this figure because of the uneven azimuthal coverage of ray paths and the trade-off between depth, path length and SWA, it appears that these events experience stronger splitting beneath the volcanic edifice, with mean SWA of up to ∼4%, whilst paths outside the edifice have values of 1 to 2%. The central high-SWA feature is robust to varying the selection criteria across the range of distances, path lengths, incidence angles and grid sizes described above. It is not clear whether the high-SWA region in the southwest of Fig. 4 is significant, but as events in this region have the longest ray paths on this diagram, it is unlikely to be caused by high apparent SWA due to short path lengths. This region is in fact known for hot springs (Kebede et al., 1984; Hutchison et al., 2015; Braddock et al., 2017), which may suggest an increased contribution to anisotropy from fluids near the surface. A second outlying high-SWA region in the northeast is constrained by only two observations, and we accordingly do not seek to interpret the signal there.
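The lateral averaging just described can be sketched as follows; the regular-grid implementation and variable names are ours, but the 250 m spacing, 2 km search radius and 3-observation minimum follow the text.

```python
import numpy as np

def grid_swa(mx, my, swa, spacing=250.0, radius=2000.0, min_obs=3):
    """Average SWA (%) of event-station midpoints (mx, my, in metres)
    within `radius` of each node of a regular grid."""
    xs = np.arange(mx.min(), mx.max() + spacing, spacing)
    ys = np.arange(my.min(), my.max() + spacing, spacing)
    out = np.full((ys.size, xs.size), np.nan)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            near = np.hypot(mx - x, my - y) <= radius
            if near.sum() >= min_obs:          # reject poorly-sampled nodes
                out[i, j] = swa[near].mean()
    return xs, ys, out

rng = np.random.default_rng(0)                 # synthetic demo data
mx, my = rng.uniform(0, 5e3, 200), rng.uniform(0, 5e3, 200)
xs, ys, grid = grid_swa(mx, my, rng.uniform(1.0, 10.0, 200))
print(grid.shape, np.nanmean(grid).round(2))
```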
Fig. 4 also shows several independent constraints on the geothermal system. Firstly, the pink circle shows the approximate location and extent of the clay cap atop the geothermal reservoir, inferred from the mineralogy in boreholes (Gianelli and Teklemariam, 1993) and from the subsurface resistivity obtained by inversion of magnetotelluric data (Samrock et al., 2015). It is clear that the region of high SWA encompasses the clay cap and is centred at a similar location. Secondly, we show in dashed purple lines the −25% contour of V_P/V_S (=1.32) from the local tomographic model of Wilks et al. (submitted) at 3 km bsl. There is again a large overlap between the regions defined by this V_P/V_S contour and that of SWA > 4%. Finally, we show the E-W-elongated caldera ring fault inferred from volcanic vent locations (Hutchison et al., 2015). In this case, the correlation between high values of SWA and the inferred ring fault is striking.

Fast orientation trend

Fast orientations for the whole dataset are shown in Fig. 5. The overall mean fast orientation, φ = (13 ± 7)°, is not significantly different (at the 95% confidence limit) from either the trend of seismicity (∼15°) or the local trend of the Wonji fault belt (∼12°) as found by Agostini et al. (2011). φ is also approximately perpendicular to the plate-spreading direction of ∼100° (e.g., Bendick et al., 2006; DeMets et al., 2010; Saria et al., 2014). We investigate the lateral variation in φ (Fig. 6). We show polar histograms for numbered bins of side length 0.4°, where the observations have been grouped laterally by the event-receiver path midpoint. This presentation significantly reduces the scatter, by not assuming that the contribution to the splitting comes primarily from the source or receiver location. The bin size is chosen to permit a sufficient number of observations in each bin, such that trends and multimodality can be seen. Bins south of the edifice (numbered 16-30) generally have modal φ values close to north, whilst those above the edifice and to the north (numbered 1-15) have modal φ ∼ NE. The difference between these two sets of orientations is significant at the 95% level when tested using the U² statistic (Watson, 1961). The exception to this dichotomy is bin 8. Here, the rays whose midpoints are contained within this bin show a preponderance of fast orientations ∼ENE-WSW. This trend arises primarily from events to the north and northeast of the stations on the edifice, where ray paths traverse the northern part of the edifice. This corresponds to the location of an inferred caldera ring fault mapped by Hutchison et al. (2015), which here strikes approximately ENE-WSW, parallel to these fast orientations.

Cause of anisotropy

We have shown that shear wave anisotropy of up to 10% in the top ∼10 km beneath Aluto is concentrated in the top few km. This correlates both with the increased rate of seismicity and with elevated b-values (up to b = 2.5) above about 1 km below sea level (Wilks et al., 2017). High values of b in the Gutenberg-Richter relationship (Gutenberg and Richter, 1944) imply a preponderance of smaller-magnitude events compared to the CMER regional trend of b = 1.1 (Keir et al., 2006), which in turn suggests that rocks above sea level at Aluto are significantly weaker, or that pore pressures are significantly elevated, or both.
Neither of these inferences is surprising, given that significant outgassing of CO₂ has been measured at the edifice (Hutchison et al., 2015), related to extensive fumarolic activity (Hutchison et al., 2015; Kebede et al., 1984; Braddock et al., 2017), and given the control imposed on this by significant faulting across the volcano (e.g., Kebede et al., 1984; WoldeGabriel et al., 1990; Hutchison et al., 2015). It therefore seems highly likely that, in common with most other crustal settings, the significant seismic anisotropy we see is due to sub-seismic-wavelength cracks in the brittle upper crust (e.g., Crampin and Booth, 1985), held open in this case by supercritical or gas-rich fluids (see Section 4.4). It seems likely here that the regional stress field and the orientation of pre-existing fractures combine to preferentially align the open fractures.

Fig. 6. The radial scale of the histograms is saturated such that the minimum radius represents a count of five in any bin, and the maximum is not constrained, meaning that the least populous histograms appear intentionally smaller. The dashed line shows the caldera ring fault inferred by Hutchison et al. (2015). Supplementary Table S1 gives bin edge coordinates and mean orientations. Notice that the trend shifts from NNE-SSW to NE-SW north of 7.76°N (bins 1-15), and that significant deviations from the regional trends occur in bin 8.

Off-edifice fast orientations

The fast orientations revealed in this study show that paths which do not sample the edifice itself, to the south and extreme north of the study area, have a NNE-SSW trend (φ = (11 ± 2)° for event-station midpoints which lie further than 5 km from the centre of the edifice). This orientation is parallel to the lineaments of the 'Wonji Fault Belt' (Mohr, 1962; Fig. 5), a set of smaller-offset faults inside the rift which are thought to have accommodated strain from 2 Ma, especially strain arising from magma emplacement (Ebinger and Casey, 2001). The WFB orientations also reflect the current strain field (Bendick et al., 2006; DeMets et al., 2010) and the dominant trend of T-axes in focal mechanisms of the events used in this study (Wilks et al., 2017). Off-edifice lineaments of fissures and craters mapped at the surface by Hutchison et al. (2015) are also parallel to the WFB. Although the overall trend as described holds, there are potential variations in the modal off-edifice φ values. Boxes 24, 18 and 29 may show the influence of the Harorese Rhomboidal Fault System (Corti, 2009, and references therein), where the WFB and older border fault systems are believed to interact. Here, a small number of φ values lie west of north, which coincides with the trend of faults mapped in this region (Acocella, 2007, and Fig. 1). All these lines of evidence suggest that the fast orientations we observe for paths mostly travelling away from the edifice are caused by the regional stress field allowing cracks to open preferentially with their long axes between azimuths of −5° and 25° clockwise from north.

On-edifice fast orientations

In contrast to the off-edifice splits, those observations with paths travelling primarily through the volcanic edifice have a NE-SW trend (φ = (29 ± 2)° for paths with midpoints less than 5 km from the centre), parallel to the faults apparent to the east of the study region and called 'border faults' by Agostini et al. (2011) (Fig. 5).
These faults are believed to have been primarily active during initial breakup, at 6-8 Ma in the CMER (e.g., WoldeGabriel et al., 1990), and are oblique to the present-day spreading direction. The trend of the border faults is not, however, reflected in any major field observations on the edifice itself. Mapping of Aluto reveals that the primary lineaments are still sub-parallel to the WFB, including the Artu Jawe fault zone (AJFZ; Fig. 4) (Kebede et al., 1984; Hutchison et al., 2015, 2016c), the major fault within the structure, which seems to control the surficial pattern of outgassing and probably provides the primary pathway for geothermal fluids to circulate from depth. Although there is a lack of evidence for border-fault-parallel structures, Hutchison et al. (2015) find that craters and fissures, as well as volcanic vents, have a second modal azimuth of ∼90° (east-west). This matches the long axis of the caldera ring fault mapped at the surface and inferred from vent locations (Kebede et al., 1984; WoldeGabriel et al., 1990; Hutchison et al., 2015, 2016c). Based on this observation, we suggest that the rotation of φ is due to the interaction of two primary fracture sets in the shallow (<1 km bsl) subsurface.

To show that this mechanism might explain the observed rotation of φ, we perform simple modelling of the effect of two vertical fracture sets on the anisotropy of an otherwise isotropic medium, using the theory of Grechka (2007) and the expressions of Hudson (1981) to compute the excess compliances associated with dry (i.e., gas-rich) ellipsoidal cracks. This describes a fractured medium in terms of the dimensionless fracture density of ellipsoidal cracks,

n = N a³ / V,

where N is the number of cracks in the volume V, and a is the mean crack radius. For details of the modelling, the reader is referred to Verdon et al. (2009), noting that in this case, because we show values of SWA and φ for vertical rays, the isotropic background velocities have no effect on our results. We impose on the background medium one set of fractures parallel to the WFB and to the off-edifice φ, with azimuth 12°, and a second set with azimuth 90°. The fracture density of the first, WFB-parallel set is fixed at n₁ = 0.15, determined by testing a range of n₁ values, whilst the relative density a of the second set is varied from 0 to 1, giving an absolute fracture density of the second set n₂ = a n₁. a is then a measure of how many E-W cracks, relatively, are held open in the rock mass through which shear wave splitting is accrued. The results are shown in Fig. 7, which indicates that our model agrees with the observations when a is in the approximate range 0.5-0.6 and n₁ = 0.15. (The same is true for the range 0.10 ≤ n₁ ≤ 0.20.) This implies that the secondary E-W fracture set seen at the surface, as well as the WFB-parallel fractures, are held open to the depths at which most splitting is accrued, which in this case is <3 km bsl. A dimensionless fracture density of 0.15 is high, though by no means unprecedented. For example, recent analysis of fractures in boreholes at a comparable location, the andesitic geothermal field at Rotokawa, New Zealand (Massiot et al., 2017), shows a range of n (P₃₃ in their notation) of up to 0.24. These measurements were made at depths >2 km, similar to the depths at which we believe splitting is accrued at Aluto.
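The following toy calculation illustrates the mechanism, though it is far simpler than the Grechka (2007) and Hudson (1981) treatment used in the study: each vertical fracture set is assumed to add compliance along its horizontal normal in proportion to its density, and the fast polarisation for a vertical ray is taken as the least-compliant direction. All numerical values other than the strikes and n₁ = 0.15 are ours.

```python
import numpy as np

def fast_orientation(strikes_deg, densities):
    """Fast-polarisation azimuth (deg E of N) for a vertical ray through
    vertical fracture sets, in a toy linear-slip approximation."""
    m = np.zeros((2, 2))
    for strike, dens in zip(strikes_deg, densities):
        az = np.deg2rad(strike + 90.0)              # fracture-normal azimuth
        nvec = np.array([np.sin(az), np.cos(az)])   # (E, N) components
        m += dens * np.outer(nvec, nvec)            # excess compliance ~ density
    vec = np.linalg.eigh(m)[1][:, 0]                # least-compliant direction
    return np.degrees(np.arctan2(vec[0], vec[1])) % 180.0

n1 = 0.15                                           # WFB-parallel set
for a in (0.0, 0.25, 0.5, 1.0):                     # E-W set of density a*n1
    phi = fast_orientation([12.0, 90.0], [n1, a * n1])
    print(f"a = {a:4.2f}  phi = {phi:5.1f} deg")
# phi rotates from 12 deg (WFB-parallel) toward NE-SW as the E-W set opens.
```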
Mechanisms supporting east-west fractures at Aluto

In the locality of Aluto, it is at first glance unintuitive that an east-west fracture set should be created, as the regional stress field, acting alone, clearly produces a minimum horizontal compressive stress subparallel to this direction. Indeed, we see no evidence of such E-W microstructures away from Aluto's edifice, where structures generally follow the WFB. Several authors have suggested mechanisms by which edifice loading and magmatic overpressure can create differential stresses which might open up fractures (e.g., Muller and Pollard, 1977; Pinel and Jaupart, 2003; Bagnardi et al., 2013; Muirhead et al., 2015; Wadge et al., 2016), though these generally act either circumferentially or radially. Alternatively, evidence from sand-box experiments of caldera formation supports the idea that multiple fracture sets necessarily form within the plug of collapsed material (e.g., Walter and Troll, 2001) in order to accommodate the increased space available when ring faults dip outwards, though recent 2D discrete-element method modelling implies that inward-dipping faults are also possible (Holohan et al., 2015). Further cycles of deformation, such as those observed at Aluto, may enhance these fracture sets further. (See Acocella, 2007 for a review.) Elliptical calderas such as Aluto's have often been interpreted as caused simply by the presence of a differential horizontal stress on collapse (e.g., Bosworth et al., 2003), but it has also been suggested that pre-existing crustal structures determine the long axis (Acocella et al., 2002; Robertson et al., 2016; Wadge et al., 2016). Although there is no clear evidence of this being the case at Aluto, the presence of pre-existing E-W structures would also encourage the creation of the second fracture set that is observed at the surface, if, for instance, a damage zone is associated with the structure. This is compatible with the observed concentration of anisotropy near the surface, as any E-W fractures would need to be held open by fluids at high pressures at greater depth, which we consider unlikely away from the geothermal system, and so anisotropy from any cross-rift structure would fall away with depth. Pre-existing cross-rift structures are observed to control the hydrothermal system and surface deformation at Corbetti caldera, MER (Lloyd et al., 2018).

Fig. 8. Schematic diagram of the hydrothermal and magmatic systems beneath Aluto. The Artu Jawe Fault Zone (AJFZ; grey plane) provides the primary pathway for magmatic fluids (red arrows) to ascend from depth, as well as the route by which most meteoric water (blue arrows) is returned to the surface. Away from the geothermal reservoir (orange spheroid), the AJFZ and the co-planar Wonji faults determine the crack orientations which lead to AJFZ-parallel fast shear wave orientations (shown as large black double-headed arrows beneath red seismic stations). The geothermal reservoir is fed with heat from the magma body (hazy red spheroid) below via the advection of fluids, and contains multiple fracture sets (black lines) held open by gas-rich fluid overpressure, which in turn cause fast shear waves to be oblique to the AJFZ within the caldera. A hypothesised ring fault is shown, delineating the boundary of the fractured geothermal reservoir. The clay cap (green spheroid) insulates the reservoir below, though does not contribute significantly to shear wave splitting. Dashed lines are example seismic ray paths and blue-yellow spheres are earthquake hypocentres.
The surface topography is shown exaggerated three times, but other features are approximately to scale.

Even if multiple fracture sets are present, in the presence of the regional stress field alone only one set would be held open. We suggest therefore that over-pressurised fluids are present beneath the edifice, and that the overpressure is sufficient to hold open both fracture sets. This agrees both with the elevated b-values found near the surface (Wilks et al., 2017) and with local earthquake tomography showing V_P/V_S to be in the range 1.45-1.65 (Wilks et al., submitted). Indeed, V_P/V_S is extremely low (∼1.3) in the region where SWA is largest (Fig. 4). These results further imply that the over-pressurised fluid is gas-rich: V_P/V_S falls in rocks bearing over-pressurised gas because the bulk modulus is significantly reduced, which reduces V_P more than V_S (e.g., Ito et al., 1979), and this has been observed in geothermal reservoirs similar to Aluto's (e.g., Lees and Wu, 2000). Hence the combined geophysical observations strongly suggest the presence of gas within the reservoir, which agrees well with the observations of gas-liquid ratios made by Gianelli and Teklemariam (1993). Fig. 8 depicts our suggested mechanism for anisotropy beneath Aluto.

Alternative causes of anisotropy

Whilst upper crustal anisotropy is considered most likely to be caused by aligned, fluid-filled fractures in almost all settings, there are other potential causes of anisotropy, which we discuss now. Firstly, there is ample evidence that the lower crust and upper mantle in the MER contain significant volumes of silicate melt (e.g., Kendall et al., 2005; Hammond et al., 2014), which, when aligned in pockets, would cause significant anisotropy and high electrical conductivity. However, we rule this out as a cause of the anisotropy we observe, because wells drilled down to 1.5 km bsl at Aluto show no evidence of present-day melt at the relevant depths (Gianelli and Teklemariam, 1993; Gizaw, 1993; Teklemariam, 1996), and on the basis of the MT-derived resistivity structure (Samrock et al., 2015), which shows high resistivity beneath the edifice outside the limited region of an inferred clay cap. Secondly, the alignment of intrinsically-anisotropic mineral grains would also lead to bulk anisotropy (termed lattice-preferred orientation, LPO). This is a major feature of shale-dominated basins, since shale minerals can be very anisotropic and the crystals are typically aligned very well by the depositional process (Valcke et al., 2006; Lonardelli et al., 2007). Mapping of Aluto (Kebede et al., 1984; Hutchison et al., 2015, 2016c) does not appear to show significant LPO in the dominant eruptive products, since much of the material is glassy or pumiceous. LPO may be important, however, where significant alteration of eruptive products has occurred, creating anisotropic clay minerals and potentially aligning the crystals. Teklemariam et al. (1996) inferred a dome of hydrothermally altered clay-rich material beneath Aluto at ∼500 m above sea level (asl) from mineral assemblages in wells drilled across the edifice down to 2.5 km below the surface. Samrock et al. (2015) find a conductive region beneath the centre of the edifice at ∼1 km asl which correlates well with this, laterally as well as in depth.
Such caps are common in high-enthalpy geothermal systems (e.g., Grant and Bixley, 2011). Primarily, we do not consider the presence of a clay cap itself to be a significant cause of the anisotropy because of its inferred depth: our results do not show a marked increase in SWA at 1 km asl, as would be expected. In addition, it is not clear why LPO in this region would produce the observed values of φ. If the fabric had a horizontal foliation, as is likely if the maximum compressive stress is vertical as expected, then all values of φ would be perpendicular to the backazimuth, which is not the case: the circular mean difference is (23 ± 1)° at 1σ. It would also cause a very strong dependence of SWA on incidence angle. If, on the other hand, the fabric were vertical and had a strike parallel to the AJFZ, then it would only produce fast orientations like those of the off-edifice results. In fact, we see a rotation of φ, and we do not observe a strong variation in SWA with ray inclination. Any major contribution to splitting from LPO in the clay cap would have to be explained by a foliation at an angle to the AJFZ, for which we can see no evidence. Finally, layering of seismically heterogeneous material at a scale smaller than the seismic wavelength would lead to anisotropy. This might arise from the layering of volcanic material. However, lava flows and tephra deposits are laid down subhorizontally, and this again would generate shear wave splitting much more strongly in waves travelling near-horizontally, and no splitting in vertical waves, coupled with an SH-fast φ. As before, this is in contrast to what we observe. Frozen igneous intrusions might also be present and, in the case of dykes, would produce values of φ parallel to their strike. Effective medium modelling (Postma, 1955) requires the edifice to be heavily dyked (over 10% by volume), with dyke velocities reduced by 10%, to match the SWA seen. Dykes with greater velocity than the surrounding medium would generate more splitting, but it is once more not clear why intrusions should strike obliquely to the WFB and the current extension direction, rather than radially or circumferentially. Mapping of the surface also does not suggest such extensive intrusion (Kebede et al., 1984; Hutchison et al., 2015, 2016c).

Causes of unrest at Aluto

Aluto's pattern of periodic rapid uplift and slower subsidence since at least 2004 (Biggs et al., 2011; Hutchison et al., 2016a) has been attributed to a number of different causes. Samrock et al. (2015) discuss the possibility that swelling in the clay cap at ∼1 km asl and thermal expansion in the geothermal reservoir at ∼1 km bsl might lead to the observed ground deformation. However, whilst it may be possible that the magnitudes could be reproduced by these mechanisms alone, more recent modelling by Hutchison et al. (2016a), using additional InSAR observations, suggests that a source as shallow as 1 km asl could not be responsible, and places the inflationary source at ∼3 km bsl. Similarly, the lateral extent of the resistivity anomaly is not sufficient to reproduce the ground deformation observations. This argues against any major contribution from the clay cap. This study suggests that most anisotropy is concentrated above ∼3 km bsl, in a highly fractured, over-pressurised region which likely in part makes up the geothermal reservoir. This correlates with a source of subsidence at 1.5 to 2 km bsl suggested by Hutchison et al. (2016c),
whereby the geothermal reservoir deflates over time due to thermal effects, and also due to the loss of fluids, after an initial inflationary period caused by the injection of more volatile-rich silicate melt or magmatic fluid at ∼3 km bsl. This model would predict a temporal variation in the flux of fluids through the near-surface, correlated with the deformation signal, and a concomitant variation in splitting, but regrettably our data are not numerous enough to resolve any temporal trends. We suggest that analysis of a longer time series of shear wave splitting data may be able to address this issue in the future.

Conclusions

Using broadband seismic recordings at Aluto volcano, in the Main Ethiopian Rift, we make approximately 370 high-quality, robust shear wave splitting measurements from local earthquakes up to 40 km deep. We find pervasive splitting which does not increase with depth, showing that shear wave anisotropy of between 0.2% and 10% is present and is confined to the top 5 km or less. The fast orientations we see outside the main edifice align with the Wonji Fault Belt, a series of faults which accommodate present-day strain, consistent with these fractures being used as conduits for fluids in the near surface. Beneath the edifice, the observed fast orientations are NE-SW and can be explained by the combination of the two dominant fracture sets observed in the field, one of which is parallel to the current extension direction (E-W) and the other to the WFB. Anisotropy varies very strongly laterally and is highest beneath the edifice, collocated with the geothermal reservoir beneath Aluto, which may be bounded by the inferred caldera ring fault. We suggest that overpressure of gas or gas-rich fluids in the geothermal reservoir, sourced from a deeper magma supply, maintains fluid pathways in multiple fracture sets. This study shows the effectiveness of shear wave splitting as a measure of fracture density, allowing us to image regions of high permeability within the volcano and its hydrothermal system.
Ultra-Broadband Time-Resolved Coherent Anti-Stokes Raman Scattering Spectroscopy and Microscopy with Photonic Crystal Fiber Generated Supercontinuum

Introduction

Optics, one of the oldest natural sciences, has been promoting the development of science and technology, especially the life sciences. The invention of the optical microscope eventually led to the discovery and study of cells, and thus to the birth of cellular biology, which now plays an ever more important role in biology, medicine and the life sciences. The greatest advantage of optical microscopy in cellular biology is its ability to identify the distribution of different cellular components, and further to map cellular structure and monitor dynamic processes in cells with highly specific contrast. In the 1960s, with the invention of the laser, an ideal coherent light source with high intensity became available. Since then, the combination of optical microscopy with lasers has expanded, and many novel optical microscopic methods and techniques have been developed, such as the various kinds of fluorescence microscopy. In fluorescence microscopy, the necessary chemical specificity is provided by labeling samples with extrinsic fluorescent probes [1,2].
With the development of ultra-short lasers, fluorescent labeling techniques and modern microscopic imaging techniques, fluorescence spectroscopy and microscopy with high spatial resolution, sensitivity and chemical specificity has become a key driver of the life sciences and has unveiled many of the secrets of living cells and biological tissues [3,4]. In particular, the confocal fluorescence microscope (CFM), with confocal detection [5] and multi-photon excitation [6,7], can obtain 3D sectioning images of cells and tissues with high spatial resolution. Today, fluorescence microscopy has become a powerful research tool in the life sciences and has achieved great triumphs. Nevertheless, the disadvantages of fluorescence microscopy, such as photo-toxicity and photo-bleaching, cannot be ignored [8]. Furthermore, some molecules in cells, such as water molecules and other small biomolecules, cannot yet be labeled. Finally, for biological species that do not fluoresce, extrinsic fluorescent labels will unavoidably disturb the original characteristics and functions of the biological molecules, which limits the applicability of fluorescence microscopy. It is therefore very necessary to develop complementary label-free techniques.

In 1965, CARS was first reported by P. D. Maker and R. W. Terhune [27]. They found that a very strong signal could be obtained when two coherent light beams at frequencies ω₁ and ω₂ were used to drive an active vibrational Raman mode at frequency Ω_R = ω₁ − ω₂. This was named the CARS process, whose signals are 10⁵ times stronger than those of the spontaneous Raman scattering process [28]. As a tool for spectral analysis, CARS spectroscopy was extensively studied and used in physics and chemistry [29-32]. The first CARS microscope, with a noncollinear beam geometry, was demonstrated by M. D. Duncan et al. in 1982 [21]. In 1999, A. Zumbusch and his colleagues revived CARS microscopy by using an objective with high numerical aperture, a collinear beam geometry and detection of the CARS signals in the forward direction, which stimulated wide research into improvements of CARS microscopy [22]. The distinguished research of Prof. S. Xie and his colleagues proved that CARS microscopy is a very effective noninvasive optical microscopic approach, with high spatial resolution and imaging speeds matching those of multi-photon fluorescence microscopy, and that it has great prospects in biology, medicine and the life sciences [22,33].

In traditional CARS microscopy, the contrast and specificity are based on a single or a few chemical bonds of a molecule, owing to the limitation of the laser linewidth. A single chemical bond is not adequate to distinguish the various biological molecules. In order to effectively distinguish different molecules, a method for simultaneously obtaining the complete molecular vibrational spectrum is required. For this purpose, many methods for extending the simultaneously detectable spectral range of CARS spectroscopy and microscopy have been presented [34-38]. With the advent and progress of supercontinuum (SC) generation in photonic crystal fibre (PCF) pumped with ultra-short laser pulses [39], broadband CARS microscopy based on SC has been developed, providing better feasibility [40-44].
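The frequency bookkeeping of the CARS process is easy to check numerically: with the Raman mode driven at Ω_R = ω₁ − ω₂, the anti-Stokes signal appears at the standard combination ω_AS = 2ω₁ − ω₂. The wavelengths in the sketch below are illustrative choices, not values from this chapter.

```python
def wavenumber_cm(lambda_nm: float) -> float:
    """Vacuum wavenumber in cm^-1 for a wavelength in nm."""
    return 1e7 / lambda_nm

def cars(pump_nm: float, stokes_nm: float):
    """Raman shift (cm^-1) driven by the beam pair, and the
    anti-Stokes signal wavelength (nm) at 2*w1 - w2."""
    w1, w2 = wavenumber_cm(pump_nm), wavenumber_cm(stokes_nm)
    return w1 - w2, 1e7 / (2 * w1 - w2)

shift, signal_nm = cars(800.0, 1040.0)
print(round(shift), round(signal_nm, 1))   # -> 2885 650.0 (CH-stretch region)
```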
As is well known, CARS microscopy is not background-free [29]. Strong nonresonant background signals (NRB), arising from the electronic contributions to the third-order susceptibility of the sample and the solvent, always accompany the CARS signals over a broad spectral range. The NRB often interferes with, or overwhelms, the CARS signals from small targets, which limits the spectral accuracy, imaging contrast and system sensitivity. In broadband CARS with a broadband pump source, the influence of the strong NRB makes it difficult to simultaneously distinguish the various biological molecules.

In this chapter we briefly introduce the theoretical basics of the CARS process and the characteristics of CARS spectroscopy and microscopy. The classical and quantum mechanical descriptions of the Raman scattering and CARS processes are qualitatively reviewed, in order to be helpful for understanding the physical mechanisms of these two light scattering processes. The main characteristics and applications of CARS spectroscopy and microscopy are specifically emphasized, such as the condition of momentum conservation and the generation and suppression of NRB noise. In order to simultaneously obtain the complete molecular vibrational spectrum, an SC laser is a proper pump source. We briefly review the history and characteristics of PCF, and outline a summary of the theoretical analysis of SC generation in PCF pumped by ultra-short (picosecond or femtosecond) laser pulses. The necessity, development and characteristics of broadband CARS spectroscopy and microscopy are briefly described. An ultra-broadband time-resolved CARS technique with SC generated by PCF is especially emphasized; by this method, the complete molecular vibrational spectrum can be obtained without NRB noise. In recent years, ways to improve the spatial resolution of CARS microscopy have become one of the attractive questions worldwide, and we briefly review and give an outlook on the feasible approaches.

Theories of the CARS process

As a coherent Raman scattering process, CARS is a typical third-order nonlinear optical process. In order to understand the mechanism of the CARS process well, brief classical and quantum mechanical discussions of Raman scattering are necessary. Based on the theoretical analysis of Raman scattering, a theoretical description of the CARS process and the conditions for the generation of CARS signals will be outlined. The general classical description gives an intuitive picture, and the quantum mechanical description enables quantitative analysis of the CARS process. Here, we carry out only a qualitative theoretical analysis, and descriptions of the physical mechanisms are highlighted. The emphases are the main characteristics of CARS microscopy, such as the conditions for the generation of CARS signals, and the generation and suppression of NRB noise.

Raman scattering process

When a beam of light passes through a medium, one can observe light scattering phenomena besides light transmission and absorption. Most of the elastically scattered photons (so-called Rayleigh scattering) from atoms or molecules have the same energy (frequency) as the incident photons, as shown in figure 1(a). However, a small portion of the photons (approximately 1 in 10 million) are inelastically scattered, with frequencies different from those of the incident photons [45]. This inelastic scattering of light was theoretically predicted by A. Smekal in 1923 [46]. In 1928, the Indian physicist Sir C. V. Raman first discovered this phenomenon, when monochromatic light with frequency ω_P was incident on a medium.
Raman first discovered this phenomenon when monochromatic light of frequency ω_P was incident on a medium. He found that the scattered light contained not only Rayleigh scattering at frequency ω_P, but also weaker components at frequencies ω_P ± Ω_R, arising from the inelastic scattering of light now named Raman scattering or the Raman effect [47,48], as shown in the energy level diagrams of figure 1(b) and (c). Raman scattering originates from the inherent vibrational and rotational features of individual chemical bonds or groups of bonds. The obtained Raman spectra contain the inherent molecular structural information of the medium and can be used to identify molecules. Because of this significant feature, Raman scattering has been widely used as a tool for analyzing the composition of liquids, gases and solids.
Fig. 1. Energy level diagrams of the elastic Rayleigh scattering (a), the inelastic Stokes Raman scattering (b) and anti-Stokes Raman scattering (c); ω_P, ω_S, ω_AS and Ω_R denote the frequencies of the incident light, Stokes scattered light, anti-Stokes scattered light and the Raman resonance, respectively, and g, v, j label the ground state, the vibrational state and the virtual intermediate state.
Classical description of Raman scattering
The Raman scattering phenomenon arises from the interaction between the incident photons and the electric dipoles of molecules. In classical terms, the interaction can be viewed as a perturbation of the molecule under the influence of the applied optical field. With the optical field of frequency ω_P written as E(t) = E(r)e^(−iω_P t), the induced dipole moment of a molecule is
μ(t) = α(t)E(t),    (2.1)
where α(t) is the polarizability of the molecule. When the incident optical field interacts with the molecule, the polarizability can be expressed as a function of the nuclear coordinate Q and expanded to first order in a Taylor series [49]:
α(t) = α_0 + (∂α/∂Q)Q(t) + …    (2.2)
With a vibration Q(t) = Q_0 cos(Ω_R t), the induced dipole moment becomes
μ(t) = α_0 E(r)e^(−iω_P t) + (∂α/∂Q)(Q_0/2)E(r)[e^(−i(ω_P−Ω_R)t) + e^(−i(ω_P+Ω_R)t)].    (2.3)
On the right-hand side of equation (2.3), the first term corresponds to Rayleigh scattering at the frequency of the incident light, while the second term describes the Raman frequency shift ω_P ± Ω_R. Because the Raman frequency-shifted term depends on ∂α/∂Q, Raman scattering occurs only when the molecular vibration induces a polarizability change along the specific vibrational mode. Such a mode is a Raman active mode, which is the basis of the selection rule of Raman spectroscopy. The differential scattering cross-section is one of the key parameters expressing the intensity of the Raman signal. For a solid angle Ω, it can be defined as the scattered intensity divided by the incident intensity:
dσ/dΩ = I_Raman/(N V I_P),    (2.4)
where I_Raman is the intensity of the Raman scattered light in Ω, V is the volume of the scattering medium, N is the molecular density and I_P is the intensity of the incident optical field. Therefore, the total intensity of the Raman scattered light over the whole solid angle can be described as the sum of contributions from all N molecules:
I_total = N V I_P ∫(dσ/dΩ)dΩ.    (2.5)
Obviously, spontaneous Raman scattering is a linear optical process, because the signal intensity depends linearly on both the intensity of the incident optical field and the number of scattering molecules. The classical description of the Raman process only provides a qualitative relationship between the Raman scattering cross-section and the intensity of the Raman signal. For a quantitative study of the Raman process, a quantum mechanical description is necessary.
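To make the classical picture concrete, the following short Python sketch builds the induced dipole of equation (2.3) from a sinusoidally modulated polarizability and inspects its spectrum; all parameter values are dimensionless illustrations, not physical constants. The Fourier transform shows the Rayleigh line at ω_P and the two Raman sidebands at ω_P ± Ω_R.

```python
import numpy as np

# Illustrative (dimensionless) parameters -- assumptions, not physical values.
w_p, W_r = 50.0, 8.0                  # incident and vibrational angular frequencies
alpha0, dalpha_dQ, Q0 = 1.0, 0.2, 1.0

t = np.linspace(0.0, 200.0, 2**14)
E = np.cos(w_p * t)                                 # incident field
alpha = alpha0 + dalpha_dQ * Q0 * np.cos(W_r * t)   # modulated polarizability, eq. (2.2)
mu = alpha * E                                      # induced dipole, eq. (2.3)

spectrum = np.abs(np.fft.rfft(mu))**2
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0]) * 2 * np.pi  # angular frequency axis

def power_at(w):
    """Spectral power in the bin closest to angular frequency w."""
    return spectrum[np.argmin(np.abs(freqs - w))]

# Rayleigh line plus two weak sidebands (field ratio 0.1 -> power ratio ~0.01).
for w in (w_p - W_r, w_p, w_p + W_r):
    print(f"omega = {w:5.1f}: relative power {power_at(w) / power_at(w_p):.3f}")
```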
Quantum mechanical description of Raman scattering
When the interaction between the incident optical field and the medium is studied quantum mechanically, the molecular system of the medium must be quantized. Raman scattering is a second-order process in which two interactions between the incident optical field and the medium are involved. The quantum mechanical explanation of Raman scattering is based on estimating the transition rate between different molecular states. In quantum physics, Fermi's golden rule is the common way to calculate the first-order transition rate between an initial state g and a final state ν, which is proportional to the square modulus of the transition dipole moment μ_νg. To describe the Raman process, however, the second-order transition rate is needed. The second-order transition rate τ⁻¹ can be written as [50,51]
τ⁻¹ ∝ Σ_R |Σ_j ⟨ν|er·E_R|j⟩⟨j|er·E_P|g⟩ / ħ(ω_j − ω_P)|² δ(ω_P − ω_R − ω_ν),    (2.6)
where e is the electron charge, ε₀ is the vacuum permittivity, ħ is the reduced Planck constant, n_R is the refractive index at the Raman frequency, and δ is the Dirac delta function. ω_P is the frequency of the incident optical field and ω_R is the frequency of the Raman scattered light; ω_ν and ω_j are the transition frequencies from the ground state to the final state ν and to the intermediate state j, respectively. In the Raman scattering process, the incident optical field first takes the material system from the ground state g to an intermediate state j, an artificial virtual state; the transition from j to the final state ν then happens essentially instantaneously. A full description of Raman scattering thus requires a quantized field theory [52]. From the quantized field theory we can find the number of photon modes at frequency ω_R in the medium volume V [52] and perform the summation over R in equation (2.6). From the argument of the Dirac delta function, the only nonzero contributions are those with emission frequency ω_R = ω_P − ω_ν, the red-shifted Stokes frequencies. With σ_diff = (dσ/dΩ)(V/cn_R), the transition rate can then be expressed directly in terms of the differential scattering cross-section (2.7). The result is the Kramers-Heisenberg formula, which is central to the quantum mechanical description of light scattering [50]. For the Raman scattering process, the differential scattering cross-section for a particular vibrational state ν simplifies to
dσ/dΩ ∝ ω_P ω_R³ |α_R|²,    (2.8)
where the Raman transition polarizability α_R can be written as
α_R = (1/ħ) Σ_j [⟨ν|d|j⟩⟨j|d|g⟩/(ω_j − ω_P) + ⟨ν|d|j⟩⟨j|d|g⟩/(ω_j + ω_R)].    (2.9)
From the above discussion we know that spontaneous Raman scattering is a weak effect, because the spontaneous interaction through the vacuum field occurs only rarely. Although Raman scattering is a second-order process, the Raman signal intensity depends linearly on the intensity of the incident optical field. When the incident frequency approaches the frequency of a real electronic state of the molecule, Raman scattering becomes very strong; this is known as resonant Raman scattering and is one of the effective methods to improve the efficiency of Raman scattering. If the spontaneous nature of the j→ν transition is eliminated by applying a second field at frequency ω_R, the weak Raman scattering can also be enhanced, as in the CARS process.
CARS process
The disadvantage of Raman scattering is its low conversion efficiency due to the small scattering cross-section: only about 1 part in 10⁶ of the incident photons is scattered into the Stokes frequency when propagating through 1 cm of a typical Raman active medium.
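A rough order-of-magnitude check of this conversion efficiency can be made by multiplying cross-section, number density and path length; the values below are assumed "typical" numbers for a liquid, not figures from the chapter:

```python
# Order-of-magnitude estimate of spontaneous Raman conversion efficiency.
# sigma and N are assumed typical values, not taken from the chapter.
sigma = 1e-28      # integrated Raman cross-section per molecule [cm^2]
N = 1e22           # molecular number density of a liquid [cm^-3]
L = 1.0            # path length [cm]

fraction = N * sigma * L
print(f"fraction scattered to Stokes over {L:.0f} cm: ~{fraction:.0e}")  # ~1e-06
```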
This makes Raman spectroscopy and microscopy more complex and costly, which limits its broad application. As a nonlinear technique with a coherent nature, the CARS signal is about 10⁵ times stronger than spontaneous Raman scattering. Therefore, CARS spectroscopy and microscopy have been widely used in physics, chemistry, biology and many other related domains [22-26]. In the CARS process, three laser beams with frequencies ω_P, ω_P' and ω_S are used as pump, probe and Stokes beams; the energy level diagram of CARS is shown in figure 2. The primary difference between the CARS and Raman processes is that in CARS the Stokes frequency stems from an applied laser field. We can simply consider the joint action of the pump and Stokes fields as a source driving the active Raman mode at the difference frequency ω_P − ω_S. Here, we first describe the CARS process with the classical model; a quantum mechanical treatment is then applied to find the correct expression for the third-order nonlinear susceptibility.
Classical description of CARS
The classical description of an active vibrational mode driven by the incident optical fields is the model of a damped harmonic oscillator. The equation of motion for the molecular vibration along Q is [53]
d²Q/dt² + 2γ dQ/dt + Ω_R² Q = F(t)/m,    (2.10)
where γ is the damping constant, m is the reduced nuclear mass, and F(t) is the external driving force exerted on the oscillator by the incident optical fields. In the CARS process, F(t) is provided by the incident pump and Stokes fields:
F(t) = (∂α/∂Q)[E_P E_S* e^(−i(ω_P−ω_S)t) + c.c.].    (2.11)
The steady-state solution at the difference frequency is
Q(ω_P − ω_S) = (1/m)(∂α/∂Q) E_P E_S* / [Ω_R² − (ω_P − ω_S)² − 2iγ(ω_P − ω_S)].    (2.12)
From equation (2.12) we know that the amplitude of the molecular vibration is proportional to the product of the amplitudes of the driving fields and the polarizability change. When the frequency difference between the pump and Stokes fields equals the resonance frequency Ω_R, the molecular vibration of the active Raman mode is resonantly enhanced. When a probe field with frequency ω_Pr passes through the medium, it is modulated by the resonantly enhanced molecular vibration, producing a component at the anti-Stokes frequency ω_Pr + ω_P − ω_S. The total nonlinear polarization is the sum over all N dipoles:
P(t) = N(∂α/∂Q)Q(t)E_Pr(t).    (2.13)
In order to simplify the experimental system, the pump field usually also provides the probe field, so the frequency of the generated anti-Stokes signal is ω_AS = 2ω_P − ω_S. The total nonlinear polarization can then be written as
P(ω_AS) = N(∂α/∂Q)Q(ω_P − ω_S)E_P.    (2.14)
With (2.12), (2.13) and (2.14) we can deduce the amplitude of the total nonlinear polarization:
P(ω_AS) = (N/m)(∂α/∂Q)² E_P² E_S* / [Ω_R² − (ω_P − ω_S)² − 2iγ(ω_P − ω_S)].    (2.15)
From the above discussion, the amplitude of the total nonlinear polarization is proportional to the product of the three incident optical fields. Here, we define the vibrational contribution to the third-order nonlinear susceptibility χ(3) through P(ω_AS) = χ(3)E_P²E_S*. It is an inherent property of the medium that describes its response to the incident optical fields; when the frequency difference of the incident fields matches the frequency of a vibrational mode, χ(3) is resonantly enhanced. The intensity of the CARS signal is proportional to the square modulus of the total polarization:
I_CARS ∝ |χ(3)|² I_P² I_S.    (2.16)
It scales quadratically with the pump intensity, linearly with the Stokes intensity, and quadratically with the third-order susceptibility of the medium.
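The Lorentzian resonance in equations (2.12) and (2.15) is easy to visualize numerically. The sketch below (with made-up, dimensionless parameters) evaluates the vibrational amplitude as the pump-Stokes detuning is scanned across Ω_R and confirms that the resulting line width is set by the damping constant:

```python
import numpy as np

# Dimensionless illustration of eq. (2.12); all values are assumptions.
Omega_R = 1000.0      # Raman resonance (e.g. cm^-1)
gamma = 5.0           # damping constant -> line width
detuning = np.linspace(900.0, 1100.0, 2001)   # omega_P - omega_S

# Q(delta) ~ 1 / (Omega_R^2 - delta^2 - 2j*gamma*delta), prefactors dropped.
Q = 1.0 / (Omega_R**2 - detuning**2 - 2j * gamma * detuning)

I_cars = np.abs(Q)**2   # resonant part of eq. (2.16)
fwhm_mask = I_cars >= 0.5 * I_cars.max()
print("peak at detuning:", detuning[np.argmax(I_cars)])                     # ~1000
print("FWHM ~", detuning[fwhm_mask][-1] - detuning[fwhm_mask][0], "(~2*gamma)")
```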
Although the classical description of CARS provides a picture of the CARS process and simplified relationships among the medium, the intensities of the incident optical fields and the CARS signal, it cannot account for the interaction of the fields with the quantized states of the molecule. More accurate numerical estimates can only be achieved with a quantum mechanical description of the CARS process.
Quantum mechanical description of CARS
The quantum mechanics of the CARS process can be described effectively by time-dependent third-order perturbation theory. In the quantum mechanical description, the system is usually expressed in terms of the density operator
ρ(t) = Σ_{n,m} c_n(t)c_m*(t)|n⟩⟨m|,    (2.17)
where the wave functions are expanded in a basis set {|n⟩} with time-dependent coefficients c_n(t). The expectation value of the electric dipole moment is then given by
⟨μ(t)⟩ = Tr[ρ(t)μ].    (2.18)
The third-order nonlinear susceptibility for the CARS process is found by calculating the third-order correction to the density operator through time-dependent perturbation theory; the resonant part takes the form
χ_r(3) = A/[Ω_R − (ω_P − ω_S) − iΓ],    (2.19)
where Γ is the vibrational decay rate associated with the line width of the Raman mode Ω_R. The amplitude A can be related to the differential scattering cross-section:
A ∝ (N c⁴/ħ ω_S⁴ n_P n_S)(dσ/dΩ)(ρ_gg − ρ_νν),    (2.20)
where n_P and n_S are the refractive indices at the pump and Stokes frequencies, and ρ_gg and ρ_νν are the density-matrix elements of the ground state and the vibrationally excited state, respectively. The CARS signal intensity is again estimated by substituting equation (2.20) into (2.7). The quantum mechanical description of the CARS process can be presented qualitatively by considering the time-ordered action of each laser field on the density matrix ρ_nm(t). Each electric field interaction establishes a coupling between two quantum mechanical states of the molecule, changing the state of the system as described by the density matrix. Before interaction with the laser fields, the system resides in the ground state ρ_gg. An interaction with the pump field changes the system to ρ_jg; the system is then converted to ρ_νg by the following Stokes field. The density matrix now oscillates at the frequency ω_νg = ω_jg − ω_νj, which is a coherent vibration. When the third incident optical field interacts with the medium, the coherent vibration is converted into a radiating polarization ρ_kg, which oscillates at ω_kg = ω_jg + ω_νg. After emission of the radiation, the system returns to the ground state. As a coherent Raman process, the CARS signal intensity is more than five orders of magnitude greater than that of spontaneous Raman scattering. Because the radiating polarization is a coherent summation, the CARS signal intensity is quadratic in the number of Raman scatterers. Because of the coherence, the CARS signal is emitted in a well-defined direction, which allows much more efficient signal collection than spontaneous Raman scattering. Moreover, the CARS signal is blue-shifted from the incident beams, which avoids interference from any one-photon excited fluorescence.
Source of nonresonant background signals
From the theory of the CARS process, the CARS signal derives from the third-order nonlinear susceptibility. The total CARS signal is proportional to the square modulus of the nonlinear susceptibility [46]:
I_CARS ∝ |χ(3)|² = |χ_r(3) + χ_nr(3)|²,    (2.21)
where the total third-order nonlinear susceptibility is composed of a resonant part χ_r(3), which represents the Raman response of the molecules, and a nonresonant part χ_nr(3).
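The interference between χ_r and χ_nr in equation (2.21) is what distorts CARS line shapes. The following sketch (with arbitrary illustrative parameters) evaluates |χ_r + χ_nr|² for a single Lorentzian resonance and shows the characteristic peak shift and the dip on the high-wavenumber side that appear as the nonresonant part grows:

```python
import numpy as np

# Illustrative single-resonance CARS line shape, eq. (2.21); assumed values.
Omega_R, Gamma, A = 1000.0, 5.0, 1.0      # mode position, width, amplitude (cm^-1)
delta = np.linspace(950.0, 1050.0, 5001)  # pump-Stokes detuning

chi_r = A / (Omega_R - delta - 1j * Gamma)   # resonant part, eq. (2.19)
for chi_nr in (0.0, 0.05, 0.2):              # growing nonresonant background
    I = np.abs(chi_r + chi_nr)**2
    print(f"chi_nr={chi_nr:4.2f}: peak at {delta[np.argmax(I)]:7.2f} cm^-1, "
          f"minimum at {delta[np.argmin(I)]:7.2f} cm^-1")
```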
When the frequency difference between the pump and Stokes fields equals the vibrational frequency of an active Raman mode, a strong CARS signal is induced; this provides the inherent vibrational contrast mechanism of CARS microscopy. However, it is not the only component of the total anti-Stokes radiation. In the absence of active Raman modes, the electron cloud still has oscillating components at the anti-Stokes frequency ω_AS = 2ω_P − ω_S that couple to the radiation field. This is the purely electronic nonresonant contribution χ_nr(3): the polarization at 2ω_P − ω_S is established via field interactions with virtual levels. When 2ω_P is close to the frequency of a real electronic state, there is in addition a nonresonant two-photon enhanced electronic contribution, as shown in figure 2(c). The nonresonant contribution is a source of background that limits the sensitivity of CARS microscopy. For weak vibrational resonances, the nonresonant background may overwhelm the resonant information. In biological samples, the concentration of the molecules of interest is usually low, while the nonresonant background from the aqueous surroundings is generally ubiquitous. The mixing of the nonresonant field with the resonant field gives rise to broadened and distorted spectral line shapes. Therefore, the suppression of the nonresonant contribution is essential for practical applications.
Suppression of NRB noise
Several effective methods have been developed to suppress the NRB noise; here we briefly discuss several widely used techniques.
Epi-detection [54,55]
In samples, every object is a source of NRB noise. The aqueous environment produces an extensive NRB that may be stronger than the resonant CARS signal from a small object in focus. Because epi-CARS (E-CARS) has a size-selective mechanism, the NRB from the aqueous surroundings can be suppressed while the signal from small objects is retained. It should be noted that the NRB is not directly reduced in E-CARS, and when the objects have comparable sizes, or in highly scattering media such as tissues, this method does not work.
Polarization-sensitive detection
Polarization-sensitive CARS (P-CARS) uses the different polarization properties of the resonant and nonresonant signals to suppress the NRB effectively [56-58]. According to Kleinman's symmetry, the depolarization ratio of the nonresonant field is ρ_nr = χ(3)_1221/χ(3)_1111 = 1/3 [59], whereas the depolarization ratio of the resonant field generally differs from 1/3, so the resonant and nonresonant fields are polarized along different directions. P-CARS has been successfully applied in spectroscopy and microscopy [60]. A schematic of a typical P-CARS system is shown in figure 3. In a P-CARS system, an analyzer in front of the detector blocks the nonresonant signal, while the differently polarized portion of the resonant signal passes through the analyzer (a small numerical sketch of this geometry follows at the end of this section). Although P-CARS can effectively suppress the NRB, the acquisition time is longer because of the loss of resonant signal.
Time-resolved CARS detection
In time-resolved CARS detection (T-CARS), ultra-short laser pulses are used for excitation. The resonant and nonresonant contributions are separated in the time domain owing to their different temporal response characteristics [61,62]. Because of the instantaneous dephasing of the nonresonant signal, it exists only while the three laser pulses overlap temporally.
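Returning briefly to the polarization-sensitive scheme above: in the standard textbook construction (not a description of any particular setup in this chapter), if the pump is polarized along x and the Stokes at an angle φ, the nonresonant CARS field is polarized at an angle θ_nr with tan θ_nr = ρ_nr tan φ; setting the analyzer perpendicular to θ_nr extinguishes the NRB while transmitting part of the resonant field. The values of ρ_r and φ below are illustrative assumptions.

```python
import numpy as np

# P-CARS analyzer geometry sketch; illustrative values only.
rho_nr = 1.0 / 3.0                # nonresonant depolarization ratio (Kleinman)
rho_r = 0.1                       # assumed resonant depolarization ratio
phi = np.deg2rad(60.0)            # angle between pump and Stokes polarizations

theta_nr = np.arctan(rho_nr * np.tan(phi))   # NRB polarization direction
theta_r = np.arctan(rho_r * np.tan(phi))     # resonant field direction
# Analyzer perpendicular to the NRB: transmitted resonant amplitude fraction.
passed = abs(np.sin(theta_nr - theta_r))

print(f"NRB polarized at {np.degrees(theta_nr):.1f} deg; "
      f"resonant amplitude passing analyzer: {passed:.2f}")
```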
In T-CARS, a pair of temporally overlapped laser pulses is used as the pump and Stokes pulses to resonantly enhance the molecular vibration, and a time-delayed laser pulse is used as the probe. The resonant CARS signal decays within the finite dephasing time of the vibrational mode. The dephasing time is related to the spectral line width of the corresponding Raman band and is typically several hundred femtoseconds (in solids) to a few picoseconds (in gases or liquids) [63]. Therefore, the NRB noise can be eliminated by introducing a suitable time delay between the pump/Stokes and probe pulses [64]. A detailed discussion is given in the next section.
Phase control
In the phase control method, a phase-mismatched coherent addition of nonresonant spectral components is introduced by phase shaping of the femtosecond laser pulses to suppress the nonresonant signal [65-67]. For CARS imaging with picosecond pulses, phase control can be achieved by heterodyning the signal with a reference beam at the anti-Stokes wavelength [68,69]. With heterodyne CARS interferometry, the imaginary part of the third-order nonlinear susceptibility, Im{χ_r(3)}, can be separated to suppress the NRB noise [70-72].
Condition of momentum conservation: phase matching
Unlike fluorescence or spontaneous Raman microscopy, the CARS process is a parametric process in which both energy and momentum must be conserved. The generation of CARS signals therefore relies not only on the intensity of the focused incident optical fields, but also on their phase conditions. Efficient signal build-up requires
l << l_C = π/|Δk|,    (2.26)
where l is the effective interaction length, l_C is the coherence length, and the wave-vector mismatch is Δk = k_AS − (k_P + k_P' − k_S), with k_P, k_P', k_S and k_AS the wave vectors of the pump, probe, Stokes and anti-Stokes fields, respectively. Under tight focusing in CARS microscopy with collinear geometry, the small excitation volume and the large cone angle of the wave vectors compensate the wave-vector mismatch induced by the spectral dispersion of the refractive index of the sample, and the phase matching condition is easily fulfilled [73,74]. Therefore, the collinear geometry is the best configuration for CARS microscopy. For epi-detected CARS and counter-propagating CARS (C-CARS) with collinear excitation geometry, a large wave-vector mismatch is introduced: |Δk| = 2|k_AS| = 4πn/λ_AS and |Δk| = 2|k_S| = 4πn/λ_S, respectively, where n is the refractive index of the medium, assumed independent of frequency. Therefore, these two CARS geometries have higher sensitivity for objects much smaller than the interaction length.
Applications of CARS spectroscopy and microscopy
As a noninvasive research tool with high sensitivity, specificity and resolution, CARS microscopy has attracted more and more attention and has been widely used in physics, chemistry, biology, medicine and the life sciences. Its capabilities and availability have been further improved with recent technical advances, and many exciting results have been reported in the literature. Because of its label-free character, CARS microscopy is highly regarded in biological research, especially for unstained cells. The first CARS microscope was used to obtain structural images of onion epidermal cells immersed in D₂O [21].
The water diffusion in live Dictyostelium cells was studied with a broad vibrational resonance centered at 3300 cm⁻¹, which could not be observed with fluorescence microscopy [75]. These early experimental results proved that CARS microscopy is an effective complementary method to fluorescence microscopy. Since many cellular processes take place on a sub-second timescale, high temporal resolution is required. By improving the temporal resolution, it became possible to image the chromosome distribution during mitosis using the symmetric stretching vibration of the DNA phosphate backbone [76]. Because lipids are easily detected, structural and functional images of various living cells were obtained with the C-H bond of lipids [75,77-79]. The sensitivity of CARS microscopy is high enough to detect lipid vesicles smaller than 300 nm in diameter [79]. Compared with fluorescence microscopy, CARS microscopy allows long-term investigations of cells without photobleaching; it can therefore be used for long-term tracking of biological molecules, such as lipid droplets, in living cells [80]. Nan and associates used CARS microscopy to study the growth and transport of lipid droplets in live cells [79]. By tuning to the CH₂ lipid vibration, Cheng and his colleagues observed apoptosis and identified different stages of the apoptotic process [76]. Potma and his associates visualized intracellular hydrodynamics with the CARS signal of the O-H stretching vibration of water [81]. Building on cell imaging, CARS microscopy has been applied to tissue imaging in living animals, where the tissue's optical properties, such as absorption and scattering, are of obvious concern; epi-detection is a good solution for tissue imaging with CARS. CARS microscopy has been successfully used for imaging of unstained axonal myelin in spinal tissues in vitro [82], where both the forward and backward CARS signals from the tissue slab were detected. The lipid distributions in the skin tissue of live animals have also been observed [83]. All these preliminary experimental results show the vast potential of CARS microscopy in biomedical imaging and the early diagnosis of diseases.
Supercontinuum with photonic crystal fiber
As discussed in the previous sections, CARS spectroscopy or microscopy requires two ultra-short laser pulses with high peak power and different frequencies to reach the focus at the same time. In order to quickly distinguish different molecules in a complex system, such as various biological molecules in cells, from the complete CARS spectra, the output of the source must have not only a wide enough spectral range, but also spectral continuity and simultaneity of the various spectral components [84]. Spectral broadening and the generation of new frequency components are inherent features of nonlinear optics. When ultra-short laser pulses propagate through a nonlinear medium, dramatic spectral broadening occurs. This physical phenomenon, known as supercontinuum (SC) generation, was first demonstrated in the early 1970s [85-87]. With the advent of a new kind of optical waveguide in the late 1990s, the photonic crystal fiber (PCF) has led to a great revolution in the generation of SC with ultra-broad spectral range and high brightness [39,88,89].
In this section, we introduce SC generation in PCFs by theoretical analysis and modeling. Based on the requirements of CARS, the method and conditions for realizing an ideal SC source are discussed.
Photonic crystal fiber used for supercontinuum generation
SC generation involves many nonlinear optical effects, such as self- and cross-phase modulation, four-wave mixing (FWM), stimulated Raman scattering (SRS) and solitonic phenomena, which add up to produce an output with an ultra-broadband spectrum, sometimes spanning a couple of octaves. With the development of the theories and techniques of modern nonlinear optics, various optical materials have been realized and widely used in many fields. A photonic crystal fiber (PCF), also called holey fiber (HF) or microstructured fiber (MF) [90-92], based on the properties of two-dimensional photonic crystals, is a special kind of optical fiber that confines the incident light along the entire length of the fiber with its tiny, closely spaced air holes. Different arrangements of the air holes give PCFs various optical characteristics, such as single-mode propagation, high nonlinearity and controllable dispersion. According to the guiding mechanism, there are two main categories of PCF, the photonic bandgap (PBG) PCF [93] and the total internal reflection (TIR) PCF, as shown in figure 5. The PBG PCF is usually used for the transmission of high-energy laser pulses and optical signals; most of the energy propagates through the hollow core of a PBG PCF with low loss, dispersion and nonlinearity. The TIR PCF is used for SC generation over a wide spectral range: when high-intensity, narrow-line-width laser pulses propagate in a TIR PCF, an SC spanning up to a couple of octaves can be generated because of the fiber's high nonlinearity and group velocity dispersion.
Numerical modeling
SC generation is the combined result of a variety of nonlinear optical effects occurring as high-intensity ultra-short laser pulses propagate in a PCF [94,95]. For use as a CARS source, however, we are mostly concerned with single-mode propagation and the temporal distribution of the various spectral components of the SC, called the temporal-spectral distribution [96]. For a PCF with a given structure, a number of numerical models and computational methods have been constructed and reported to obtain the full properties of the PCF. Here, we carry out a complete analysis of SC generation with a common method, divided into three steps. First, the finite element method (FEM), one of the most effective methods, is used to obtain the chromatic dispersion (the effective propagation constant β_eff) from the structural parameters of the PCF (the air-hole diameter and the pitch between two holes). The dispersion coefficients β_k (k ≥ 2) are derived by Taylor series expansion at the central frequency ω₀, and the nonlinear coefficient can be approximated as γ = n₂ω₀/(cA_eff), with n₂ the nonlinear-index coefficient of silica, c the speed of light in vacuum, and A_eff the effective core area. Second, a propagation equation is used to calculate the SC generation during the propagation of the ultra-short laser pulses. Although the generalized nonlinear Schrödinger equation (GNLSE) is not the only option, the pulse propagation here is simulated by solving the GNLSE with the split-step Fourier method (SSFM) [94]:
∂A/∂z + (α/2)A − Σ_{k≥2} β_k (i^(k+1)/k!) ∂^kA/∂T^k = iγ(1 + (i/ω₀)∂/∂T)[A(z,T)∫R(T′)|A(z,T−T′)|²dT′] + Γ_R.    (3.1)
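A minimal split-step Fourier sketch of the second step is given below. It propagates a pulse under a strongly simplified NLSE with only second-order dispersion and Kerr nonlinearity; a real GNLSE solver must also include higher-order dispersion, self-steepening and the Raman response R(T) of equation (3.1). All parameter values are illustrative assumptions, not the fiber parameters used later in this chapter.

```python
import numpy as np

# Minimal symmetric split-step solver for the simplified NLSE
#   dA/dz = -i*(beta2/2)*d^2A/dT^2 + i*gamma*|A|^2*A
# Illustrative parameters only; a full GNLSE adds beta3..., shock and Raman terms.
beta2 = -1e-26        # GVD [s^2/m], anomalous
gamma = 0.1           # nonlinear coefficient [1/(W m)]
L, nz = 0.1, 2000     # fiber length [m], number of z steps
dz = L / nz

nt, T0, P0 = 2**12, 30e-15, 1e4          # grid size, pulse width [s], peak power [W]
T = np.linspace(-40, 40, nt) * T0        # retarded time grid
A = np.sqrt(P0) / np.cosh(T / T0)        # input sech pulse

w = 2 * np.pi * np.fft.fftfreq(nt, d=T[1] - T[0])   # angular frequency grid
half_disp = np.exp(0.5j * beta2 * w**2 * dz / 2)    # half-step linear operator

for _ in range(nz):                                  # symmetric split-step loop
    A = np.fft.ifft(half_disp * np.fft.fft(A))       # half dispersion step
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)      # full nonlinear step
    A = np.fft.ifft(half_disp * np.fft.fft(A))       # half dispersion step

spec = np.abs(np.fft.fftshift(np.fft.fft(A)))**2
print("fraction of grid above 1% of spectral peak:",
      np.sum(spec > spec.max() / 100) / nt)          # grows as the SC broadens
```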
In equation (3.1), the linear propagation effects are on the left-hand side and the nonlinear effects on the right-hand side, where α and A are the loss coefficient and the pulse envelope in the retarded time frame T = t − β₁z moving at the group velocity 1/β₁, and R(T) is the Raman response function. The noise term Γ_R, which accounts for spontaneous Raman noise, is neglected here (Γ_R = 0); a more detailed explanation is given in [94]. For CARS spectroscopy or microscopy, the temporal-spectral distribution of the SC is also an important factor. Therefore, thirdly, although the spectral envelope of the SC is obtained in the second step, the temporal distribution of the various spectral components of the SC must still be determined. To obtain the temporal-spectral distribution of the SC, the cross-correlation frequency-resolved optical gating (XFROG) method is applied to characterize the SC; it can also be verified with an experimental XFROG instrument [97]. The two-dimensional XFROG spectrogram can be computed from the two electromagnetic fields as
S(ω,τ) = |∫E(t)E_gate(t − τ)e^(−iωt)dt|²,    (3.2)
where E(t) is the calculated envelope of the SC as a function of time t, and E_gate(t − τ) is the gating pulse with delay time τ between the seed laser pulse and the SC. XFROG measurement is thus a good way to characterize the temporal and spectral evolution of SC generation and to interpret the time- and frequency-domain signatures of the various optical effects. With the method introduced above, we carried out simulations in order to find a way to achieve an ideal SC source for CARS spectroscopy and microscopy; some representative results are shown in the next section.
Supercontinuum generation with photonic crystal fiber
Some of our computational results are shown here to illustrate the whole process clearly. We simulated SC generation using a PCF with two zero-dispersion wavelengths (ZDW) [98]. The calculated group velocity dispersion (GVD) curve is shown in figure 6. By solving the GNLSE with the SSFM, the temporal and spectral distributions of the SC along the whole length of the PCF are obtained, as shown in figure 7. The temporal-spectral distributions for PCFs of different lengths, obtained from the XFROG traces, are described in figure 8.
Fig. 6. Group velocity dispersion curve of the PCF with two ZDWs [98].
Fig. 7. Time (a) and spectrum (b) evolution of the SC along the entire length of the PCF, with input pulse width 30 fs and peak power 10 kW [98].
In figure 8(c), the spectral range of the generated SC is 500 nm using a PCF with two ZDWs under proper pumping conditions. The spectral continuity, simultaneity and intensity of the red-shifted SC components are all good enough for a CARS source. For this purpose, however, an ultra-short pulse laser system with a pulse width of 30 fs is needed, which is not easily maintained in practical experimental operation. Therefore, we sought a simpler way to generate a favorable SC for CARS applications. The simulation results are shown in figure 9, where we can see that the SC generated by a PCF with two ZDWs is also quite good for CARS applications when the laser pulse width is 300 fs, as shown in figure 9(c).
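Before moving on, the spectrogram of equation (3.2) is straightforward to compute numerically. The sketch below gates a synthetic chirped pulse (a stand-in for the SC; all pulse parameters are purely illustrative) with a short reference pulse and builds the XFROG trace column by column:

```python
import numpy as np

# XFROG spectrogram of a linearly chirped test pulse, eq. (3.2).
# All pulse parameters are synthetic/illustrative.
nt = 1024
t = np.linspace(-2e-12, 2e-12, nt)               # time axis [s]
dt = t[1] - t[0]

chirp = 5e24                                      # assumed chirp rate [1/s^2]
E = np.exp(-(t / 4e-13)**2) * np.exp(1j * chirp * t**2)    # SC stand-in
gate = lambda tau: np.exp(-((t - tau) / 5e-14)**2)         # short gate pulse

delays = np.linspace(-1e-12, 1e-12, 200)
trace = np.array([np.abs(np.fft.fftshift(np.fft.fft(E * gate(tau))))**2
                  for tau in delays])             # one spectrum per delay

freqs = np.fft.fftshift(np.fft.fftfreq(nt, d=dt))
# For a linear chirp, the ridge of the trace moves linearly with delay.
ridge = freqs[np.argmax(trace, axis=1)]
print(f"instantaneous frequency range: {ridge.min():.2e} to {ridge.max():.2e} Hz")
```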
Broadband CARS spectroscopy and microscopy
In traditional CARS microscopy, two or three ultra-short laser pulses with narrow line-widths and different frequencies are used as excitation beams. This permits high-sensitivity imaging based on a particular molecular bond, so-called single-frequency CARS. For a mixture with various or unknown components, however, the signal of a single active Raman bond is not adequate to pick out the molecules of interest. A broadband, or even complete, molecular vibrational spectrum is beneficial for obtaining accurate information on the various chemical components. Although this can be achieved by sequentially tuning the frequency of the Stokes beam, doing so is time-consuming and impractical for some applications. This problem can be circumvented by using multiplex CARS (M-CARS) or broadband CARS spectroscopy, in which a wider band is detected simultaneously.
Introduction to broadband CARS
M-CARS spectroscopy, in which part of the CARS spectrum of a sample is obtained simultaneously, was first demonstrated by Akhmanov et al. [34]. In M-CARS, a broadband laser beam is used as the Stokes beam to provide the required spectral range, while a narrow-line-width laser beam is used as the pump and probe beam and determines the spectral resolution of the system. Multiple molecular vibrational modes of the sample can be resonantly enhanced and the corresponding CARS signals detected simultaneously; the energy diagram of M-CARS is shown in figure 10. In earlier work, a narrowband and a broadband dye laser were used for the pump/probe and Stokes beams, respectively [36-38]. Recent progress in wavelength-tunable ultra-short pulse lasers has given powerful momentum to the development of M-CARS, and M-CARS micro-spectroscopy has been developed for fast spectral characterization of microscopic samples [35,99,100]. Because of the line-width limitation of the lasers used, however, M-CARS is still unable to cover as wide a molecular vibrational spectrum simultaneously as required. With the progress of SC generation techniques [85-87], and especially with the advent of the PCF [39], a much wider M-CARS spectrum can be obtained simultaneously by broadening the spectral range of the Stokes pulses with a nonlinear optical fiber, such as a tapered optical fiber [101] or a PCF [43,44,102]. By using a specially designed SC source, the simultaneously detectable spectral range of M-CARS spectroscopy and microscopy is greatly widened; this can be called broadband CARS. The wider simultaneously detectable spectral range makes it possible to quickly distinguish various components and to monitor slight variations in a mixture in real time [103-105]. At the same time, a broadband CARS system based on SC is simpler and cheaper.
Suppression of NRB noise in broadband CARS with SC
In M-CARS, the NRB noise cannot be avoided. Many of the methods for suppressing it in single-frequency CARS cannot easily be applied to broadband CARS with SC, because of the complex polarization and phase of the various spectral components of the SC, as shown in section 3. The NRB noise can be eliminated with a numerical fitting method by treating it as a reference signal, but this requires the Raman spectra of the samples in advance [106].
As presented in the preceding section, the time-resolved detection method can effectively eliminate the NRB noise by introducing a temporal delay between the pump/Stokes pulses and the probe pulse so as to separate the resonant and nonresonant signals in time. In T-CARS, three laser pulses with frequencies ω_P, ω_P' and ω_S are used as the pump, probe and Stokes pulses, respectively. The generation of the CARS signal can be described in three phases [107]. In the first phase, the inherent molecular vibration of the active Raman mode is driven by the simultaneous pump and Stokes pulses and is resonantly enhanced when Ω_R = ω_P − ω_S. The resonantly enhanced molecular vibration obeys [108]
dQ_v/dt + Q_v/T₂ ∝ E_P E_S*,    (4.1)
where Q_v is the amplitude of the molecular vibration driven by the incident optical fields and T₂ is the dephasing time of the resonantly enhanced vibrational state. When simultaneous ultrashort laser pulses are used as the pump and Stokes pulses, the change of Q_v with time is shown in figure 11: Q_v increases during the incident laser pulses and reaches its maximum just as the pump and Stokes pulses end. In the second phase, after the incident laser pulses have passed, the resonantly enhanced molecular vibration rapidly returns to its original state in what can be regarded as a free relaxation process. Equation (4.1) can then be rewritten as [107]
dQ_v/dt + Q_v/T₂ = 0,    (4.2)
whose solution is [107]
Q_v(t) = A e^(−t/T₂),    (4.3)
where A is an integration constant. Q_v thus decays exponentially with time immediately after the disappearance of the pump and Stokes pulses. Assuming a dephasing time of 10 ps, the relaxation of Q_v is shown in figure 12.
Fig. 11. Amplitude of Q_v versus time when the vibration of the active Raman mode is resonantly enhanced by the incident pump and Stokes pulses [107].
Fig. 12. Free relaxation of a molecular vibrational mode immediately after the disappearance of the pump and Stokes laser pulses [107].
In the third phase, the probe pulse reaches the focus with delay time t_D and is modulated by the resonantly enhanced molecular vibration. A signal field at the anti-Stokes frequency is generated, proportional to the product of the probe field and the remaining vibrational amplitude Q_v(t_D) (equation (4.4) of [107]); the nonresonant contribution, by contrast, disappears together with the end of the pump and Stokes pulses. When the phase-matching condition Δk = 0 is satisfied, the change of the CARS signal intensity with time is as shown in figure 13.
Fig. 13. Intensity of the CARS signal as a function of time [107].
In the T-CARS method, the resonant and nonresonant components have different temporal response characteristics. In order to separate the resonant and nonresonant contributions effectively while avoiding intensity loss of the CARS signal, the pulse width of the simultaneous pump and Stokes pulses should be as short as possible and the rising edge of the probe pulse should be as steep as possible. Recently, broadband T-CARS spectroscopy and microscopy with SC has developed rapidly [43,44]; its energy level diagram is shown in figure 14. In broadband T-CARS, a well-designed SC provides the pump and Stokes pulses and a temporally delayed laser pulse is used as the probe. The simultaneously detectable spectral range is determined by the temporally overlapped spectral range of the SC, and the spectral resolution by the line-width of the probe pulse.
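The three phases above translate directly into a simple rate-equation sketch. The code below (with illustrative pulse widths and T₂, not the chapter's experimental values) integrates equation (4.1) for Gaussian pump/Stokes driving and shows that a probe delayed by a few hundred femtoseconds still sees the exponentially decaying resonant vibration, while the instantaneous nonresonant response has already vanished:

```python
import numpy as np

# T-CARS toy model: driven, damped vibrational amplitude vs probe delay.
# Pulse widths and T2 are illustrative assumptions.
dt = 1e-15                              # time step [s]
t = np.arange(-1e-12, 5e-12, dt)
tau_pulse = 50e-15                      # pump/Stokes pulse duration
T2 = 1e-12                              # vibrational dephasing time

drive = np.exp(-(t / tau_pulse)**2)     # |E_P E_S*| envelope
Q = np.zeros_like(t)
for i in range(1, t.size):              # Euler integration of eq. (4.1)
    Q[i] = Q[i - 1] + dt * (-Q[i - 1] / T2 + drive[i - 1])

nrb = drive / drive.max()               # NRB follows the pulses instantaneously
for t_D in (0.0, 200e-15, 500e-15):     # probe delays
    i = np.argmin(np.abs(t - t_D))
    print(f"delay {t_D*1e15:4.0f} fs: resonant ~{Q[i]/Q.max():.2f}, "
          f"NRB ~{nrb[i]:.1e}")
```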
With improvement of the temporal-spectral distribution of the SC, the simultaneously detectable spectral range of the system can be further extended; this is called ultra-broadband T-CARS and is discussed in detail in the next section.
Ultra-broadband T-CARS spectroscopy with SC generated by PCF [84,96]
With broadband T-CARS spectroscopy we can obtain more characteristics of the sample: not only the vibrational spectra reflecting the molecular structure and composition, but also the dephasing times of the various molecular vibrational modes, which reflect the molecular response to the external micro-environment. These are especially valuable for studying the complicated interaction processes between molecules and their micro-environment, such as solute-solvent interactions [108,109], molecular dynamics [110-114], supramolecular structures [115] and excess energy dissipation, in the fields of biology, chemistry and materials science [64,116-118]. The principle of broadband T-CARS spectroscopy was presented in section 4.2. As discussed in sections 3 and 4, the simultaneously detectable spectral range of broadband T-CARS spectroscopy is limited by the simultaneously generated spectral range of the SC and by its continuity. An ultra-broadband T-CARS spectroscopy based on an optimized SC has been developed to obtain, in a single measurement, the CARS signals of the various molecular vibrational modes together with the Raman free induction decays (RFID) of these modes [43,84]. The schematic of the broadband T-CARS spectroscopy is shown in figure 15. A femtosecond pulse from a mode-locked Ti:sapphire laser oscillator (Mira 900, Coherent) is split into two parts by a beam splitter. One beam, used as the seed pulse, is introduced into a PCF with a geometric length of 180 mm and a ZDW of 850 nm. After passing through a long-pass filter, the remaining spectral components of the SC are used as the pump and the Stokes. The other beam is used as the probe pulse after passing through a narrow band-pass filter. The two beams are collinearly introduced into a microscope and tightly focused into the sample with an achromatic microscope objective. The CARS signals generated in the forward direction, passing through a short-pass filter, are collected with the same microscope objective and detected by a fiber spectrometer. The delay time between the SC pulse and the probe pulse can be accurately adjusted with a delay line.
Fig. 15. Schematic of the ultra-broadband T-CARS system. BS, beam splitter; Iso, optical isolator; NL, non-spherical lens; PCF, photonic crystal fibre; BC, beam combiner; MO1-3, microscopy objective; BPF, narrow band-pass filter; LPF, long-pass filter; SPF, short-pass filter [84].
With the ultra-broadband T-CARS spectroscopy, the time-resolved measurement is performed by adjusting the delay time between the SC pulse and the probe pulse step by step [84]. The time-resolved CARS spectra and the CARS spectra at specific delay times for pure benzonitrile and for a mixture solution are shown in figure 16. The molecular vibrational spectra of pure liquid benzonitrile in the range 380-4000 cm⁻¹ can be obtained simultaneously without any tuning of the system or its characteristics, and the NRB noise can be effectively suppressed by tuning the delay time.
For pure benzonitrile, the obvious peaks at wavenumbers of 1016 cm⁻¹, 1190 cm⁻¹, 1608 cm⁻¹, 2248 cm⁻¹ and 3090 cm⁻¹ correspond to the C-C-C trigonal breathing, C-H in-plane bending, C-C in-plane stretching, C≡N stretching and C-H stretching vibrational modes, respectively [119]. In the mixture, the benzonitrile peaks at 1016 cm⁻¹, 2248 cm⁻¹ and 3090 cm⁻¹ are clearly visible [120]; the other peaks correspond to the typical molecular vibrational modes of methanol and ethanol. The various components of the mixture can thus be distinguished easily and accurately. The spectral resolution, which depends on the line-width of the probe pulse and on the resolution of the spectrometer, is 14 cm⁻¹ in this case [84]. By extracting the time evolution of the CARS signals of the molecular vibrational modes for pure liquid benzonitrile and for the mixture, the RFID processes of the various molecular vibrational modes can be measured at the same time. The dephasing times of the various molecular vibrational modes are obtained by fitting the data to a single exponential function:
I(τ) = A₀ e^(−2τ/T),    (4.5)
where T is the vibrational dephasing time of each molecular vibrational mode, A₀ is a constant and τ is the delay time. The normalized intensities of five typical peaks of pure benzonitrile, at the wavenumbers of 1016 cm⁻¹, 1190 cm⁻¹, 1608 cm⁻¹, 2248 cm⁻¹ and 3090 cm⁻¹, are plotted one by one as functions of τ and fitted to equation (4.5) in figure 17(a)-(e), respectively [84]. In the benzonitrile-methanol-ethanol mixture solution, benzonitrile is regarded as the target molecule. The normalized intensities of its three typical peaks, at the wavenumbers of 1016 cm⁻¹, 2248 cm⁻¹ and 3090 cm⁻¹, are plotted one by one as functions of τ and fitted to equation (4.5) in figure 18(a)-(c), respectively [84]. The experimental results show that the intensities of the CARS signals of the different molecular vibrational modes attenuate exponentially with delay time over a large dynamic range. By fitting the intensity data of the CARS signals to the single exponential function for the modes at different wavenumbers, half of the vibrational dephasing time, T/2, can be worked out, as shown in figure 17; the values are consistent with previously published data [43,121-123]. In the benzonitrile-methanol-ethanol mixture solution, however, the experimental results show that the influence of the solvent on the properties of the solute is reflected not in the Raman peak positions but in the variation of the vibrational dephasing times of the different molecular vibrational modes.
Simultaneously obtaining the complete molecular vibrational spectra
As discussed above, the simultaneously detectable spectral range of ultra-broadband T-CARS with SC depends on the quality of the SC. It is important to obtain simultaneously both the complete molecular vibrational spectrum and the dephasing times of the various molecular vibrational modes of the sample. The complete spectrum is very useful for effectively and accurately distinguishing various kinds of components and for understanding the mechanisms of chemical reactions in a dynamic process.
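A fit of the kind used for figures 17 and 18 takes only a few lines with scipy. The synthetic decay below (assumed T and noise level, not the measured benzonitrile data) recovers the dephasing time from noisy intensity samples via equation (4.5):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic RFID trace and single-exponential fit, eq. (4.5).
rng = np.random.default_rng(0)
T_true = 2.4e-12                        # assumed dephasing time [s]
tau = np.linspace(0.2e-12, 6e-12, 40)   # probe delays after the NRB has gone
I = np.exp(-2 * tau / T_true) * (1 + 0.05 * rng.standard_normal(tau.size))

model = lambda tau, A0, T: A0 * np.exp(-2 * tau / T)
(A0_fit, T_fit), _ = curve_fit(model, tau, I, p0=(1.0, 1e-12))
print(f"fitted T = {T_fit*1e12:.2f} ps  (T/2 = {T_fit/2*1e12:.2f} ps)")
```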
The dephasing times, in turn, are very helpful for explaining both solvent dynamics and solute-solvent interactions in the fields of biology, chemistry and materials science. The question is whether this goal can be reached in the near future by optimizing the temporal-spectral distribution of the SC. Our answer is positive. As is well known, existing molecules have Raman wavenumbers in the range from about tens of cm⁻¹ to 5000 cm⁻¹, which means that the simultaneously generated Stokes bandwidth should be not less than 350 nm. As shown in section 3, the bandwidth of the simultaneously generated SC can be greater than 400 nm; it is therefore very promising to achieve a label-free microscopic imaging technique with high contrast and chemical specificity based on the simultaneously obtained complete molecular vibrational spectrum.
Methods of improving the spatial resolution of CARS microscopy
As is well known, there is a theoretical limit to the spatial resolution of any far-field optical microscope because of light diffraction. Ernst Abbe defined the diffraction limit as [124]
d = λ/(2n sinθ) = λ/(2NA),    (5.1)
where d is the minimum resolvable size, λ is the wavelength of the incident light, n is the refractive index of the medium being imaged in, θ is the aperture half-angle of the lens, and NA is the numerical aperture of the lens. For an optical microscope, d is thus the theoretical limit of spatial resolution: sample features smaller than approximately half the wavelength of the light used can never be resolved. In recent years, to meet the requirements of the life and materials sciences, several ways have been found to overcome the optical diffraction limit and obtain sub-diffraction-limited spatial resolution. In fluorescence microscopy, the success of the resolution enhancement techniques relies on the ability to control the emissive properties of fluorophores with a suitable optical beam. The most important developments in breaking the diffraction barrier are sub-diffraction-limited fluorescence imaging techniques such as photoactivated localization microscopy (PALM) [125], stochastic optical reconstruction microscopy (STORM) [126] and stimulated emission depletion (STED) microscopy [127,128], which have opened up remarkable prospects for imaging sub-cellular structures and biomolecular movements and interactions. As a label-free nonlinear imaging technique, CARS microscopy offers higher spatial resolution (about 300 nm lateral resolution) than traditional linear optical microscopy, but it is still a diffraction-limited imaging technique. How to achieve sub-diffraction-limited CARS microscopy has therefore become an attractive topic worldwide. Compared with the development of fluorescence nanoscopy, methods for breaking the diffraction limit in CARS microscopy are still at the stage of theoretical research. In 2009, Beeker et al. first presented a theoretical route to sub-diffraction-limited CARS microscopy [129]. With density-matrix calculations, they found that the molecular vibrational coherence in CARS can be strongly suppressed by using an annular mid-infrared laser to control the pre-population of the corresponding vibrational state. The energy level diagram is shown in figure 19.
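Before continuing with this scheme, equation (5.1) can be put in numbers for the scale of CARS excitation; the wavelength and objective below are assumed typical values, not parameters from [129]:

```python
# Abbe diffraction limit, eq. (5.1); assumed typical CARS parameters.
lam = 800e-9     # pump wavelength [m]
NA = 1.2         # water-immersion objective

d = lam / (2 * NA)
print(f"diffraction-limited spot: {d*1e9:.0f} nm")   # ~333 nm, cf. ~300 nm lateral
```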
In the scheme of Beeker et al., the emission of the generated CARS signal in the annular region of the point spread function could thereby be significantly suppressed and the spatial resolution improved considerably.
Fig. 19. Energy level diagram of CARS extended with an additional level 4. Energy levels 1-4 are the ground state, vibrational state, excited vibrational state and control state, respectively. ω_P, ω_pr, ω_S and ω_ctrl are the frequencies of the pump, probe, Stokes and control lasers [129].
In the same year, Alexei Nikolaenko et al. presented their CARS interferometric theory [130]. In this theoretical work, a stabilized, phase-adjustable interferometer was used to achieve nearly complete interference between a local oscillator and the pump- and Stokes-induced CARS radiation. The schematic of the CARS interferometry setup is shown in figure 20. Their analysis showed that the energy loss in the anti-Stokes channel is accompanied by an energy gain in the pump and Stokes channels. This implies that CARS interferometry provides a controllable switching mechanism for the anti-Stokes radiation from the focal volume, which might be a possible technique for improving the spatial resolution of CARS microscopy.
Fig. 20. Schematic of the CARS interferometry. ES, error signal; LO, unit for generating the local oscillator; WP, wedged plate; BS, 50-50 beam splitter; DM, dichroic mirror; FB, optical feedback signal; MO, microscope objective; BPF, bandpass filter; Cond., condenser; PMT, photomultiplier [130].
In 2010, Hajek et al. presented a theoretical analysis and simulation of a wide-field CARS microscope with sub-diffraction-limited resolution [131]. The configuration and a simulation result are shown in figure 21. In this method, two coherent pump beams interfere in the sample plane, forming a standing wave with variable phase. The numerical simulation showed that a super-resolved image with three times better lateral resolution could be obtained by image processing based on standing-wave frequency theory [131]. All the approaches discussed above open up the possibility of achieving sub-diffraction-limited CARS microscopy. Unfortunately, they can only be used in single-frequency CARS microscopy based on the signal of a single molecular vibrational mode.
As discussed above, successful methods for breaking the theoretical diffraction limit of CARS microscopy depend on controlling the emissive properties of the useful signals in the focus, but the methods suggested above can only deal with single-bond signals. By studying the CARS process with quantum optics theory, we presented a method [132] for breaking the diffraction limit that, unlike the above methods, is effective for ultra-broadband T-CARS microscopy. In our theoretical model, all the incident laser fields, the generated signal field and the material system are described with quantum mechanics theory. In the CARS process, the first light-matter interaction involves resonant enhancement of all active molecular vibrational modes whose inherent vibrational frequencies equal the frequency differences of the pump and Stokes fields. The resonantly enhanced molecular vibrations exist in quantized form as phonons, whose numbers are equal to the numbers of generated Stokes photons, respectively.
When a probe field propagates through the matter, its photons interact with the generated phonons: photons at the anti-Stokes frequencies are generated while phonons are annihilated at the same time.
Phonon depletion CARS microscopy
Based on this fully quantized picture of the CARS process, we presented a phonon depletion CARS (PD-CARS) technique that introduces an additional probe beam with a frequency different from that of the probe beam at the center of the focus. When the pump and Stokes pulses reach the focus simultaneously, phonons are generated. The additional probe beam, shaped into a doughnut profile at the focus with a phase mask, reaches the focus slightly earlier than the central probe beam. The wavelengths of the anti-Stokes signals generated in the peripheral region therefore differ from those generated at the center of the focus and can easily be separated with a suitable interference filter. In this way, the spatial resolution of ultra-broadband T-CARS microscopy can be greatly improved. The point spread function (PSF) is defined in [132] in terms of I_P1^max and I_det, the intensities of the phonon field at the center of the focus and of the additional probe field for phonon depletion in the annular region, respectively (equation (5.2) of [132]). From equation (5.2) we know that the spatial resolution of CARS microscopy is improved by increasing the intensity of the additional probe beam. The simulated PSF is shown in figure 22: when I_P1^max is fifty times I_det, the spatial resolution of the ultra-broadband T-CARS microscope reaches 41 nm.
Conclusions and prospects
In this chapter we have mainly introduced a noninvasive, label-free imaging technique: ultra-broadband T-CARS spectroscopy and microscopy with SC generated by PCF. We described the mechanisms of the Raman scattering and T-CARS processes with classical and quantum mechanical theory. The CARS signal, with its much greater strength and well-defined direction, originates from the coherent resonant enhancement between the incident light and the molecular vibrations. In order to quickly and accurately distinguish different kinds of molecules in a complex system, such as a live cell, a method for simultaneously detecting ultra-broadband CARS signals without NRB noise has to be developed. On the basis of the theoretical analysis and simulation of SC generation in a PCF, a satisfactory SC source for obtaining ultra-broadband, even complete, CARS spectra of a specimen can be achieved by optimizing the parameters of the PCF and the ultra-short pulse laser together with the other experimental conditions. At the same time, the NRB noise can be effectively suppressed over a broad spectral range with the time-resolved method. The study of methods for obtaining sub-diffraction-limited spatial resolution is still at the stage of theoretical research, and some original techniques have been presented in this chapter. The PD-CARS technique provides a possible route to the realization of ultra-broadband T-CARS microscopy with sub-diffraction-limited spatial resolution, which will probably become an attractive imaging method in biology, medicine and the life sciences in the near future.
Study on Strategies to Implement Adaptation Measures for Extreme High Temperatures into the Street Canyon
The purpose of this study is to evaluate the potential for using spaces that integrate the roads and sidewalks of the street canyon as human-centered spaces, and to investigate appropriate measures to improve the thermal environment for pedestrians and visitors in these spaces. Based on the spatial distribution of SET* throughout the day, the candidate human-centered street spaces are north-south streets with restricted widths and the south sidewalks of east-west streets. Spatiotemporal distributions of SET* were calculated for water sprinkled on the road surface in the street canyon and for a water surface, sunshades and trees introduced into the street canyon. Assuming people walk or stay on the water surface, the MRT decreases and SET* stays below 31.5 °C at all times, so if a continuous supply of water is guaranteed and people can approach the water surface, the water surface can be expected to have a significant effect anywhere at any time. On the east-west street, shading by sunshades and trees falls on the lanes at all times, allowing pedestrians moving along the lanes to pass through shaded areas in a periodic cycle. On the north-south street, the time requiring countermeasures is limited to around noon, so a measure is effective even if it shades the target lanes only around noon.
Introduction
Based on the experience of the extreme high temperatures (heatwaves) of recent summers, Kobe city has been studying and implementing several adaptation measures for extreme high temperatures. The authors [1] evaluated the effects of watering roads, sunshades with mist spray, water surfaces, watering pavements, and mist spray in a park on the thermal environment, using the thermal environment index Standard New Effective Temperature (SET*) in demonstration experiments. As a demonstration of cool spots in outdoor spaces, the effect of fractal sunshades with fine mist spray was measured in the plaza in front of a famous department store and on the north-south street in front of a central train station in the summer of 2019. Two vehicles with water tanks sprinkled 32 tons of water on 25.8 ha of downtown streets every day except rainy days in the summer of 2020. By watering the roadway when the surface temperature was above 40 °C, the maximum reductions in surface temperature, mean radiant temperature (MRT) and SET* were about 10 °C, 1.9 °C and 0.8 °C, respectively. When the incident solar radiation on the human body was shielded by the sunshade, the reductions in MRT and SET* were about 15 °C and 7 °C, respectively. The reduction in surface temperature by the water surface in a park was about 15 °C, which was larger than that by watering the pavement. However, the reductions in MRT and SET* at the center of a sidewalk 3.75 m away from the water surface were only about 0.2 °C and 0.1 °C, respectively. The air temperature decrease and relative humidity increase in the vicinity of the mist outlets were about 1 °C and 1%, respectively. When the human body got wet, the decreases in MRT and SET* were large, ranging from 2.9 to 19.4 °C and from 1.2 to 8.2 °C, respectively. The improvement in human thermal sensation varies depending on the distance from the countermeasure technology to the human body.
The Japanese Ministry of the Environment developed the "Heat countermeasure guideline in the city" [2], which includes basic, specific adaptation measures, and technical sections. The Japanese Ministry of Land, Infrastructure, Transport and Tourism is promoting various initiatives for the reconstruction and utilization of street space [3]. It is recommended to reconstruct the street space from a car-centered space to a "human-centered" space. The integrated use of roads and sidewalks as places where people can gather, enjoy recreation, and engage in a variety of activities will be promoted in the future. The development of a countermeasure strategy for extreme high temperatures in the street canyon is urgently needed to promote this policy. Various strategies have been developed to mitigate the negative effects of extreme temperatures, including solar shading, urban ventilation, and mist spray [4]. Several studies have been implemented in various countries focusing on effective countermeasures against heat waves [5][6][7][8][9][10]. The issues of thermal sensation evaluation have been organized and specific measures have been discussed [11][12][13][14]. Studies have also been conducted on numerical models to evaluate thermal sensation [15][16][17][18]. Using those models, the effectiveness of various techniques for heat mitigation has been evaluated [19][20][21][22][23][24][25]. According to a report from Karlsruhe city [26], it is recommended that appropriate adaptation measures be introduced in "hot spots" where temperatures are high. Several typical urban districts with the potential for adaptation measures to be promoted in the future are highlighted. Appropriate strategies should be applied according to the characteristics of each location. Urban climate maps are an effective tool for identifying where countermeasures are needed as well as for assessing which adaptation techniques should be applied in each location [27,28]. The authors [29] used detailed calculations with GIS building shape data to derive a thermal environment map of the street canyon and examined its effectiveness for implementing extreme temperature countermeasures. The influence of MRT rather than wind velocity dominates the SET* distribution on a typical summer day in the street canyon, and solar radiation shading is more effective in suppressing the daytime SET* rise than land cover improvement. In the previous study, evaluation and improvement of the thermal environment were discussed for pedestrians on the sidewalk. In this study, we assume that the sidewalk and the roadway are used as a unified human-centered space and discuss the possibility of use and the necessity of improvement from the viewpoint of the thermal environment for pedestrians and visitors. The purpose of this study is to evaluate the potential for using the spaces integrating the roads and sidewalks in the street canyon as human-centered spaces, and to investigate more appropriate measures to improve the thermal environment for pedestrians and visitors in these spaces.

Calculation Methods and Results

The calculation method is the same as in previous studies by the authors [29]. Surface temperatures on the ground and walls are calculated based on a surface heat budget equation [29]. MRT is calculated from the incident solar and infrared radiation on the human body, which in turn is computed from the surrounding surface temperatures and the view factors between the human body and the surrounding surfaces.
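The heat budget equation itself does not survive in this text. As a sketch only, a generic surface heat budget of the kind commonly used in such street-canyon models (not necessarily the authors' exact formulation) balances absorbed shortwave and longwave radiation against sensible, latent, and conduction fluxes, and MRT then follows from view-factor-weighted surface temperatures plus the solar load on the body:

$$(1-\alpha)\,S + \varepsilon\,L^{\downarrow} - \varepsilon\,\sigma T_s^{4} = H + \ell E + G$$

$$\mathrm{MRT} \approx \left(\sum_i F_i\,T_{s,i}^{4} + \frac{a_{\mathrm{sol}}}{\varepsilon_b\,\sigma}\,S_b\right)^{1/4}$$

where \(\alpha\) is the surface albedo, \(S\) the incident solar radiation, \(L^{\downarrow}\) the incoming longwave radiation, \(T_s\) the surface temperature, \(H\), \(\ell E\), and \(G\) the sensible, latent, and conduction heat fluxes, \(F_i\) the view factor from the body to surface \(i\), \(a_{\mathrm{sol}}\) the solar absorption rate of the human body (0.5 in this study), \(\varepsilon_b\) its emissivity, and \(S_b\) the solar radiation incident on the body. All symbols here are generic assumptions, not taken from the paper.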
SET* is calculated by integrating the wind velocity and MRT distributions, given the air temperature, relative humidity, a clothing amount, and a metabolic rate of the human body.

Objective Area

Building shape and classification of the objective road are shown in Figure 1. The objective area is divided into a 2 m mesh, and surface materials are set for each mesh. Asphalt, concrete block, and grass are set for the surfaces in the street canyons. The crown width and tree height of each street tree are set by a field survey and Google Earth. Kobe city is located facing Osaka Bay. The climate is classified as warm and temperate; according to Köppen and Geiger, it is classified as Cfa. The average annual temperature is 16.7 °C and the average annual rainfall is 1216 mm.

Calculation Results

Time changes of the ground surface temperature, MRT, and SET* are calculated for a typical sunny summer day, 5 August 2020. Air temperature and relative humidity are taken from the measurement data of the Kobe local meteorological observatory located near the objective area. It is assumed that the solar absorption rate, clothing amount, and metabolic rate of the human body are 0.5, 0.6 clo, and 1.0 met, respectively, and that the solar transmittance of the trees is 0.06. The distribution of SET* at 1.5 m height at 13:00 on 5 August 2020 is shown in Figure 2. The ground surface temperature is low in the areas shaded by the surrounding buildings and trees. Since the daily integrated incident solar radiation is small in streets shaded by street trees, the surface temperature is low on both east-west and north-south roads. MRT is low in the median strips of some streets and in the central park, where the incident solar radiation and the surrounding surface temperatures are low. SET* is more affected by MRT than by wind velocity. The calculation results were validated against measurement results obtained on sunny summer days [29].

Spatiotemporal Distribution of SET* in Road Space

Diurnal variations of the spatial distribution frequency of SET* at 1.5 m height on the east-west road and the north-south road on 5 August 2020 are shown in Figures 3 and 4. Based on the relationship between SET* and thermal sensation by Ishii et al. [31], it is uncomfortable if SET* exceeds about 30 °C.
It is uncomfortable from 9:00 to 16:00 on the east-west road, while the uncomfortable period is limited to 11:00 to 14:00 on the north-south road. It is uncomfortable on all roadways and sidewalks from 11:00 to 14:00 on the north-south road, while there are locations on the south side sidewalk of the east-west road that are not uncomfortable even from 9:00 to 16:00. Shaded areas can be found on the south side sidewalk of the east-west road even from 9:00 to 16:00, due to the buildings on the south side of the road. However, the north side sidewalk of the east-west road has a severe thermal environment similar to that of the roadway. The overall trend is the same for wide roads with many lanes, with only an increase in the number of lanes with severe thermal environments. The same trend as on the east-west road is observed on boulevards whose orientation is slightly shifted from east-west, and at the intersection where the buildings on the south side are low. As for the possibility of human-centered use of road space, a north-south road with a limited width is a candidate, and a sidewalk on the south side of an east-west road is also envisioned.

Effects of Water Sprinkling

When water was sprinkled on the roads, diurnal variations of the spatial distribution frequency of SET* at 1.5 m height on the east-west road and the north-south road on 5 August 2020 were calculated and are shown in Figure 5. Based on the experimental results with sprinkler vehicles, the calculations were carried out with an evaporation efficiency of 0.15. When the surface temperature is high before water sprinkling, a surface temperature reduction greater than 10 °C is confirmed, and SET* is reduced by up to 2 °C. Although the SET* reduction around noon is large, water sprinkling does not lead to comfortable conditions because the conditions before sprinkling are quite uncomfortable.
Sprinkling water in the evening may increase the number of comfortable spaces.

Effects of Water Surface

When the road surface is covered by a water layer, diurnal variations of the spatial distribution frequency of SET* at 1.5 m height on the east-west road and the north-south road on 5 August 2020 were calculated and are shown in Figure 6. Based on the experimental results with a water surface in a park, it is assumed that the water is supplied at a constant temperature of 32 °C. If people are assumed to walk or stay on the water surface, SET* is less than 31.5 °C at any time, due to the lower MRT. If a continuous water supply can be guaranteed and people can approach the water surface, the water surface can be expected to have a significant effect at any time and place. Water sprinkling works through evaporative cooling, whereas the water surface works through the supply of cooler water.
Figure 6. Diurnal variation of spatial distribution frequency of SET* at 1.5 m high on east-west road and north-south road on 5 August 2020, when the road surface is covered by a water layer. (a) north side roadway, (b) south side roadway, (c) west side roadway, (d) east side roadway.

Effects of Sunshade

When sunshades were installed, diurnal variations of the spatial distribution frequency of SET* at 1.5 m height on the east-west road and the north-south road on 5 August 2020 were calculated and are shown in Figure 7. Sunshades of 2 m × 4 m are installed at 10 m intervals along a 3.5 m wide lane, so the ratio of sunshade area to lane area is 23%. SET* decreases by up to 6 °C around noon. On the east-west road, shade occurs along the lane at any time, so that pedestrians moving along the lane can periodically pass through the shade. On the north-south road, the time requiring countermeasures is limited to around noon, so the sunshades are effective as long as shade occurs in the target lane around noon, even if it does not at other times. Shade also occurs on the sidewalks of the north-south road at the time requiring countermeasures.

Effects of Street Tree

When street trees are installed, diurnal variations of the spatial distribution frequency of SET* at 1.5 m height on the east-west road and the north-south road on 5 August 2020 were calculated and are shown in Figure 8. A cylindrical canopy with a radius of 2 m and a height of 7 m is set at 3 m to 10 m above the ground surface. When trees are installed at 15 m intervals, the ratio of canopy area to lane area is 23%, the same as for the sunshades. SET* decreases by up to 8.5 °C around noon. On the east-west road, tree shade also occurs along the lane at any time, so that pedestrians moving along the lane can periodically pass through the shade. On the north-south road, the time requiring countermeasures is limited to around noon, so the trees are effective as long as shade occurs in the target lane around noon, even if it does not at other times.
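A quick arithmetic check of the two 23% coverage figures quoted above, using only the dimensions stated in the text (a sketch; applying the stated 3.5 m lane width to both layouts):

```python
import math

lane_width = 3.5                      # m, stated lane width

# Sunshades: 2 m x 4 m panels at 10 m intervals along the lane
sunshade_ratio = (2 * 4) / (10 * lane_width)

# Street trees: cylindrical canopy of radius 2 m at 15 m intervals
tree_ratio = (math.pi * 2**2) / (15 * lane_width)

print(f"sunshade coverage: {sunshade_ratio:.1%}")  # ~22.9%, rounded to 23% in the text
print(f"tree coverage:     {tree_ratio:.1%}")      # ~23.9%, quoted as 23% in the text
```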
Figure 7. Diurnal variation of spatial distribution frequency of SET* at 1.5 m high on east-west road and north-south road on 5 August 2020, when sunshades are installed. (a) sunshade installation condition, (b) north side roadway, (c) diurnal shade change, (d) west side roadway, (e) east side roadway, (f) diurnal shade change, (g) west side sidewalk, (h) east side sidewalk.

Discussion

The potential for using the spaces integrating the roads and sidewalks in the street canyon as human-centered spaces, and the effectiveness of water sprinkling, a water surface, sunshades, and street trees as heat mitigation technologies, were evaluated. Moreover, taking into consideration the results of our previous studies that evaluated the effectiveness of mist spray and other measures [1], we discussed future countermeasure strategies for extreme heat with Kobe city government officers. Since the measures are intended for street spaces, green and cool roofs and green walls are not considered. Based on the characteristics of the target space, each technology should be selected appropriately in consideration of the following points. An overview of our discussions is presented below; note, however, that it is based on social perceptions rather than scientific evidence. - Regarding mist spray, is it acceptable for the mist to wet the human body? It may be acceptable at parks where leisure is the main purpose, but may not be acceptable at bus stops where business is the main purpose. Kobe City used water from a well for the sprinkling experiment, and plans to use spring water from Mt. Rokko, which is located just north of the city center. In a city like Kobe, which has water resources in the vicinity of mountains or the sea, water-based countermeasure technologies are a viable option. It is important to note that we are not recommending water-based countermeasure technologies for every city; the administrators of each city should select available technologies such as mist spray, a water surface, watering the pavement, watering the road, street trees, sunshades, etc. In many cities around the world, using water may not be suitable from the viewpoint of water resources. This study contributes by presenting several possible countermeasure technologies.
Conclusions

For the case where street canyons are used as a human-centered space integrating sidewalks and roadways, we discussed the possibility of their use and the need for improvement from the perspective of the thermal environment for pedestrians and visitors. Diurnal variations in the spatial distribution frequency of SET* at 1.5 m height on east-west and north-south roads on a fine summer day were analyzed. Based on the spatial distribution of SET* throughout the day, north-south streets with restricted widths and south sidewalks on east-west streets are candidates for human-centered street space uses. Spatiotemporal distributions of SET* were calculated when water was sprinkled on the road surface in the street canyon and when a water surface, sunshades, and trees were introduced in the street canyon. When the surface temperature before watering was high, the reduction in surface temperature due to watering was more than 10 °C, and SET* was reduced by up to 2 °C. In contrast, assuming people walk or stay on the water surface, the MRT decreases, keeping SET* below 31.5 °C at any time; thus, if a continuous supply of water is guaranteed and people can approach the water surface, the water surface can be expected to have a significant impact anywhere at any time. Shading by sunshades and trees reduces SET* by up to 6 and 8.5 °C, respectively, around noon. On the east-west street, shading occurs along the lanes at any time, allowing pedestrians moving through the lanes to pass through the shaded areas on a periodic cycle. On the north-south street, the time requiring countermeasures is limited to around noon, so the measures are effective as long as shade occurs in the target lanes around noon, even if it does not at other times. We discussed future countermeasure strategies for extreme heat with Kobe city government officers. We have presented an overview of that discussion; note, however, that it is based on social perceptions rather than scientific evidence.
6,127.8
2022-06-10T00:00:00.000
[ "Mathematics" ]
D- and N-Methyl Amino Acids for Modulating the Therapeutic Properties of Antimicrobial Peptides and Lipopeptides Here we designed and synthesized analogs of two antimicrobial peptides, namely C10:0-A2, a lipopeptide, and TA4, a cationic α-helical amphipathic peptide, using non-proteinogenic amino acids to improve their therapeutic properties. The physicochemical properties of these analogs were analyzed, including their retention time, hydrophobicity, and critical micelle concentration, as well as their antimicrobial activity against gram-positive and gram-negative bacteria and yeast. Our results showed that substitution with D- and N-methyl amino acids can be a useful strategy to modulate the therapeutic properties of antimicrobial peptides and lipopeptides, including enhancing stability against enzymatic degradation. The study provides insights into the design and optimization of antimicrobial peptides to achieve improved stability and therapeutic efficacy. TA4(dK), C10:0-A2(6-NMeLys), and C10:0-A2(9-NMeLys) were identified as the most promising molecules for further studies.

Introduction

Antimicrobial peptides (AMPs) are a class of small molecules produced by numerous living organisms as part of their host innate immune response to infection [1][2][3]. AMPs derived from insects and other species have demonstrated the ability to eliminate invading pathogens, holding promise for the development of AMPs as alternatives to antibiotics [4,5]. AMP research endeavors started back in the 1980s with the discovery of insect cecropins by Hans Boman, human α-defensins by Robert Lehrer, and magainins by Michael Zasloff [6]. Over 3300 kinds of AMPs have now been found in a vast number of biological sources, ranging from microbes to plants and animals [7]. These peptides may have optimal properties for further drug development. In this regard, they can permeabilize and disrupt the bacterial membrane, regulate the immune system, exert broad-spectrum antibiofilm activity, and show a reduced propensity to select for bacterial resistance [8][9][10][11]. Although the use of AMPs has primarily been limited to topical infections, due to their relatively narrow druggability and the lack of dedicated protocols to assess pharmacodynamics, the advantages of these molecules over conventional antibiotics have attracted wide attention from the research community [12,13].

Results and Discussion

In a previous study, we reported a potent antimicrobial lipopeptide, namely C10:0-A2, a cationic decapeptide (IKQVKKLFKK) conjugated with decanoic acid (C10:0) [22]. This lipopeptide exhibited potent broad-spectrum activity against gram-positive and gram-negative bacteria, with a MIC range of 1.4 to 2.8 µM, and was able to inhibit the growth of yeast (MIC = 90.6 µM). Nevertheless, its selectivity is low due to its moderate hemolytic activity, around 50% hemolysis at 200 µM. We demonstrated that C10:0-A2 adopted an amphipathic α-helix structure in bacterial membrane mimetic vesicles but remained unstructured in the presence of eukaryotic membrane mimetic vesicles. Therefore, we suggest that the adoption of a stable secondary structure is a key factor for antibacterial activity, while the hemolytic activity is governed by the lipopeptide hydrophobicity.
In a later study, we described the design and synthesis of a cationic 12-residue α-helical amphipathic peptide, TA4, based on a pharmacophore motif of bacteriocin Pln 149 (KLFK), which resulted in a promising antimicrobial with a MIC range of 2.5 to 10.2 µM against bacteria and 40.8 µM against yeast [23]. This peptide showed moderate hemolytic activity (around 40% hemolysis at 200 µM), therefore affecting its selectivity and therapeutic window. Structurally, TA4 has high cationicity (+7), comprises 50% hydrophobic amino acids, and adopts an amphipathic α-helical secondary structure in the presence of DPPG vesicles that mimic bacterial membranes, an optimal combination to ensure membrane permeabilization. Due to the antimicrobial properties and structural features described above, these molecules have great potential as candidates for the development of alternative antimicrobials. It is widely known that natural peptides are susceptible to degradation by digestive and serum enzymes, which reduces the oral bioavailability and circulation half-life of these molecules and, consequently, diminishes their therapeutic efficacy [15]. To improve the therapeutic properties of the two aforementioned peptides, we designed and synthesized analogs in which non-proteinogenic amino acids, namely D- and N-methyl amino acids, were introduced.

Peptide Synthesis and Physicochemical Characterization

The synthetic peptide and lipopeptides containing D- and N-methyl amino acids are shown in Table 1. All the compounds were synthesized as amide peptides by solid-phase peptide synthesis (SPPS). HPLC analyses showed that the substitution with D- and N-methyl amino acids decreased the retention time (rt) (see Table 1), and thus the experimental hydrophobicity. This observation indicates that the substitution affects the interaction of the peptides and lipopeptides with the stationary phase of the column, possibly because the peptide sequence adopts a distinct conformation in a hydrophobic environment. Several studies report that substitution by D- or N-methyl amino acids affects the adoption of the secondary structure of helical peptides, which causes a decrease in the retention time when the compounds elute from HPLC columns [24,25]. Using HPLC analysis, De Vleeschouwer et al. (2017) observed that the retention time of N-methylated analogs of a cyclic lipopeptide (pseudodesmin A) significantly decreased, as did their hydrophobicity. Although N-methylation is expected to increase hydrophobicity, and thus retention time, these authors found that when N-methylation occurs at the center position of the peptide sequence, as demonstrated by 1H-NMR analyses, there is a dramatic impact on the overall conformation of the compound. In addition, the presence of an N-alkyl group in an amino acid within the peptide sequence eliminates the possibility of inter- and intra-molecular hydrogen bond formation, thus destabilizing or modifying the secondary structures that can be adopted by the peptide [26]. On the other hand, acylation confers on the peptide the capacity to self-assemble into nanostructures, resulting in a surfactant-like structure, which may affect the extent of peptide insertion and disruption of bacterial membrane integrity. In this work, the CMC of the lipopeptides was determined by conductimetry in Milli-Q water. All the lipopeptides adopted a micellar assembly, with CMC values ranging from 2.0 to 5.7 mM (Table 1).
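The CMC was determined graphically from the break in the conductivity-versus-concentration curve (see the Methods section below). A minimal sketch of that graphical procedure, with hypothetical conductivity data, is to fit straight lines to the pre- and post-micellar branches and take their intersection:

```python
import numpy as np

# Hypothetical data: specific conductivity (mS/cm) vs. concentration (mg/mL).
# Below the CMC the slope is steep; above it, micelle formation flattens the slope.
conc = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0, 14.0, 18.0])
kappa = np.array([0.10, 0.20, 0.40, 0.60, 0.86, 1.02, 1.26, 1.58, 1.90])

split = 4                                             # assumed index separating the branches
m1, b1 = np.polyfit(conc[:split], kappa[:split], 1)   # pre-micellar branch
m2, b2 = np.polyfit(conc[split:], kappa[split:], 1)   # post-micellar branch

cmc = (b2 - b1) / (m1 - m2)                           # intersection of the two fitted lines
print(f"CMC ~ {cmc:.1f} mg/mL")
```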
The CMC values of all the lipopeptides containing N-methyl amino acids were lower than that of C10:0-A2, indicating that the presence of N-methyl amino acids in the peptidyl motif facilitated self-assembly into micelles. This effect appeared to be more important when the substitution by an N-methyl amino acid was performed on amino acids located on the non-polar face of the amphipathic helix (CMC of C10:0-A2(8-NMePhe) = 2.01 mM). For all the lipopeptides, the CMC values remained above the MIC values, which could indicate that these molecules interact with the cell membrane as monomers and, consequently, that micelle formation is not necessary for antimicrobial activity. This notion is in concordance with reports in the literature on short cationic lipopeptides [27,28].

Antimicrobial Activity

Table 2 shows the antimicrobial activity of the peptides and lipopeptides against bacterial strains and yeast. The substitution of natural amino acids by N-methyl amino acids allowed us to obtain analogs with similar antibacterial activity, and in some cases greater activity than the non-substituted molecules, particularly against P. aeruginosa strain ATCC 27853. Only some analogs showed a slightly higher MIC value than the non-substituted ones against E. faecalis. For the lipopeptides, the substitution of L-Lys by L-NMe-Lys in positions six and nine was the most favorable for antibacterial activity, whereas in analogs with a replacement of L-Phe by L-NMe-Phe and L-Lys by L-NMe-Lys in positions eight and five, respectively, a slight reduction of activity was observed. This finding suggests that these amino acids are important for the biological activity of the compounds. For peptide TA4, the substitution of two L-Phe by L-NMe-Phe was less favorable against gram (+) bacterial strains. In a structure-activity relationship study, Velkov et al. addressed the effect of N-methylation at different positions (two to seven) of the Leu10-teixobactin depsipeptide. In general, they observed that N-methylation had a negative effect on antimicrobial activity. Those authors suggested that this loss of activity was due to a conformational change and/or a reduction in the ability to form a multimeric active structure by blocking hydrogen bond formation [29]. In contrast, we found that substitution by D-amino acids was less suitable for antibacterial activity. TA4(dK) showed satisfactory inhibitory activity against all the bacterial strains tested (MIC values = 5 to 20.4 µM), although the MIC values were higher than those of TA4, while C10:0-A2(dK) retained inhibitory activity only against gram (−) bacterial strains. These results are consistent with those reported by Zhao et al. (2016) for a D-Lys substituted analog of polybia-MPI, an α-helical cationic peptide isolated from the wasp Polybia paulista (IDWKKLLDAAKQIL-NH2). The replacement of three L-Lys by D-Lys resulted in a loss of antimicrobial activity against gram (+) and (−) bacteria. The authors suggested that this effect on antibacterial activity was due to a reduction in α-helical conformation in a membrane-mimicking environment [21]. Concerning antifungal activity, most of the substituted analogs inhibited the growth of the two yeast strains tested. In particular, C10:0-A2(6-NMeLys) showed improved inhibitory activity against C. albicans. For TA4, the substitution of the seven L-Lys by D-Lys considerably reduced inhibitory activity against the two yeast strains.
Toxicity Characterization and Therapeutic Index (TI)

In the development of new drugs for human health, low toxicity towards host cells is a desirable property, and the assessment of toxicity is therefore an important issue. Here, we evaluated the hemolytic and cytotoxic activity of the new synthetic compounds in vitro. Hemolytic activity was determined using red blood cells, and the hemoglobin released was measured at 405 nm. The percentage of hemolysis for the synthetic compounds is shown in Figure 1, while the HC50 values are provided in Table 3. TA4 and C10:0-A2 showed moderate hemolysis at higher concentrations (400 µM), reaching around 65% and 45% hemolysis, respectively. Substitutions by N-methyl and D-amino acids reduced the hemolytic activity of the peptides and lipopeptides. Most of the substituted analogs showed less than 20% hemolysis over the whole concentration range tested, except C10:0-A2(9-NMeLys), which at higher concentrations presented a hemolysis profile similar to that of C10:0-A2. For all the peptides and most of the lipopeptides, the HC50 values exceeded 400 µM; the HC50 values were lower only for C10:0-A2(9-NMeLys) and C10:0-A2, at around 300 and 200 µM, respectively. The authors of [33] demonstrated that the substitution of the residues involved in the formation of internal hydrogen bonds of gramicidin S, a β-type structure peptide, by N-methyl amino acids caused a significant decrease in the hemolytic activity of the analogs. In contrast, the substitution of residues involved in the formation of external hydrogen bonds did not affect this parameter. The authors suggested that the latter observation could be attributed to the alteration of the secondary structure and the change in amphipathicity.
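A minimal sketch of the hemolysis readout described above, with hypothetical absorbance values; the 1% Triton X-100 lysis is taken as the 100% reference as stated, while the buffer-blank subtraction and the HC50 interpolation are assumptions, not details given in the paper:

```python
import numpy as np

def percent_hemolysis(a_sample, a_blank, a_triton):
    """Hemolysis relative to 1% Triton X-100 (100% reference), blank-subtracted."""
    return 100.0 * (a_sample - a_blank) / (a_triton - a_blank)

# Hypothetical dose-response: absorbances at 405 nm for increasing peptide concentration.
conc_uM = np.array([6.25, 12.5, 25, 50, 100, 200, 400])
a405 = np.array([0.06, 0.07, 0.10, 0.16, 0.28, 0.52, 0.88])
hem = percent_hemolysis(a405, a_blank=0.05, a_triton=1.25)

# HC50 by linear interpolation on the (monotonic) hemolysis curve.
hc50 = np.interp(50.0, hem, conc_uM)
print(f"HC50 ~ {hc50:.0f} uM")
```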
Many studies have demonstrated that high amphipathicity, high hydrophobicity, and a high tendency to adopt a secondary structure are key parameters responsible for high toxicity to mammalian cells [34][35][36]. We therefore assessed the cytotoxicity of the synthesized compounds using HeLa cells. The viability percentages for the synthetic compounds are shown in Figure 2. The non-substituted compounds, namely TA4 and C10:0-A2, presented significant toxicity, reducing HeLa viability by more than 60%. In contrast, the vast majority of substituted analogs were less cytotoxic at all the concentrations tested (>50% HeLa viability). The IC50 values calculated for each compound are shown in Table 3. For all substituted analogs, the IC50 value was equal to or higher than 400 µM, and they were considered non-cytotoxic, except for TA4(3,7-NMePhe), which showed an IC50 value of around 86 µM, although it was still less cytotoxic than TA4 (IC50 = 46 µM). It is important to note that none of the compounds showed cytotoxicity or hemolytic activity in the MIC concentration range. The selectivity of a compound can be assessed using the therapeutic index (TI). This parameter can be increased by enhancing antimicrobial activity, decreasing hemolytic activity, or a combination of both. In this work, the TI was calculated as the ratio of the HC50 or IC50 value to the mean MIC against the different groups of microorganisms (yeast and gram (+) and (−) bacteria) (Table 3). For the peptides, TA4(dK) presented the best TI against all types of microorganisms. Although the antimicrobial activity of this compound was lower than that of the unsubstituted peptide, the TI was higher due to its lower toxicity against red blood cells and HeLa cells. TA4(3,7-NMePhe), on the other hand, increased the TI values only against gram (−) bacteria compared with the TI values of TA4. For the lipopeptides, C10:0-A2(9-NMeLys) and C10:0-A2(6-NMeLys) showed higher TI values than C10:0-A2, especially C10:0-A2(6-NMeLys) against gram (−) bacteria. These results are explained by their lower toxicity compared to that of C10:0-A2.
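A minimal sketch of the TI calculation described above, with hypothetical MIC and HC50 values; the capping of unreached endpoints at 400 µM follows the rule stated in the Methods section:

```python
from statistics import mean

CAP_UM = 400.0  # value used when 50% hemolysis/viability loss is never reached

def therapeutic_index(hc50_or_ic50, mics_uM):
    """TI = (HC50 or IC50) / mean MIC for one group of microorganisms."""
    endpoint = hc50_or_ic50 if hc50_or_ic50 is not None else CAP_UM
    return endpoint / mean(mics_uM)

# Hypothetical example: a peptide whose HC50 was never reached (capped at 400 uM),
# with MICs of 5, 10, and 20.4 uM against three gram (-) strains.
ti = therapeutic_index(None, [5.0, 10.0, 20.4])
print(f"TI (gram -) ~ {ti:.1f}")   # ~33.9
```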
Enzymatic Stability Characterization in the Presence of Digestive Enzymes and Serum Proteases

The enzymatic stability of the peptides and lipopeptides in the presence of trypsin and chymotrypsin was evaluated in vitro by measuring residual antimicrobial activity. These results are shown in Figure 3. Substitution of L-Lys by D-Lys in C10:0-A2 and TA4 resulted in increased stability against trypsin degradation, with the compounds retaining 80% and 85% of their inhibitory activity against S. aureus strain ATCC 25923 after 7 h of treatment, respectively. These results indicate that the replacement of all sites susceptible to trypsin cleavage by non-proteinogenic D-series amino acids markedly increased the stability of the compounds. In the presence of chymotrypsin, these two peptides were less stable than against trypsin. C10:0-A2(dK) retained 75% of its antimicrobial activity after 3 h of treatment, although a total loss of activity was observed after this time. In contrast, TA4(dK) showed a full loss of activity after 30 min of treatment. Multiple substitutions by D-Lys in C10:0-A2 therefore seemed to have a favorable effect on stability against chymotrypsin. This observation might be explained by a conformational change that makes the target site less accessible, thus requiring enzyme activity for longer. This was not observed for TA4(dK), possibly because its sequence holds more sites susceptible to chymotrypsin action. On the other hand, the replacement of one or two proteinogenic amino acids by N-methyl amino acids in TA4 and C10:0-A2 was not enough to improve enzymatic stability. Peptide stability in human serum was evaluated by HPLC analyses; the remaining area versus treatment time was plotted (Figure 4).
Both TA4 and C10:0-A2 were degraded by serum proteases, with about 40% and 20% of the area remaining after 1 h of incubation, respectively. Substitution of L-Lys by D-Lys enhanced the serum stability of TA4 and C10:0-A2, with 100% of the area remaining after 8 h of incubation. On the other hand, substitution with N-methyl amino acids also enhanced stability in serum, increasing the remaining area after 1 h of exposure from 40% to 65% for the peptide analogs and from 20% to 50% for the lipopeptide analogs. Although both strategies improved enzymatic stability against serum proteases, multiple substitutions with non-proteinogenic amino acids had a more significant effect on this property than single substitutions. These results are in concordance with those reported by Dong et al. (2012), where tri-N-methylation at selected sites of DhHP-6 (a deuterohemin-mimetic peptide conjugate of microperoxidase) showed higher resistance to serum and digestive proteolytic degradation and a higher apparent permeability coefficient than mono- and di-substituted analogs [37]. Hong et al. (1999) studied the effect of D-amino acid substitution in the KKVVVFKVKVKFKK peptide on enzyme stability in serum. These authors demonstrated that the substitution of susceptible sites increased the enzymatic stability of the analogs and that this increase in stability correlated with the number of susceptible sites blocked simultaneously [38]. In another study, Gao and coworkers reported similar results for a phylloseptin-PT analog, PS-PT2: the substitution of five L-Lys by D-Lys was more effective at improving stability against trypsin and serum proteases than the individual L-Lys by D-Lys substitution in position seven [39].
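A minimal sketch of the serum-stability readout described above: the remaining fraction is the HPLC peak area at time t relative to t = 0, and, as an added assumption not made in the paper, an exponential decay can be fitted to estimate a half-life. The peak areas below are hypothetical:

```python
import numpy as np

t_min = np.array([0, 30, 60, 120, 240, 360, 480])          # sampling times (min)
area = np.array([100.0, 71.0, 52.0, 27.0, 7.5, 2.1, 0.6])  # hypothetical HPLC peak areas

remaining_pct = 100.0 * area / area[0]                     # % remaining vs. treatment time

# Assumed first-order decay: ln(A/A0) = -k t, fitted by least squares.
k = -np.polyfit(t_min, np.log(area / area[0]), 1)[0]
half_life = np.log(2.0) / k
print(f"estimated serum half-life ~ {half_life:.0f} min")
```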
Secondary Structure Determination by Circular Dichroism (CD)

The CD spectra of the substitution analogs of TA4 in water and DPPC (Figure 5A,B) showed that all the analogs were predominantly unstructured. The analysis of the CD spectra of TA4(dK) and their deconvolution by the SELCON3 and CONTIN methods showed that, in TFE:H2O (50% v/v), there was a greater contribution of beta structure and turn (45%), with 28% unordered structure (see Figure S1). On the other hand, in the presence of DPPG, the peptide was less ordered. In contrast, TA4 showed greater contributions of the alpha helix in TFE (45%) and DPPG (36%), and its structure was also more stable in TFE (only 24% unordered structure). These results revealed that the incorporation of D-amino acids in the TA4 sequence caused a significant modification to its secondary structure. The deconvolution of the CD spectra of the analog TA4(3,7-NMePhe) showed that the peptide was partially structured (with 32% unordered structure) in the presence of DPPG. In comparison with TA4, the presence of N-MePhe caused a conformational change in the original structure of the TA4 analog, with a slight increase in the beta and turn structure (from 27% to 33%) and a reduction of the helical structure (from 36% to 27%). Similar results were found with TFE:H2O (50% v/v). The CD spectra of all C10:0-A2 lipopeptide analogs in water showed that they were predominantly unordered, which is consistent with the presence of a minimum at 198 nm (or a maximum in the case of C10:0-A2(dK)), except for C10:0-A2(6-NMeLys) (Figure 6A), whose deconvolution showed 26% α-helix and 16% turn contribution (see Figure S2), making this the most structured analog in water (only 44% unordered structure). In the presence of TFE and DPPG, the analogs that contained NMeLys or D-Lys were less structured than C10:0-A2 (Figure 6C,D). This result could explain the fact that most of the analogs showed less antimicrobial activity than C10:0-A2, and suggests that N-methyl amino acids and D-amino acids affect the secondary structure.
These results reveal that the incorporation of N-methyl amino acids in the C-terminus and the middle part of the A2 sequence disturbed the secondary structure. The analogs with non-proteinogenic amino acid substitutions in positions six and nine showed higher structuration and also presented greater antimicrobial activity than those substituted in positions five and eight. These observations suggest that, for this lipopeptide, the adoption of an alpha helix structure is a key feature for antimicrobial activity. Finally, in the presence of DPPC, all the analogs studied were predominantly unordered (Figure 6B), an observation that is consistent with their low hemolytic activity.

Peptide Synthesis

All the compounds in this study were obtained as C-terminal amides by Fmoc solid-phase peptide synthesis (SPPS) (Table 1). To prepare the lipopeptides, the fatty acid chains were added to the N-terminus of the resin-bound peptide using standard protocols. For the coupling reactions, N-[(1H-benzotriazol-1-yl)(dimethylamino)methylene]-N-methylmethanaminium tetrafluoroborate N-oxide (TBTU) and diisopropylethylamine (DIEA) were used. Couplings of N-methyl amino acids and the subsequent amino acid were performed with benzotriazol-1-yl-oxytripyrrolidinophosphonium hexafluorophosphate (PyBOP) and DIEA. Fmoc was removed with 20% piperidine in N,N-dimethylformamide (DMF) (v/v). Final cleavage from the resin was achieved with a mixture of trifluoroacetic acid (TFA)/H2O/triisopropylsilane (TIS) (95:2.5:2.5) (v/v). The crude peptides were precipitated in dry cold diethyl ether, centrifuged, and washed several times with cold diethyl ether until the scavengers were removed. The products were then dissolved in water and lyophilized twice.
Peptides and lipopeptides were analyzed by RP-HPLC (Waters) using an Atlantis (Waters) C18 analytical column (5 µm, 4.6 mm × 150 mm) for the peptides and a Jupiter (Phenomenex) C4 analytical column (5 µm, 300 Å, 250 × 4.60 mm) for the lipopeptides. For elution, a linear gradient from 15% to 60% acetonitrile (ACN) in H2O containing 0.1% TFA at a flow rate of 0.8 mL/min was used for the lipopeptides, and a linear gradient from 5% to 80% acetonitrile with 0.1% TFA at a flow rate of 0.8 mL/min for the peptides. Absorbance was measured at 220 nm. The crude products were purified by HPLC using a semi-preparative reverse-phase (RP) C18 column (Jupiter-Proteo Phenomenex, 10 µm, 90 Å, 250 × 10 mm), and mass spectrometric data were obtained using a MALDI-TOF-TOF spectrometer, Ultraflex II (Bruker), at the Mass Spectrometry Facility CEQUIBIEM, Argentina.

Minimal Inhibitory Concentration (MIC) against Bacteria

The MIC against bacterial strains was determined by the modified broth microtiter dilution method, following the procedures proposed by the R.E.W. Hancock Laboratory for testing antimicrobial peptides [40]. The target strains Escherichia coli ATCC 35218, Pseudomonas aeruginosa ATCC 27853, Enterococcus faecalis ATCC 29212, and Staphylococcus aureus ATCC 25923 belong to the American Type Culture Collection (ATCC). The methicillin-resistant Staphylococcus aureus BSF FBCB1313 strain (MRSA) was provided by the Clinical Bacteriology Section of FBCB-UNL. All strains were activated by culture for 24 h at 37 °C in Mueller-Hinton Broth (MHB) (Biokar Diagnostics, Cedex, France). Each inoculum was adjusted to a cellular concentration of 5 × 10⁵ colony-forming units (CFU)/mL in diluted MHB. All the peptides were dissolved in bovine serum albumin buffer with the addition of 0.01% acetic acid; 100 µL of each inoculum was added to 11 µL of peptide solution in serial 2-fold dilutions, and the plates were incubated for 18-24 h at 37 °C. The MIC was the lowest peptide concentration that inhibited visible growth of each bacterial strain. The test was conducted in triplicate.
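A minimal sketch of reading a MIC from a serial 2-fold microdilution series, as described above; the starting concentration and growth readings are hypothetical:

```python
def two_fold_series(start_uM, n_wells):
    """Concentrations of a serial 2-fold dilution series, highest first."""
    return [start_uM / 2**i for i in range(n_wells)]

def read_mic(concentrations, visible_growth):
    """MIC = lowest concentration with no visible growth."""
    inhibitory = [c for c, grew in zip(concentrations, visible_growth) if not grew]
    return min(inhibitory) if inhibitory else None

# Hypothetical plate: 8 wells starting at 90.6 uM; growth reappears in the last 3 wells.
conc = two_fold_series(90.6, 8)
growth = [False, False, False, False, False, True, True, True]
print(f"MIC = {read_mic(conc, growth):.2f} uM")   # 90.6/16 ~ 5.66 uM
```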
Minimal Inhibitory Concentration (MIC) against Yeast The MIC against yeast strains was determined by the broth microtiter dilution method following the conditions of NCCLS document M27-A. The target strains Candida albicans DBFIQ CA 1, C. albicans PEEC 2, and Candida tropicalis DBFIQ 3, all belonging to the Culture Collection of the Microbiology and Biotechnology Sections-FIQ-UNL, were activated by culture for 24 h at 30 °C on Sabouraud Dextrose Agar (SDA) (Biokar Diagnostics, Cedex, France). Each inoculum was taken and the cellular concentration was adjusted to 2 × 10^3 CFU/mL in Sabouraud Dextrose Broth (SDB) (Biokar Diagnostics, Cedex, France). Next, 50 µL of these inocula was added to 50 µL of peptide solution in serial 2-fold dilutions. The plates were incubated for 48 h at 30 °C. The MIC was considered the lowest peptide concentration that inhibited visible growth of each yeast strain. The test was conducted in triplicate. Hemolysis Assay Erythrocyte lysis was determined using previously optimized protocols [41]. Human erythrocytes from a healthy voluntary donor were isolated by centrifugation (3000 rpm for 10 min) after washing three times with Physiological Solution (PS). Erythrocyte solutions were prepared at a concentration of 0.4% (v/v) in PS. Test tubes containing 200 µL of erythrocyte solution were incubated at 37 °C for 60 min with 200 µL of peptide solution at concentrations ranging from 6.25 to 400 µM. After centrifugation at 3000 rpm for 5 min, supernatant absorbance was measured at 405 nm. Lysis induced by 1% Triton X-100 was taken as the 100% reference value. Cytotoxicity Assay The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) cell viability assay was used to assess the cytotoxic activity of the synthetic compounds against the human HeLa cell line. HeLa cells were seeded in 96-well plates at 1 × 10^5 cells/well, to which the peptide solution was added at different concentrations (3.125-400 µM) for 24 h of incubation. Next, MTT reagent (Sigma-Aldrich, St. Louis, MO, USA) was added, and the cells were incubated for 2 h. Then, 100 µL of dimethylsulfoxide (DMSO) was added to each well to dissolve the formazan crystals, and the plates were read at 595 nm. Finally, the IC50 values of the peptides and lipopeptides were calculated as the mean of the concentrations at which each compound caused a 50% decrease in cell viability in two independent experiments, each with three replicates. Calculation of the Therapeutic Index (TI) The TI is defined as the relationship between the concentration that caused 50% hemolysis (HC50), or a 50% reduction in HeLa cell viability, and the MIC. Thus, higher TI values indicate greater antimicrobial specificity. When a peptide did not surpass 50% hemolysis (or reduction of viability) at any of the concentrations tested, a value of 400 µM was used to calculate the TI. The average MIC for each peptide against the different microorganism groups was used to calculate the TI value.
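Because the TI defined above is just the ratio of the 50% toxicity endpoint to the average MIC, the arithmetic can be sketched as follows; all numeric inputs, and the blank-subtraction in the hemolysis normalization, are illustrative assumptions rather than values or details stated in this work.

```python
# Minimal sketch of the hemolysis normalization (relative to 1% Triton X-100
# as the 100% lysis reference) and the therapeutic-index arithmetic described
# above. All numbers are hypothetical; subtracting a blank absorbance is an
# assumed detail, not one stated in the protocol.

def percent_hemolysis(a_sample: float, a_blank: float, a_triton: float) -> float:
    """Hemolysis as a percentage of the Triton X-100 (100% lysis) control."""
    return 100.0 * (a_sample - a_blank) / (a_triton - a_blank)

def therapeutic_index(hc50_uM: float, mic_values_uM: list[float]) -> float:
    """TI = HC50 / average MIC; higher values mean greater selectivity."""
    avg_mic = sum(mic_values_uM) / len(mic_values_uM)
    return hc50_uM / avg_mic

print(percent_hemolysis(a_sample=0.35, a_blank=0.05, a_triton=1.05))  # 30.0
# When 50% hemolysis was never reached, 400 uM (the highest dose) is used:
print(round(therapeutic_index(hc50_uM=400.0, mic_values_uM=[12.5, 25.0, 25.0]), 1))  # 19.2
```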
3.5. Characterization of Enzymatic Stability 3.5.1. Peptide Stability in the Presence of Digestive Enzymes 500 µL of a trypsin or chymotrypsin solution in ammonium bicarbonate buffer (0.03 M, pH 7.9) at a concentration of 2 mg/mL was added to 500 µL of peptide solution (concentration: 2 mg/mL). The mixtures were incubated for 7 h at 37 °C. Aliquots were taken at various time points (0, 15, 30, 60, 180, 300, and 420 min). The enzyme reaction was then immediately stopped by thermal shock (80 °C for 10 min). All fractions collected were lyophilized twice. The enzymatic stability of the peptides and lipopeptides was determined by measuring residual antimicrobial activity using the agar diffusion method [42]. To this end, 60 µL of the peptide (treated and untreated) in phosphate buffer (0.1 mM, pH 5.5) was added to a 7 mm well on an agar plate previously seeded with 1 mL of a fresh culture of S. aureus ATCC 25923. The plates were incubated for 18-24 h at 37 °C, and then the inhibition halo diameter was measured. The test was conducted in duplicate. Peptide Stability in Serum A 50% (v/v) human serum solution in phosphate buffer (pH 7.2) was added to the peptide solution (concentration: 2 mg/mL) and incubated for 8 h at 37 °C. Aliquots were taken at different time points (0, 30, 60, 120, 240, 360, and 480 min), and serum proteins were immediately precipitated with a mixture of ACN-water-TFA (89:10:1). They were kept at 4 °C for 45 min and then centrifuged for 15 min at 10,000 rpm. The supernatants were then analyzed by RP-HPLC using a C18 analytical column (Beckman, Indianapolis, IN, USA). A linear gradient from 5% to 50% of ACN in H2O containing 0.1% TFA was used, at a flow rate of 0.8 mL/min. Absorbance was measured at 220 nm. The result was expressed as the percentage of peak area remaining vs. treatment time. The test was conducted in duplicate. Determination of Critical Micelle Concentration (CMC) A lipopeptide solution was prepared in Milli-Q water, and the specific conductivity of each lipopeptide solution, at concentrations ranging from 0.06 to 18 mg/mL, was measured using a drop conductivity meter (HORIBA, Kyoto, Japan) at 25 °C. Conductivity values were plotted vs. lipopeptide concentration (mS·cm−1 vs. mg/mL), and the CMC was determined graphically [43]. Secondary Structure Determination by Circular Dichroism (CD) Far-UV circular dichroism (CD) measurements were performed on a Jasco J-810 CD spectrometer (Tokyo, Japan) in a 0.1 cm path quartz cuvette (Hellma, Müllheim, Germany) and recorded after five runs. CD spectra were recorded in the presence of dipalmitoylphosphatidylglycerol (DPPG) and dipalmitoylphosphatidylcholine (DPPC) vesicles. For the preparation of small unilamellar vesicles, a lipid dispersion in Milli-Q water was sonicated using a tip sonicator (Vibra cell) until the solution became transparent. The final lipid concentration was 3 mM. Spectra were corrected for background scattering caused by the vesicles by subtracting the spectrum of the vesicle solution alone from that of the peptides in the presence of vesicles [44]. Additional spectra were obtained in water and in the presence of trifluoroethanol [50% TFE/H2O (v/v)]. The final peptide concentration was 0.2 mg/mL in all cases. CD spectra were deconvoluted by means of the CDPro software package (Colorado State University), using the SELCON3 and CONTINLL methods [45].
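The graphical CMC determination described in the Methods amounts to locating the break point between the two linear regimes of the conductivity-concentration plot; a minimal sketch under that reading is shown below, with invented data points and an assumed split between the two regimes.

```python
# Minimal sketch of the graphical CMC determination described above: fit a
# straight line to the pre-micellar and post-micellar regions of the
# conductivity vs. concentration plot and take their intersection. The data
# and the split index are invented placeholders.
import numpy as np

conc = np.array([0.06, 0.5, 1.0, 2.0, 4.0, 6.0, 10.0, 14.0, 18.0])        # mg/mL
kappa = np.array([0.02, 0.18, 0.36, 0.72, 1.30, 1.62, 2.26, 2.90, 3.54])  # mS/cm

split = 4  # assumed index separating the two linear regimes
m1, b1 = np.polyfit(conc[:split + 1], kappa[:split + 1], 1)  # pre-micellar line
m2, b2 = np.polyfit(conc[split:], kappa[split:], 1)          # post-micellar line

cmc = (b2 - b1) / (m1 - m2)  # concentration where the two lines intersect
print(f"CMC ~ {cmc:.2f} mg/mL")
```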
Conclusions Here, we studied the effect of substitution with non-proteinogenic D-amino acids and N-methyl amino acids on the therapeutic properties and enzymatic stability of two previously described compounds, namely the TA4 peptide and the C10:0-A2 lipopeptide. Both strategies were effective at increasing the therapeutic potential of the molecules. The substitution of D-amino acids in both TA4 and C10:0-A2 at multiple sites in the peptide sequence reduced toxicity against human red blood cells and HeLa cells, possibly because of a diminution in the experimental hydrophobicity of these molecules, as shown by the retention time in the RP-HPLC analysis, an important feature for hemolytic activity. However, these analogs showed lower antimicrobial activity, especially against gram (+) bacteria, than the non-substituted ones. This observation could be explained by a decrease in the adoption of an amphipathic alpha-helix structure, as demonstrated by CD analysis in the presence of prokaryotic membrane-mimetic environments (DPPG vesicles), where it was observed that multi-site D-amino acid substitution significantly decreased the adoption of secondary structure. This parameter is critical for activity against gram (+) bacteria. A single or double substitution by N-methylated amino acids proved to be a highly effective strategy to decrease toxicity against the two eukaryotic systems tested, maintaining, and in some cases increasing, antimicrobial activity and resulting in highly selective compounds with high therapeutic indexes, especially against gram (+) and (−) bacteria. All N-methyl substitution analogs presented significantly less experimental hydrophobicity, suggesting less interaction with eukaryotic membranes, except C10:0-A2(9-NMeLys), which showed hydrophobicity and hemolytic activity similar to C10:0-A2. It is interesting to note that the most antimicrobially active lipopeptides, C10:0-A2(9-NMeLys) and C10:0-A2(6-NMeLys), showed the greatest contribution of the α-helix structure, which reaffirms that the adoption of an amphipathic α-helical structure is significant for these lipopeptides. With respect to enzymatic stability, multiple D-substitution sites significantly increased enzymatic stability. On the other hand, single or double substitution by N-methylated amino acids was less effective at enhancing the enzymatic stability of the peptides and lipopeptides, in particular against digestive enzymes. This observation can be attributed mainly to the numerous sites in the peptide sequences targeted by the enzymes tested. TA4(dK), C10:0-A2(6-NMeLys), and C10:0-A2(9-NMeLys) were identified as the most promising molecules for further studies. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/antibiotics12050821/s1, Figure S1: Percentage of secondary structure types of peptides obtained by deconvolution of spectra by means of the CDPro software package (Colorado State University), using the SELCON3 and CONTINLL methods. Figure S2: Percentage of secondary structure types of lipopeptides obtained by deconvolution of spectra by means of the CDPro software package (Colorado State University), using the SELCON3 and CONTINLL methods.
9,263.8
2023-04-27T00:00:00.000
[ "Medicine", "Chemistry" ]
CLUSTERING TAX ADMINISTRATIONS IN EUROPEAN UNION MEMBER STATES The European Union Member States use different organizational and functional models of tax administration that could determine better or worse performance. This paper analyzes the way tax administrations in the European Union Member States are organized and operate, from the perspective of 21 variables obtained from the information made available in the OECD's Tax Administration Comparative Information Series. Using hierarchical clustering procedures, the tax administrations in the European Union Member States were grouped into clusters. The purpose of this approach was to observe whether the resulting clusters can be associated with a certain grouping of the tax administrations according to their ranking in terms of activity efficiency. The efficiency of the activity was evaluated based on 5 indicators developed in the specialty literature. The research showed that the tax administrations in each cluster tend to be found in the same part of the efficiency ranking. Therefore, the grouping of tax administrations based on variables that reflect their characteristics can be a useful tool in identifying an organization and functioning model for the tax administration that is associated with a certain efficiency level. Introduction Tax administration (TA) is the public institution with the most important role in collecting tax revenue. The tax policy of a country is applied through the TA, which is the main interface between the state and taxpayers. Nowadays, the TA is required to use adequate tools and to apply modern methods for tax planning and tax collection, so that tax revenue collection and the compliance of taxpayers with their tax obligations are as high as possible. In-depth research on the specific problems of the TA, from the perspective of the performance it must achieve, is an important topic in the current economic and social context. On the one hand, citizens' demands in relation to the services provided by state institutions have increased considerably, and therefore the TA must improve especially the activities through which it comes into direct contact with taxpayers. On the other hand, current neo-liberal policies have generated a reduction in tax pressure and require the state to use public money much more rationally and efficiently. Within this context, the pressure on the TA to collect as much tax revenue as possible grows. The above-mentioned aspects can also be seen in the policies and strategies promoted by the International Monetary Fund, the World Bank, the Organization for Economic Co-operation and Development (OECD), etc. This research aims at grouping the TAs of the 28 European Union Member States (EU MS) by cluster analysis, using data provided by the OECD (Tax Administration 2019: Comparative Information on the OECD and Other Advanced and Emerging Economies), in order to highlight whether a certain pattern of organization and functioning of TAs can be associated with high TA performance. The fact that TAs in the EU MS perform differently is revealed by differences in the values of indicators calculated and published by the World Bank, the OECD and the European Commission (for example, cost of collection ratios, the size of the tax gap, time to prepare and pay taxes, etc.). One problem in answering the research question was that TAs in the EU MS are not yet ranked by a composite index reflecting their performance.
An evaluation report of TAs based on the new assessment tool developed for TAs (the Tax Administration Diagnostic Assessment Tool, TADAT) is not available for the EU MS. The main research question is whether we can identify organization and functioning models of TAs in the EU MS that can be associated with a certain level of performance of the activity carried out. The information regarding the organizational characteristics of TAs in the EU MS and their performance was based on the OECD's Tax Administration Comparative Information Series (a database developed from information provided by the MS through questionnaires), on data published in the 'Paying Taxes' report, a part of the World Bank's Doing Business project, and on the Study and Reports on the VAT Gap in the European Union-28 Member States (Davoine, 2019). The first part of the paper includes an analysis of how the assessment of TA performance is reflected in the specialty literature, as well as relevant information on certain aspects related to the activity of TAs in the EU MS. In the following parts, the research methodology is described, and the results, discussion, and conclusions are presented. This research is a first attempt, in the specialty literature, to link the characteristics of a TA (from the structure-function perspective) to its efficiency. A number of output indicators have been presented, based on which the TAs in the EU MS can be ranked. The study has shown that the efficiency of TAs in the EU MS is related to the way they are organized and operated. Considerations regarding the organization and functioning of the tax administrations There are various models of organization and functioning of TAs around the world, the differences between them being generated both by legislative elements (for example, the way taxes are collected: at national or sub-national level) and by cultural, historical or political elements (OECD, 2019, p. 106). A study carried out by the OECD shows that, generally, TAs in the EU MS are organized as unified semi-autonomous bodies with main responsibilities in administering direct and indirect taxes, but they also carry out other activities such as the payment of welfare benefits, customs administration or the collection of social security contributions (OECD, 2011). The organizational structures of TAs that can be identified in the EU MS (organization by type of taxes collected, organization by functions performed, organization by type of taxpayer, and combinations of two or more of these) were analyzed in the specialty literature (OECD, 2009; Kidd, 2010; Murdoch et al., 2012; Jacobs et al., 2013) in terms of the advantages and disadvantages generated in use. The analysis of the specialty literature showed a tendency toward the use of functional structures in more and more TAs. The way TAs are organized and function influences their degree of autonomy, materialized in the freedom to elaborate and implement adequate procedures for achieving fiscal policy objectives. Numerous studies highlight the tendency to increase the degree of autonomy of TAs (Murdoch et al., 2012; Jacobs et al., 2013; Crawford, 2013; OECD, 2019), with the aim of reducing the risk of political intervention in tax collection and improving the effectiveness of TAs while increasing taxpayers' respect for the tax authority.
In this context, there have been situations of outsourcing the activities of TAs to the private sector or other public institutions. Activities related to information technology, tax compliance activities, tax returns and the processing of tax payments are frequently outsourced. The authors who studied the effects of these trends point out that the outsourcing of some activities by TAs should not seek cost reduction as its only objective, but should represent a complex process that simultaneously increases the efficiency and the quality of the services provided to taxpayers (Hartrath, 2015; Lemgruber et al., 2015; Davies et al., 2018). Sassi et al. (2018) and Walker and Tizard (2018) showed that outsourcing can be considered useful to the extent that it generated savings for TAs, but the impact of outsourcing on the quality of services provided to taxpayers is difficult to evaluate, so this tendency must be pursued with caution. However, maintaining control over the basic functions of the TAs is essential. Mainly, the TAs in the EU MS want to be modern, characterized, according to D'Ascenzo (2015), by transparency and adaptability, in order to reduce international tax risks, promote a positive investment climate and thus reduce the causes of non-compliance. The efficiency evaluation of tax administration Because the way TAs carry out their activity has an important impact on the total amount of public money, all stakeholders in public finance have a strong interest in a well-performing tax administration. The most widely acknowledged performance indicator for a TA is the maximization of tax revenue collection; other measures identified in the literature include the minimization of compliance costs and simpler performance measures (Serra, 2005, p. 20), staff motivation and the satisfaction of staff and taxpayers (James et al., 2006, p. 93), maximizing visibility and results in wider acceptance of the tax system, and minimizing the administrative burden and service delivery transaction times (Yoon et al., 2014, p. 38). Developing tools for the performance evaluation of TAs has been an important concern for specialists, international organizations and the European Commission. A first initiative for creating a tool that would have allowed the TAs of the EU to identify their strengths and weaknesses dates from 2007 and took the shape of a set of fiscal blueprints that included the concept of measurement via a scoring system. The set of fiscal blueprints covers the following aspects: the overall framework of the TA, structure and organization, tax legislation, human and behavioral issues, ethics, human resources, revenue collection and enforcement, tax audit, administrative cooperation and mutual assistance, fraud and tax avoidance, taxpayer services, taxpayer rights and obligations, systems for taxpayers' management, voluntary compliance, and information technology and communications (European Commission, 2007). Collecting the data necessary to construct the scoring profiles must be done through a questionnaire. In 2008, the European Organization of Supreme Audit Institutions proposed a series of performance indicators, considered to be measurable, time-related and comparable, that could be used internationally in benchmarking the performance of TAs.
The suggested indicators are: tax gap, collection gap, timely filing, completeness and accuracy of taxpayers' tax returns, take-up of electronic services, efficiency and productivity, total expense-to-revenue ratio of the administration of taxes, overall customer satisfaction rating, quality of the TA's work (consistency, correctness and speed of response to customers), cost to compliant taxpayers, and compensating the customer (European Organization of Supreme Audit Institutions, 2008, pp. 8-11). Without using the appropriate data for all the mentioned indicators (due to their unavailability), the study attempted to group 32 TAs from 32 states based on 6 variables. In 2011, a study proposed a set of high-performance indicators for TAs (23 indicators) structured into three categories: TA framework and systems; compliance and risk education; and services, enforcement and management, organization and responsibility (Crandall, 2011). A recent TA assessment tool is the Tax Administration Diagnostic Assessment Tool (TADAT), supported by the European Commission, Germany, the International Monetary Fund, Japan, the Netherlands, Norway, Switzerland, the United Kingdom and the World Bank. TADAT is a comprehensive TA diagnostic tool, its performance evaluation being based on 28 indicators built on 1 to 4 dimensions (OECD, 2016). A series of evaluation reports on the performance of TAs in developing countries is available. There are authors (Mansor and Tayib, 2015) who consider that a prescriptive set of measures or a series of verification indicators will not necessarily lead to an increase in the performance of a TA. However, specific guidelines that help identify ways to improve performance management in one TA can point to measures that other TAs may take to improve their own performance management. Other authors consider that tax assessment performance indicators, such as the cost of the TA and net revenue collected, do not allow for a comparison of TA efficiency, as there are many differences in responsibilities, geographical location, cost administration and process automation; they propose a multifactor model for assessing the efficiency of such authorities (Pētersone et al., 2016). Hauptman et al. (2014) consider that the level of communication between a tax auditor and a taxpayer may be an evaluation indicator of TA performance. Jurušs and Kalderauska (2017) reveal that best practice in customer relationship management in business should be adjusted and implemented in the TA. In the literature, we find several attempts to analyze factors on which the performance of TAs may depend. For example, Katharaki and Marios (2010) argued that diverse factors such as location, inflation, etc. could play a significant role in TA performance. Esteller-Moré (2011) attempted to show that it can be determined, to a certain extent, whether the tax gap is due to the TA's inefficiency or to the predisposition of taxpayers to avoid paying their taxes. Given the available statistics, a series of assessments can be made regarding the performance of the TAs in the EU MS. To achieve a hierarchy of TAs from the point of view of the efficiency of the activities carried out, the information available in the 'Paying Taxes' report (published in November 2018 as part of the World Bank's Doing Business project) was used.
Thus, the EU MS were compared in terms of the number of hours required to pay taxes, as well as the number of annual payments that a medium-sized company has to make in order to settle its tax obligations. It is notable that, in 2018, medium-sized companies in Luxembourg, Italy, Cyprus, Bulgaria, Austria, Romania, Croatia, Belgium and Hungary made a larger number of payments over the year than the European average, which was about 10 annual payments (Figure 1). Also, the time allocated to calculating, completing and filing tax returns was higher than the European average (about 172 hours per year) for medium-sized companies in Bulgaria, Croatia, the Czech Republic, Germany, Greece, Hungary, Italy, Poland, Portugal, Slovakia and Slovenia (Figure 2). From the perspective of the ease with which a company can obtain a value added tax (VAT) refund (Figure 3), the TAs in Belgium, Bulgaria, Cyprus, Greece, Italy, Malta, Romania, the Czech Republic and Slovakia had procedures considered more difficult than the EU average (about 16 weeks). Also, in the case of correcting errors in the tax return statement on the profit tax (Figure 4), companies in Bulgaria, Croatia, Finland, Malta and Slovenia have to spend more time than the European average (about 6 hours). The tax gap is a relevant indicator for evaluating the performance of TAs; at the EU level, only information regarding the VAT gap as a percentage of the total VAT tax liability is available. The VAT gap is an estimate of revenue losses due to fraud and tax evasion, avoidance of tax liabilities, bankruptcies, financial insolvencies, as well as calculation errors. In order to limit the influence of certain short-term conjunctural factors on the level of the VAT gap, average levels are presented for the period 2012-2018 (the values for 2018 were estimated), as can be seen in Figure 5. The worst results in the collection of VAT revenues were recorded by the TAs in Romania, Greece, Slovakia, Lithuania and Italy, where the VAT gap was two and even almost three times higher than the EU average. A summary of the above information is presented in Table 1, in which the TAs of the EU MS are grouped according to the values of the five indicators analyzed. Methodology The OECD database on TAs (the OECD's Tax Administration Comparative Information Series) was the source of the data required for the quantitative analysis. This database contains information valid for 2017, provided by 55 TAs (including the 28 TAs of the EU MS), about revenue collections, institutional arrangements, budget and human resources, segmentation, registration, return filing and payment, service and education, collection and enforcement, verification/audit and dispute resolution. To carry out the research, those data were selected on the basis of which variables could be defined that characterize the TAs from the point of view of organization and functioning. The variables were defined only if the information in the OECD database was available for all EU MS. We mention that the possibility of defining variables was limited by the numerous situations in which the data were not available for all the analyzed TAs. The absence of data is generated by the fact that the OECD collects the data by applying a complex questionnaire, and the representatives of the TAs in the EU MS do not answer all the questions of the questionnaire. For these reasons, we managed to define only 21 variables, presented in Table 2.
Part of the previously mentioned variables was taken from the OECD database on TAs, and others were developed by the authors. Among the variables defined (Table 2) are the following:
- Whether the TA exercises discretion over its operating budget or not.
- Performance standards: availability of performance standards related to solving tax dispute cases via administrative review (possible variants: no standard, no standard met, partially standard met, and mostly standard met).
6. Organizational features related to return filing/payment processing: location of staff performing TA operational activities related to return filing/payment processing (possible variants: centralized, localized and regionalized).
7. Organizational features related to managing taxpayer appeals/disputes: location of staff performing TA operational activities related to managing taxpayer appeals/disputes (possible variants: centralized, localized and regionalized).
8. Human resources: the number of full-time permanent staff per 1,000 inhabitants.
9. Importance of the audit activity from the perspective of human resources: the personnel involved in audit, investigation and other verification as % of full-time (FT) permanent staff.
10. Complexity of the office network: the total number of offices (headquarters, regional and local offices and other offices) per 1 million inhabitants.
11. Operating expenditure allocation: the salary cost as % of the recurrent budget.
12. Remuneration and staff performance: whether performance is linked to pay and reward or not.
The classification of cases in the database was performed using cluster analysis in SPSS. A hierarchical clustering procedure was used because some of the variables are binary and others numeric. Hierarchical cluster analysis places each case into its own individual cluster in the first step, so that the initial number of clusters equals the total number of cases. At successive steps, similar cases or clusters are merged until all cases are grouped into a single cluster (Yim and Ramdeen, 2015, p. 17). The best results were obtained using the Ward linkage method and the Euclidean distance. To represent the cluster analysis graphically, a dendrogram was created. For this purpose, we opted for the Ward linkage method, considered by Armeanu et al. (2012) 'the most effective and most efficient of all hierarchical classification algorithms', given that 'at each step, those two clusters are merged for which the variability of the resulting cluster is the lowest of all cluster merging possibilities'.
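For readers who want to reproduce this kind of analysis outside SPSS, a minimal sketch of Ward-linkage hierarchical clustering with Euclidean distance and a dendrogram is given below; the random data matrix, the five-cluster cut, and the choice of SciPy are illustrative assumptions, not the study's actual setup.

```python
# Minimal sketch of Ward-linkage agglomerative clustering with Euclidean
# distance, analogous to the SPSS procedure described above. The 28 x 21
# data matrix is random and purely illustrative of 28 TAs x 21 variables.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(28, 21))  # placeholder for the standardized variable matrix

Z = linkage(X, method="ward", metric="euclidean")  # bottom-up merge tree
labels = fcluster(Z, t=5, criterion="maxclust")    # cut the tree into 5 clusters
print(labels)                                      # cluster id for each TA

dendrogram(Z)  # graphical representation of the successive merges
plt.title("Ward-linkage dendrogram (illustrative data)")
plt.show()
```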
Results and discussions Clustering the TAs in the EU MS led to the formation of 5 clusters, whose main characteristics are presented in Table 3. From the point of view of the institutional framework, the TAs in most EU MS operate as a unified semi-autonomous body. This type of organization is viewed as the most efficient from the revenue collection point of view, and the TA manages most direct and indirect taxes. From the point of view of the TA organizational structure, in most EU MS, various combinations of the three established types of organizational structure are used: tax type, function type and taxpayer type. Thus, the disadvantages of the exclusive use of one of the respective organizational structures are avoided. For example, inflexible use of staff whose competence is largely limited to a certain tax, or needless fragmentation of the tax system, is a disadvantage of the tax-type structure, while the impossibility of taking into account the different characteristics, behaviors and attitudes of taxpayers in relation to tax compliance is a disadvantage of the function-type structure. For several of the variables, Table 3 records the following cluster-by-cluster characteristics:
No. 3: developed or maintained in-house and from an external supplier in most (55%) TAs; from an external supplier, centralized, in half of the TAs; developed or maintained in-house and from an external supplier, centralized, in half of the TAs; developed or maintained in-house and from an external supplier in most (75%) TAs; developed or maintained in-house and from an external supplier.
No. 4: 80% of the TAs exercise delegated authority without requiring external approval; 75% of the TAs; 100% of the TAs; 63% of the TAs; the TA exercises delegated authority without requiring external approval.
Performance standards: no performance standards in 50% of the TAs; mostly met.
No. 7: centralized in part (46%) of the TAs; regionalized in half of the TAs; centralized in half of the TAs; centralized in half of the TAs.
Remuneration: performance is linked to pay and reward in all TAs; in half of the TAs; in most (75%) TAs; performance is not linked to pay and reward.
No. 13: no specific programs for SMEs exist in most (64%) TAs; in all TAs; in most (75%) TAs; in most (75%) TAs; no specific programs for SMEs exist.
No. 14: e-payment is available and mandatory for all taxpayers (36% of the TAs) or for some taxpayers (27% of the TAs).
Behavioral insights: the TA has not created a behavioral insights unit/team in order to influence taxpayer behavior in most (75%) TAs; in most (63%) TAs; the TA has not created a behavioral insights unit/team.
No. 17: the TA has the ability to settle disputes with taxpayers in most (73%) TAs; in most (75%) TAs; in half of the TAs; in most (88%) TAs; the TA has the ability to settle disputes with taxpayers.
No. 18: positive in most TAs; positive in most TAs; negative in all TAs; negative in most TAs; positive.
No. 19: the TA uses innovative approaches in most (91%) TAs; in most (75%) TAs; in most (75%) TAs; in most (63%) TAs; the TA uses innovative approaches.
No. 20: includes individual development plans; the HR strategy is competency-based in half of the TAs; the HR strategy is competency-based.
The percentage of TAs that exercise delegated authority without requiring external approval is higher in the case of cluster 1 compared to cluster 4. The existence of performance standards related to resolving tax dispute cases via administrative review was observed to a greater extent in the case of the TAs in clusters 3 and 4.
The location of staff performing TA operational activities related to return filing/payment processing is centralized in 17 of the 28 TAs of the EU MS. Regarding the location of staff performing TA operational activities related to managing taxpayer appeals/disputes, the EU MS have opted in equal proportions for centralized and localized locations. The number of full-time permanent staff per 1,000 inhabitants, the salary cost as % of the recurrent budget and the value of refunds made (VAT and other refunds) as % of the total net revenue collected by the TA are below the EU averages in the case of the TAs in cluster 4. The personnel involved in audit, investigation and other verification as % of full-time permanent staff is above the EU average in the case of the TAs in cluster 1. Performance is linked to pay and reward in all TAs in cluster 1. This feature can be observed to a much lesser extent in the case of the TAs in clusters 3 and 4. Cluster 1 contains the most TAs that have specific programs for SMEs. E-payment is available but not mandatory in most of the TAs in clusters 3 and 4. Also, these clusters are characterized by negative staff dynamics. The percentage of TAs using innovative approaches, individual development plans for HR and a competency-based HR strategy is higher in the case of cluster 1 as compared to clusters 3 and 4. The dendrogram for the formed clusters, using the Ward method for calculating the distance between objects, can be found in Figure 6. The dendrogram highlights the five formed clusters, certifying the robustness of the cluster analysis results. The grouping of the TAs from the EU MS into clusters, based on variables that characterize aspects related to organization and functioning, demonstrates the heterogeneity of these institutions. This heterogeneity is natural given the existence of different tax systems in the EU. But, given the close links that must exist between EU TAs in the tax collection process, the trend should be toward homogenization and standardization of the main aspects that characterize the organization and functioning of these bodies. The TAs in the EU MS are at relatively different levels of maturity (especially technological). The organization and functioning models outlined through the 5 identified clusters provide indications about the organizational level at which the TAs in the EU are. The TAs included in cluster 1 can be considered to be at the highest level of maturity, considering that they are based on functional structures, pay more attention to IT infrastructure, and practice large-scale delegation of authority; taxpayers appreciate that, in general, their appeals/complaints are resolved in a timely manner by the TA; staff activity is based on performance indicators; staff are highly qualified for their functions and receive specific training; and there are specific roles for headquarters and local directorates. Conclusions We note that the TAs in cluster 1 can frequently be found in the first part of the ranking based on the 5 indicators that reflect the efficiency of the activity, while the TAs in clusters 3 and 4 can frequently be found in the last part of the respective ranking. Therefore, we can say that the efficient TAs in the EU MS have organizational and operational characteristics specific to cluster 1.
In order to improve the efficiency of the TA's activity, decision makers should consider changing some elements related to its organization and functioning, so that:
• the degree of autonomy increases;
• IT solutions are implemented in all activities;
• innovative approaches are used in management, based on stimulating the performance of human resources and increasing the importance given to auditing;
• the function-based organization of the TA, based on matrix management, is expanded, as such structures are the most likely to launch and implement reforms;
• the segmentation of taxpayers is deepened; and
• the organization scheme is rethought so that it becomes supple and flexible, avoiding bureaucratization in terms of information flow, decision making and the provision of services to taxpayers.
In order to provide high-quality services to citizens, to improve revenue collection and to achieve operational excellence, TAs are required to innovate quickly. The present research has shown that the grouping of TAs based on variables that reflect their characteristics can be a useful tool in identifying a model of organization and functioning of the TA associated with a certain level of efficiency. Such an instrument could provide TAs with a starting point for evaluating the way they are organized and function, in order to adopt the changes that could improve the efficiency of their activity. Such an instrument could be used in a complementary manner with the TA assessment tools presented in the first part of the paper. A limitation of the research is the absence of an aggregate indicator measuring the performance of a TA that would allow a ranking of the TAs of the EU MS. Also, the possibility of choosing variables regarding the organization and operation of the TAs of the EU MS was limited by the many situations in which information was not available because the questionnaires received from the OECD were not fully completed. The grouping of TAs on the basis of variables that reflect their characteristics, in order to identify organization and functioning models of the TA that can be associated with a certain level of efficiency, could become much more accurate if the information in the OECD database were complete.
6,535.2
2021-06-30T00:00:00.000
[ "Economics", "Political Science" ]
Mutagenicity and human chromosomal effect of stevioside, a sweetener from Stevia rebaudiana Bertoni. Leaves of Stevia rebaudiana Bertoni have been popularly used as a sweetener in foods and beverages for diabetics and obese people due to their potent sweetener stevioside. In this report, stevioside and steviol were tested for mutagenicity in Salmonella typhimurium strains TA98 and TA100 and for chromosomal effects on cultured human lymphocytes. Stevioside was not mutagenic at concentrations up to 25 mg/plate, but showed direct mutagenicity to only TA98 at 50 mg/plate. However, steviol did not exhibit mutagenicity in either TA98 or TA100, with or without metabolic activation. No significant chromosomal effect of stevioside and steviol was observed in cultured blood lymphocytes from healthy donors (n = 5). This study indicates that stevioside and steviol are neither mutagenic nor clastogenic in vitro at the limited doses; however, in vivo genotoxic tests and long-term effects of stevioside and steviol are yet to be investigated. Introduction Stevia rebaudiana Bertoni is a small herb (Compositae) (Fig. 1). The plant is native to South America and has been used for sweetening beverages and foods since 1600 (1). Stevia became popular and was commercialized by the Japanese. The plant has been distributed to southeast Asia, including Thailand [as "Ya wan" (2)]. More than 750 tons of stevia leaves per year are used as crude extract for consumption. The sweetening compound was isolated from stevia leaves by Rebaudi and Resenac (3) and was named "stevioside" (4,5). Stevioside has very high sweetening potency, 250-300 times that of sucrose, but little caloric value (6). Its sweetness is stable to heat and yeast fermentation. Stevia and stevioside have been applied as a sugar substitute and used by those with obesity, diabetes mellitus, heart disease, and dental caries (7). Stevioside can also inhibit the growth of certain bacteria (8). Eight different sweetening ent-kaurene glycosides (from about 88 compounds in stevia leaves) have been isolated (9). The common aglycone of these glycosides is steviol. Mutagenicity The mutagenicity of crude stevia extract, stevioside, and steviol has been reported (18-23). Stevioside showed no mutagenicity in bacterial systems (18-20); however, its aglycone, steviol, was mutagenic after metabolic activation in the forward mutation assay using only Salmonella typhimurium TM677 (21), but not mutagenic in the reverse mutation test using Salmonella typhimurium TA100, TA98, TA102, or TA97. When steviol was metabolically activated with S-9 from Aroclor 1254-pretreated rats, 15-oxosteviol was found to be the mutagenic product in the forward mutation assay (22). It was suggested (23) that this major oxidized steviol was responsible for the indirect mutagenicity of steviol in TM677, probably by selectively inducing a deletion or insertion of more than one base pair, which cannot be detected in strains TA98, TA100, and TA102, which are regularly used in the Ames test. Carcinogenicity No evidence has been reported that stevioside and its metabolites are carcinogenic. No carcinogenicity was detected in hamsters given stevioside orally for 6 months or in long-term feeding for 2 years in rats (1,17). Contraceptive Activity and Teratogenicity It was shown that a 5% water decoction of stevia leaves reduced the fertility of female rats by about 65% (24). Stevioside (95-98% pure) had no effects on the rate of pregnancy or the development of rat fetuses (25).
Stevioside did not cause any abnormalities in mating, pregnancy, or the development of fetuses in the experimental animals (15,26). Safety Assessment of Stevioside Consumption in Humans Stevia has been used as a sweetening ingredient in foods and drinks by South American natives for many centuries, and there is no report of any plant toxicity to the consumers. The safety of stevia crude extract and stevioside has been well accepted, and various products have been commercialized in Japan as sweeteners for several foodstuffs (7,8). Stevia leaves have been used as an herbal tea, mixed with other plant products, for reducing sugar consumption in diabetic patients in Thailand (27). No side effects were observed in these patients after 5 years of continued consumption. Long-term genotoxicity and health risks in humans have not been completely assessed. In Thailand, stevia leaves and their crude extract have been legally permitted by the Ministry of Public Health for commercial use as an herbal tea, but purified stevioside as an additive in foods and drinks has not yet been legalized. However, stevioside has been permitted to be exported. More evidence on the safety of products from stevia and stevioside, and health risk assessment of their genotoxicity, is needed. In this report, the mutagenicity and human chromosomal effects of stevioside and its aglycone steviol were tested and considered for further health risk assessment. Materials and Methods Tested Compounds. Stevioside was isolated from stevia leaves by hot-water extraction, decolorized by electrolysis and ion-exchange chromatography, and crystallized by the method previously reported (29). The purity of the product was 99%. Steviol was obtained by periodate oxidation of stevioside, followed by acid hydrolysis and crystallization. Mutagenicity Assay The Salmonella mutation assay with preincubation was performed using S-9 mix prepared from the livers of rats pretreated with sodium phenobarbital and 5,6-benzoflavone, as previously described (30). The tester strains were Salmonella typhimurium TA98 and TA100. The bacteria were cultured in Oxoid nutrient broth no. 2 for 14 hr before each assay. The histidine+ revertants were scored. All samples were analyzed in duplicate. Chromosomal Aberration Test. Whole-blood samples from five healthy donors were used. Lymphocyte cultures were performed according to the standard method. After 24 hr of incubation, different concentrations of stevioside (1, 5, and 10 mg/mL) or steviol (0.1 and 0.2 mg/mL) were added to the cultures. Mitomycin C at 1 µg/mL was used as the positive control. Structural chromosome aberrations were analyzed in 100 metaphases from each tested culture. Mutagenicity As shown in Figure 3, at doses lower than 25 mg/plate, either in the presence or absence of S-9 mix, stevioside was not mutagenic toward Salmonella typhimurium TA98. However, at the higher dose of 50 mg/plate, stevioside showed significant mutagenicity (four times the control) in TA98 without metabolic activation, while the same dose demonstrated a slight increase of bacterial mutation with the addition of S-9 mix. Under the same doses and conditions, stevioside did not exhibit any mutagenic activity toward TA100. Steviol, at 1-20 mg/plate, did not show mutagenic activity toward either TA98 or TA100. Higher doses of steviol were not tested due to their strong cytotoxicity to both tester strains. Treatment of 50 mg stevioside with β-glucosidase did not significantly alter its mutagenic effect on TA98, either with or without metabolic activation (Fig. 3).
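The "four times the control" call above follows the usual Ames-test practice of comparing revertant counts against the spontaneous background; a minimal sketch of that arithmetic, with invented plate counts, is shown below (the twofold-increase threshold is a common rule of thumb, not a criterion stated in this paper).

```python
# Minimal sketch of the revertant fold-increase calculation used to call a
# dose mutagenic in the Ames test. Plate counts are invented examples; the
# >= 2-fold rule is a common convention, not a threshold stated in this study.

def fold_increase(treated_counts: list[int], control_counts: list[int]) -> float:
    """Mean revertants on treated plates divided by mean spontaneous revertants."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated_counts) / mean(control_counts)

ta98_control = [28, 32]   # duplicate plates, spontaneous revertants
ta98_50mg = [118, 126]    # duplicate plates at 50 mg/plate (hypothetical)

ratio = fold_increase(ta98_50mg, ta98_control)
print(f"fold increase = {ratio:.1f}")   # ~4.1, i.e. 'four times the control'
print("mutagenic call:", ratio >= 2.0)  # common >= 2-fold convention
```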
Chromosomal Aberrations Stevioside at concentrations of 1, 5, and 10 mg/mL did not cause any significant aberrations of metaphase chromosomes in any of the blood samples analyzed (p > 0.05). Similarly, steviol at 0.1 and 0.2 mg/mL did not show any significant abnormal change of chromosomes in four blood samples, except in one case. Steviol (0.1 and 0.2 mg/mL) in the presence of S-9 mix in one tested blood sample did not alter the result. However, under the same conditions, mitomycin C, as the positive control, caused remarkable genetic damage to chromosomes. Discussion We have shown the lack of mutagenicity of stevioside and steviol at limited doses (up to 20 mg/plate) toward Salmonella typhimurium strains TA98 and TA100 with or without metabolic activation. This confirmed the findings previously reported (18,21) that crude stevia extract and stevioside were negative toward TA98 and TA100 mutation. In various other systems, such as reversion mutation (18), bacterial recombination (19), host-mediated mutation (20), Salmonella mutation (22), forward mutation (21), chromosome aberration in human fetal fibroblasts (8), and dominant lethality (8), all the mentioned in vitro mutagenicity tests were negative both with and without S-9 mix. Pimbua et al. (21) also demonstrated that stevioside was not mutagenic, even after incubation with microsomal enzymatic fractions from the livers of different animal species. Generally, stevioside per se is not a mutagen toward bacterial cells or a genotoxin to cultured mammalian cells, and it is not carcinogenic to experimental animals either. Only at an unusually high dose, 50 mg/plate, was stevioside mutagenic to TA98 but not to TA100. The mutagenicity might be due to some impurities in the sample. Crude extract of stevia leaves was shown by other investigators to have some slight mutagenicity to TA100 (8) and to induce weak chromosomal aberrations in Chinese hamster fibroblasts (20). The mutagenic activity of stevioside at the high dose was more evident in the absence of its metabolic activation than with S-9 mix. The decrease of mutagenicity in the presence of S-9 mix might be due to some inactivation of such a high amount of stevioside during the preincubation with rat-liver microsomal enzymes. If impurities were not responsible, stevioside seemed to be a very weak, direct-acting mutagen. Steviol was shown to be nonmutagenic in TA98 and TA100 by the reverse mutation assay. Among several mutagenicity assays, only the forward mutation assay reported by Pezzuto et al. (20) and Pimbua et al. (21), with steviol at 5 mg/plate, showed mutagenicity toward Salmonella typhimurium TM677, either with or without S-9 mix. 15-Oxosteviol was reported to be an active metabolite of steviol incubated with rat-liver S-9 mix, and it was shown to be a direct-acting mutagen toward Salmonella typhimurium TM677 (23). The in vitro carcinogenicity of steviol has not been studied. The health risk of long-term ingestion of stevioside has to be studied further. Experimentally, stevioside was converted into steviol by hydrolysis by endogenous or bacterial enzyme(s) in rats given 3H-stevioside orally (31). Steviol and dihydrosteviol could inhibit the mitochondrial translocation of adenine nucleotides and also inhibit energy metabolism, oxygen consumption, and gluconeogenesis (32). It is still unknown whether steviol could be formed during ingestion and then absorbed in the human gastrointestinal tract.
It was concluded that stevioside and steviol, at less than 20 mg/plate, are not mutagenic toward Salmonella typhimurium strains TA98 and TA100, with or without metabolic activation. At higher concentrations, stevioside showed weak mutagenicity to TA98, which might be due to contamination by impurities. No in vitro chromosomal effect of stevioside and steviol on human lymphocytes was observed. However, in vivo genotoxicity studies on stevioside and its metabolites, and the effects of their long-term consumption in humans, are yet to be investigated.
2,350.6
1993-10-01T00:00:00.000
[ "Biology", "Agricultural And Food Sciences" ]
Production of Polycaprolactone/Atorvastatin Films for Drug Delivery Application Developing biomaterials for tissue regeneration is a promising alternative for the recovery of various tissues, including bone. Atorvastatin (ATV) has a series of beneficial effects, which include bone anabolism, vasodilating, antithrombotic, antioxidant, anti-inflammatory, and immunosuppressive actions. Introduction Atorvastatin (ATV) belongs to the pharmacological class of statins and is an anti-cholesterol drug that inhibits HMG-CoA reductase, an essential enzyme in cholesterol synthesis, thereby reducing cholesterol production by the human organism 1. Traditionally, this drug is presented as a tablet. It has an absorbed fraction and absolute bioavailability of about 30% and 12%, respectively, due to pre-systemic clearance in the gastrointestinal mucosa and its first-pass hepatic metabolism 2,3. However, ATV has beneficial effects unrelated to lipid metabolism, the so-called pleiotropic effects. These include bone anabolism, vasodilating, antithrombotic, antioxidant, anti-inflammatory, and immunosuppressive actions 4,5. In 2002, in a retrospective case-control study evaluating 1,375 women aged 50 to 95 years, Pasco et al. 6 found that statins were associated with a 60% reduction in the risk of bone fractures. Some statins have been studied with polymers for controlled release 7,8. However, for ATV, no such study is yet available. Biodegradable polymers have been investigated as raw materials to manufacture biomaterials applied to tissue regeneration and devices for drug delivery 9-14. Due to the biocompatibility and biodegradability of polycaprolactone (PCL), several applications in the medical field are found for it, such as drug delivery systems 9,15; films applied to tissue-engineered skin 16-18; coatings for urethral stents in musculoskeletal tissue engineering 19; scaffolds for supporting osteoblast and fibroblast growth 19,20; microspheres and nanoformulations for drug delivery applied to cancer treatment 21; and resorbable scaffolds made by injection molding for application in orbital eye repair 22. Mainly, PCL has been extensively studied to produce materials applied to both bone and cartilage repair 19,22,23. In this context, different techniques such as electrospinning, solvent casting, and compression molding, among others 9,22, have been used to produce biomaterials in different formats, such as films 9 and scaffolds 9, among others 24. Biomaterials have played a prominent role in the development of tissue regeneration 1. They can be used for local drug delivery as a promising alternative to recover various tissues, including bone 1. In this context, two different technologies, the controlled release of drugs and tissue engineering, can be combined to provide efficient results in tissue repair. Polymeric materials in matrices, films, or scaffolds can guide and support tissue regeneration and work as vehicles for drug delivery 9-13. The technology of controlled release of drugs can potentially overcome many problems related to the traditional administration of active pharmaceutical ingredients by regulating the rate and spatial localization of the released agent 14.
Recent studies have associated metabolic abnormalities and obesity, particularly abnormal lipid metabolism, with diseases caused by cartilage degeneration, such as osteoarthritis 25. Developing materials based on PCL and ATV can be an alternative for treating problems associated with bone, local cholesterol disorders, and cartilage diseases, due to the characteristics of PCL 9,10 and ATV 25,26. Thus, the objective of the present work was to produce and characterize PCL/ATV matrices with prolonged drug release. Materials Atorvastatin calcium was kindly donated by the Oswaldo Cruz Foundation. PCL (average molar mass of 60,000 g/mol) was supplied by Polymorph Ltd, China. Chloroform (PA) was obtained from Labsynth, Brazil. The manufacturer's name will not be disclosed for confidentiality reasons. Production of PCL/ATV matrices The polymeric matrices were prepared by solvent casting, in which atorvastatin calcium and PCL were solubilized in 2 mL of chloroform at room temperature using the following quantities of atorvastatin: 0, 0.1, 0.2, 0.3, and 0.4 mg ATV/mg PCL. The values were based on ATV pill formulations. The resulting mixtures were added to silicone molds (3 cm × 2 cm) and dried at room temperature for 1 hour. The matrices were named according to the amount of ATV in the sample: PCL (0 mg of ATV), PA20 (20 mg of ATV), PA40 (40 mg of ATV), PA60 (60 mg of ATV), and PA80 (80 mg of ATV). Fourier-Transform Infrared Spectroscopy (FTIR) FTIR analysis was performed to investigate possible interactions between ATV and PCL. The analysis was done using an infrared spectrometer (Bruker, model Vertex 7) with attenuated total reflectance (ATR), recording measurements from 4000 to 400 cm−1 with 64 scans, and the spectra were compared with those of the raw materials (PCL and ATV). X-Ray Diffraction (XRD) All the samples were analyzed on an X-ray diffractometer (Rigaku, model Mini Flex II) operated with a Cu Kα source (λ = 1.5418 Å). The scans were recorded over 2θ = 6-60°, with a 2°/min scan speed. Thermogravimetric analysis (TGA) The PCL, ATV, PA20, PA40, PA60, and PA80 samples were analyzed on a thermogravimetric analyzer (model Discovery 550, TA Instruments). The measurement was performed using platinum pans with about 5 mg of sample, at a heating rate of 20 °C/min under a nitrogen atmosphere, from room temperature to 700 °C. TGA curves and their derivatives (DTG) were obtained. Differential Scanning Calorimetry (DSC) The PCL, ATV, PA20, PA40, PA60, and PA80 samples were characterized on a differential scanning calorimeter (DSC model Discovery 250, TA Instruments). Samples with a mass of around 5 mg were heated in aluminum pans in the range from room temperature to 300 °C, at a heating rate of 10 °C/min under a flowing dry nitrogen atmosphere (20 cm3/min). Scanning Electron Microscopy (SEM) and Energy Dispersive Spectroscopy (EDS) To evaluate the morphology of the samples with different amounts of ATV, the raw materials and the PA20, PA40, PA60, and PA80 samples were analyzed using a scanning electron microscope (JEOL, model JSM-6390LV) at an acceleration voltage of 15 kV. For this analysis, the samples were coated with gold. To observe the drug distribution on the matrices, the samples were coated with carbon, and an EDS spectrum was acquired. The spectrum acquisition was made for 30 s with a 15 kV, 79 mA beam at a 10 mm working distance and a spot size of 71. For the acquisition of the EDS map, the same beam conditions were used, and 500 frames were acquired with a dead time between 17 and 19%.
In vitro release studies

To evaluate the capacity of the prepared films to release ATV, they were submitted to an in vitro release study monitored by UV-Vis spectroscopy. A standard curve of ATV in phosphate medium was obtained at 240 nm. The in vitro release study was conducted using a Distek Evolution 6100 dissolution test system. The experiment was done in triplicate, using three vessels for each sample. The samples (PA20 and PA40) were immersed in 900 mL of potassium phosphate buffer (pH = 7.4) at 37 °C, under agitation at 75 rpm, and aliquots were withdrawn at the following predetermined intervals: 5, 30, 60, 90, 120, 150, 180, 210, 240, 1440, 1800, 2880, 3180, 4320, 5760, 5940, 10080, 11520, 12960, 14400, and 15840 min. The drug concentrations were calculated using the equation y = 33.195x + 0.0176 (correlation coefficient of 0.9603) obtained from the analytical curve of ATV in phosphate buffer pH 7.4. The release data were fitted to the following models: zero-order kinetics, first-order kinetics, Hixson-Crowell, Higuchi, and Korsmeyer-Peppas 27,28 . The linearity of the results was evaluated through the determination coefficients (R²). After the in vitro release study, the morphology of the matrices was analyzed by SEM as described in the section above.

FTIR analysis

FTIR analysis of PCL, ATV, and the samples containing different amounts of ATV was performed, and the spectra are shown in Figure 1, together with the principal bands of PCL and ATV. The results showed that ATV and PCL present bands with wavenumbers close to those reported in the literature 19,29-31 .

The spectra of the PA20, PA40, PA60, and PA80 samples present the characteristic bands of both pure PCL and pure ATV. With the increase in the proportion of ATV in the samples, the ATV bands become more evident (Figure 1). The ATV characteristic bands around 1646 cm−1 (C=O) and 1575 cm−1 (C–N) can be observed in the region from 1900 cm−1 to 1350 cm−1. The strong carbonyl absorption of PCL overlaps the carbonyl absorption bands of ATV. Therefore, according to the FTIR analyses, no chemical bond was formed between ATV and PCL, which is important since the primary objective is the release of ATV.

XRD analysis

The peaks related to PCL and ATV were observed in the samples PA20, PA40, PA60, and PA80. Peaks related to ATV, such as those at 2θ = 17.05° and 2θ = 19.57°, are more clearly observed in samples with a greater amount of the drug 3,32 (see the detail on the right side of Figure 2). According to Figure 2, changes in the PCL peak positions occurred, probably due to the presence of ATV.

Thermal analysis - DSC, TGA and DTG

The samples PA20, PA40, PA60, PA80, PCL, and ATV were analyzed by TGA and DSC to evaluate their thermal properties. DTG results are presented in Table 1; DSC curves are shown in Figure 3 and the endothermic event values in Table 2. The PCL thermal degradation occurred in a single stage of mass loss: the onset was at 390 °C and the maximum thermal degradation occurred at 448 °C (Table 1). This stage corresponds to the depolymerization of PCL from the ends of the polymer chain, with the hydroxyl terminal groups forming ε-caprolactone 23 . These results agree with the literature, which describes PCL as decomposing completely in a single stage, with a maximum degradation peak around 430 °C 9,33 .

As previously described 11,24 , a five-stage weight loss was recorded in the ATV TGA curves. According to Shete et al.
2 , the first stage of weight loss, around 80-140 °C, is related to the loss of volatiles absorbed on the surface. Water adsorbed on the surface usually leaves at lower temperatures, around 60 °C, whereas bound water usually leaves around 100 °C. According to these authors, this result is consistent with the literature for ATV trihydrate 2 . In this work, stages of mass loss at 251 °C, 340 °C, and 459 °C were observed for ATV. The theoretical value of mass loss for the trihydrate would be 4.46%, and we observed a very close loss, 4.38%, indicating that this inference is probably correct 2 . It can also be observed in the TGA curve that the greatest loss of volatiles occurs after 200 °C.

TGA analysis of the PCL/ATV samples showed characteristics compatible with both PCL and ATV. Table 1 shows the maximum temperatures observed in the DTG curves (excluding the initial events at temperatures below 175 °C) for ATV, PCL, and the PCL/ATV samples. These results indicate that the thermal decomposition curves of the PCL/ATV samples are intermediate between those of the drug and the polymer, indicating drug entrapment in the polymer matrix.

The DSC curves of ATV (Figure 3) present two endothermic events: one in the range of 80 °C to 140 °C, related to the loss of volatiles, and a second around 160 °C, due to the melting of ATV 32,34,35 . These results are consistent with the TGA data obtained in this work. The PCL matrix (Figure 3) showed a single endothermic peak corresponding to the crystalline fusion at 52 °C. Similar results were observed by Silva et al. 30 and by Sekosan and Vasanthan 36 .

The peaks related to PCL fusion remained similar in the PCL/ATV samples. Nevertheless, since changes in the fusion enthalpy values were observed (Table 2), it is possible to infer that ATV alters the crystallinity of PCL (Figure 3). The value of ΔH decreases with increasing ATV content. This result agrees with the XRD analysis (Figure 2), which showed changes in the characteristic peaks of PCL. The peaks related to the drug and the amount of energy involved in the thermal transitions were also modified. Considering the matrices, the endothermic events related to ATV were observed in the PA40, PA60, and PA80 samples, although, probably due to the recrystallization of ATV in PCL, these peaks were shifted to higher temperatures (Table 2). In the PA20 sample, the peaks related to ATV were not observed, perhaps due to the low concentration of drug present in the sample.

SEM and EDS Analysis

SEM images of the ATV particles, the PCL matrix, and the PA20, PA40, PA60, and PA80 matrices are shown in Figure 4. The ATV micrograph (Figure 4a) shows agglomerates of needle-like structures with variable length and width. The pure PCL matrix (Figure 4b) presented a smooth surface. According to the SEM images (Figures 4c, 4d, 4e, and 4f), the morphology of the matrices was affected by the amount of ATV in the sample. Structures smaller than 10 µm can be observed on the surfaces of the PA40 and PA60 matrices.

As mentioned, ATV calcium was used in the present work. EDS analysis was therefore performed to evaluate the calcium distribution on the PA20 and PA40 surfaces (Figure 5). The EDS calcium map shows that the drug is dispersed in the matrices and that the clusters on the PA40 matrix consist of the drug (Figure 5b). This allows extrapolation to the clusters observed in the SEM images of PA60 and PA80 (Figures 4e-4f) and the conclusion that these samples have excess drug particles on the surface, probably outside the polymeric structure.
In vitro release study

The PA60 and PA80 samples (Figure 6) were subjected to in vitro drug release studies. The films were produced in 3 cm x 4 cm molds and had a thickness of 0.1 cm.

ATV release was affected by its concentration in the sample, as shown in Figure 7. During the studied period, the PA60 (60 mg of ATV) and PA80 (80 mg of ATV) samples released 36.43% and 64.72% of the ATV, respectively. The dissolution of the drug contained in the matrices can be considered prolonged, because up to 15000 minutes it had not reached 100% release 37 . PCL is used in the development of controlled drug release systems due to its mechanical properties, and these characteristics may have contributed to the drug release behavior 9 . This result can be considered positive, since one of the goals of this work was to produce ATV matrices with prolonged release.

The experimental data were fitted to the release models most used in the literature — zero-order kinetics, first-order kinetics, Hixson-Crowell, Higuchi, and Korsmeyer-Peppas — and the correlation coefficients were determined (a sketch of such a fitting procedure is given at the end of this section). The values of the fitted coefficients, as well as the R² values, are presented in Table 3. According to the data, the PA60 release can be explained by the Hixson-Crowell and Higuchi models, with the Higuchi model fitting best (Table 3). For the PA80 sample, the release kinetics could be explained by the zero-order, Hixson-Crowell, Higuchi, and Korsmeyer-Peppas models, with the Korsmeyer-Peppas model showing the highest correlation coefficient (Table 3). This shows that more than one model can explain the initial drug release behavior. However, it is important to highlight that only the Higuchi model could be fitted to both samples, with R² values above 0.96 for both.

Some considerations must be made regarding the application of these models. Higuchi's model, for example, is based on Fick's law, and its use requires several assumptions: mass transport by drug diffusion must be the limiting step; sink conditions must hold for ATV (which is poorly soluble in phosphate buffer pH 7.4 35 ); the device must not erode while the drug is being delivered; the diffusion coefficient of the species must be constant; the swelling of the device must be insignificant or happen quickly enough to reach equilibrium; among others 37 . The dissolution process can change the morphology of the matrices and, consequently, the release behavior 38 . In fact, what occurs initially is not erosion of the PCL itself, but erosion of the matrix due to the exit of ATV. This erosion can create preferential pathways for the fluid.

To understand the effect of ATV release from the matrix, SEM analysis of the PA60 and PA80 samples was performed after the release study and showed that erosion of the matrix occurred (Figure 8). In the images it was possible to observe the formation of pores, compared with the micrographs taken before release (Figures 4e-4f), possibly due to the exit of the drug particles that were well incorporated in the matrices (PCL degradation itself can take 2 to 4 years, depending on the initial molecular weight of the device or implant 9 ).
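The model fitting referenced above can be reproduced with a few lines of nonlinear least squares. The sketch below is illustrative rather than a reconstruction of the study's analysis: the absorbance values are made up, while the standard curve (y = 33.195x + 0.0176), the buffer volume (900 mL), and the 60 mg loading follow the text.

```python
# Convert UV-Vis absorbance to cumulative ATV release via the standard
# curve, then fit the Higuchi and Korsmeyer-Peppas models by nonlinear
# least squares. Absorbance values below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([5, 30, 60, 120, 240, 1440, 2880, 5760, 10080], float)  # min
absorbance = np.array([0.05, 0.12, 0.18, 0.26, 0.37, 0.62, 0.78, 0.95, 1.10])

conc = (absorbance - 0.0176) / 33.195      # mg/mL, from y = 33.195x + 0.0176
released_mg = conc * 900.0                 # 900 mL of buffer per vessel
pct = 100.0 * released_mg / 60.0           # % released for a 60 mg load (PA60)

def higuchi(t, k):                         # Q(t) = k * sqrt(t)
    return k * np.sqrt(t)

def korsmeyer_peppas(t, k, n):             # Q(t) = k * t**n
    return k * t**n

for name, model, p0 in [("Higuchi", higuchi, (1.0,)),
                        ("Korsmeyer-Peppas", korsmeyer_peppas, (1.0, 0.5))]:
    popt, _ = curve_fit(model, t, pct, p0=p0, maxfev=10000)
    resid = pct - model(t, *popt)
    r2 = 1.0 - np.sum(resid**2) / np.sum((pct - pct.mean())**2)
    print(f"{name}: parameters = {popt}, R^2 = {r2:.4f}")
```

In practice the Korsmeyer-Peppas fit is restricted to roughly the first 60% of release, and model selection is then based on the R² values, as done in Table 3.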
Nevertheless, the formation of pores in the matrix structure can be positive for tissue engineering, since the pores allow the initial fixation of cells, serving as a structure and allowing tissue regeneration 9,39 . In this context, the matrices produced from PCL and ATV start as a tissue matrix and, as the release proceeds, pores are formed that could facilitate tissue growth. Similar behavior was observed by Jiao et al. 40 , where the pore structure in nano-hydroxyapatite (HA)/PCL and micro-HA/PCL tissue engineering scaffolds produced by 3D printing facilitated the growth of blood vessels and the transport of nutrients, and provided a very favorable environment for the discharge of cellular metabolic waste 40 . Likewise, in the work of Liu et al. 41 , a 3D-printed PCL/strontium-HA bone scaffold was prepared with the aim of simulating the natural bone components; owing to its porosity, it exhibited significant osteogenic activity in vitro and in vivo and was able to simultaneously release strontium and calcium ions to promote osteogenic repair 41 .

According to the SEM observations of ATV release from the matrices produced in this work, it is suggested that the mechanism of drug release from the PA60 and PA80 samples occurs in the following steps: (1) solubilization of the drug particles closest to the sample surface; (2) diffusion of the drug particles inside the matrix structure into the solution, forming pores where drug particles previously were, which may cause erosion of the polymer; (3) diffusion of the phosphate buffer solution into the matrix after the drug has started to dissolve, causing swelling of the polymeric matrix and its erosion. Since PCL is hydrophobic, this last process can be more difficult, making it a factor with less impact on the release. These steps could explain the changes observed in the dissolution profile (Figure 7).

Conclusions

In the present work, PCL matrices containing ATV were obtained using the solvent casting method. The distribution of ATV in the matrices was uniform, as observed in the SEM analysis. All characterizations confirmed the incorporation of the drug in the matrices, which maintained their structure with minimal changes. It was observed that, at the end of the tests (after 15000 minutes), the dissolution reached a maximum of 65%, indicating prolonged release of the ATV, with the release process occurring in the following steps: solubilization of the drug on the matrix surface, diffusion of the drug from the matrices, and then erosion. The experimental data of the in vitro release analysis of the matrices were fitted to different models, and the one that showed the best fit for the two studied matrices was the Higuchi model, with a correlation coefficient above 0.95 for both. The conditions for the application of the Higuchi model were not completely satisfied in the present work but, given the obtained R² values, it is suggested that diffusion is an important step in the process of ATV release from PCL matrices produced by solvent casting.

Figure 6. (a) PA60 and (b) PA80. Both films were produced in 3 cm x 4 cm molds and had a thickness of 0.1 cm.

Table 1. DTG maximum values for ATV, PCL, and PCL/ATV samples.

Table 2. Endothermic event values for ATV, PCL, and PCL/ATV samples.
4,485.6
2023-01-01T00:00:00.000
[ "Materials Science", "Medicine" ]
Z-boson production in p-Pb collisions at √s_NN = 8.16 TeV and Pb-Pb collisions at √s_NN = 5.02 TeV : Measurement of Z-boson production in p-Pb collisions at √s_NN = 8.16 TeV and Pb-Pb collisions at √s_NN = 5.02 TeV is reported. It is performed in the dimuon decay channel, through the detection of muons with pseudorapidity −4 < η^µ < −2.5 and transverse momentum p_T^µ > 20 GeV/c in the laboratory frame. The invariant yield and nuclear modification factor are measured for opposite-sign dimuons with invariant mass 60 < m_µµ < 120 GeV/c² and rapidity 2.5 < y_cms^µµ < 4. They are presented as a function of rapidity and, for the Pb-Pb collisions, of centrality as well. The results are compared with theoretical calculations, both with and without nuclear modifications to the Parton Distribution Functions (PDFs). In p-Pb collisions the center-of-mass frame is boosted with respect to the laboratory frame, and the measurements cover the backward (−4.46 < y_cms^µµ < −2.96) and forward (2.03 < y_cms^µµ < 3.53) rapidity regions. For the p-Pb collisions, the results are consistent within experimental and theoretical uncertainties with calculations that include both free-nucleon and nuclear-modified PDFs.
For the Pb-Pb collisions, a 3.4σ deviation is seen in the integrated yield between the data and calculations based on the free-nucleon PDFs, while good agreement is found once nuclear modifications are considered.

The 2018 Pb-Pb data sample corresponds to an integrated luminosity roughly twice that of 2015. The dataset used in this paper includes both the 2015 and 2018 samples and therefore supersedes the previous Pb-Pb results. The larger dataset allows for a more differential analysis as well as increased precision on the integrated cross section measurement. The paper is organized as follows: the ALICE detector and data samples are detailed in section 2, followed by the analysis procedure in section 3. The main results are then given in section 4 and the conclusions are drawn in section 5.

2 ALICE detector and data samples

Z bosons are reconstructed via their dimuon decay channel using data from the ALICE muon spectrometer, which selects, identifies and reconstructs muons in the pseudorapidity range −4 < η^µ < −2.5 [34]. The tracking system consists of five stations, each containing two multi-wire proportional cathode pad chambers. The third station is located inside a dipole magnet that provides an integrated magnetic field of 3 T·m. A conical absorber of 10 interaction lengths (λ_I), made of carbon, concrete and steel, is located in front of the tracking system to filter out hadrons and low-momentum muons from the decay of light particles (such as pions or kaons). The muon trigger system consists of four resistive plate chamber planes arranged in two stations placed downstream of an iron wall of ∼7.2 λ_I that reduces the contamination from residual hadrons leaking out of the front absorber. Finally, a small-angle beam shield made of dense materials protects the whole spectrometer from secondary particles coming from beam-gas interactions and from interactions of large-rapidity particles with the beam pipe.

Primary vertex reconstruction is performed by the Silicon Pixel Detector (SPD), the two innermost cylindrical layers of the Inner Tracking System (ITS) [35]. The first and second layers cover the pseudorapidity regions |η| < 2.0 and |η| < 1.4, respectively. Two arrays of scintillator counters (V0A and V0C [36]) are used to trigger events and to reject events from beam-gas interactions. The V0A and V0C detectors are located on either side of the interaction point, at z = 3.4 m and z = −0.9 m, and cover the pseudorapidity regions 2.8 < η < 5.1 and −3.7 < η < −1.7, respectively. The V0 detectors are also used to estimate the centrality in Pb-Pb collisions by means of a Glauber model fit to the sum of their signal amplitudes [37]. The events are then distributed in classes corresponding to percentiles of the total inelastic hadronic cross section. Finally, the Zero Degree Calorimeters (ZDC) [38], placed on both sides of the interaction point at z = ±112.5 m, are used to reject electromagnetic background. A complete description of the ALICE detector and its performance can be found in refs. [39,40].

The analysis in p-Pb collisions is performed on data collected in 2016 at a center-of-mass energy √s_NN = 8.16 TeV. The data were taken in two beam configurations, with either the proton (p-going) or the lead ion (Pb-going) moving towards the muon spectrometer. By convention, the proton moves toward positive rapidities. Because of the asymmetry in the proton and lead beam energies (E_p = 6.5 TeV and E_Pb = 2.56 TeV per nucleon), the resulting nucleon-nucleon center-of-mass system is boosted with respect to the laboratory frame by ∆y_cms = ±0.465.
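As a quick consistency check, this boost follows directly from the per-nucleon beam energies quoted above:

```latex
\Delta y_{\mathrm{cms}}
  = \frac{1}{2}\,\ln\frac{E_{\mathrm{p}}}{E_{\mathrm{Pb}}}
  = \frac{1}{2}\,\ln\frac{6.5~\mathrm{TeV}}{2.56~\mathrm{TeV}}
  \approx 0.466,
```

in agreement with the quoted ±0.465 (the sign depending on the beam configuration); shifting the laboratory dimuon acceptance by this amount yields the center-of-mass windows given below.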
Therefore, the rapidity acceptance of the muon spectrometer in the center-of-mass system is 2.03 < y_cms^µµ < 3.53 for the p-going configuration and −4.46 < y_cms^µµ < −2.96 for the Pb-going configuration. The data used in the Pb-Pb analysis were taken in 2015 and 2018 at √s_NN = 5.02 TeV and cover the rapidity interval 2.5 < y_cms^µµ < 4 (in the ALICE reference frame the muon spectrometer covers negative η; however, due to the symmetric nature of the Pb-Pb collisions, positive values are used for the probed rapidity interval).

The events selected for the analyses require two opposite-sign muon candidates in the muon trigger system, each with a transverse momentum above a configurable threshold, in coincidence with a minimum bias (MB) trigger. The latter was defined by the coincidence of signals in the two arrays of the V0 detector. In the Pb-Pb analysis, only the events corresponding to the most central 90% of the total inelastic cross section (0-90%) are used. For these events the MB trigger is fully efficient and the contamination by electromagnetic interactions is negligible. For p-Pb collisions, the Z-boson cross section is calculated using a luminosity normalization factor obtained via a reference process corresponding to the MB trigger condition itself; therefore the MB trigger efficiency does not affect the cross section evaluation. Finally, the muon trigger threshold was approximately p_T^µ = 0.5 GeV/c for p-Pb and p_T^µ = 1 GeV/c for Pb-Pb collisions. After the event selection, the integrated luminosity in Pb-Pb collisions is about 750 µb−1. In the p-Pb analysis, where a precise value of the luminosity is needed to compute the Z-boson cross section, dedicated van der Meer scans were performed [41]. The luminosity amounts to 8.40 ± 0.16 nb−1 and 12.74 ± 0.24 nb−1 for the p-going and Pb-going configurations, respectively.

3 Analysis procedure

The Z-boson signal extraction is performed by combining muons of high transverse momentum in pairs with opposite charge. Muon track candidates are reconstructed in the tracking system of the spectrometer using the algorithm described in ref. [42]. In order to ensure a clean data sample, a selection is performed on the single muon tracks reconstructed in the muon spectrometer, requiring them to have a pseudorapidity −4 < η^µ < −2.5 and a polar angle, measured at the end of the front absorber, of 170° < θ_abs < 178°. This procedure removes tracks at the edge of the spectrometer acceptance and rejects tracks crossing the high-density section of the absorber, which experience significant multiple scattering. The background from tracks not pointing to the nominal interaction vertex, mostly coming from beam-gas interactions and muons produced in the front absorber, is removed by applying a selection on the product of the track momentum and its distance of closest approach to the primary vertex (i.e., the distance to the primary vertex of the track trajectory projected onto the plane transverse to the beam axis). Finally, a track is identified as a muon if the track reconstructed in the tracking system matches a track segment in the triggering stations. Only muons with p_T^µ > 20 GeV/c are used, to reduce the contribution from low-mass resonances and semileptonic decays of charm and beauty hadrons. The µ+µ− pairs are counted in the invariant mass range 60 < m_µµ < 120 GeV/c², where the Z-boson contribution is dominant with respect to the Drell-Yan process. The invariant mass distributions of the Z-boson candidates are shown in figure 1 for minimum bias p-Pb collisions in the p-going and Pb-going configurations, and for Pb-Pb collisions in the centrality range 0-90%.
Several background sources can contribute to the invariant mass distributions of opposite-charge dimuons. The combinatorial background from random pairing of muons in an event is evaluated by looking at like-sign pairs (µ±µ±), applying the same selection criteria as for the signal extraction. In the Pb-Pb sample, one pair is found in the invariant mass range considered, which is subtracted from the signal distribution. In p-Pb collisions, no such pairs are found in the region of interest. An upper limit for this background contribution is evaluated by releasing the p_T^µ selection, fitting the resulting invariant mass distribution between 2 and 50 GeV/c² and extrapolating the fit to the 60-120 GeV/c² range. Various functional forms, exponential and power law, were tried. With this procedure, the number of same-charge events in the region of interest is much smaller than 1% of the opposite-charge one, and is therefore neglected. Contributions from cc, bb, tt and the muonic decay of τ pairs in the process Z → τ+τ− → µ+µ− + X were estimated with Monte Carlo (MC) simulations using the POWHEG event generator [43]. In p-Pb collisions, the sum of these contributions amounts to 1% of the signal in the p-going configuration, which is taken as a systematic uncertainty from this background source. This contribution is negligible for the Pb-going configuration. In Pb-Pb collisions, a value of 1% is estimated as described in the previous publication [20]. The low amount of background allows the signal to be extracted by counting the candidates in the interval 60 < m_µµ < 120 GeV/c² in the distributions shown in figure 1. In the p-Pb data sample, 64 ± 8 (34 ± 6) good µ+µ− pairs are counted in the forward (backward) rapidity region. In Pb-Pb collisions, 208 ± 14 Z bosons are counted after merging the 2015 and 2018 data samples. All quoted uncertainties are statistical.

The dimuon invariant mass distributions are compared with the mass shapes obtained from detector-level simulations of the Z → µ+µ− process, generated using the POWHEG generator [43] paired with PYTHIA 6.425 [44] for the parton shower. The CT10 [45] free-nucleon PDFs are used, with EPS09NLO [46] for nuclear modifications. The propagation of the particles through the detector is simulated with the GEANT3 transport code [47]. To account for the modification of the production due to the light-quark flavor content of the nucleus (isospin effect), the simulated distributions are obtained as a weighted average of all possible binary collisions: proton-proton, proton-neutron and, for Pb-Pb collisions, also neutron-neutron.

At high p_T^µ, tracks are nearly straight, so a small misalignment of the detector elements will generate large changes in the track parameters. Therefore, a detailed study of the alignment of the tracking chambers is of utmost importance in order to correctly reproduce the track reconstruction in the simulations. The absolute position of the detector elements was measured by photogrammetry before data taking. The relative position of the elements is then estimated using the MILLEPEDE package [48], combining data taken with and without magnetic field, with a precision of about 100 µm. This residual misalignment is then taken into account in the simulations of the Z production and the efficiency computation.
This method accounts for the relative misalignment of the detector elements, but it is not sensitive to a global displacement of the entire muon spectrometer. The simulation of the response of the muon tracking system is based on a data-driven parametrization of the measured resolution of the clusters associated with a track [40], using extended Crystal Ball (CB) functions [49] to reproduce the distribution of the difference between the cluster and track positions in each chamber. The CB functions, having a Gaussian core and two power-law tails, are first tuned to data and then used to simulate the smearing of the track parameters. The effect of a global misalignment of the spectrometer is implemented by applying a systematic shift, in opposite directions for positive and negative tracks, to the distribution of the angular deviation of the tracks in the magnetic field. This shift is tuned to reproduce the observed difference in the p_T^µ distributions of positive and negative tracks. In Pb-Pb collisions, the data were taken with two opposite polarities of the muon spectrometer dipole magnet; in this case, the sign of the shift is inverted accordingly.

The Z-boson raw yields are corrected for the acceptance times efficiency (A×ε) of the detector, which is evaluated with the MC simulations of the Z → µ+µ− process with POWHEG described above. The A×ε is estimated as the ratio of the number of reconstructed Z bosons, with the same selections as for the data, to the number of generated ones with 2.5 < y_lab^µµ < 4 for the dimuon pairs, and p_T^µ > 20 GeV/c and −4 < η^µ < −2.5 for the muons. The dimuon invariant mass selection 60 < m_µµ < 120 GeV/c² is applied to both the reconstructed and generated distributions. In p-Pb collisions, the efficiency is 74% (72%) for the p-going (Pb-going) sample. In Pb-Pb collisions, the efficiency depends on the detector occupancy and therefore on the centrality of the collision. To account for this effect, the generated signal is embedded in real Pb-Pb events. The efficiency is found to be stable from peripheral to semi-central collisions, with a value of about 77% (71%) in the 2015 (2018) data sample, and decreases to 71% (66%) for the most central collisions. The centrality-integrated efficiency amounts to 73% for the 2015 dataset and 68% for the 2018 dataset.

The Z-boson invariant yield is then computed by dividing the number of measured candidates, corrected for A×ε, by the corresponding number of minimum bias events. The latter is evaluated using the normalization factor F_µ-trig/MB, corresponding to the inverse of the probability to observe an opposite-sign dimuon-triggered event in a MB event. The value of F_µ-trig/MB is evaluated with two methods: (i) by applying the opposite-sign dimuon trigger condition in the analysis of MB events, and (ii) by comparing the counting rates of the two triggers, both corrected for pile-up effects. The first method is performed on the smaller data sample of recorded MB events. In the second method, information from the trigger counters is used; this means that the relative frequencies of MB and triggered events were counted, including events that were not stored. The pile-up correction accounts for the occurrence of multiple collisions in a time span smaller than the detector resolution; it is of the order of 2% in p-Pb collisions and is negligible in Pb-Pb due to the lower collision rate. The final value is the average over the two methods.
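The counting and correction chain described in this section can be summarized in a few lines. The sketch below is purely illustrative: the pair-mass formula and selection windows follow the text, while the muon list, A×ε and normalization numbers are placeholders.

```python
# Sketch of the signal extraction and correction chain: opposite-sign muon
# pairs are formed, the dimuon invariant mass is computed (muon masses
# neglected, a good approximation at p_T > 20 GeV/c), candidates are counted
# in 60 < m < 120 GeV/c^2, and the raw count is corrected by A x eps and
# normalized to minimum-bias events. All inputs are illustrative.
import numpy as np

def dimuon_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass (GeV/c^2) of two approximately massless muons."""
    return np.sqrt(2.0 * pt1 * pt2 * (np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))

# toy muon candidates: (pt [GeV/c], eta, phi, charge)
muons = [(45.0, -3.1, 0.3, +1), (44.0, -3.0, 3.4, -1), (21.0, -2.8, 1.0, +1)]

n_z = 0
for i in range(len(muons)):
    for j in range(i + 1, len(muons)):
        pt1, eta1, phi1, q1 = muons[i]
        pt2, eta2, phi2, q2 = muons[j]
        if q1 * q2 > 0:                  # keep opposite-sign pairs only
            continue
        if 60.0 < dimuon_mass(pt1, eta1, phi1, pt2, eta2, phi2) < 120.0:
            n_z += 1

a_times_eps = 0.70                       # acceptance x efficiency (placeholder)
f_norm = 2.5e3                           # F_mu-trig/MB (placeholder)
n_triggered = 1.2e8                      # dimuon-triggered events (placeholder)
invariant_yield = (n_z / a_times_eps) / (f_norm * n_triggered)
print(n_z, invariant_yield)
```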
In Pb-Pb collisions, the normalization factor is computed for all the centrality classes considered. In the p-Pb analysis, the invariant yield is multiplied by the MB cross section to obtain the Z-production cross section [41]. In the Pb-Pb analysis, results are given both integrated and differential with respect to centrality and rapidity. The production is expressed as the invariant yield normalized by the nuclear overlap function T_AA, and the centrality is expressed as N_part, the average number of participant nucleons. The T_AA and N_part quantities are estimated via a Glauber model fit of the signal amplitude in the two arrays of the V0 detector [37]. The nuclear modification of the production of a hard process, such as those producing the Z boson, is measured by R_AA, the ratio of the observed normalized yield in Pb-Pb collisions to that in pp collisions (a numerical sketch of this construction is given at the end of this section). Due to the insufficient integrated luminosity collected for pp collisions at √s = 5.02 TeV, the pp reference is determined from pQCD theoretical calculations using the MCFM code with the CT14 PDF set [50].

The relative systematic uncertainties for the p-Pb analysis are summarized in table 1. The variation between the two methods for the computation of the normalization factor, which is less than 1%, is taken as its systematic uncertainty. The A×ε was shown not to be affected by a change of PDF or nPDF set, or of transport code, in the MC simulations. The uncertainty on the Z-boson yield due to the tracking efficiency, evaluated to be 1% (2%) for the p-going (Pb-going) sample, is obtained by comparing the efficiency between data and MC, using the redundancy of the chambers of the tracking stations [40]. The systematic uncertainty due to the dimuon trigger efficiency is determined by propagating the uncertainty on the efficiency of the detector elements, estimated with a data-driven method based on the redundancy of the trigger chamber information. The matching condition between tracks reconstructed in the tracking and triggering systems introduces a 1% additional uncertainty. Finally, the systematic uncertainty associated with the alignment procedure is evaluated as the difference between the A×ε computed with the data-driven tuning of the cluster resolution, including the global shift, and with a MC parametrization without shift. This uncertainty is 7.7% for the p-going dataset and 5.7% for the Pb-going dataset, the difference between the two originating from the difference in the signal shape, which depends on rapidity. The total systematic uncertainty is determined by summing in quadrature the uncertainties from each source.

The sources and values of the systematic uncertainties for the Pb-Pb analysis are displayed in table 2. The systematic uncertainties of the normalization factor, the tracking and trigger efficiencies, the trigger/tracker matching, and the alignment are evaluated in the same way as for the p-Pb analysis. The uncertainties of the centrality estimation and of the average nuclear overlap function T_AA are obtained by varying the centrality class limits by ±0.5%, as detailed in ref. [51]. The uncertainty of the theoretical pp cross section, which is used as a reference for the R_AA computation, is obtained by varying the factorization and renormalization scales and accounting for the PDF uncertainty; it is rapidity dependent, with values between 3.5% and 5.0%. The total systematic uncertainty is taken as the quadratic sum of all the sources, and amounts to 7.4% for the integrated R_AA.

Table 2. Components of the relative systematic uncertainties on the Z-boson yield and R_AA in the Pb-Pb analysis. See text for details. Dedicated symbols in the table mark the rapidity-dependent correlated uncertainties and the sources correlated as a function of centrality. The "total" lines report the cumulative systematic uncertainty of the result integrated in centrality and rapidity.
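A minimal numerical sketch of the R_AA construction defined above; every value is an illustrative placeholder, and only the structure of the calculation reflects the text.

```python
# R_AA: the per-event yield is normalized by the nuclear overlap function
# T_AA (turning it into an effective cross section) and divided by the pp
# cross section, here taken from pQCD as in the text. Placeholder numbers.
import math

yield_per_event = 4.2e-8      # corrected Z yield per MB Pb-Pb event (placeholder)
t_aa_mb = 6.9                 # <T_AA> in mb^-1 for the centrality class (placeholder)
sigma_pp_pb = 6.0             # pp reference cross section in pb, from pQCD (placeholder)

norm_yield_pb = (yield_per_event / t_aa_mb) * 1e9   # mb -> pb
r_aa = norm_yield_pb / sigma_pp_pb

# simple quadrature combination of relative uncertainties (placeholders)
rel_unc = math.sqrt(0.066**2 + 0.066**2 + 0.035**2)
print(f"R_AA = {r_aa:.2f} +/- {r_aa * rel_unc:.2f}")
```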
4 Results

The production cross section for the Z → µ+µ− process in p-Pb collisions at √s_NN = 8.16 TeV, with p_T^µ > 20 GeV/c and −4 < η^µ < −2.5, is measured to be dσ_{Z→µ+µ−}/dy = 2.5 ± 0.4 (stat.) ± 0.2 (syst.) nb in the Pb-going configuration and dσ_{Z→µ+µ−}/dy = 6.8 ± 0.9 (stat.) ± 0.6 (syst.) nb in the p-going configuration. In figure 2 the results are compared with pQCD calculations with and without the nuclear modification of the parton distribution functions. The Bjorken-x range of partons in the Pb nucleus probed in Pb-going collisions (−4.46 < y_cms^µµ < −2.96) is above 10−1, while in p-going collisions (2.03 < y_cms^µµ < 3.53) it is roughly between 10−4 and 10−3 (a leading-order evaluation of these ranges is sketched below). The former is expected to be mostly affected by antishadowing and EMC effects, while the latter lies in the shadowing region. The observed difference between the backward and forward cross sections is mainly due to the asymmetry of the collision and is consistent with that predicted by theoretical calculations for nucleon-nucleon collisions, as shown in the figure: the forward-y region is closer to midrapidity, where production cross sections are known to be larger.

The measurements are compared with two model calculations based on pQCD at NLO. The first calculation uses the MCFM (Monte Carlo for FeMtobarn processes) code [52] with CT14 at NLO [50] as free-nucleon PDFs; the EPPS16 [53] parametrization of the nuclear modification of the PDFs is then applied to describe the lead environment. The second calculation uses the NNLO code FEWZ (Fully Exclusive W and Z Production) [54]; here the lead nucleus is modelled with the nCTEQ15 nuclear PDFs [33,55], while CT14 is used for the proton. Both EPPS16 and nCTEQ15 rely on NLO calculations. The latter is a full nPDF set, while EPPS16 is anchored to the CT14 free-nucleon PDFs. More details on the approximations and experimental datasets included in the extraction of the nPDFs can be found in ref. [18]. In all nuclear calculations, the proton and neutron contributions are weighted to reproduce the lead-nucleus isospin.

Figure 2 shows that the measurements reported here are consistent, within experimental and theoretical uncertainties, with pQCD calculations incorporating both free-nucleon and nuclear-modified PDFs. In p-Pb collisions the nuclear effects modify the parton distributions of only one of the two colliding nucleons, so the inclusion of the nuclear modification of the PDFs results in a small change compared to the theoretical uncertainties. Moreover, the backward-y region corresponds to a high Bjorken-x range where multiple nuclear effects are present; these lead to both enhancement and depletion compared to free-nucleon PDFs, and their net effect is expected to be less pronounced than at forward-y, where only shadowing is present.

The Z-boson invariant yield normalized by the nuclear overlap function T_AA measured in Pb-Pb collisions at √s_NN = 5.02 TeV is 6.1 ± 0.4 (stat.) ± 0.4 (syst.) pb. Because of the symmetry of the collision, the forward rapidity of this measurement probes simultaneously the high-x and low-x ranges of partons in the lead nucleus.
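The quoted x ranges follow from leading-order 2→1 kinematics, x ≈ (M_Z/√s_NN) e^{−y} for the Pb-side parton when the proton flies toward positive rapidity; a quick evaluation:

```python
# Leading-order estimate of the Bjorken-x of the Pb-side parton probed by
# Z-boson production in p-Pb at 8.16 TeV: x = (M_Z / sqrt(s_NN)) * exp(-y).
import math

m_z = 91.19          # GeV/c^2
sqrt_s_nn = 8160.0   # GeV

for label, (y_min, y_max) in [("p-going  (2.03 < y < 3.53)", (2.03, 3.53)),
                              ("Pb-going (-4.46 < y < -2.96)", (-4.46, -2.96))]:
    x_lo = (m_z / sqrt_s_nn) * math.exp(-y_max)
    x_hi = (m_z / sqrt_s_nn) * math.exp(-y_min)
    print(f"{label}: x in [{x_lo:.1e}, {x_hi:.1e}]")

# prints roughly [3.3e-04, 1.5e-03] for p-going and [2.2e-01, 9.7e-01] for
# Pb-going, matching the "10^-4 to 10^-3" and "above 10^-1" ranges quoted above.
```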
As a result of the rapidity shift and of the different nucleon-nucleon center-of-mass energy, these ranges are very close to those probed in p-Pb. In figure 3 the normalized yield is compared with the result previously published by ALICE [20], based on the 2015 data sample, which contains less than a third of the full statistics; the measurements are fully compatible with each other. The normalized yield is also compared with several pQCD calculations based on different codes (MCFM [52] or FEWZ [54]) and different parton distribution sets. Along with the CT14, CT14+EPPS16 and nCTEQ15 calculations [50,53,55], a calculation with the CT14 baseline PDF and the EPS09s nPDFs is included in the comparison [56]. Although EPS09 as a whole is superseded by the more recent EPPS16, EPS09s is used because it contains a centrality dependence of the parton distributions, which is not provided in the EPPS16 nPDF set. Neutron and proton contributions are properly weighted according to the lead isospin. The uncertainties on the models include the uncertainty on the NLO calculations as well as the uncertainty on the parton distributions, which is larger for those including nuclear effects. The large uncertainty of the EPPS16 calculation originates from the larger number of flavor degrees of freedom included in the parametrization [18].

The calculations using nuclear PDFs describe the yield measured in Pb-Pb collisions within uncertainties, while the CT14-only calculation deviates from the data by 3.4σ. This deviation is not observed in the p-Pb analysis for two main reasons. The first one is statistical: although the Pb-Pb luminosity is smaller than the p-Pb one, the presence of more nuclear matter in Pb-Pb collisions makes the expected Z-boson yield greater than that measured in the p-Pb samples, reducing the statistical uncertainty. Second, in Pb-Pb collisions the distributions of both interacting partons experience nuclear modifications. In order to produce a Z boson at forward rapidity, a collision must occur between a low-x and a high-x parton. This leads to a convolution of the shadowing effects at low x with the net nuclear effect observed in backward-y p-Pb collisions; their combination enhances the suppression of the production with respect to what is separately measured in the two p-Pb rapidity regions.

At the moment most of the nPDF sets do not contain an explicit dependence on the position inside the nucleus; rather, they provide the average effect over all the nucleons in a given nucleus, as in the global fitting procedure used to constrain the nPDFs. An estimation of the integrated normalized invariant yield in the 0-100% centrality interval is therefore important. Assuming that the yield scales with the number of nucleon-nucleon binary collisions N_coll, and with a conservative estimation of the nuclear modification in the 90-100% centrality interval, the difference between the integrated normalized yields in 0-90% and 0-100% is found to be less than 1 per mille. This means that, given the current uncertainties, the present measurement can also be regarded as the normalized invariant yield in the 0-100% centrality interval.

The Z-boson production in Pb-Pb is also studied as a function of rapidity and centrality. The left panel of figure 4 shows the normalized invariant yield in the rapidity intervals 2.5 < y_cms^µµ < 2.75, 2.75 < y_cms^µµ < 3, 3 < y_cms^µµ < 3.25 and 3.25 < y_cms^µµ < 4. The results are compared with CT14 predictions both with and without the EPPS16 nuclear modification.
A shadowing effect is foreseen in the full rapidity range. The right panel of figure 4 shows the rapidity dependence of the nuclear modification factor R_AA, computed by dividing the yield normalized to T_AA by the pp cross section at √s = 5.02 TeV obtained from pQCD calculations with the CT14 PDFs. For this observable, the uncertainties on the free-nucleon PDFs are factored out of the theoretical calculations, and the remaining uncorrelated uncertainties are related to the nuclear PDFs only. The measured R_AA is in agreement within uncertainties with the EPPS16 calculations while, at large rapidity, it deviates from the free-nucleon calculations.

In figure 5, the normalized invariant yield is shown as a function of centrality. The results are compared with the free-nucleon PDF prediction (CT14 [50]) and with calculations using the centrality-dependent EPS09s nPDFs [56]. The CT14 calculations are based on free-nucleon PDFs and therefore, by construction, carry no centrality dependence. The EPS09s calculations show a decrease in the invariant yield towards more central collisions, although the effect is very weak. Furthermore, in each centrality bin the EPS09s prediction is consistent with the more recent EPPS16 set, which does not implement a dependence on the impact parameter (the CT14+EPPS16 calculation is displayed in figure 3). Within uncertainties, each data point is well described by the models including nPDFs, while the CT14-only calculation overestimates the data, especially for the most central collisions, where the difference is 3.9σ.

5 Conclusions

The Z-boson production has been studied at large rapidities in p-Pb collisions at √s_NN = 8.16 TeV and in Pb-Pb collisions at √s_NN = 5.02 TeV. For the p-Pb collisions, the Z bosons were measured in the rapidity ranges −4.46 < y_cms^µµ < −2.96 and 2.03 < y_cms^µµ < 3.53. The production cross sections at forward and backward rapidity have been compared with theoretical predictions, both with and without nuclear modifications. The data show little sensitivity to the presence of nuclear effects, partially because in p-Pb collisions nuclear modifications of the PDFs affect only one of the two colliding particles. This is particularly true in the backward region, where enhancement and depletion effects in the nPDFs tend to compensate; as a result, the calculations including the nuclear modification of the PDFs are very close to those without. In the forward region, low-x partons of the Pb nucleus are probed, which are only sensitive to shadowing (corresponding to a depletion in the nPDFs). Consequently, nuclear effects tend more clearly to induce a decrease in the cross section; nonetheless, it remains compatible within uncertainties with the one calculated neglecting such effects.

In the Pb-Pb data, the invariant yield normalized by the average nuclear overlap function has been evaluated in the rapidity range 2.5 < y_cms^µµ < 4 and in the 0-90% centrality class. The results obtained in this paper supersede those of an earlier ALICE publication [20], where only part of the current dataset was used. The experimental data are, within uncertainties, in agreement with theoretical calculations that include various parametrizations of the nuclear modification of the PDFs. On the contrary, the integrated yield deviates by 3.4σ from the prediction obtained using free-nucleon PDFs.
Comparisons of the measured differential yields versus centrality and rapidity with models were also carried out, generally showing agreement with nuclear-modified PDFs; in contrast, a discrepancy with calculations based on free-nucleon PDFs was found. The differential measurements presented in this paper can provide additional constraints on the nPDFs.

The authors thank F. Olness, I. Schienbein and T. Tunks for the nCTEQ predictions. The ALICE Collaboration would like to thank all its engineers and technicians for their invaluable contributions to the construction of the experiment, and the CERN accelerator teams for the outstanding performance of the LHC complex. The ALICE Collaboration gratefully acknowledges the resources and support provided by all Grid centres and the Worldwide LHC Computing Grid (WLCG) collaboration. The ALICE Collaboration acknowledges the following funding agencies for their support in building and running the ALICE detector.
7,037.8
2020-09-01T00:00:00.000
[ "Physics" ]
SiMaYang Type II Learning Model Assisted by Kahoot Application: Its Impact in Improving Students' Concept Understanding Based on APOS Theory

INTRODUCTION

Mathematical abilities are needed by human beings to master and develop science and technology (Kamarullah, 2017). Mathematics is an exact science that provides the basis for constructing ideas and reasoning (Purnomo, 2017; Rahmah, 2013). Mathematics is also one of the main subjects learned by students from basic education to secondary education (Kamarullah, 2017). Therefore, mathematical ability is one of the abilities that each student needs to have. One important mathematical ability is mathematical concept understanding (Sudarman & Vahlia, 2016; Suraji et al., 2018). Mathematical concept understanding is the ability to interpret and explain something, which enables a person to provide an overview, formulate solution strategies, apply simple calculations, use symbols to represent concepts, and change one form into another (Agustina et al., 2018; Mawaddah & Maryanti, 2016; Susanto, 2013). Concepts in mathematics are interrelated, which shows that this ability is essential (Hutagalung, 2017). A student will have difficulty understanding a topic if he or she does not understand the related preceding material (D. Novitasari, 2016). Therefore, an educator must always emphasize learning grounded in conceptual understanding.

In this research, conceptual understanding is assessed using the APOS theory. This theory emphasizes that, in learning mathematics, a mathematical concept is developed through the action-process-object stages, so students' conceptual understanding becomes more measurable. The APOS theory can increase students' activity in the learning process through the action, process, and schema stages (Marsitin, 2017; Ningsih, 2016). Poor mathematical concept understanding negatively influences students' learning outcomes and mathematical problem-solving abilities (Hartati et al., 2017; L. Novitasari & Leonard, 2017). Therefore, educators and students need to improve their interaction to trigger the development of this ability, and the SiMaYang type II learning model is a suitable learning model for this purpose.

The SiMaYang type II learning model teaches abstract concepts and the related symbols and trains students' imagination (Bait et al., 2018). This learning model has four learning stages: orientation, exploration-imagination, internalization, and evaluation (Iriani, 2016). It is expected to help students improve their mental models and conceptual understanding. Several studies reveal that the SiMaYang type II learning model affects students' learning outcomes (Iriani, 2016) and representation (Bait et al., 2018; Sholihah & Arif, 2020). None of these relevant studies has looked at the impact of applying the SiMaYang type II learning model on students' mathematical concept understanding based on the APOS theory. In addition, to make the implementation of this learning model easier, the researchers added the Kahoot application to the learning process. Kahoot has several advantages, one of which is that it contains quizzes in game format. Such quizzes can make students more active, innovative, and productive (Aflisia et al., 2020; Hartanti, 2019; Kurnia, 2018). Kahoot can maintain students' learning motivation during the learning evaluation process, and an enjoyable atmosphere can be created. Kahoot is also appropriate for the COVID-19 pandemic, during which the learning process is done online.
The Kahoot application has been used in several previous relevant studies. It positively influences students' learning motivation (Hartanti, 2019), students' Arabic mastery (Aflisia et al., 2020), and students' learning interest (Ningrum, 2018; Wigati, 2019). The difference between this research and the relevant studies mentioned previously lies in combining the SiMaYang type II learning model with the Kahoot application and examining its effect on students' mathematical concept understanding. It is hoped that this combination can bring about positive impacts during the learning process.

METHOD

This research employed a quasi-experimental method with a posttest-only control group design. The population in this research was the seventh-grade students of SMP Negeri 33 Bandar Lampung. The researchers applied the cluster random sampling technique to select three groups as the research samples, and each group was given a different treatment. First, experimental group 1 was treated with the SiMaYang type II learning model assisted by the Kahoot application. Next, experimental group 2 was treated with the SiMaYang type II learning model. Finally, the control class was treated with the conventional learning model. The instrument used in this research was a concept understanding ability test given after the treatment. The collected data were first tested for normality and homogeneity; the ANOVA test was then performed on the normally distributed and homogeneous data (a minimal sketch of this analysis pipeline is given after the group comparisons below). Figure 1 displays the research procedure.

RESULTS AND DISCUSSION

Students were given different treatments (the SiMaYang type II learning model assisted by Kahoot, the SiMaYang type II learning model, and the conventional learning model), and a concept understanding ability test was given at the end of the meetings. Table 1 displays the test results, showing that the highest mean score was obtained by the group that applied the SiMaYang type II learning model assisted by the Kahoot application; this combination therefore provided the best concept understanding. Furthermore, the prerequisite tests (normality test and homogeneity test) were performed on the students' mathematical concept understanding ability data; the results of the prerequisite test analysis are presented in Table 2. Table 3 shows that all research sample groups had the same variance (homogeneous). Therefore, the next step was the one-way ANOVA test with unequal cell sizes; the test results are presented in Table 4, and the post-hoc comparisons in Table 5:

Table 5. Post-hoc comparison results (Significance, DK, Decision):
µ1 vs µ2: P1−2 = 0.041, P1−2 ∈ DK, H0 is rejected
µ1 vs µ3: P1−3 = 0.000, P1−3 ∈ DK, H0 is rejected
µ2 vs µ3: P2−3 = 0.028, P2−3 ∈ DK, H0 is rejected

The first row of Table 5 (µ1 vs µ2) shows that H0 is rejected, meaning there was a significant mean difference between experimental group 1 and experimental group 2. Based on Table 1, the average score of experimental group 1 was 66.94, which was greater than the average score of experimental group 2 (57.35). Therefore, the SiMaYang type II learning model with Kahoot improved students' concept understanding better than the SiMaYang type II learning model alone. The second row of Table 5 (µ1 vs µ3) shows that H0 is rejected, which means a significant mean difference between experimental group 1 and the control group.
Based on Table 1, the average score of experimental group 1 (66.94) was greater than the average score of the control group (48.22). Therefore, the SiMaYang type II learning model assisted by the Kahoot application improved students' mathematical concept understanding better than the conventional learning model. The third row of Table 5 (µ2 vs µ3) shows that H0 is rejected, meaning there was a significant mean difference between experimental group 2 and the control group. Based on Table 1, the average score of experimental group 2 (57.35) was greater than the average score of the control group (48.22). Therefore, the SiMaYang type II learning model improved students' mathematical concept understanding better than the conventional learning model.

Based on the observations at SMP Negeri 33 Bandar Lampung, the students of experimental class I were very enthusiastic during the orientation stage, paying attention when the teacher greeted them and asked them to be prepared. Then, in the exploration-imagination stage, the students were very active when the researchers asked questions or described everyday-life phenomena related to the material. Finally, the researchers invited the students to imagine freely and then explore their knowledge (Anwar et al., 2015). Students' imagination in the SiMaYang type II learning model was used in the exploration-imagination stage, and the results were shown through the internalization phase. Eliani et al. (2018) state that the SiMaYang learning model supports a maximal learning process. During the internalization stage, a student worksheet was given using the Kahoot application; Figure 2 shows the display of the Kahoot application. At this stage, the students were asked to participate in a game in which three winners were decided. In this way, the Kahoot application was able to increase students' motivation, learning independence, and learning outcomes (Ilmiyah & Sumbawati, 2019; Izzati & Kuswanto, 2019; Setiawati et al., 2019). The last stage was evaluation, conducted by distributing the student worksheet. The researchers provided feedback to the students in the form of responses or corrections to incorrect answers, and also reviewed the material by drawing conclusions at each meeting. These factors gave the experimental class I students a better mathematical concept understanding than the other two classes.

In experimental class II, during the orientation stage, the students only paid attention when the researchers greeted them and conveyed learning motivation before teaching the learning material. Then, in the exploration-imagination stage, the students became very active in the learning process when the researchers asked them questions or described everyday-life phenomena related to the material. The researchers invited the students to imagine freely and then explore their knowledge. At this stage, the students were active in the learning process; however, their activeness did not exceed that of the experimental group 1 students. In the internalization stage, the researchers distributed an interesting and colourful student worksheet, which made the students active in working on the questions. In the evaluation stage, the researchers gave feedback to the students in the form of responses or corrections to their answers and reviewed the material by drawing conclusions at each meeting. Nevertheless, the students' concept understanding was lower than that of experimental class I.
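Before turning to the control class, here is the minimal sketch of the statistical pipeline announced in the Method section (normality and homogeneity checks, one-way ANOVA, pairwise follow-up). The score arrays are illustrative, not the study's raw data.

```python
# Sketch of the analysis pipeline: Shapiro-Wilk normality test per group,
# Levene's test for homogeneity of variance, one-way ANOVA, and pairwise
# t-tests as a simple post-hoc follow-up. Scores below are illustrative.
from scipy import stats

group1 = [72, 65, 70, 61, 68, 66]   # SiMaYang II + Kahoot (illustrative)
group2 = [60, 55, 58, 62, 54, 57]   # SiMaYang II only (illustrative)
group3 = [50, 46, 49, 52, 44, 48]   # conventional model (illustrative)

for name, g in [("group1", group1), ("group2", group2), ("group3", group3)]:
    w, p = stats.shapiro(g)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

_, p_levene = stats.levene(group1, group2, group3)
print(f"Levene p = {p_levene:.3f}")            # homogeneity of variance

_, p_anova = stats.f_oneway(group1, group2, group3)
print(f"one-way ANOVA p = {p_anova:.4f}")

for (a, b), label in [((group1, group2), "mu1 vs mu2"),
                      ((group1, group3), "mu1 vs mu3"),
                      ((group2, group3), "mu2 vs mu3")]:
    _, p = stats.ttest_ind(a, b)
    print(f"{label}: p = {p:.4f}")
```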
The learning process in the control class was carried out without applying the SiMaYang Type II learning model: the teacher explained and demonstrated the material and then opened a question-and-answer session. This hindered the students from being active in learning, and only a few questions emerged during the learning process. Consequently, these students had the lowest concept understanding.

Based on the posttest answers and the APOS theory, the experimental group 1 students were dominant in the Action, Process, Object, and Schema stages. The experimental class II students were dominant in the Action, Process, and Object stages but failed at the Schema stage. Finally, the control class students were dominant in the Action and Object stages but failed at the Process and Schema stages.

The results of this research complement previous relevant research. Alvianto (2020) and Hartanti (2019) state that the Kahoot application makes it easier for teachers to create interactive questions and games that attract students' learning interest and motivation. Khomsah and Imron (2020) also argue that the Kahoot application can improve the quality of learning. Other research also reveals that the SiMaYang type II learning model can improve students' mathematical representation abilities (Bait et al., 2018; Sholihah & Arif, 2020) and students' concept understanding (Suraji et al., 2018). Combining the SiMaYang type II learning model and the Kahoot application encourages students to play an active role in the learning process, develops and explores their potential, and provides meaningful experiences in learning to achieve optimum results. The SiMaYang type II learning model assisted by the Kahoot application is effective because the students are given a role that makes them more active in learning. Besides, the students are given the freedom to find learning resources, both from the internet and from books.

CONCLUSION

Based on the analysis, the SiMaYang type II learning model assisted by the Kahoot application increased students' concept understanding based on the APOS theory. The SiMaYang type II learning model assisted by the Kahoot application improved students' mathematical concept understanding based on the APOS theory better than either the SiMaYang type II learning model alone or the conventional learning model. The researchers recommend that future studies use the SiMaYang Type II learning model assisted by the Kahoot application in the learning process. The combination of learning models and learning media is a good solution for improving students' mathematical concept understanding. The SiMaYang Type II learning model can also be combined with several other theories, and the Kahoot application is likewise beneficial in the learning process.
2,766.4
2021-01-01T00:00:00.000
[ "Education", "Computer Science" ]
The Period Length of Fibroblast Circadian Gene Expression Varies Widely among Human Individuals Mammalian circadian behavior is governed by a central clock in the suprachiasmatic nucleus of the brain hypothalamus, and its intrinsic period length is believed to affect the phase of daily activities. Measurement of this period length, normally accomplished by prolonged subject observation, is difficult and costly in humans. Because a circadian clock similar to that of the suprachiasmatic nucleus is present in most cell types, we were able to engineer a lentiviral circadian reporter that permits characterization of circadian rhythms in single skin biopsies. Using it, we have determined the period lengths of 19 human individuals. The average value from all subjects, 24.5 h, closely matches average values for human circadian physiology obtained in studies in which circadian period was assessed in the absence of the confounding effects of light input and sleep–wake cycle feedback. Nevertheless, the distribution of period lengths measured from biopsies from different individuals was wider than those reported for circadian physiology. A similar trend was observed when comparing wheel-running behavior with fibroblast period length in mouse strains containing circadian gene disruptions. In mice, inter-individual differences in fibroblast period length correlated with the period of running-wheel activity; in humans, fibroblasts from different individuals showed widely variant circadian periods. Given its robustness, the presented procedure should permit quantitative trait mapping of human period length. Introduction Circadian rhythms of physiology and behavior in mammals are dependent upon a central clock that resides in the suprachiasmatic nucleus (SCN) of the brain hypothalamus. This clock is synchronized to the outside world via light input from the retina, and it in turn entrains similar slave oscillators present in most cells of the body [1]. In constant darkness, the circadian clock will direct sleep-wake cycles and many other physiological processes according to its intrinsic period length, which may be longer or shorter than 24 h. Because the clock is reset by light each day, its intrinsic period length influences the relative phase of circadian physiology and activity patterns. Thus, in human beings there is a correlation between circadian period length and the entrained phase of physiological rhythms and sleep-wake timing [2,3]. Extremely early and late activity patterns are thought to be associated with advanced and delayed sleep phase syndromes, respectively. Both advanced and delayed sleep phase syndromes can have genetic causes, and polymorphisms in three circadian clock genes, CK1e, PER2, and PER3, have been linked to or associated with cases of familial advanced or delayed sleep phase syndromes [4][5][6]. Polymorphisms in the latter gene have also been associated more generally with diurnal preference [7]. The characterization of human clocks and their genetic defects is rendered challenging by the difficulty and expense of measuring human circadian period, since prolonged subject observation under laboratory conditions is required. In mice, the period length of circadian behavior is determined by analysis of wheel-running behavior in constant darkness. 
Recently, however, it has been possible to complement mouse behavioral analyses by measuring the period length of circadian gene expression in vitro from transgenic animals in which the luciferase gene has been fused to a circadian promoter [8,9]. For these animals, circadian rhythms were analyzed in explants from different tissues simply by real-time measurement of light output. Using the same technology, high-amplitude circadian gene expression can also be measured in cultured mouse NIH 3T3 fibroblasts whose oscillators are synchronized through a short treatment with serum or dexamethasone, a glucocorticoid receptor agonist [10]. Moreover, single-cell recordings of cultured mouse and rat fibroblasts have demonstrated that the circadian oscillators of these cells are self-sustained and cell-autonomous [10,11], similar to those operative in SCN neurons [12,13]. The circadian rhythms of electrical firing frequencies of dissociated individual SCN neurons display considerable intercellular differences in period length (τ). However, the mean τ-values determined for neuron populations harvested from wild-type and tau mutant hamsters closely correlate with the ones measured for the locomotor activity of these animals [13]. Hence, the genetic makeup of the clockwork circuitry appears to influence cellular and behavioral oscillations in a similar fashion. A method of measuring human circadian rhythms from tissue biopsies would greatly complement behavioral studies of circadian rhythms and the disorders affecting them, since genetic differences appear to manifest themselves in both central and peripheral oscillators [14,15]. In this paper, we employed a lentivirally delivered circadian reporter shielded by enhancer-blocking activities to achieve this result. The distribution of period lengths that we measured from 19 human subjects demonstrates that inter-individual genetic differences in circadian clock function can be measured in skin biopsies. In mice, clock function measured in this way correlated with the period of wheel-running behavior. In both organisms, the wide range of fibroblast circadian period lengths obtained suggests interesting differences between physiology controlled by the SCN and circadian gene expression directed by peripheral oscillators in vitro. Results To measure circadian rhythms from a single biopsy, a circadian reporter must be introduced into the cells of a cultured tissue sample. However, human primary cultures do not easily permit transient transfection. Moreover, transiently transfected cells typically contain very high numbers of introduced reporter genes, an imbalance that can alter normal circadian rhythms by titrating circadian regulatory proteins [16]. Therefore, we turned to lentiviral delivery as a method of introducing a stably integrated construct in low copy number into primary fibroblast cultures [17]. We developed a lentivirus that contains a luciferase gene whose expression is governed by the promoter and 3′ untranslated region of the mouse circadian gene Bmal1. To test this virus, it was used to infect immortalized 3T3 fibroblasts, and then circadian rhythms in these cells were synchronized by dexamethasone treatment [18]. BMAL1-luciferase expression was subsequently measured in the cell population by real-time recording of light output [9]. Cells infected with this virus gave high levels of expression, but very low circadian amplitude, so it was useless for circadian period measurements (data not shown).
Because lentiviruses integrate preferentially into the coding regions of active genes [19], we reasoned that the circadian behavior of the Bmal1 promoter was hampered by interference from loci at which the virus integrated or, less likely, by viral sequences themselves. To shield the reporter gene from such influences, we introduced multimerized FII insulator sequences from the chick β-globin gene [20] upstream and downstream of the Bmal1 reporter. These sequences have been previously shown to possess enhancer-blocking activity in vivo. When 3T3 fibroblasts were infected as above with these modified viruses, robust circadian oscillations were observed. As a parallel strategy, we introduced an "enhancer trap decoy," consisting of the strong promoter of the human Elongation Factor 1α (EF1α) gene immediately followed by an SV40 transcription terminator, upstream of the Bmal1 promoter. The resultant "decoy" construct also yielded excellent circadian oscillations of luciferase activity (Figure 1A and 1B). Because signal magnitude was consistently greater with it than with the insulated construct, it was used for the experiments described in this paper. To ensure that the period lengths of the oscillations observed using this virus were not affected by the titer of virus used or by the degree of infection, 3T3 cells were infected with various amounts of virus, and circadian rhythms measured as above. The period length of the oscillations was identical in all cases, although signal amplitude varied over a wide range (Figure 1C). Next, for two different individuals this virus was used to infect 50,000 activated human monocytes purified from a single blood donation, or 50,000 human fibroblasts amplified from a single 2-mm skin punch biopsy (see Materials and Methods for details). After 4 d, cellular rhythms were synchronized with dexamethasone, and luciferase output was measured. In both cell populations, circadian oscillations were observed; but with skin fibroblasts, we obtained much higher signals and greater amplitudes of circadian oscillation (Figure 2A and 2B); hence, more precise period lengths could be estimated. [Figure 1. … Infected with a Lentiviral Luciferase Expression Vector. (A) Circadian reporter constructs used in these studies. Each contains the mouse Bmal1 promoter, the firefly luciferase coding region, and the Bmal1 3′ UTR, flanked by the long terminal repeats (LTRs) of a lentiviral packaging vector. In (i), a dimerized chick β-globin FII element is inserted between each LTR and adjacent Bmal1 sequences. In (ii), a DNA segment composed of the EF1α promoter and an SV40 terminator is inserted between the upstream LTR and Bmal1 promoter, and the gfp coding region between the Bmal1 UTR and the downstream viral LTR. (B) 3T3 cells were infected with the lentiviral vectors shown above, and equivalent infection levels were verified by real-time PCR to detect integrated viruses. Four days after infection, cells were shocked with dexamethasone to synchronize circadian rhythms, and luciferase output was measured by real-time luminometry. (C) 3T3 cells were infected with different concentrations of the lentiviral reporter vector ii (see [A]), and circadian rhythms were measured as in (B). 10× represents unconcentrated filtered viral supernatant, 1× represents a 10× dilution of this, and 100× a 10× concentration by ultracentrifugation. The number of viral infection units/plate was approximately 10,000 (1×), 30,000 (3×), 100,000 (10×), 300,000 (30×), and 1,000,000 (100×). DOI: 10.1371/journal.pbio.0030338.g001]
On rare occasions, it has also been possible to cultivate hair root keratinocytes that cling to the end of a plucked human hair. These keratinocytes can also be infected with lentivirus, and give period lengths identical to those from fibroblasts of the same subject (Figure 2C). However, because most plucked hairs do not contain keratinocytes unless scalp biopsies are performed, we decided to continue our analysis of human circadian rhythms using fibroblasts isolated from normal skin biopsies. To measure human circadian rhythms, two to five 2-mm diameter skin biopsies were taken from the abdomen or buttocks of 12 healthy normal individuals. Four additional human fibroblast populations were obtained from male foreskin, and three from other sources (see Materials and Methods for details). From each of these 19 samples, 50,000 adult skin fibroblasts were infected with reporter virus, and circadian rhythms were measured as described previously. Two measurements on two infected populations from each biopsy were taken. Four sample curves are shown in Figure 3A, and the data are summarized in Figure 3B. An average period length of fibroblast circadian gene expression of 24.5 h was obtained, with a standard deviation of 45 min. The period length of different cultures could in principle vary from biopsy to biopsy, or it could vary from individual to individual and remain constant among different biopsies of the same individual. Obviously, only in the latter case would the results be diagnostically useful. Hence, it was important to compare the range of data from different biopsies of the same individual with the range of data from different individuals. In multiple cases, inter-individual differences were significantly greater than the differences observed between cultures; four such examples and their statistical analyses are described in Figure 3C. Overall, the standard deviation among different trials using the same sample was 18 min; among samples derived from different infections of the same sample, the standard deviation was 25 min; and among the average values of different biopsies from the same person, the standard deviation was 6 min. We conclude that this method can detect small differences in fibroblast circadian period length. What is particularly fascinating, however, is that the standard deviation among different individuals in our trial was 48 min. Thus, significant genetic differences in fibroblast clock function exist even in very small population samples. To ensure that circadian genetic differences are indeed reflected in the rhythms of fibroblast gene expression that we measure, we applied this reporter system to measure circadian rhythms from tail biopsies of mice containing several known circadian mutations that shorten, lengthen, or abolish the period of circadian wheel-running behavior. Table 1 lists the mouse strains that we used and the published properties of their circadian clocks. We obtained adult dermal fibroblasts from tail biopsies of each of these nearly isogenic mice, and analyzed their circadian rhythms exactly as done in humans. In parallel with this analysis, circadian wheel running was measured for the same individuals (Figure 4). Mice with a period of wheel-running behavior shorter than wild-type (Per1^brdm/brdm) yielded fibroblasts whose period of circadian Bmal1 expression was also shorter.
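The nested variability figures above (trials, infections, biopsies, individuals) amount to grouped standard deviations; the sketch below shows one way to organize that computation in Python, with invented placeholder periods rather than the study's measurements.

```python
# Hypothetical illustration of the nested variability comparison:
# period estimates grouped by individual -> biopsy -> infection.
# The numbers are invented placeholders, not the study's data.
import statistics
from collections import defaultdict

# (individual, biopsy, infection) -> list of trial periods in hours
measurements = {
    ("subj1", "b1", "i1"): [24.3, 24.4], ("subj1", "b1", "i2"): [24.5, 24.4],
    ("subj1", "b2", "i1"): [24.4, 24.5], ("subj2", "b1", "i1"): [25.6, 25.5],
    ("subj2", "b1", "i2"): [25.4, 25.6], ("subj2", "b2", "i1"): [25.5, 25.7],
}

# Spread of repeated trials within one infection
trial_sds = [statistics.stdev(v) for v in measurements.values()]
print("mean within-infection SD:", sum(trial_sds) / len(trial_sds))

# Spread of per-individual averages across individuals
per_subject = defaultdict(list)
for (subj, _, _), periods in measurements.items():
    per_subject[subj].extend(periods)
subject_means = [statistics.mean(v) for v in per_subject.values()]
print("between-individual SD:", statistics.stdev(subject_means))
```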
Similarly, fibroblasts from mice with a period of wheel running that was longer than wild-type (Per1^brdm/brdm;Cry2^−/− and Cry2^−/−) had correspondingly longer period lengths. Mice that were behaviorally arrhythmic (Per2^brdm/brdm, Per2^brdm/brdm;Cry1^−/−, and Per1^brdm/brdm;Per2^brdm/brdm) produced arrhythmic fibroblasts. In most cases, however, the period of fibroblast gene expression was more extreme than that of behavior. For example, mice with behavioral periods shorter or longer than wild-type gave fibroblasts whose periods were even shorter or longer still. Similarly, mice containing the double disruption Per2^brdm/brdm;Cry2^−/−, which are behaviorally rhythmic and have a period of 24.4 h, yielded fibroblasts that typically show faint rhythmicity of 23-29 h for one cycle before becoming arrhythmic (Figures 4 and 5). Discussion From the studies presented in this paper, we can conclude that molecular circadian rhythms can be measured in fibroblasts from skin biopsies and that the period of these rhythms is specific to an individual and can vary with genotype. Moreover, the variations observed among the 19 human individuals of this study suggest that the circadian clock is quite heterogeneous at a genetic level. The genetic origins of this variation will doubtless be a topic of future investigations, and our results suggest that fibroblasts could be an excellent system in which to investigate such differences by quantitative trait locus (QTL) mapping. A major question posed by the research that we have presented is the relationship between fibroblast period length in vitro and the period length of human circadian physiology. Certainly, the two values are contingent upon the clocks of different tissues studied in different contexts (skin versus suprachiasmatic nucleus and in vitro versus in vivo). Although fibroblasts and SCN neurons possess clocks of very similar molecular mechanism [14], different mouse tissues from the same mouse can have periods varying by almost two hours when measured in tissue slices in vitro [8]. The average value that we obtained for the period of human circadian gene expression (24.5 h) corresponds well with what has been published about rhythms of human circadian physiology (24.2-24.5 h) [21][22][23][24]. Values for period length in skin and fibroblasts of wild-type inbred mouse strains (23.5 ± 0.3 h) also corresponded nicely with behavioral and SCN period values obtained by us or published by others [8]. Nevertheless, our data suggest that it would be an error to assume that fibroblast period length is the same as physiologic period. Although on average the two corresponded well, significant differences were visible on an individual level. In mice, mutations at circadian loci that affected periodicity invariably had more extreme phenotypes upon fibroblast period than upon wheel-running period (Figures 4 and 5). [Table 1 note: Where pertinent, the published period of fibroblast transcriptional oscillations is also listed. Descriptions and characterizations of these mice can be found in (a) [30], (b) [15], (c) [31], (d) [32], (e) [33], and (f) [34]. DOI: 10.1371/journal.pbio.0030338.t001] Mice with behavioral periods 1 h shorter than wild-type gave fibroblasts whose period was 4 h shorter, and mice with periods 0.75 h longer had fibroblast periods 1.5 h longer. In most cases, though, longer or shorter periods of running-wheel behavior translate to longer or shorter fibroblast periods, respectively. Our data hint at a similar disparity in humans.
Specifically, the period lengths of human fibroblast gene expression showed inter-individual variation that was greater than what might have been expected from behavioral studies. Among our 19 human samples, a maximum difference of 4 h was seen, and six samples could be placed in categories that differ by more than 1.5 h. Altogether, a standard deviation of 0.8 h was observed. The period of circadian physiology measured by others in human beings showed standard deviations of 0.2-0.5 h under conditions of "forced desynchrony," during which circadian period was assessed in the absence of the confounding effects of light input and sleep-wake cycle feedback [21][22][23][24]. Although fibroblast clocks are not identical to SCN clocks, the fact that they use the same molecular components and that mutations at circadian loci affect biological timing in both tissues in the same qualitative fashion will doubtless render them quite useful in uncovering genetically caused circadian differences among individuals and populations. Mammals other than humans can also show preferences of morningness and eveningness [25], and mouse wheel-running behavior has already been used as the basis for genome-wide quantitative trait analysis. In some human populations, chronotype questionnaires have suggested that morningness-eveningness tendencies can be widely distributed [26], and twin studies suggest that morningness and eveningness can be genetically determined [27]. Given its robustness, the presented procedure could be used in quantitative trait mapping of human period length and thus in the identification of genetic loci that participate in determining the period length and the phase of daily human rhythmicity. Ideally, studies with large numbers of human subjects should be performed with the least invasive cell harvesting techniques possible. Although the 2-mm cutaneous punch biopsies can be performed rapidly and heal completely within a few days, plucking hairs is obviously even less invasive. As shown in this paper, it was possible on occasion to harvest and cultivate primary keratinocytes from plucked hairs for the analysis of circadian gene expression. We hope that future efforts in optimizing this method will render it generally applicable. Materials and Methods Vector production. Figure 1A, construct (i): The EF1α promoter and gfp gene were removed from lentiviral backbone plasmid pWPI (http://www.tronolab.unige.ch), and replaced with a reporter cassette consisting of 1 kb of mouse Bmal1 upstream region and 53 nucleotides of exon 1, fused in-frame to the luciferase coding region, and followed by 1 kb of Bmal1 3′ UTR. Two chicken β-globin FII elements [20] were synthesized by PCR and inserted on either side of the reporter cassette. Construct (ii): The Bmal1:luc reporter cassette was inserted downstream of an EF1α promoter and SV40 terminator in pWPI. All viruses were produced, concentrated 10-fold by ultracentrifugation, and used for infection as described [28]. Tissue isolation and culture. To establish our technique, five cylindrical 2-mm diameter cutaneous biopsies were taken from the ventral regions of five patients undergoing abdominoplasty operations. Subsequently, two biopsies were taken from the buttocks of each of seven recruited healthy adult subjects. Fibroblasts were isolated from biopsies by overnight digestion of tissue in DMEM/20% FCS/1 mg/ml collagenase type IA, and cultured in DMEM/20% FCS.
Four foreskin fibroblast cultures were obtained by similar methods, and three other adult dermal fibroblast cultures were obtained from others (generous gift of S. Clarkson). Adult mouse fibroblasts from wild-type, Per1^brdm/brdm, and Per1^brdm/brdm;Per2^brdm/brdm mice were isolated from 2-mm tail biopsies by the same method. Monocytes were isolated as described from fresh human blood from the Geneva University Hospital Blood Bank [29]. Prior ethical consent for the use of all human tissues was given by the ethical committee of the Geneva University Hospital, informed consent was obtained from all human subjects, and animals were handled according to institutional guidelines. Synchronization and measurement of circadian rhythms. Four days or more after infection or cell passage, circadian rhythms were synchronized by dexamethasone [18]. Medium without phenol red was supplemented with 0.1 mM luciferin, and light output was measured in homemade light-tight atmosphere-controlled boxes for at least 4 d [9]. Mouse running-wheel behavior. Mice of various genotypes were housed in cages with controlled lighting, each equipped with a running wheel (Mini Mitter, Bend, Oregon, United States). Running-wheel actograms and period determination were done with the Stanford Chronobiology kit (Stanford Software Systems, Stanford, California, United States). Statistical methods. For each luciferase measurement, the period of oscillation was calculated by fitting the curve to sine waves of known period using a macro program for Microsoft Excel written by SAB. The maxima and minima of each oscillation were identified, and the timing of these points was used to fit hypothetical sine curves with period and phase as free variables. The period of the sine wave with the best least-squares fit to the data was assumed to be the true period of oscillation. Because the period length of the first day after synchronization varied according to the conditions of synchronization, it was not included in these calculations; rather, period was determined by analyzing only days 2-5. To determine the period length of a particular biopsy, two independent viral infections were performed, and two synchronization/measurement cycles were done for each infection. To determine the period length of a particular individual, at least two separate biopsies were analyzed in this manner. Values are presented as mean plus or minus the standard deviation. [Figure 5. Data of Figure 4 depicted in stacked bar graph format. In each rhythmic strain measured, the period of wheel-running activity is shown in light grey, expressed as the difference in hours from the 24-h solar day. On top of this is shown the change in period of fibroblasts from the same animals, also measured as the difference in hours from the solar day. Genotypes depicted, from left to right, are Per2^brdm/brdm, Per1^brdm/brdm, wild-type, Cry2^+/−, Cry2^−/−;Per1^brdm/brdm, Cry2^−/−, Per2^brdm/brdm;Cry2^−/−. Because Per2^brdm/brdm;Cry2^−/− mice had unstable periods that ranged widely from individual to individual, this genotype is shown twice at the extreme right, with representative mice with periods both less than and greater than 24 h separated into two groups. Arrhythmic fibroblasts are designated "arr." DOI: 10.1371/journal.pbio.0030338.g005]
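The period-fitting procedure described in the statistical methods above (least-squares fit of sine curves with period and phase free, excluding day 1) can be approximated in a few lines; the sketch below conveys the general idea using scipy on a simulated trace, and is not the authors' Excel macro.

```python
# Sketch of least-squares estimation of a circadian period from a
# luminescence trace, in the spirit of the described analysis.
import numpy as np
from scipy.optimize import curve_fit

def sine(t, amp, period, phase, offset):
    return amp * np.sin(2 * np.pi * t / period + phase) + offset

# Simulated 5-day recording sampled every 0.5 h with a 24.5 h rhythm
t = np.arange(0, 120, 0.5)
signal = sine(t, 1.0, 24.5, 0.3, 10.0)
signal += np.random.default_rng(1).normal(0, 0.1, t.size)

# Exclude day 1, as in the described analysis (days 2-5 only)
mask = t >= 24
popt, _ = curve_fit(sine, t[mask], signal[mask], p0=(1.0, 24.0, 0.0, 10.0))
print(f"estimated period: {popt[1]:.2f} h")
```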
5,021.8
2005-09-27T00:00:00.000
[ "Biology", "Medicine" ]
Blockchain-Based Power Trading Process Abstract. Recently, the paradigm of the power industry has been digitalization with a focus on renewable energy. With this shift toward an energy-information and communications technology (ICT) convergence accelerator, there will also be many changes that affect energy policies. In particular, in terms of energy demand management, there will be regional-centric self-reliant decentralization, and the activation of distributed energy resources (DERs), including renewable energy, will result in the deployment of microgrid-type virtual power plants on a region-wide basis. This paper designs a blockchain-based power transaction process in which individuals (producers) can produce and use power themselves or sell the remaining power to others, rather than transmitting and using the developed power from the existing centralized power grid. Introduction Recently, people-to-people (P2P), people-to-machine (P2M), and machine-to-machine (M2M) digital and ICT processes are entering a hyper-connected society that is closely connected online and offline. A hyper-connected society refers to a society where people, processes, data, and objects can be linked together to create new values and innovations through intelligent networks. In particular, ICT applications in the power industry can lead to the efficient operation of existing energy systems, as well as to the creation of new values, such as the combination of renewable energy support and power storage devices, electric vehicles (EVs), and the development of various energy services. The energy sector is also expected to accelerate the transformation of the energy system, and to contribute to a drastic change in daily life through the fusion of technological elements of the fourth industrial revolution under a smart grid base centered on the power industry. Energy storage technology, combined with small distributed energy resources, plays an important role in system-linked or standalone operations, such as microgrids and virtual power plants (VPPs). Integrated operation of microgrids, VPPs, and energy management systems (EMSs) allows optimal power production and consumption, and enables regional-unit energy production and consumption at the same time. For example, using weather data and a geographic information system to predict renewable energy generation, or effectively managing renewable energy facilities that rely heavily on natural conditions, will contribute to optimizing energy production and consumption in certain areas. As such, the technology of the fourth industrial revolution can be used to construct a convergence system centered on distributed power, and to optimize existing facility operations in the process of transition to a low-carbon energy system. The future energy system is expected to be a form of adjustment (demand response) of the supply and demand balance based on regional energy self-sufficient systems utilizing distributed energy sources (e.g., stand-alone microgrids, virtual power plants, etc.). In the demand management sector related to energy consumption, fourth industrial revolution technology will strengthen ICT-based demand management, including energy savings and demand response, and will contribute to the development of new energy business models. Small power brokerage market in operation The energy prosumer has emerged, producing and using electricity, and selling extra power from an owned small power plant.
Electric vehicles, smart home appliances, smart buildings, and smart houses have increased in number. It is now a hyper-connected society where all of these are connected due to the development of digitalization and ICT. Telecommunications and information technology companies are entering new fields in the power industries, such as microgrids, EV charging infrastructure, and energy storage and management, and convergence is accelerating. In addition, many factors, such as changes in the internal and external environment, advances in ICT, and the spread of the shared economy, have combined to create aggregated business opportunities in the power sector. They create value by gathering and optimizing energy, information, and services among the various participants, or by promoting exchange. Economies of scale are needed for small-scale power players in order for distributed power, demand response, and battery storage to have influence. The energy aggregator performs this role. The energy aggregator's business model is based on dividing profit that generates collecting, sharing, and optimizing of assets held by multiple customers. The area where energy aggregator activity is most active is demand response. Aggregation businesses have also started in areas such as total distributed power, renewable energy, and P2P power trading. There are also commercialized VPP projects in some countries that collect power generated by renewable energy, using these projects as a single power plant. Aggregators have also emerged that provide intermediary platforms for the energy prosumer to sell any remaining electricity to neighbors and regions. Countries around the world are continuously introducing power transaction brokerage services that collect electricity produced by small distributed power plants (i.e., VPPs) and that sell it to the power market. The size of the global VPP market is growing by 30 percent annually, from $190 million in 2016 to $710 million in 2021. It operates cloud-based software on the Internet of Energy (IoE), like a power plant, without large-scale power generation or investment in power transmission facilities, in order to collect and sell power from a large number of small distributed sources. Global energy companies in countries such as Australia, Germany, and Japan are actively carrying out VPP demonstration projects, and Korea revised the Electricity Business Act in December 2018 to allow power transaction brokerage projects for electricity generated or stored by small distributed power sources of under 1 MW. Tesla plans to build and install solar energy (5 kW), batteries (13.5 kWh in each), and a smart meter system for at least 50,000 households in South Australia by 2022, integrating it into cloud-based software in the world's largest virtual power plant. Combined, it can meet the power demand of 75,000 households, or 20 percent of the total electricity demand in South Australia, with 250 MW of solar power and 650 MWh stored in batteries, and it can save up to 30 percent on household electricity bills. This concept of the IoE is a network infrastructure that integrates energy and data/information, enabling power generation and energy storage capacity to be balanced with energy demand in real time. The IoE will enable active integration of advanced metering infrastructure (AMI), demand-response, e-prosumers, vehicleto-grid EVs connected as power consumption and storage media, and various distributed power and energy storage devices and grid management. 
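The aggregate capacity quoted above for the South Australian virtual power plant follows from the per-household specification; a quick arithmetic check makes the relationship explicit.

```python
# Consistency check of the quoted VPP aggregates:
# 50,000 households, each with 5 kW solar and 13.5 kWh storage.
households = 50_000
solar_mw = households * 5 / 1000        # kW -> MW
storage_mwh = households * 13.5 / 1000  # kWh -> MWh
print(solar_mw, "MW solar")       # 250.0 MW, matching the quoted 250 MW
print(storage_mwh, "MWh storage") # 675.0 MWh vs the quoted 650 MWh
```

The storage product comes to 675 MWh, slightly above the quoted 650 MWh, which presumably reflects rounding or usable-capacity derating in the source figures.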
The IoE will present a groundbreaking methodology for monitoring and communicating energy distribution, energy storage, and energy grids by providing a variety of information and connectivity to the energy grids in conjunction with buildings, cars, and cities. It will also leverage renewable energy, energy storage, smart meters, energy gateways, smart plugs, and consumer electronics to provide energy consumers, manufacturers, and utility providers with new and powerful tools to reduce resources and costs, and to control and manage target devices. Home energy management system As IoE technology advances in establishing an integrated network between smart home appliances and small distributed power sources, home energy management system (HEMS) markets for energy reduction and supply and demand management in homes and regions are growing. The global smart home energy management system market is expected to grow from $1.3 billion in 2017 to $3.7 billion in 2023. It is predicted that integrated systems that can be compatible and can interact with a variety of devices, rather than stand-alone systems that are only linked to specific devices, will drive growth of HEMS markets. If the existing system is to go beyond helping to save energy at home, it will have to develop to optimize energy supply and demand at a regional level, and to analyze data collected in the cloud through smart meters and intelligent AMIs to predict energy consumption patterns in each household and power demand in the region. Robotina, a home energy management system company in Slovenia, installs smart meters and HEMS controllers on power distribution boards in the home to analyze the power usage patterns of each home appliance device and collect data on a cloud-based smart grid platform. Artificial intelligence technology optimizes power consumption in the home, including turning off the power of consumer electronics devices that are not in use. By combining the collected data, electricity rates, and weather information, the technology then turns the devices on again when electricity rates are the lowest. Furthermore, the smart grid platform enables joint purchase and transactions of electricity, and provides data collected by individual households to the power supply network to optimize energy supply and demand in the region. "In response to the spread of microgrids and the advancement of IoE technology, domestic energy companies need to develop services that enhance compatibility with networks and key devices and security related to energy data," said Kim Bo-kyung, a senior researcher at the Korea Institute for International Trade. "Considering connectivity with core IoE network devices such as an energy storage system (ESS), smart meters, and intelligent metering infrastructure as well as smart home devices, the convenience of integrated solutions is maximized, and the vulnerability of data security as the IoE network deployment becomes more advanced can result in serious costs. We should actively consider ways to utilize blockchain technology, which is difficult to falsify," he said. "We need to expand our hardware-centric business strategy to the level of a solution or platform to gain the upper hand in the energy prosumer market, which is in its initial formative phase." With the IoE, energy can be delivered in both directions, anytime and anywhere, and monitoring of energy consumption will be available at all levels, from individual units to regional, national, and global units.
The IoE provides consumers with a reliable, flexible, efficient, and economical energy supply network, allowing them to combine centralized large-scale power plants with distributed, small, renewable energy sources, such as solar and wind power, as a single fusion system. For this purpose, a system with blockchain-based smart solutions is needed to implement energy trading in the autonomous form of the region by introducing a model of virtual power plants in the region. Blockchain technology coupling Recent moves have been made to actively adopt smart technologies to enable and efficiently manage VPP-personal power transactions (P2P). Blockchain technology in P2P transactions minimizes the role of intermediate interventions for secure and free transactions. In the case of a VPP, the integrated manager can participate in the power market while optimizing distributed resources by obtaining data related to power production and consumption using IoT technology, machine learning, and artificial intelligence. One of the factors that have brought new energy projects into the spotlight is blockchain technology. By sharing energy supply and transaction information to distributed plants through blockchain technology, many types of decentralized services can be introduced to energy prosumer markets. The size of the global blockchain-based energy service market is expected to grow from $390 million in 2018 to $7.11 billion in 2023. Because real-time exchange of energy and related data between users is possible, even without a central server, the cost is reduced, and it can be applied to a variety of energy services due to excellent security and new speeds. Areas where related infrastructure construction and deregulation have been preceded by actual commercialization have entered the stage of materializing business models, with the focus on power trading between individuals. Large companies and start-ups are actively participating in business models in China, Europe, and the U.S. with investment in renewable energy. For example, in the U.S., LO3 Energy is generating profits from power transactions between individuals in the form of trading of distributed power generation, such as solar power, among local residents in the form of a blockchain, while LO3 Energy makes profits through sales of power transaction commissions and smart meters. Denmark's M-PAYG provides small solar panels and batteries to local residents in developing countries, and requires them to use power for mobile payments. It currently operates a dollar settlement system, but plans to introduce a blockchain payment system. This paper designs a blockchain-based power transaction process that allows individuals (producers) to produce and use power themselves or sell the remaining power to others, rather than transmitting and using the developed power from the existing centralized system. Energy cloud The concept of the energy cloud comes from cloud computing, a service that uses all computing resources over the Internet, from applications to data. The energy cloud is a platform for the integration and competition of advanced technologies and solutions to gain market share within a dynamic market. In the field of power, large-scale centralized power supply structures, such as thermal, hydro, and nuclear power, are being transformed into decentralized ones. Distributed power is being extended to regulations on carbon emissions and the advent of the prosumer. 
In addition, the installation price of distributed power is cheaper and cheaper compared to existing power sources. These changes converge with the diversification of technologies like cloud computing, and evolve beyond distributed power. Technologies such as energy storage, energy efficiency improvement, and demand response will evolve into an energy cloud that allows power grids to be controlled. The energy cloud has economies of scale, flexibility, and resiliency. The energy cloud transformation increases distributed power, generates and sells power on its own, and results in a growth of the smart grid market. Dag-type blockchain: iota If the existing generation of blockchain is made in the form of a linear structure, the Internet of Things Association (IOTA) is the blockchain of a new nonlinear structure. Because of its nonlinear structure, it has the advantage of being able to handle all transactions in parallel, and the speed of transactions increases with more participants. In particular, if all blockchains had previously existing miners and participants, the IOTA would operate in a form that would allow all participants to approve and issue transactions equally. The IOTA operates a trading system based on the structure of the directed acyclic graph (DAG), called Tangle. The directed acyclic graph is one of the graph types, and is a non-circular directional graph. When you look at Figure 3, the peak of the graph is oriented and connected to another peak, but there is no cycle separately, and it cannot return to itself on any path. Tangle uses algorithms that leverage the direction of the DAG among transactions that accumulate over time to select tips based on the weight and reliability of a particular transaction (Popov, 2017). Tangle Tangle consists of a structure in which participants must approve two previous deals in order to request a transaction. The criteria for which a transaction is selected are determined by the tip selection algorithm (TSA) and are randomly chosen according to the cumulative weight of the transaction. In order to approve the selected transactions, the user requesting the transaction must go through proof of work (PoW). Although it is a calculation process similar to a blockchain, the actual amount of calculations required is not large, and all participants can proceed with the transaction without a fee, since the role given without paying the fee is a condition to approve the transaction. This is an optimized transaction structure for M2M micropayments that do not require high computational volumes and do not have fees. Looking at Figure 4, there are three types of transaction that the Tangle network makes: first is a fully fixed transaction, second is an inconclusive deal, and finally, a tip. This is determined by the confinement level of the transaction, which is determined by the number of times the previous transactions are approved directly or indirectly by newly created tips (Gal, 2015). Figure 5 is a way to ensure that the trading system of the Tangle network is maintained normally. Currently, the IOTA Foundation does not disclose the number of traders participating in, and the amount of transactions in progress within, the Tangle network. But if malicious users take advantage of the low unit and processing time of transactions when the network participates on a small scale, the transactions of other users may not be approved normally, taking up more than a certain percentage of the transactions in the Tangle network. 
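To make the Tangle structure above concrete, the toy sketch below grows a DAG in which every new transaction approves two existing tips. It deliberately substitutes uniform random tip selection for IOTA's cumulative-weight-based tip selection algorithm, so the class and variable names are illustrative only.

```python
# Toy Tangle: each new transaction approves two earlier unapproved
# transactions (tips). Real IOTA weights the choice by cumulative
# weight via a random walk; uniform choice keeps the sketch short.
import random

class Tangle:
    def __init__(self):
        self.approves = {0: []}   # genesis transaction approves nothing
        self.tips = {0}           # transactions not yet approved by anyone

    def add_transaction(self):
        k = min(2, len(self.tips))
        chosen = random.sample(sorted(self.tips), k)
        tx = len(self.approves)
        self.approves[tx] = chosen
        for c in chosen:
            self.tips.discard(c)  # an approved transaction stops being a tip
        self.tips.add(tx)
        return tx

tangle = Tangle()
for _ in range(20):
    tangle.add_transaction()
print("transactions:", len(tangle.approves), "open tips:", sorted(tangle.tips))
```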
So, the IOTA Foundation now checks whether certain transactions, called milestones, issued every two minutes by the coordinator node, are being approved in the network. A snapshot removes a portion of the Tangle network that has passed a certain age, and stores only the required portion of the user information related to that part of the network, as a way to avoid overloading the network as it grows. This eliminates the need for participants to store all status information for the Tangle network. In this paper, the service structure is created so that rapid processing and trust can be achieved simultaneously, focusing on Internet of Things services that require real-time operation on the third-generation blockchain IOTA, mentioned in Section 2.2. Blockchain for power trading Blockchain technology is expanding beyond the applications in which it removes intermediary agencies from transactions. It is emerging as a hot topic in various applications, such as the Internet of Things and self-driving cars, as well as in financial transactions, where virtual currencies can reduce transaction fees. In addition, various projects are underway in the energy sector, and many changes to existing power trading and supply systems are expected when they become commercialized [4,5,6,7,8]. In fact, blockchain-based technologies are applied to transaction systems that compensate solar power production in Solar Coin, to prosumer trading systems that produce solar energy and trade surplus electricity between neighbors, and to electric vehicle charging stations (CSs). A blockchain-based system for energy trading consists of three parts (Aujla and Kumar, 2019): an energy trading system between EVs and CSs, a computing system using edge as a service, and a blockchain-based safe energy trading mechanism using edge as a service. EVs in smart cities must be charged with energy from CSs placed in various locations. EVs need to exchange energy with a CS at a suitable geographic location in order to achieve maximum benefits in terms of energy and price. Similarly, a CS sells the available energy to maximize profits. However, transferring data from an EV to the cloud or a server could cause a higher delay and incur additional costs for an energy transaction service provider. To overcome this, edge computing reduces additional delays and costs by enabling data processing and decision-making closer to end-user locations. Blockchain systems are used to provide security for transactions between electric vehicles and charging stations. The consensus algorithm is used to validate transactions shared between the approver nodes in the blockchain. All nodes selected as edge nodes serve as approver nodes used to calculate the proof of work for EV transactions. (1) Transaction initialization: EVs that initially wish to exchange energy with a CS transmit authentication information to trusted nodes. The edge node then calculates the hash function to initialize transactions on the network. (2) Block header generation: Once the deal for an EV is finalized, the block headers are generated by calculating the hash from the Merkle hash tree. (3) Block validation and activity proof generation: The approver node calculates PoW for each transaction to be added to the blockchain upon receipt of a message from the edge node. All edge nodes present in the network act as approver nodes to validate transactions of that EV.
All approver nodes calculate PoW, and when more than 50% general agreement is reached, the transaction is added to the blockchain according to the result. (4) If the value of the PoW corresponds to the received message, the block is deemed to have been verified on the approver node. All approver nodes calculate the PoW for the EV and send the results to the transaction server. If there is general agreement among more than 50% of the nodes that the block is valid, the block is added to the blockchain; otherwise, the transaction is discarded (a minimal sketch of this majority-validation flow is given after the conclusion below). Meanwhile, Dinh et al. (2018) showed how blockchain could be used to address data privacy issues in the Internet of Things (IoT). Through a smart contract, a system model with access control mechanisms was developed to allow users to fully control their data and track how third-party services access the data. A related scheme controlled data access using blockchain models and attribute-based encryption in the Internet of Things environment to protect personal information. In order to achieve granular access control, a smart contract was used to create a licensed permission access control table (PACT), and the owner first places a smart contract on the access control table of the blockchain. Smart contract The smart contract was first proposed by Nick Szabo in 1994. A conventional contract is written down, and in order to fulfil its terms, the actual person must perform per the contract terms. However, if you create a contract with a digital command, you can execute the contract automatically according to the terms. The smart contract creates contracts for the terms of transactions between traders, monitors contract performance over a blockchain network, and, because it executes automatically, can settle contracts quickly without separate verification by a central system as to whether the contract was fulfilled. Through the smart contract, consumers and suppliers can quickly trade and settle produced and saved energy, and further reduce the transaction fees incurred through brokerage houses. The smart contract operates as seen in Fig. 7 inside the blockchain network when transactions occur. All nodes in the network share content registration transactions and store them in the transaction database. After this step, the smart contract application is executed according to the contents of the transaction, and the results are reflected in the smart contract database. Blockchain-based power trading process In this paper, a power trading process based on blockchain is constructed as shown in Fig. 8. Fig. 8 is largely divided into the access device layer, the blockchain layer, and the edge layer. The access device layer is a user layer that attempts to access shared data using smart devices, while the blockchain layer is configured for data access and security. The edge layer is configured to identify the users accessing the data, and provides the data. (1) The user accesses the smart contract using a smart device. (2) A smart contract requires a threshold to operate. (3) The user requests a threshold from the certification node. (4) The certification node requires the user to have a DACT consisting of an identity of things (IDoT). The certification node matches the delivered DACT against the DACT that was registered by the user in advance. When they match, the threshold is given to the user. (5) The user executes the smart contract using the threshold.
(6) The user receives the hash address of the shared data from the smart contract that has been activated. (7) The hash address is transferred to the service node, which provides the shared data. The service node shares the data together with the owner's privacy information only if the number of hash addresses received is more than one. The proposed system constitutes a blockchain-based secure energy trading mechanism using edge as a service. Conclusion In this paper, we worked out a blockchain-based power trading model that allows individuals (producers) to produce and use power themselves or sell the remaining power to others. In other words, when prosumer 1 wants to purchase electricity through blockchain's smart contract function and requests a transaction with prosumer 2, which sells electricity, the trading information is generated as a block and anonymously released to the participants and utilities in power trading, where it is verified and chained to the previous trading blocks.
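The validation flow described in steps (3) and (4) earlier, a proof of work per transaction followed by a >50% vote among approver nodes, can be sketched minimally; the difficulty rule, names, and vote count below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the described validation flow: each approver node
# checks a proof of work over the transaction, and the block is added
# only if more than 50% of approver nodes agree.
import hashlib

DIFFICULTY = "00"  # toy difficulty: the hash must start with two zeros

def proof_of_work(tx: str) -> int:
    nonce = 0
    while not hashlib.sha256(f"{tx}{nonce}".encode()).hexdigest().startswith(DIFFICULTY):
        nonce += 1
    return nonce

def verify(tx: str, nonce: int) -> bool:
    return hashlib.sha256(f"{tx}{nonce}".encode()).hexdigest().startswith(DIFFICULTY)

def consensus(tx: str, nonce: int, approver_nodes: int = 5) -> bool:
    votes = sum(verify(tx, nonce) for _ in range(approver_nodes))
    return votes > approver_nodes / 2  # >50% general agreement

tx = "EV42 buys 3.2 kWh from CS7"   # hypothetical transaction payload
nonce = proof_of_work(tx)
print("block accepted:", consensus(tx, nonce))
```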
5,433.4
2019-09-20T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
Geometric Simulation of Design Objects in the Aerospace Industry The article outlines the approach used for geometric simulation of design objects in the aerospace industry. The computational model describing the effect is presented, which is recommended for practical use in construction. Introduction While designing a new item, the designer aims at developing its functions and geometry; these results then have to be arranged into specifications. Below we outline the basic approaches which we recommend for use in the aerospace industry (the basic approach was developed in [1]). The importance of new methods and innovations in the aerospace industry is described in [2]. Geometry defines the shape and dimensions of an item and its components; its functions describe the operational principles and the interaction of its subcomponents. Specifications are supplemented by information about the material and processing techniques. In the course of designing, the designer not only develops concepts of the item's geometry but works with them using established practices of geometry description. According to current operating techniques, up to now the designer in design bureaus has expressed the concept of the item's shape and dimensions in the form of a 2D presentation: the engineering drawing. Computer-aided design can be promoted by the engineering implementation and computerized presentation of the mentioned concepts of the item, as well as by the possibility of processing them at various levels of abstract description. The term "specifications" covers all information required for the item's production, presented in a form oriented at a certain production technique; this concept covers all engineering drawings, specifications, and flowcharts. Technical specifications embody the most important information required for the produced item, so computerized development of the item concepts calls for computerized processing of the information in specifications. A geometric object describes the item using a mathematical model in Cartesian space, and the presentation is arranged with consideration of the shape and dimensions. Presentation of the functional links in the item, required upon designing, is not taken into account at this stage. The mentioned geometric objects are mathematical (algebraic) structures presenting a more or less accurate embodiment of the item's geometry in the mentioned aspect. The presentation and description of such geometric objects are based on analytical geometry. Geometric simulations The procedures which should be applied to the item within the considered engineering tasks refer to the item's description as a geometric object. The interrelation between such an abstract geometric object and the array of human concepts is performed by presenting the geometric object in the form of graphic presentations. A graphic presentation means a projection of a 3D geometric object onto a 2D plane; a particular case includes 2D geometric objects. The interrelation between a geometric object and human reasoning, together with the object's graphic presentation, can also be performed by fabricating a material model of the geometric object. Specifications and graphic presentations are closely related by a dependency relation, because a graphic presentation is a specialized element of a specification. Selection of a reasonable form for describing geometric objects and graphic presentations is defined by actual practices and the type of processed items.
One of the classifications, oriented at industrial items, stipulates the following classes of geometric objects: prismatic items, revolved bodies, and arbitrary shapes. Prismatic items are restricted by planes; revolved bodies, respectively, by second-order surfaces (cylinders, cones, spheres); and items of arbitrary shape by surfaces of higher orders or surfaces which cannot be described exactly. The task is set in principle as follows: in terms of analytical geometry, the initial elements can be defined as points, lines, curves, planes, surfaces, as well as bodies. This set of objects is used to generate new objects according to the established syntax. The generation is performed as a recursive procedure; that is, the objects obtained from the main objects can be combined with other main objects into new objects. Two contradictory requirements are related to such a description of a geometric object: the description should be suitable for subsequent processing using computer-aided design, and the description should be structured to such an extent that it could be used for the development of a graphic presentation, that is, a sequence of instructions controlling the motions of the plotting instrument or the electron beam. Flat objects are described by the so-called simplest elements: primitives. The primitives are considered as 2D objects characterized by two highlighted points: the beginning and the end. These points are the sites of possible combination of the primitives. The simplest elements are described in terms of classes containing the elements' names. The names facilitate recognition of data for the initial and final points, as well as determination of the element's other characteristics. This description is applied for the development of drawings and the graphic presentation of details on a flat screen; an illustrative encoding of such primitives is sketched after the conclusions below. Analysis of numerous engineering drawings demonstrates that about 99% of all drawing elements are segments of lines and circular arcs. 3D geometric objects are described on the basis of the following algebraic structures: G (graph structure), F (surface element structure), and K (body structure). A G-structure is an object comprised of points on a plane or in space: G is the set of points and segments, over which a certain algebra is defined. This form of representation (the point model) can be considered as a geometric image of a body restricted by flat surfaces. Graphic images of such geometric objects are the so-called transparent or wire lattice models (polyhedra). Graphic images of polyhedra are presented by flat graphs. The presented edges are the intersection lines of the surfaces restricting the geometric object. The items are presented in the form of a transparent wire frame. The F-structure serves for the presentation of bodies and surfaces composed of elements; the main elements are curvilinear tetragons, described parametrically. Conclusions Descriptions of the hierarchical generation of geometric objects and graphic images depend on the specific setting of tasks; thus, it is actually possible to obtain various modifications as well as particular cases of the aforementioned forms of description.
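Picking up the primitive description above: each primitive carries a begin point and an end point, the sites at which primitives chain into contours. The sketch below is one possible encoding in Python; the class names and the chaining check are illustrative, not notation from the article.

```python
# Illustrative encoding of 2D drawing primitives as described: each
# primitive has a begin and an end point, where primitives combine.
# Line segments and circular arcs cover ~99% of drawing elements
# according to the article's analysis of engineering drawings.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

@dataclass
class Segment:
    begin: Point
    end: Point

@dataclass
class Arc:
    begin: Point
    end: Point
    center: Point   # circular arc from begin to end around center

# A contour is a recursive combination: primitives joined end-to-begin.
contour = [
    Segment(Point(0, 0), Point(10, 0)),
    Arc(Point(10, 0), Point(10, 10), Point(10, 5)),
    Segment(Point(10, 10), Point(0, 10)),
]
for a, b in zip(contour, contour[1:]):
    assert a.end == b.begin, "primitives must chain at begin/end points"
```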
1,342.6
2017-01-01T00:00:00.000
[ "Engineering" ]
A Multimodal, SU-8 - Platinum - Polyimide Microelectrode Array for Chronic In Vivo Neurophysiology Utilization of polymers as insulator and bulk materials of microelectrode arrays (MEAs) makes the realization of flexible, biocompatible sensors possible, which are suitable for various neurophysiological experiments such as in vivo detection of local field potential changes on the surface of the neocortex or unit activities within the brain tissue. In this paper the microfabrication of a novel, all-flexible, polymer-based MEA is presented. The device consists of a three dimensional sensor configuration with an implantable depth electrode array and brain surface electrodes, allowing the recording of electrocorticographic (ECoG) signals with laminar ones, simultaneously. In vivo recordings were performed in anesthetized rat brain to test the functionality of the device under both acute and chronic conditions. The ECoG electrodes recorded slow-wave thalamocortical oscillations, while the implanted component provided high quality depth recordings. The implants remained viable for detecting action potentials of individual neurons for at least 15 weeks. Introduction In the last few decades, the range of experimental neuroscience methods has been extremely widened by various technological advances. A remarkable segment of this progress was fueled by the utilization of microelectromechanical systems (MEMS) technology for the fabrication of high density microelectrode arrays (MEAs). Following the appearance of the first silicon-based micromachined neural implants [1], such devices evolved rapidly and today a great variety of precisely and reproducibly fabricated MEAs are available, which make the recording of potential changes in the extracellular space with high spatial density possible [2][3][4][5]. The biocompatibility of the MEAs is crucial, especially if the devices are intended to be in contact with the tissue on the long term. Typical MEMS materials, such as Si, SiO 2 , Si 3 N 4 and metals such as gold, platinum and iridium are non-toxic and inert [6][7][8]. However, in terms of biocompatibility, these are only necessary, but not sufficient conditions, since inert materials can also trigger the foreign body response of the immune system and cause glial scar formation, which can compromise the functionality of the electrodes [9]. A huge advantage of polymerbased depth MEAs is their mechanical flexibility, which allows smoother coupling with the soft tissue than rigid materials [10]. A flexible neural implant can follow small motions and pulsations of the brain, therefore causes less disturbance in its environment. Several biocompatible polymers, e.g. SU-8 photoresist [11], Polyimide (PI) [12] and Parylene C [13,14] can be employed as bulk and insulator materials of neural sensors. The palette of polymer-based MEAs utilized in neurophysiology is diverse. It contains devices developed for interfacing both with the peripheral and with the central nervous system. Peripheral neurons can be contacted with either implants penetrating into the nerves [15,16] or cuff electrodes, which can be wrapped around them [17]. Such devices can serve as key elements of brain-machine interfaces (BMIs). Similarly, flexible retinal implants are used for vision restoration for patients suffering from retinitis pigmentosa [18]. 
Polymer-based probes which are implantable into the central nervous system have also been realized [19][20][21][22], including double-sided electrode arrays [23] and probes with drug delivery capabilities [24,25]. Flexible devices are also frequently utilized for electrocorticography (ECoG), a method that uses electrodes placed directly on the exposed surface of the brain [26]. In clinical practice, ECoG is widely used during the treatment of patients suffering from epilepsy whose condition necessitates surgical resection [27][28][29]. Such surgeries require precise localization of the epileptogenic zones. Due to its higher spatial resolution and signal-to-noise ratio (SNR) compared to electroencephalography, ECoG is more suitable for this purpose [30]. The technique is employed not only to assess the location of the irritative zones from ictal spike and interictal epileptiform activity, but also for functional mapping to avoid causing damage to critical regions. In neuroscience, ECoG can be used for functional mapping of various cortical regions, e.g. the vibrissa/barrel field of the rat neocortex [31]. For such purposes, a variety of ECoG electrode arrays have been fabricated of polymers such as polydimethylsiloxane (PDMS), Parylene C and polyimide [32][33][34]. In spite of the wide range of flexible, polymer-based neural sensors, most of them are developed for a single type of measurement. In this paper we report the fabrication and functional characterization of a multimodal MEA, consisting of an ECoG part (with 8 electrodes) and a single-shank, implantable part (with 16 electrodes), allowing simultaneous surface and depth recordings.

Design concepts

The probe is an upgraded version of the thumbtack-like neural MEA, shown in Fig 1A, which had been successfully used for recording field potentials, multiple-unit and single-unit activities in behaving and anaesthetized humans [35]. The thumbtack-like sensor contains a laminar array of polyimide-insulated platinum-iridium electrodes on a single shaft with an outer diameter of 350 μm and a length of 3 mm. The shaft can be implanted into the cerebral cortex perpendicular to the surface of the brain. It protrudes perpendicularly out of the center of an 8 mm diameter silicone disk. The disk keeps the shaft immobile during recordings by conforming to the brain surface. We intended to modify this device by equipping the disk with an electrode array as well, thus enabling ECoG recordings in the vicinity of the implanted shaft. At the same time, we substituted the hand-made shaft with a polymer-based MEMS component in order to achieve more precise and reproducible fabrication and fine mechanical coupling between the probe and the tissue. The ECoG component was designed to be equipped with eight relatively large circular sites (d = 200 μm) for field potential recordings. The depth MEAs were designed to contain smaller electrodes (d = 30 μm), which might be suitable for the detection of action potentials of individual neurons within the tissue.

Fabrication methods

In this chapter we present the process flow used for the realization of the microfabricated components. The rapid and cost-effective procedure had already been successfully employed for the construction of a linear array of electrodes used as an ECoG [36]. The processes resulted in a PI (bottom insulator)-TiOx/Pt (conductive)-SU-8 (top insulator) layer structure. The implantable MEA and the ECoG component were fabricated with the very same MEMS processes.
Their layouts were merged onto a joint wafer layout in order to reduce the number of photolithographic masks needed. In the final form of the device, the TiOx/Pt conductive layer is employed for multiple purposes: it functions as electrode contact sites, wiring and connector pads. The bottom insulator layer (PI) provides electrical insulation for the bottom side of the probe everywhere, while the top insulator layer (SU-8) is opened above the electrode contact sites and connector pads and only insulates the leads. We have found this polymer composition beneficial in a couple of aspects. The adhesion of PI on SiO2 was sufficient both for enduring the fabrication steps and for ensuring easy sample removal. Since SU-8 is a commonly used negative photoresist, its patterning is more straightforward, which makes it suitable to form the top insulator layer.

Fig 2 shows the schematics of the main process steps. 4-inch silicon wafers were used as handling substrates of the polymer layers. First, a 1 μm thick SiO2 layer was grown using wet oxidation at 1150°C. Following this, a 3.5 μm thick P84 polyimide layer (Evonik Industries, Essen, Germany) was spin-coated onto the front side of the wafer, as shown in step (A). An Al layer of 500 nm was deposited by evaporation, which was followed by the spin-coating of a 1.8 μm thick Microposit 1818 (Dow Electronic Materials, Newark, DE, USA) photoresist. The resist was UV exposed, using a mask of 1 μm resolution. This pattern was transferred into the Al layer by wet chemical etching of the metal at room temperature, using a solution of H2O, CH3COOH, H2SO4, H3PO4, and HNO3 in a ratio of 70:20:30:32:20. A 15 nm thick TiOx layer was sputter-deposited for proper adhesion, followed by 270 nm of Pt, as shown in step (B). The rest of the photoresist and the Al were etched away in acetone and in the solution mentioned before, respectively. The lift-off yielded a patterned layer of TiOx/Pt, which functions as the conductive material for the electrode sites, bonding pads and wiring (C). In the next step, a 20 μm thick SU-8 (MicroChem Corporation, Newton, MA, USA) layer was spin-coated and patterned with photolithography (D), during which the electrode sites and bonding pads were exposed and the contours of the microfabricated components of the devices were shaped. The process flow was continued with reactive ion etching (RIE) with a mixture of O2 and CF4 gases in a ratio of 1:1. In this step, the pattern of the SU-8 layer was transferred into the PI layer. While the exposed PI was etched completely, the SU-8 was only thinned to a thickness of approximately 12 μm. At the same time, Pt functioned as an etch-stop layer, protecting the PI below the future electrodes (S1 Fig shows the structures thus created on a substrate wafer). Finally, the wafers were submerged into distilled water and the flexible MEMS structures were peeled off from the substrates with a pair of tweezers. The SiO2 layer underneath them remained on the Si wafer. Photographs of the two microfabricated components of the device are shown in Fig 3.

Assembly and packaging

In order to assemble the device, we clamped the ECoG component between three pieces of 1 mm thick glass slides, as illustrated in Fig 4. In step (A), only the ECoG part was clamped with two slides from the bottom and one from the top, so that the hole in the middle of its sensor region was not covered. The shank of the depth electrodes was inserted into the hole with a pair of tweezers (B).
The shank was equipped with two handles, located on the sides, 300 μm above the electrode array. The insertion was complete when both of these handles mechanically contacted the ECoG component; in doing so they ensured the perpendicularity of the two components in one direction. Constraining the shaft of the depth MEAs with the bottom two glass slides provided perpendicularity in the other direction (C). The two components were fixed together with a drop of two-component epoxy resin at their backsides, avoiding the electrodes (D). After one hour, the epoxy had cured and the glass slides were removed (F). In the final step, the devices were equipped with connectors (Preci-Dip, Delémont, Switzerland). Their pins were stitched through the holes at the bonding pads of the microfabricated components and bonded onto exposed Pt sites, which had been formed in the vicinity of the holes with the same methodology as the electrodes.

Electrode impedance measurement and reduction methods

Characterization of the electrode impedances was performed by electrochemical impedance spectroscopy (EIS) in physiological saline (0.9% w/v NaCl), employing an Ag/AgCl reference electrode (Radelkis Ltd., Hungary) and a platinum wire counter electrode with relatively high surface area. The probe signal was sinusoidal, with an RMS value of 25 mV. A Reference 600 instrument (Gamry Instruments, PA, USA) was used as a potentiostat, and Gamry Framework 6.02 and Echem Analyst 6.02 software were used for experimental control, data collection and analysis. Experiments were performed in a Faraday cage. In order to reduce the impedance of the depth electrodes, additional Pt was electrochemically deposited onto them. The Reference 600 instrument was used again as a potentiostat. The electrochemical cell consisted of a solution of 1 g PtCl4 x 2HCl x 6H2O + 2 cm3 conc. HCl + 200 cm3 distilled water, an Ag/AgCl reference electrode and a Pt counter electrode. The deposition was performed for 10 minutes, at 100 mV (vs. reversible hydrogen electrode). The durability of such a platinized platinum (Pt/Pt) layer has been previously tested on silicon probes [37].

2.5. In vivo recording methods

2.5.1. Acute tests. Electrophysiological recordings were performed in the rat brain in order to test the functionality of the MEAs. A total of 4 Wistar rats, weighing 270-400 g, were anesthetized with a ketamine-xylazine solution and prepared for stereotaxic operation as described elsewhere [38]. Animals for both acute and chronic tests were kept and handled in accordance with the European Council Directive. When awake, each rat was kept in a 39 cm long, 22 cm wide, 18 cm high cage. The animals were under deep anesthesia during operations and recording sessions, as well as at the time of sacrifice. During anesthesia, paraffin oil was administered to their eyes to prevent them from drying. They were sacrificed by the injection of a lethal dose of ketamine/xylazine into the heart. Craniotomy was performed from -1.0 mm to -6.0 mm anteroposterior (AP) and from 2.0 mm to 7.0 mm mediolateral (ML) in reference to the bregma. The implantation of the depth MEAs was targeted at the stereotaxic location of -3.36 mm AP, 5.5 mm ML, perpendicular to the brain surface, which allowed laminar measurements in the barrel cortex and reaching into the hippocampus [39]. The dura mater was incised above the target location in order to achieve a smooth implantation.
The probe was clamped with a curved, flat-tip forceps by its depth MEA component, above the location where the depth MEA and the ECoG part had been joined. The forceps was held closed during implantation with a clamp, and it was also rigidly connected to the moving arm of the stereotaxic apparatus. The arm allowed manipulation of the probe with 10 μm precision in the dorsoventral and mediolateral directions and 100 μm precision in the anteroposterior direction. After the recordings, the probes were removed from the brain and cleaned. They were soaked in an aqueous solution of 10 mg/ml Terg-A-Zyme (Alconox Inc., White Plains, NY, USA) for 10-15 minutes; 3-4 times during this period and after the probes were removed from the solution, they were rinsed with distilled water. After such a cleaning process, no signs of organic residues were found on them.

Brain signal recordings were carried out using a 32-channel Intan RHD2000 amplifier system (Intan Technologies LLC, Los Angeles, CA, USA) connected to a computer via USB 2.0, sampling at a frequency of 20 kHz. The reference electrode was a pointed stainless steel needle located beneath the skin posterior to the scalp. MATLAB 2014b (MathWorks Inc., Natick, MA, USA) and the Edit 4.5 software of Neuroscan (Charlotte, NC, USA) were used for off-line signal visualization, filtering and analysis. Signals obtained by the depth MEA were subjected to current source density (CSD) analysis. CSD was calculated with the MATLAB 2014b software (MathWorks Inc., Natick, MA, USA), with the utilization of the CSDplotter toolbox. For clearer visualization, the CSD of 10 periods was averaged and plotted. The periods were aligned to each other based on the start of the upstates, i.e. the initiation of multiunit activity.

2.5.2. Chronic recording capability tests. In order to characterize the recording capabilities of the MEA in the long term, two additional rats were successfully implanted chronically (in two other cases, the surgery was unsuccessful for reasons not related to the device). These probes were inserted into the somatosensory (Rat-1) and motor (Rat-2) cortex. Until the implantation, the course of these operations was almost identical to the course of the acute tests, with the difference that screws were driven into the skull at the perimeter of the scalp opening. One of these screws served as a reference electrode. Following implantation, the craniotomy hole was filled with Gelaspon gelatin sponge (Germed, Rudolstadt, Germany). Dental acrylic cement (Vertex Pharmaceuticals, Boston, MA, USA) was used to cover the hole and to attach the electrical connector of the probe to the skull. To avoid movement artifacts, the rats were anesthetized before chronic recordings with a mixture of 37.5 mg/ml ketamine and 5 mg/ml xylazine at 0.2 ml/100 g. During each session, recordings of at least 10 minutes were obtained with the same setup that was used for the acute tests. Later, 10-minute-long sections of the signals were analyzed off-line for each session. The long-term stability of the surface electrodes was characterized by determining the amplitude spectral density of the signals measured with them. The average amplitude for frequencies corresponding to sleep (below 4 Hz) was calculated. In the case of the depth electrodes, we focused on unit activity detection: the recording sections were band-pass filtered between 300 and 3000 Hz.
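As a concrete illustration of this preprocessing, the following minimal sketch (Python with NumPy/SciPy) shows the band-pass filtering step together with the single-unit signal-to-noise amplitude ratio defined in the next subsection; the function names, array conventions and the exact SNAR normalization are our assumptions, not code from the paper.

```python
# Minimal sketch of the unit-activity preprocessing: band-pass filtering of a
# raw extracellular trace, and the SU SNAR of a spike cluster.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 20_000  # sampling rate of the recording system, Hz

def bandpass_units(raw, low=300.0, high=3000.0, fs=FS, order=4):
    """Band-pass filter a raw trace to isolate unit (spike) activity."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, raw)  # zero-phase filtering

def su_snar(snippets, filtered_channel):
    """Signal-to-noise amplitude ratio of one single-unit cluster.

    snippets: (n_spikes, n_samples) array of 1.5 ms spike waveform snippets
    filtered_channel: the filtered signal of the channel the cluster came from
    """
    pp = np.mean(snippets.max(axis=1) - snippets.min(axis=1))  # mean peak-to-peak
    sigma = np.std(filtered_channel)                           # noise estimate
    return pp / sigma
```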
The Klusters free software [40] was used for clustering (spike sorting), taking into account three principal components for each electrode. The clusters were manually accepted or discarded based on spike waveforms and autocorrelograms. Unit activities were only included in the analysis if the single-unit signal-to-noise amplitude ratio (SU SNAR) of their clusters was higher than 2. The unit yield of the probe was determined as the total number of valid clusters on all of the 16 depth MEA channels. The SU SNAR for each single-unit cluster was calculated as

SNAR_i = PP_i / σ_n

where i is the index of the cluster and n is the index of the recording channel containing the spike waveforms of cluster i. PP_i is the mean peak-to-peak amplitude of the spikes (their corresponding 1.5 ms waveform snippets) in cluster i, and σ_n is the standard deviation of the filtered signal of the nth recording channel, from which the clustered unit activities are extracted.

Microfabricated and assembled devices

An image of a device and a magnified view of its sensor region, containing the electrodes, are presented in Fig 5. The microtechnological and assembly processes resulted in a probe geometry consistent with the design. The attachment of the shank (containing the depth electrodes) to the ECoG component was sufficient; the connection remained intact in all cases during the in vitro and in vivo tests. The similarly flexible meander transmission lines provided mechanical decoupling between the sensor region and the connector. The mechanical robustness of the lead was adequate; failures only occurred as a result of extreme pulling forces. Our overall experience was that these flexible tools require less care during handling than the more brittle silicon-based depth MEAs.

Original and reduced electrode impedances in saline

Average values yielded by in vitro impedance measurements on a probe are shown in Fig 6 (for exact results, see S1 Table). The original, sputtered thin-film Pt depth electrodes (with a geometric area of 707 μm2) had an average impedance of 559.5±148.4 kΩ at 1 kHz. We decided to reduce this value in order to obtain a better signal-to-noise ratio during measurements. The electrolytic deposition of platinum yielded a Pt/Pt layer with a high roughness factor, hence the average impedance magnitude at 1 kHz was reduced to 27.6±8 kΩ. As expected, the ECoG sites of larger (31400 μm2) geometric area had much lower original impedances: 18.6±0.5 kΩ on average. Since ECoG electrodes are only expected to record local field potentials without unit activities, we found this value sufficient for the purpose and did not apply electrolytic deposition on these sites.

In vivo experimental results

A representative sample of the recorded waveforms is presented in Fig 7 and in S1 Acute Data. Channels no. 1-8 represent local field potential (LFP) changes detected by the ECoG electrodes. Channels no. 9-24 correspond to the depth electrode sites. A synchronous slow-wave (1-1.5 Hz) oscillation can be observed on all channels, indicating slow-wave sleep (SWS), which is characteristic of the applied ketamine-xylazine anesthesia [41,42]. Each period of the oscillation can be divided into two alternating states. Active (upstate) periods, when neuron membranes are depolarized and the cells generate action potentials (spikes) frequently, are followed by inactive (downstate) periods, when membranes are hyperpolarized and spikes do not occur [43,44].
The LFP has a positive peak during upstates on the brain surface and in the upper cortical layers, while in deeper layers the polarity of the waves is reversed. This phenomenon can be observed on the ECoG channels and on channels 9-18 of the implantable component. Elevated activity in the higher frequency domains of the LFP signals on channels 19-24 indicates that the tip of the implanted shank reaches into the hippocampus, as expected. Unit activity was revealed by band-pass filtering (500-5000 Hz). In the cortex, high-intensity multiunit activity can be observed within the upstate periods. In the hippocampus, unit activities do not follow the oscillation closely, which meets expectations, since slow waves are supposedly generated by neocortical and thalamic oscillators [45]. The upstate phase of the oscillation begins with the formation of a current source in the upper and lower layers of the cortex and a massive sink in the middle layers. Entering the downstate phase, the CSD is transformed into a sink-source-sink pattern in the cortex, as shown in Fig 7B. This trend was also observed in humans during slow-wave sleep, although with a different spatial pattern [46,47]. The experiment suggests that, in case the investigated neural tissue has a laminar structure, the probe also makes the determination of current sinks and sources possible.

Fig 8A and 8B, S2 Table and S1 Spike Data show the results of the chronic stability tests of the surface and depth measurements, respectively. We obtained recordings up to 15 weeks after probe implantation. The data at week 0 represent signals recorded 3-4 days after surgery. Changes occurred during the 15-week period, but there was no radical deterioration in the amplitude spectral density of the signals provided by the surface electrodes. Regarding depth recordings, unit yield varied between 4 and 7 during the entire period, and typical SU amplitudes were 30-50 μV. The average SU SNAR changed between 2.42 ± 0.35 and 4.69 ± 1.49. Interestingly, Rat-1 provided the best result regarding SNAR on the 15th week. Comparing our results to the ones yielded by similar polymer-based [21] and silicon-based [48] linear probes, the SU yield and SNAR of our MEA are below average. Nevertheless, the graphs indicate stable performance and suitability for spike detection on the timescale of several months. The study is limited by the low number (2) of chronically examined animals and only shows that these probes can be capable of such performance; it gives no insight into the reproducibility of the measurements.

There are examples in the literature of realizing ECoG measurements and simultaneous depth recordings underneath the ECoG-covered region with separate devices. MEMS surface MEAs with a Parylene C-gold-Parylene C layer structure were used, along with tungsten fine-wire microelectrodes, which could be inserted into the rat brain through holes in the surface arrays [49]. The configuration allowed LFP recordings with both the MEA and the fine wires, and the detection of unit activities with the latter. Surface grid arrays were used synchronously with the thumbtack laminar array and similar depth electrodes in humans with epilepsy [26,50]. The results of our in vivo experiments indicate that these measurements can be realized with a single, all-flexible probe. The application of polymer-based MEMS technology, along with microassembly, allows high flexibility in the design of the probe geometry, so versatile three-dimensional sensor systems can be created.
The number of electrodes on the ECoG and depth components, as well as their size and distance from each other, can be adjusted over a wide range, tailored to different applications, and realized with high precision. Neither the microfabrication technology nor the assembly process limits extending the depth component to a multi-shank MEA.

Conclusions

To the authors' knowledge, the device presented in this work is the first polymer-based, flexible depth MEA combined with an array of likewise flexible brain surface electrodes. The applied microfabrication processes allowed us to precisely and reproducibly realize the designed probe with a polyimide-platinum-SU-8 layer structure. The device is an upgraded version of the thumbtack-like fine-wire electrode, which had been successfully used for obtaining depth recordings in the human neocortex. We characterized the new device in physiological saline, as well as acutely and chronically in vivo in the central nervous system of rats, and demonstrated its functionality. During in vivo recordings, electrodes both on the ECoG and on the extracellular component recorded LFP changes consistent with the waveforms expected at the area of implantation. Furthermore, with the extracellular component, the detection of unit activities was possible for at least 15 weeks following the implantation. The applied rapid MEMS process flow, along with a straightforward microassembly step, allows researchers to tailor three-dimensional probe geometries to various electrophysiological measurements.

Supporting Information

S1 Acute Data. The file contains data of acute recordings, including the signals presented in Fig 7. (XLSX)
S1 Table. The results of electrochemical impedance spectroscopy measurements on a probe. (XLSX)
S2 Table. The file is a Microsoft Excel document. Its first page contains the data obtained by depth electrodes (Fig 8B). Its second page contains data obtained by surface electrodes (Fig 8A). (XLSX)
5,766.6
2015-12-18T00:00:00.000
[ "Engineering", "Materials Science", "Medicine" ]
Defense Mechanism against Adversarial Attacks Based on Chaotic Map Encryption

In recent years, image classification through DNNs has been applied in various fields, including payment security and image search. DNN-based image classification is effective and convenient, yet susceptible to perturbations: non-targeted and targeted adversarial attacks against neural networks, such as FGSM and BIM respectively, apply modifications to image inputs that are unrecognizable to the naked eye and will probably result in wrong classifications. To ensure the safety of DNN image classification, researchers have been dedicated to the study of defense mechanisms that diminish or even eliminate the effects brought about by adversarial attacks. Our proposed approach aims at increasing the classifier's resistance to perturbations by adding a pseudo-random matrix key generated by Logistic chaos. Our defense mechanism with a Logistic chaos-generated secret random key uses a single key with merely three elements and is highly general. We show empirically that our approach is efficient against most attacks.

Introduction

The deep neural network underwent numerous breakthroughs in the last decade, thanks to the endeavor and contributions of scientists. Because of the supreme accuracy which can be achieved by DNNs trained through machine learning, deep neural networks have been widely used in various artificial intelligence applications such as speech recognition, image recognition and point cloud recognition. However, DNNs remain vulnerable to adversarial attacks which perturb the original samples and trigger the DNN to make invalid responses. In recent years, as the deep neural network (DNN) has been widely applied in more and more security-sensitive and trust-sensitive areas, the study of the security of DNNs is becoming increasingly important. Scientists have proposed various defense strategies against adversarial attacks; in general, the commonly used defenses are divided into the following four categories: (1) defense via retraining; (2) defense via detection and rejection; (3) defense via input pre-processing; (4) defense via regeneration.

In this paper we propose an improved defense mechanism, which belongs to defense via input pre-processing, for the DNN classifier proposed by Olga Taran, Shideh Rezaeifar and Slava Voloshynovskiy [1], which pre-processes original samples through the direct addition of a secret random matrix key generated from Gaussian noise, aiming at increasing the classifier's resistance to added attacking perturbations. In that scheme, the size of the secret random key matrix is equal to the size of an individual sample in the training dataset. Our improvement is to generate the pseudo-random matrix key by iterating Logistic chaos instead of by Gaussian noise. This improvement reduces the size of the original key space from the size of a training sample to only two elements: the number of iterations n and the parameter μ in Logistic chaos. By Kerckhoffs's second cryptographic principle, namely that the less secret key material the system contains, the higher the security of the system [2], the reduced key size contributes to higher safety in our proposed defense mechanism. We will verify the effectiveness of the new defense strategy on two standard data sets: MNIST and Fashion-MNIST. We will use the FGSM, PGD and BIM adversarial attacks to test the reliability of the proposed defense strategy.
The main contributions of this paper are as follows:
- We summarize and analyze the existing attack methods and defense strategies.
- We propose a new chaotic encryption defense method based on cryptographic principles.
- Experiments prove that this defense method, which uses chaotic mapping to generate random numbers as a key, has the performance to resist adversarial samples.

The remainder of this paper consists of the following parts: the second part briefly introduces the principle of chaotic mapping and summarizes the general classification of existing attack and defense methods; the third part explains the main ideas of our proposed defense method; the fourth part shows the experimental results and analysis of our defense method; the fifth part summarizes the full text.

Logistic Map

Pseudorandom number generation with the piecewise logistic map. The piecewise logistic map (PLM) pseudorandom number generator (PRNG) proposed by Yong Wang et al. has good ergodicity, an even probability density and high efficiency, which makes the PLM an ideal PRNG. The logistic map is a discrete dynamical system defined by

x_(n+1) = μ x_n (1 - x_n)    (1)

where x_n is the state value and μ is the control factor. When x_0 ∈ (0,1) and μ ∈ [3.57, 4], the logistic map is chaotic [3].

Scenarios of Adversarial Attacks

Based on the attackers' knowledge, the general cases of adversarial attacks can be grouped into three scenarios: 1) White-box scenario: the structure, parameters and training datasets of the defense system are all transparent and available to attackers. 2) Grey-box scenario: the structure, parameters and training datasets of the defense system are all transparent and available to attackers, but attackers have no access to the defense mechanism parameters [1]. 3) Black-box scenario: the attackers do not have any information about the defense system, such as its structure, parameters and training datasets.

Based on the aims of attacks, we summarize adversarial attacks into two groups: 1) Targeted adversarial attacks: the attackers intentionally steer the result of the DNN to a targeted result. 2) Non-targeted adversarial attacks: the attackers aim to amplify the prediction error of the DNN without inducing a specific outcome [4]. This group of attacks merely challenges the reliability of DNNs.

Based on their underlying principles, currently existing adversarial attacks on DNN classifiers can be categorized into gradient-based attacks and non-gradient-based attacks. 1) Gradient-based attacks. Gradient-based attacks generate a perturbation vector by applying the backpropagation algorithm (which is widely used in DNN training) to slightly modify the image and cause wrong classification. They consider the classifier parameters as constants and the inputs as variables, and are therefore able to acquire the corresponding gradient for each element of the input. As Ian J. Goodfellow et al. have noted, the Fast Gradient Sign Method (FGSM) [5] is the fastest among all gradient-based attacks, with relatively little cost; the Basic Iterative Method (BIM) [6] makes simple improvements on FGSM, applying the FGSM step repeatedly with a smaller step size and clipping the pixel values of each intermediate result; the Projected Gradient Descent (PGD) [7] attack, a multi-step variant of FGSM, achieves higher accuracy. Based on enhanced PGD optimization, Yingpeng Deng et al. proposed UPGD [8], a new algorithm for generating universal adversarial attacks, which shows significant advantages in obtaining higher deception rates and lower classifier accuracy.
Even with a small training set, the algorithm also has good cross-model versatility. 2) Gradient-free attacks. The One Pixel Attack [9], though not particularly more robust than other attacks, may indirectly increase the system response time; Zeroth Order Optimization (ZOO) [10] based black-box attacks are capable of causing perturbations to DNNs without training any substitute model as an attack surrogate.

Defense via adversarial retraining. Defense via adversarial retraining is a robust generalization method [11]. In this mechanism, samples modified by an adversarial attack method are mixed into the original training datasets. This mechanism is not adaptive to different types of adversarial attacks [11], which means that this defense system can only defend against attack methods that have been mixed into the training datasets. In addition, the accuracy of the original model may be reduced after retraining. Adversarial training is a heuristic approach, which has no formal guarantee of convergence and robustness [12,13].

Defense via input monitoring. Input monitoring generally focuses on classifying the input data as either original data or attacked data. This can be achieved by (a) adding an external augmented subnetwork for binary classification which classifies each input as attacked or un-attacked; in the adaptive ML model assurance presented in [14], an external module called robust redundancy is proposed to resist potential hostile attacks and keep the trained ML model intact [11]; or (b) feature squeezing, which compares the model's classification outcomes between the feature-squeezed input and the original input [15,16].

Defense via input pre-processing. Modifications exerted on input images by adversarial attacks can be removed by a pre-processing defense mechanism. The Blind Pre-processing (BP) [17] defense uses a combination of pre-processing layers and has high robustness.

Defense via regeneration. This method is based on recovering modified data to the original clean data via regeneration. Tejas Borkar et al. [18] proposed a novel selective feature regeneration approach, which can effectively defend against universal perturbations and can significantly improve DNN adversarial robustness by shielding noise in some specific DNN activations [18].

Proposed Approach

Based on the fundamental principles of cryptography, we propose an adversarial attack defense mechanism in which input images are encrypted by Logistic chaos before classification. Figure 1 shows the basic scheme of the defense against adversarial attacks by Logistic Chaos encryption (DLCE) mechanism. The mechanism begins by inputting an image x into the Chaotic Map Encryption (CME) module, where it is encrypted based on a secret key k. After this encryption, the state of the input images becomes unknown. The unknown-state images are then input into a DNN classifier, which is a LeNet-5 neural network model. The structures of both the CME and the classifier are disclosed to the public. The transformation of the CME module is invertible and non-differentiable. Finally, the classifier outputs a classification for each input. We offer a more detailed account of the output classification in Section 4.3. According to the types of defense strategies presented in Section 2.3, DLCE belongs to defense via input pre-processing. However, since inputs are encrypted with a secret key, neither filtering nor elimination of perturbations is required in the CME module of DLCE.
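To make the DLCE scheme concrete, the following minimal sketch shows key generation by iterating the logistic map and a simple additive CME step; the function names and the key-to-matrix mapping are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of DLCE: a key matrix produced by iterating the logistic map
# is added to the (normalized) input image before it reaches the classifier.
import numpy as np

def logistic_key(shape, x0=0.2, mu=4.0, n_burn=500):
    """Generate a pseudo-random key matrix by iterating the logistic map.

    The secret key consists only of (x0, mu, n_burn): the initial state,
    the control parameter (chaotic for mu in [3.57, 4]), and the number of
    warm-up iterations discarded before sampling.
    """
    x = x0
    for _ in range(n_burn):          # discard transient iterations
        x = mu * x * (1.0 - x)
    key = np.empty(int(np.prod(shape)))
    for i in range(key.size):        # one further iteration per pixel
        x = mu * x * (1.0 - x)
        key[i] = x
    return key.reshape(shape)

def cme_encrypt(image, key):
    """Chaotic Map Encryption step: add the secret key matrix to the image."""
    return image + key

# Usage: encrypt a 28x28 MNIST-sized input before classification.
img = np.random.rand(28, 28)             # stand-in for a normalized input
enc = cme_encrypt(img, logistic_key(img.shape))
```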
It is supposed that the attacker knows all information other than the key, including the structure of the classifier and the security module used. Similarly, our approach follows the key-sharing principle in both the training and prediction phases, exposing all algorithmic details except the key, which is known only to the defender and kept confidential from the attacker, who cannot access the internal variables of the defense structure. In other words, the input and output of the CME cannot be accessed, and the output of the CME is the input of the classifier. Furthermore, the attacker cannot access the input of the classifier, but can only observe the input and output of the whole architecture, as well as the structure of the classifier and the CME. Thus a secret part is formed, lest the features of the training data be learned or the gradient information of the system be obtained by BPDA [19] techniques.

We use a key-based security module, the CME. In general, the CME can be integrated with various kinds of transformations, such as simple permutations. However, unlike previous encryption forms, this module transforms the input matrix into uncorrelated data using keys generated by iterating the Logistic map. This secret key is unknown to the attacker; therefore, it creates an information advantage of the defender over the attacker. The size of the key space we choose is no longer equal to the size of the input signal, and chaotic map encryption uses a smaller key space to ensure higher security. This improvement reduces the size of the original key space from the size of the training sample to the two elements of iteration number n and parameter μ in Logistic chaos. The logistic map is a discrete dynamical system generating pseudo-random numbers by iteration, where x_n is the state value and μ is the control parameter; moreover, when x_0 ∈ (0,1) and μ ∈ [3.57, 4], the logistic map is chaotic [3]. The number of iterations is chosen to be n = 500, with initial value x_0 = 0.2 and μ = 4. In the fourth part, we will explain the experimental effect of using chaotic encryption.

Datasets

In order to test the generalization ability of the proposed method, we have tested its effectiveness on two different data sets. We used the simple MNIST handwritten digit recognition data set [20], which contains ten categories, including 60,000 training images and 10,000 test images, each of which is a 28*28 grayscale image. We also use the Fashion-MNIST dataset, which contains ten categories, including 60,000 training images and 10,000 test images, each of which is a 28*28 grayscale image. Examples of images in each data set are shown in Figure 2.

Fig 2. Examples of raw images of each category from MNIST (first row) and Fashion-MNIST (second row).

To clarify, in our experiment, 55,000 images in the training sets of the MNIST and Fashion-MNIST datasets are used for training, and 5,000 images are used for validation. Since the selected adversarial attacks are generated very slowly, both data sets are tested with only the first 1,000 images of the testing set.

Adversarial Attack Details

We use FGSM, BIM, and PGD as adversarial attacks to test the capability of the proposed method. The details of the employed adversarial attacks in the experiment are explained below. FGSM [5], namely the Fast Gradient Sign Method, is a typical white-box adversarial attack.
It adds a perturbation along the gradient direction of the error function between the output category and the target category with respect to the input vector to obtain an adversarial perturbation, and the adversarial perturbation is then added to the original sample to generate an adversarial sample. This method can generate adversarial samples quickly and at low cost through a one-step iteration:

x_adv = x + ε sign(∇_x J(θ, x, y))    (2)

In the formula above, ε sign(∇_x J(θ, x, y)) is the added disturbance; x_adv is the adversarial sample; J(.) is the loss function of the DNN classifier; θ is the parameter set of the model; and ε represents the magnitude of the disturbance.

BIM (Basic Iterative Method) [6] uses multiple smaller input changes along the gradient direction to perform an iterative attack instead of generating the adversarial disturbance in one step like FGSM:

X_adv_0 = X    (4)
X_adv_(N+1) = Clip_{X,ε}{X_adv_N + α sign(∇_x J(X_adv_N, y_true))}    (5)

In the formulas above, Clip_{X,ε}{X'}(x,y,z) = min{255, X(x,y,z) + ε, max{0, X(x,y,z) - ε, X'(x,y,z)}}, where x, y and z index the 3-D image space, i.e., the image's width, height and number of channels.

PGD (Projected Gradient Descent) is a typical first-order iterative attack, which can also be called K-FGSM, where K represents the number of iterations. The PGD algorithm first performs a random initialization within the allowable range (a spherical neighborhood of the input) and then iterates.

Result and Discussion

In order to get the best experimental results, we have chosen three existing attack implementations, all from CleverHans, based on TensorFlow. For the two data sets, the parameters used by each attack method and the corresponding examples of the generated adversarial samples are shown in Tables 1 and 2. Given the attacks and the data sets, we used a LeNet-5 based network structure to classify all attacks. The classifier structure is shown in Table 3; the classifier training parameters are shown in Table 4. In Table 5, "Original classifier" represents the original DNN classifier without any defense measures, with its parameters as shown in the table; "Classifier with Logistic chaos" represents the DNN classifier on which the Logistic chaos defense module is used.

According to the experimental results shown in Table 5, on the MNIST data set, the classification error rate of the original classifier without any defense measures is about 81%-98% for the adversarial samples generated by the three adversarial attack methods (FGSM, BIM, and PGD), while the classification error of the classifier using the Logistic chaos defense module is about 7%-20%. Similarly, on the Fashion-MNIST dataset, the classification error rate of the original classifier without any defense measures is about 58%-74% higher than that of the classifier using the Logistic chaos defense module. Therefore, the defense method of using Logistic chaos to generate the key greatly reduces the classification error rate of the classifier. The experimental results obtained are sufficient to prove the effectiveness of the defense method based on chaotic map encryption, which is a highly promising method to resist adversarial attacks.

In the experiment, in order to verify the effectiveness of Logistic chaotic map encryption against adversarial sample attacks, we assume that the length of the key is equal to the length of the input image, making the key long enough to deal with brute-force attacks. Taking security issues into account, the attacker may attack the model by brute force to unlock the key.
Therefore, we ensure that the internal variables are unknown to the attacker, and only the input and output can be observed. This prevents the attacker from obtaining key information, thereby ensuring system security.

Conclusions

In this paper, in view of the vulnerability of deep neural networks to adversarial examples, we studied the existing adversarial attack methods and defense methods, and made a general classification and summary of them. On this basis, we propose a new defense mechanism based on chaotic map encryption to resist adversarial samples. This defense mechanism is mainly aimed at existing white-box attack scenarios and was evaluated on two data sets. Experiments have proved that this method can obtain high classification accuracy and has advanced performance in defending against adversarial samples. Our defense method improves the classification accuracy. In future work, we will study the adaptability of chaotic map encryption defense in black-box scenarios and try to expand its application to other data sets.
3,976
2021-09-01T00:00:00.000
[ "Computer Science" ]
FinePrompt: Unveiling the Role of Finetuned Inductive Bias on Compositional Reasoning in GPT-4

Introduction

Large language models (LLMs) such as GPT-4 (OpenAI, 2023) have demonstrated an impressive capability to solve textual understanding problems at a level parallel to or surpassing state-of-the-art task-specific models (Brown et al., 2020; Chowdhery et al., 2022). However, one of the characteristic pitfalls of LLMs is that they exhibit poor zero-shot and few-shot performance in tasks such as multi-hop reasoning (Press et al., 2022) and numerical reasoning over text (Brown et al., 2020; OpenAI, 2023), both of which involve compositional, multi-step reasoning across multiple referents in text.

To overcome this limitation of LLMs, previous works proposed various elicitive prompting strategies such as Chain-of-Thought (CoT) (Wei et al., 2022), Self-Ask (Press et al., 2022) and Least-to-Most Prompting (Zhou et al., 2022). These prompting techniques have effectively unlocked the compositional, multi-step reasoning capabilities of LLMs by generating step-by-step rationales or breaking down an end task into a series of sub-problems. Regardless of their efficacy in improving LLM reasoning, these prompting techniques still (i) entail a significant amount of human effort to discover the right prompting strategy, and (ii) lack task specificity that takes into account the characteristic differences between end tasks.

[Figure 1: Examples of the three prompt types: (a) Attribute-Infused, (b) Pipeline-Infused, and (c) Graph-Infused Prompts, shown on DROP- and MuSiQue-style questions.]

Prior to LLMs and prompt learning, many task-specific finetuned LMs proposed novel sets of inductive biases to improve the compositional reasoning capabilities of finetuned LMs (Min et al., 2019; Groeneveld et al., 2020; Tu et al., 2020; Fang et al., 2020; Ran et al., 2019; Geva et al., 2020; Chen et al., 2020a,b) on tasks like multi-hop question answering (MHQA) (Yang et al., 2018; Ho et al., 2020; Trivedi et al., 2022) and numerical reasoning over text (Dua et al., 2019). For example, NumNet (Ran et al., 2019) injected the strict-inequality inductive bias into LMs to significantly improve its performance on DROP (Dua et al., 2019), while DecompRC (Min et al., 2019) divided the multi-hop
questions into a set of decomposed sub-problems to improve performance on MHQA. However, these finetuning-based methods are difficult to implement in LLMs like GPT-4, because such LLMs are massive in size and their parameters are often inaccessible due to their proprietary nature. In this work, we show that finetuned models, or more specifically the sets of inductive biases used in such models, can serve as prompt materials to improve GPT-4's compositional reasoning, as illustrated in Figure 1. Our contributions are threefold: (i) Reproducibility: adopting previously validated finetuned features into prompts to improve LLM reasoning. (ii) Systematicity: providing a template process to turn a finetuned model into a prompt. (iii) Enhanced performance: some of the transferred models exhibit strong zero-shot and few-shot capabilities on MuSiQue and DROP.

FinePrompt

We propose to transfer validated finetuned features into prompts (hence the name FinePrompt) to investigate (i) whether the finetuned features, in the form of prompts, have the same effect of improving the performance of GPT-4 on textual compositional reasoning tasks, and (ii) how the various models/approaches can be effectively transferred to structured prompts. To transfer the features into prompts, we divide models by their properties in Sections 2.1 to 2.3, as shown in Figure 2. In each section, we describe which characteristic of a finetuned model aligns with one of the three prompt-infusion strategies: Attribute-Infused (§2.1), Pipeline-Infused (§2.2) and Graph-Infused Prompts (§2.3). Note that the inductive biases of finetuned models can be manifested in various forms, because they can be derived from one of the following strategies: (i) integrating additional, task-related features into the models through training (e.g., learning to solve basic arithmetic problems prior to solving complex textual numeric problems (Geva et al., 2020)), (ii) formulating a pipelined process that decomposes a complex reasoning task into a sequential set of sub-problems, and (iii) incorporating external graph structures to leverage the connected, structural inductive bias.

Our work, which aims to transfer these central inductive biases into prompts, directly adopts the previous works (Geva et al., 2020; Ran et al., 2019; Tu et al., 2020; Chen et al., 2020a) to minimize the human effort of extracting the features previously leveraged in these models. For example, as shown in Figure 2, while we have to manually construct the Task-specific Instruction and Finetuned Instruction chunks in the prompts, we can simply adopt the code bases of previous models to extract the necessary features used in the In-context Samples.
Attribute-Infused Prompt

Attributes are a set of task-specific features conducive to the end task that provide prerequisite knowledge. For instance, in order to perform numerical reasoning over text, the model needs to know beforehand how to perform addition/subtraction (Geva et al., 2020) or the definition of strict inequality (Ran et al., 2019) in order to perform higher-order, compositional reasoning over numbers appearing in text. We define such task-specific features as attributes and formulate them as follows. Given a language model f(X; θ), our prompt input X can be defined as:

X = P_attr ∥ s_1 ∥ ... ∥ s_k ∥ x_i    (1)

where P_attr denotes the task-specific attributes (e.g., arithmetic examples such as 19517.4 - 17484 - 10071.75 + 1013.21 = -7025.14), x_i is the i-th end task input, and ∥ denotes the concatenation operation. Unlike CoT or Self-Ask, which require manual human annotation of the rationale for the few-shot samples, our prompt simply provides P_attr and s_i to the LLM without any manual annotation.

Pipeline-Infused Prompt

Pipelines that break down a complex end task into a series of sub-tasks take the necessary inductive bias (i.e., decomposition) into account. Such biases are especially useful when addressing complicated, multi-hop QA tasks such as MuSiQue (Trivedi et al., 2022). While existing prompting techniques (Press et al., 2022; Zhou et al., 2022) also decompose questions into tractable sub-questions, our pipeline-infused prompts derive directly from pipelines implemented by previous works (Min et al., 2019; Groeneveld et al., 2020), reusing an already validated approach as a prompt. The pipeline-infused prompt input X can be defined as:

X = S_k ∥ x_i    (2)

where S_k = {c(s_1), c(s_2), ..., c(s_k)} and c is the conversion function that converts few-shot samples into their corresponding pipeline-infused prompt. Note that c includes the decomposition process directly adopted from the existing code bases of previous works, providing the decomposed sub-questions, sub-answers and evidences to form c(s_i).

Graph-Infused Prompt

Graphs are often used by finetuned LMs through GNN-based modules (Tu et al., 2020; Chen et al., 2020a) to exploit the connectivity information among textual units (e.g., entities, sentences) that helps the LM perform multi-step reasoning. To provide the features conveyed by graphs, we transfer the graph into prompts by identifying nodes within texts and directly inserting an edge preceded by each node, as shown in Figure 2(c). Our graph prompt X is defined as:

X = g(s_1) ∥ ... ∥ g(s_k) ∥ g(x_i)    (3)

where g is a text conversion function that directly injects node-to-node information into the in-context samples s_i and the test input x_i. It is worth noting that we do not manually construct the graph or identify nodes present in texts; we directly adopt the graph structures provided by previous finetuned models (Tu et al., 2020; Chen et al., 2020a) and the code bases thereof, an automatic process that does not necessitate manual annotation. The nodes (e.g., sentences) supplied by previous works are directly injected into texts in the form of an indicator token, e.g., P2S53, along with the edges, which are constructed as proposed in the finetuned models and appended to each node, e.g., connecting sentence nodes based on entity overlap (Tu et al., 2020), or connecting an entity with a number if they appear within a sentence (Chen et al., 2020a).
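As an illustration of how the chunks (task instruction, finetuned instruction, in-context samples) can be assembled into a single prompt string, here is a minimal sketch; the instruction strings and the conversion helper are hypothetical placeholders rather than the exact prompts of the paper (the actual prompts are listed in Appendix D).

```python
# Minimal sketch of assembling a FinePrompt-style input by concatenation.
TASK_INSTRUCTION = (
    "You are a question answering machine that answers a question "
    "based on a given document."
)
FINETUNED_INSTRUCTION = (
    "Numbers have specific relationships, e.g. '<' means a is less than b."
)

def convert(sample: dict) -> str:
    """Hypothetical stand-in for the conversion function c/g that turns a
    raw few-shot sample into its infused textual form."""
    return f"Document: {sample['doc']}\nQuestion: {sample['q']}\nAnswer: {sample['a']}"

def build_prompt(samples: list, doc: str, question: str) -> str:
    """Concatenate: instructions || in-context samples || end-task input."""
    parts = [TASK_INSTRUCTION, FINETUNED_INSTRUCTION]
    parts += [convert(s) for s in samples]          # k-shot samples (may be empty)
    parts.append(f"Document: {doc}\nQuestion: {question}\nAnswer:")
    return "\n\n".join(parts)
```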
Datasets

The datasets used in this experiment are a multi-hop QA dataset, MuSiQue (Trivedi et al., 2022), and a numerical-reasoning-over-text dataset, DROP (Dua et al., 2019). Due to the heavy expense incurred by evaluating on the full evaluation datasets, we sample 256 instances from each dev set, as in previous works (Le et al., 2022), and iterate over them 5 times to address variance. To investigate both the zero-shot and the few-shot performance of the prompt schemes, we test our proposed schemes and the baselines along these two axes; the number of few-shot examples, k = 3, follows a previous work's setting on DROP (OpenAI, 2023).

Metrics

The metrics used for DROP (Dua et al., 2019) are answer exact match (Ans. EM) and F1 (Ans. F1), as the task is essentially QA that deals with text matching of the generated answer. For MuSiQue (Trivedi et al., 2022), the task requires the model to perform both answer generation and evidence paragraph prediction. To accommodate both tasks and accurately measure whether the generated output sequence contains the answer string, we use answer F1 (Ans. F1) and supporting paragraph F1 (Sup. F1). The supporting paragraph F1 adopts the implementation by Trivedi et al. (2022).

Results

In Tables 1 and 2, we provide our results in both the zero-shot and few-shot settings on the two compositional reasoning tasks. Our evaluation spans two axes: the reproducibility of the prompts and their effect on compositional reasoning capability.

Reproducibility. On both datasets, all our proposed prompts improve markedly over the base GPT-4, demonstrating the same effect the finetuned models exhibit when they incorporate the same inductive biases into their models. Although this work does not exhaustively explore all finetuned models, this result hints at the possibility of incorporating other previously effective inductive biases into prompts to improve LLM reasoning ability.

Compositional Reasoning on DROP. As shown in Table 1, attribute-infused prompts, especially GenBERT, excel in both the zero-shot and few-shot settings on DROP. While Self-Ask and CoT improve GPT-4's performance in the zero-shot setting, they show increased variance in the few-shot setting. This provides a stark contrast to the attribute-infused prompts, as they outperform the other baselines in the few-shot setting. The graph-infused prompt also improves the numerical reasoning ability, demonstrating that the usefulness of graphs, such as connections between different textual units, can effectively be infused into LLMs via prompts.

Compositional Reasoning on MuSiQue. For multi-hop reasoning in Table 2, both the pipeline and graph prompts outperform the other baselines, except for QUARK. The performance drop after applying QUARK's pipeline prompt suggests that, unlike DecompRC, which decomposes the question into a series of sub-questions, QUARK independently interprets a paragraph using the multi-hop question, which is not helpful in reducing the complexity of multi-hop reasoning. Moreover, SAE's performance improvement after injecting the graph suggests that, even without a lengthy pipeline approach, textual graph prompts better elicit compositional reasoning ability from the LLM.
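For reference, here is a minimal sketch of the token-level answer F1 and exact match used for QA evaluation, assuming whitespace tokenization and simple lowercasing; the official DROP/MuSiQue scorers apply additional answer normalization.

```python
# Minimal sketch of token-level answer F1 and exact match for QA scoring.
from collections import Counter

def answer_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # overlapping tokens
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def answer_em(prediction: str, gold: str) -> float:
    """Exact match: string equality after the same light normalization."""
    return float(prediction.lower().strip() == gold.lower().strip())
```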
Related Works

Task-Specific Models. On tasks such as MuSiQue (Trivedi et al., 2022) and DROP (Dua et al., 2019), numerous models have been proposed to enable multi-step, compositional reasoning (Min et al., 2019; Groeneveld et al., 2020; Tu et al., 2020; Fang et al., 2020; Ran et al., 2019; Geva et al., 2020; Chen et al., 2020a,b). These models, which appeared before the LLM prompt-learning paradigm took its place, proposed various novel ideas on this matter.

Prompt Learning. With prompting techniques like CoT (Wei et al., 2022), Self-Ask (Press et al., 2022) and Least-to-Most Prompting (Zhou et al., 2022) having been shown to improve LLMs' compositional reasoning ability, our work explores how these prompts compare against the FinePrompt scheme. We do not deal with Least-to-Most Prompting as it does not address compositional reasoning in a textual understanding setting.

Conclusion

This work studies the transfer of validated inductive biases from finetuned models to prompts and their effectiveness on the compositional reasoning of GPT-4. Our empirical results suggest that (i) end-task-related attributes and graphs help elicit robust multi-step reasoning capability from LLMs, and (ii) pipelines of previous finetuned models, if they involve decomposing a task into smaller sub-problems, are also effective for prompting. Our work suggests that more can be exploited from previous pretrain-then-finetune models, and that our proposed template can incorporate those features seamlessly into the LLM prompting paradigm. We hope future works explore further in this direction and easily leverage the power of LLMs with FinePrompt.

Limitations

Limited Dataset Size. Using GPT-4 for our study incurs substantial cost because of its price ($0.03 per 1,000 tokens), which led us to randomly sample 256 instances from the evaluation sets of both datasets instead of evaluating on the full datasets. Following other previous works (Bai et al., 2023), we deal with a reduced dataset size to address the high cost of using OpenAI GPT model APIs.

Symbolic Compositional Reasoning Datasets. While our work deals with compositional reasoning datasets in a textual understanding environment, there are other tasks like the last-letter concatenation task (Wei et al., 2022) and action sequence prediction such as SCAN (Lake, 2019). However, they do not deal with textual context understanding. Future works may explore other models on these end tasks as well.

Extension to Other LLMs. While other LLMs are available, such as Alpaca (Peng et al., 2023) and Vicuna (Chiang et al., 2023), which are LLaMA-based (Touvron et al., 2023), instruction-tuned models, they use GPT-generated instruction data to finetune their models. We also note that such LLMs are too compute-heavy to train in our local environment.

Additional Finetuned Models. We are aware that there are numerous pretrain-then-finetuned LMs for MHQA and DROP. Nevertheless, since we cannot exhaustively consider every single model that has been proposed to date, we select a few models that share commonalities, as in Sections 2.1 to 2.3, to investigate their impact on LLMs as prompts.
Manual Annotation. Manual annotation is unavoidable in FinePrompt, since it requires a human annotator to understand the central inductive bias of a model and translate it into a textual prompt. Nonetheless, one of the main contributions of this work, which is to reduce the human effort of searching for an effective prompting strategy like CoT (Wei et al., 2022) and Self-Ask (Press et al., 2022) by transferring previously validated inductive biases, holds regardless of the manual effort. Moreover, FinePrompt adopts the code and data bases of previous finetuned models to further mitigate the human effort of extracting the inductive bias features.

A Additional Settings on Experiments

Hyperparameters & Datasets. We explain detailed settings such as the hyperparameter setting of GPT-4 and changes to the existing data settings made to accommodate the LLM. The GPT-4 hyperparameters used in this work are as follows: the temperature of GPT-4 is set to 0.0 to offset randomness during generation. For MuSiQue, there are originally 20 question-related documents for each QA instance. However, due to the context length limit (8K) of GPT-4 and delayed API responses, we consider 5 documents, all of which are randomly sampled and consist of 2 gold documents (we adopt the 2-hop question tasks from Press et al. (2022)) and 3 non-gold documents. This setting is applied uniformly to all baselines. For DROP, there is no additional change to the experimental setting, as it deals with a single paragraph and a question.

Models. We adopt the Self-Ask (Press et al., 2022) and CoT (Wei et al., 2022) prompt techniques directly from their papers. While the experiments conducted by Press et al. (2022) do not take into account the provided question-related documents (contexts) from MuSiQue (Trivedi et al., 2022), for fairness in comparing the effectiveness of prompts we report in Table 2 the results of prompting with the contexts.

For QDGAT, as the official code in the model's GitHub repository was not replicable, we implement the QDGAT graph generation with Stanza (Qi et al., 2020); it is used to extract entities and numbers from each document. Note that the "DURATION" number type is removed in the process, as Stanza does not support that number type. This leaves 7 number types (NUMBER, PERCENT, MONEY, TIME, DATE, ORDINAL, YARD). Moreover, in order to adopt the two essential connection types from the QDGAT graph, entity-number and number-number edges, without having to modulate a prolonged text, we denote number-number edges as a group after NUMBER-NUMBER: (e.g., NUMBER-NUMBER: YARD).

On NumNet, we note that while it uses a GNN to finetune number representations, it does not use the connectivity inductive bias as in other graph-leveraging models like SAE and QDGAT. Therefore, we add the number-specific features of NumNet as an Attribute-infused prompt, not a Graph-infused prompt.

Few-shot In-Context Samples. For our few-shot (k = 3) setting in Tables 1 and 2, we randomly sample k instances from the training datasets of DROP and MuSiQue 5 times. With a total of 15 randomly sampled instances, we manually construct k-shot in-context samples for CoT and Self-Ask, as both require humans to provide an intermediate rationale to the question in each sample.
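A minimal sketch of the evaluation loop implied by these settings (256 sampled dev instances, 5 repetitions, temperature 0.0, 2 gold + 3 non-gold documents), reusing the hypothetical build_prompt and answer_f1 helpers from the sketches above; the gpt4 wrapper and the dataset field names are assumptions, not the authors' code.

```python
# Minimal sketch of the sampled, repeated evaluation protocol.
import random

def gpt4(prompt: str, temperature: float = 0.0) -> str:
    """Placeholder for a GPT-4 API call with temperature 0.0."""
    raise NotImplementedError("replace with a real API wrapper")

def sample_contexts(instance: dict, n_nongold: int = 3) -> str:
    """Keep the 2 gold documents plus 3 randomly sampled non-gold ones,
    staying within GPT-4's 8K context limit (MuSiQue setting)."""
    docs = instance["gold_docs"] + random.sample(instance["nongold_docs"], n_nongold)
    random.shuffle(docs)
    return "\n".join(docs)

def evaluate(dev_set: list, n_instances: int = 256, n_runs: int = 5) -> list:
    """Score n_instances sampled dev examples, repeated n_runs times."""
    run_means = []
    for _ in range(n_runs):
        batch = random.sample(dev_set, n_instances)
        f1s = [answer_f1(gpt4(build_prompt([], sample_contexts(ex), ex["question"])),
                         ex["answer"])
               for ex in batch]
        run_means.append(sum(f1s) / len(f1s))
    return run_means
```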
B Additional Experiments: FinePrompt and Finetuned Models

While our work seeks to investigate the effectiveness of the validated, finetuned inductive biases in the form of prompts, we provide additional experiments on how the finetuned models used in this work fare against their FinePrompt counterparts. We have conducted additional experiments with GenBERT on DROP and SAE on MuSiQue to compare the original finetuned models and FinePrompt (shown in Table 4). The results demonstrate that our FinePrompt scheme outperforms its original finetuned counterparts, exhibiting the potential to understudy the finetuned models in a low-resource setting by leveraging LLMs.

C CoT and Self-Ask without Contexts

In Press et al. (2022), the base settings of Self-Ask and CoT do not take question-related documents (contexts) into account on the multi-hop question answering task, and do not perform supporting evidence paragraph prediction either; they use their parametric knowledge to decompose the given questions and generate answer rationales. However, as our models applying FinePrompt require contexts provided by documents, we present the performance of CoT and Self-Ask with contexts for a fair comparison in Table 2. To evaluate how the previous elicitive prompting strategies perform in our experimental setting when contexts are not provided, we provide additional experiments in Table 3. Providing the question-related documents yields a substantial increase for both Self-Ask and CoT, notably in the few-shot setting.

D Full Prompts

Here we provide the actual prompts used in our work. Each prompt, divided into three distinct groups (Attribute, Pipeline, Graph), consists of the following format: (i) Task Instruction, (ii) Finetuned Instruction, (iii) In-context Samples & Test Input. For the Attribute-Infused Prompt, we also inject the Input-related Attributes (see Figure 2 for details). At the end of each instruction, the few-shot samples (optional in the case of zero-shot) and the end task query (Question:) will be appended for GPT-4 to respond to.

In the following, the DROP task instructions will be denoted by blue boxes, whereas the MuSiQue task instructions will be denoted by red boxes. The baselines like Self-Ask (Press et al., 2022) and CoT (Wei et al., 2022) are denoted by green as their zero-shot setting shares the same prompt in both datasets.

Base Instruction for DROP (Dua et al., 2019)

You are a question answering machine that answers a question based on a given document. You will be given a document preceded by "Document:" and a question preceded by "Question:". When you generate the answer, simply generate the answer after "Answer:"

Document: ... Question: ... Answer: ...

Instruction for GenBERT (Geva et al., 2020)

You are a question answering machine that answers a question based on a given document. You will be given a document preceded by "Document:" and a question preceded by "Question:". When you generate the answer, simply generate the answer after "Answer:". You will also be given a set of related task examples to help you acquire the necessary knowledge to answer a given question based on the document.

Document: ... Question: ... Answer: ...
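To make the three-part template concrete, a minimal sketch of how such a prompt could be assembled is shown below; the function and variable names are our own, and the instruction text is abbreviated from the boxes in this appendix. The assembled string would then be sent to GPT-4 with temperature 0.0, as described in Appendix A.

# Minimal sketch of assembling a FinePrompt-style prompt (names are ours).
TASK_INSTRUCTION = (
    'You are a question answering machine that answers a question based on a '
    'given document. You will be given a document preceded by "Document:" and '
    'a question preceded by "Question:". When you generate the answer, simply '
    'generate the answer after "Answer:"'
)

def build_prompt(finetuned_instruction: str,
                 in_context_samples: list[str],
                 document: str,
                 question: str) -> str:
    """(i) Task Instruction, (ii) Finetuned Instruction,
    (iii) In-context Samples & Test Input, in that order."""
    parts = [TASK_INSTRUCTION, finetuned_instruction]
    parts.extend(in_context_samples)          # empty list for zero-shot
    parts.append(f"Document: {document}\nQuestion: {question}\nAnswer:")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Numbers have specific relationships, e.g. 5 < 6, 10 > 6, 0 = 0.",
    [],  # zero-shot
    "The commander recruited 16426 asian citizens and 15986 asian voters.",
    "How many citizens and voters were recruited in total?",
)
print(prompt)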
Related Examples:
1) 19517.4 - 17484 - 10071.75 + 1013.21 = -7025.14
2) most(1072.1, 17938, 5708.65, 14739.16) = 17938
3) argmax(toppy 8105.5, cockney 7111.0, nickelic 1463.16, tiredom 6929) = toppy
4) most recent(July 16, 134; June 23, 134; 24 July 134; 28 October 134) = 28 October 134
5) difference in days(April 21, 1381; 13 April 1381) = 7
6) percent not photochemist, floodgate, retiringly :: photochemist 0.82%, morningward 54.4%, floodgate 2.0%, reline 0.78%, retiringly 42% = 55.18
7) Document: "The commander recruited 16426 asian citizens and 15986 asian voters. The commander borrowed 7 foreign groups from the government. The government passed 3 foreign groups to the commander." Question: How many foreign groups did the commander recruit? Answer: 10

Instruction for NumNet (Ran et al., 2019)

You are a question answering machine that answers a question based on a given document. You will be given a document preceded by "Document:" and a question preceded by "Question:". When you generate the answer, simply generate the answer after "Answer:"

Numbers have specific relationships as shown in the following examples, where the "<" symbol represents "a < b" (a is less than b), the ">" symbol represents "a > b" (a is greater than b), and the "=" symbol represents "a = b" (a is equal to b):

5 < 6
10 > 6
117 > 25
978 < 979
0 = 0
1.6 < 7.2
9.0 > 8.9
2.6 < 2.9

Document: ... Question: ... Answer: ...

Instruction for QDGAT (Chen et al., 2020a)

You are a question answering machine that answers a question based on a given document. You will be given a document preceded by "Document:" and a question preceded by "Question:". When you generate the answer, simply generate the answer after "Answer:"

Some entities and numbers in the provided document can have special connections. There are a total of two connection types. 1) "ENTITY-NUMBER": Connections between an entity and a number in the same sentence. 2) "NUMBER-NUMBER": Connections between numbers of the same type, denoted as a group after "NUMBER-NUMBER:" (e.g., NUMBER-NUMBER: YARD).

Base Instruction for MuSiQue (Trivedi et al., 2022)

You are a question answering assistant. You will be given a set of evidence paragraphs and a multi-hop question, and you will be asked to do the following: 1) You will read a list of paragraphs (P1, P2, ..., PN) and a multi-hop question ("Question:"). 2) You should give the paragraph id you used to derive the answer after "Evidence:". 3) You should provide the answer to the multi-hop question after "Answer:".

Paragraphs: ... P1: ... P2: ... ... PN: ... Question: ... Evidence: Pi, Pj, ... Answer: ...

Instruction for DecompRC (Min et al., 2019)

You are a question answering assistant. You will be given a set of evidence paragraphs, a multi-hop question and you will be asked to do the following: First, decompose the given multi-hop question ("Question:") into all three different versions of single-hop, sub-question sets ("Sub-question 1:", "Sub-question 2:"). The three different question types are as follows: 1) Bridging Type: requires finding the first-hop evidence for Sub-question 1 to find the evidence to answer Sub-question 2. 2) Intersection Type: requires finding an entity that satisfies two independent conditions of the two Sub-questions. 3) Comparison Type: requires comparing the property of two different entities in the Sub-questions.

Then, given a question, generate the sub-questions, the corresponding answer and the evidence paragraph ids for each sub-question in the following format: Paragraphs: P1: ... P2: ... ... PN: ... Question: ...

Using the previously generated information about the sub-questions, the answers and evidence paragraphs, generate the most plausible answer to the question ("Question:") after "Answer:", and also generate which question type your answer is from as follows: Question Type: ... Answer: ...
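GenBERT's inductive bias is injected through synthetic numerical examples like the ones above; a small generator in the spirit of those examples is sketched below. It is our own illustration of how such "related task examples" could be produced, not GenBERT's actual data-generation code.

import random

def arithmetic_example(rng: random.Random) -> str:
    """Emit a GenBERT-style signed-sum example, e.g. '19517.4 - 17484 + ... = ...'."""
    terms = [round(rng.uniform(0, 20000), 2) for _ in range(rng.randint(3, 4))]
    signs = [rng.choice(["+", "-"]) for _ in terms[1:]]
    expr, value = str(terms[0]), terms[0]
    for sign, t in zip(signs, terms[1:]):
        expr += f" {sign} {t}"
        value = value + t if sign == "+" else value - t
    return f"{expr} = {round(value, 2)}"

def most_example(rng: random.Random) -> str:
    """Emit a 'most(...)' example that selects the maximum of a few numbers."""
    nums = [round(rng.uniform(0, 20000), 2) for _ in range(4)]
    return f"most({', '.join(map(str, nums))}) = {max(nums)}"

rng = random.Random(0)
for make in (arithmetic_example, most_example):
    print(make(rng))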
Instruction for QUARK (Groeneveld et al., 2020)

You are a question answering assistant. You will be given a set of evidence paragraphs, a multi-hop question and you will be asked to do the following: 1) You will read a list of paragraphs (P1, P2, ..., PN) and a multi-hop question ("Question:"). 2) Find one question-related sentence for each paragraph ("Paragraph:") and write that sentence id after "Evidence Sentences:". 3) Read the given set of sentences after "Evidence Sentences for Pi:", where "i" refers to the paragraph id. This set of predicted sentences will serve as your new context to help you answer the question. 4) You should provide the answer to the multi-hop question after "Answer:".

Paragraphs: ... P1: ... P2: ... ... PN: ... Question: ... Evidence Sentences for P1: Si Evidence Sentences for P2: Sj ... Evidence Sentences for PN: Sk Answer: ... Evidence Paragraphs: Pi, Pj, ...

Instruction for SAE (Tu et al., 2020)

You are a question answering assistant. You will be given a set of evidence paragraphs, a multi-hop question and you will be asked to do the following: 1) You will read a list of paragraphs (P1, P2, ..., PN) and a multi-hop question ("Question:"). 2) You should provide the answer to the multi-hop question after "Answer:". 3) You should give the paragraph id you used to derive the answer after "Evidence:".

The provided paragraphs and sentences within are prefixed with paragraph numbers and sentence numbers. For example, the prefix "P2S1" indicates the 1st sentence of the 2nd paragraph. Also, if sentences are related to other sentences, prefixes can connect them to each other in some form of connection. There are a total of three connection types: 1) "Question": Connections between sentences that are related to the question. 2) "Intra": Connections between sentences within the same paragraph. 3) "Inter": Connections between sentences that are related but belong to different paragraphs.

Figure 1: FinePrompt transfers the existing task-specific inductive biases into natural language prompts, guided by the transfer template proposed in this work.

Figure 2: Illustration of FinePrompt. Each box includes Task-specific Instruction, Finetuned Instruction, and In-context Samples & Test Input. (a) Attribute-Infused Prompt injects a set of end task-specific features. (b) Pipeline-Infused Prompt guides the model to break down a complex end task into a series of subtasks and solve them iteratively. (c) Graph-Infused Prompt infuses the graph's connectivity information within the input text.

NumNet (Ran et al., 2019) finetunes number representations to infuse the strict inequality bias into the model. DecompRC (Min et al., 2019) decomposes a multi-hop question into different types of decomposition, generates an answer and evidence per type, and scores them to get the final answer. QUARK (Groeneveld et al., 2020) independently generates a sentence per retrieved paragraph and uses the sentences as context for the final answer. QDGAT (Chen et al., 2020a) is a model with entity-number and number-number graphs to leverage the relationships among numbers and entities. SAE (Tu et al., 2020) is a model with a graph of sentence nodes that uses the sentence-level connections between (and within) paragraphs. Details about the models including CoT and Self-Ask, and hyperparameters, are given in Appendix A.
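The SAE prompt above flattens a sentence graph into text via P#S# prefixes; a minimal sketch of that encoding, under our own naming, is given below. The connection-detection rule (shared-word overlap) is a stand-in assumption for illustration; SAE itself learns these edges.

# Sketch: flatten paragraphs into "PiSj"-prefixed sentences and emit
# Intra/Inter connections. The overlap heuristic is our assumption only.
def tokens(text: str) -> set[str]:
    return {w.strip(".,;:") for w in text.lower().split()}

def prefix_sentences(paragraphs: list[list[str]]) -> dict[str, str]:
    labeled = {}
    for p, sentences in enumerate(paragraphs, start=1):
        for s, sentence in enumerate(sentences, start=1):
            labeled[f"P{p}S{s}"] = sentence
    return labeled

def connections(labeled: dict[str, str], min_shared: int = 1) -> list[str]:
    edges, ids = [], list(labeled)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if len(tokens(labeled[a]) & tokens(labeled[b])) >= min_shared:
                kind = "Intra" if a.split("S")[0] == b.split("S")[0] else "Inter"
                edges.append(f"{kind}: {a} -- {b}")
    return edges

labeled = prefix_sentences([
    ["Marie Curie was born in Warsaw.", "She moved to Paris in 1891."],
    ["Warsaw is the capital of Poland."],
])
print("\n".join(f"{k}: {v}" for k, v in labeled.items()))
print("\n".join(connections(labeled)))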
Table 2: Results of FinePrompt on our sampled MuSiQue dev set (256 instances). Self-Ask and CoT do not perform supporting paragraph prediction. We provide the averaged score and standard deviation over 5 different iterations, each with a different few-shot sample set.

Table 3: Additional results on the Self-Ask and CoT performance with and without the question-related documents in our sampled MuSiQue dev set.

Table 4: Additional results on the original finetuned GenBERT on DROP and SAE on MuSiQue against our FinePrompt counterparts, comparing the original finetuned models and FinePrompt. Each FinePrompt variant is denoted by the number of k-shot samples used.
The modern structurator: increased performance for calculating the structure function

The autocorrelation function is a statistical tool that is often combined with dynamic light scattering (DLS) techniques to investigate the dynamical behavior of the scattered light fluctuations in order to measure, for example, the diffusive behavior of transparent particles dispersed in a fluid. An alternative approach to the autocorrelation function for the analysis of DLS data was proposed decades ago and consists of calculating the autocorrelation function starting from differences of the signal at different times by using the so-called structure function. The structure function approach has been proven to be more robust than the autocorrelation function method in terms of noise and drift rejection. Therefore, the structure function analysis has gained visibility, in particular in combination with imaging techniques such as dynamic shadowgraphy and differential dynamic microscopy. Here, we show how the calculation of the structure function over thousands of images, typical of such techniques, can be accelerated, with the aim of achieving real-time analysis. The acceleration is realized by taking advantage of the Wiener–Khinchin theorem, i.e., by calculating the difference of images through the Fourier transform in time. The new algorithm was tested both on CPU and GPU hardware, showing that the acceleration is particularly large in the case of the CPU.

Introduction

Dynamic light scattering (DLS) techniques have been used for decades to obtain information about the dynamical behavior of a variety of samples spanning from soft matter physics to biology [1]. The main idea of DLS is to measure the intensity of the light scattered by a transparent sample at a given angle and statistically analyze its fluctuations in time in order to obtain information on the motion of the components inside the sample. For example, DLS analysis of the Brownian motion of particles dispersed in a fluid allows measuring their diffusion coefficient and then, ultimately, their size distribution thanks to the Stokes-Einstein relation between the particles' mobility and their size. The quantity classically obtained in DLS instruments is the autocorrelation function, i.e., the direct output of "correlators" that compute the scalar product of the intensity signal coming from the light detector by the same quantity at different delay times. An alternative approach to the autocorrelation function was proposed several decades ago and consists in computing the structure function. The structure function is obtained by analyzing the autocorrelation of the differences of the signal at different times [2]. At the same time, it was proposed to develop "structurators" in place of the more widely known correlators [3]. With the spread of pixelated detectors, imaging techniques like dynamic shadowgraphy, dynamic Schlieren [4-9], and differential dynamic microscopy (DDM) [10-12] have taken advantage of the use of the structure function because of its improved robustness for data analysis in terms of rejection of background signal deriving from steady-state and slow-drift noise sources as compared to the autocorrelation function approach [2,13]. This is due to the intrinsic nature of the structure function, which is based on the difference of signal elements of increasing time delay, so that any spurious signal changing on times longer than the utilized time delay is subtracted.
By using spatial Fourier analysis, these imaging techniques allow scientists to investigate the temporal evolution of a sample at the different length scales present in a set of images recorded at different times [13]. For this reason, they have gained popularity, especially in the field of soft matter physics. In fact, the combination of the robustness of the structure function analysis with simple and/or already available optical setups has allowed them to be used both in traditional laboratories [10-12] and in orbiting experiments on the ISS [14-16] for investigating the dynamics of rather different samples, ranging from colloidal particles [10] to bacteria [17], but also from biological cells [18] to density fluctuations in and outside thermal equilibrium [4,19], and many others, as also witnessed by several review articles [12,13,20,21].

As stated, the structure function approach can be combined with imaging techniques, thus requiring an optical system like transmitted light microscopy [10], fluorescence-based microscopy [22] or dark-field imaging [23] to acquire series of images. The series of images should be processed by custom-made software to compute the structure function, as defined by Schultz-Dubois and Rehberg [2] and later applied to Schlieren [8], shadowgraphy [8] and optical microscopy [10,24]. However, a rapid evaluation of the structure function is fundamental to achieve real-time analysis in laboratory conditions and may play a crucial role in the utilization of such an approach in industrial and commercial applications.

The available software programs calculate the structure function in different ways. Some process the images by calculating the differences between pairs of images first, and then evaluate the bi-dimensional fast Fourier transforms (FFT) of the differences [11]. In other cases, they first compute the FFT of the images and then calculate the differences in Fourier space [25]. Since the number of images that can be acquired and the number of pixels therein have considerably increased in the last two decades, the computational load to evaluate the structure function has increased accordingly. In the meantime, the computational capabilities of modern computers have also grown, but a major breakthrough in reducing the computation time of the structure function was achieved when researchers started to implement the calculation on graphics processing units (GPU) [22,25]. The implementation of this computational task on GPU allowed a decrease in the computational time by a factor of 10-30, thereby reducing the data analysis time from several hours to a few tens of minutes.

In the present article, we present a different route to calculate the structure function of the image series taking advantage of the Wiener-Khinchin theorem [26,27]. The calculation is performed by using the Fourier transform in time rather than by calculating differences of spatial FFTs. This approach enables a further optimization step and allows us to compute the structure function faster than state-of-the-art existing software. We obtain a considerable speed-up of the calculation time, particularly when GPU acceleration is not available.

The article is organized as follows. First, we provide an example of application by means of shadowgraph images that are later utilized to test the software performance. Then, we discuss our method for calculating the structure function and compare it with state-of-the-art algorithms [25].
Finally, we discuss the results and provide conclusions.

Test case: shadowgraph observation of density fluctuations

In this section, we describe a free diffusion experiment obtained by carefully layering two miscible fluids where the denser one is placed at the bottom of the container, so as to obtain a gravitationally stable condition. The fluid system is investigated by shadowgraphy, i.e., an optical technique able to measure density fluctuations within the fluid in terms of a series of images I_m, from which one can extract the density fluctuation structure function by means of the DDA algorithm. In the classical implementation of the DDA algorithm [2,4,8,10], the structure function is calculated by first evaluating the differences among all pairs of images, then by computing the power spectra of those differences, and finally by averaging the power spectra over all the pairs of images acquired with the same time delay. This procedure can be defined as follows:

d(m) = (1/(N − m)) Σ_{n=m}^{N−1} |F_xy(I_n − I_{n−m})|²,   (1)

where the indices n and m run from 0 to N − 1 and F_xy indicates the bidimensional FFT of the images in space. The absolute value operation "|...|" is intended for every wave vector component of the FFT.

The initial condition is prepared in two steps: first, we introduce pure water by completely filling a glass cylindrical cell (Hellma, 120-OS-20); second, we slowly inject the glycerol and water solution (20% w/w) until reaching half of the cell. By using this procedure, we obtain a two-layer sample in which the two miscible liquids are separated by a vanishing horizontal interface and are stabilized by the gravitational force, while the only mechanism relaxing the concentration gradient with time is mass diffusion. The dissolving concentration gradient provides a non-equilibrium condition that amplifies the spontaneous velocity fluctuations within the fluid [28]. This results in the appearance of non-equilibrium fluctuations at all wavelengths that can be visualized by means of the shadowgraph setup, as done in several publications [4]. For shadowgraph observation, the cell is illuminated by a collimated plane-parallel beam obtained by using a superluminescent diode (Superlum, SLD-MS-261-MP2-SM) with a wavelength of λ = (675 ± 13) nm and propagating along the vertical axis. The light propagates through the sample and the density fluctuations induce local fluctuations of the refractive index that scatter the light field. A charge-coupled device (CCD) records the interference between the primary beam and the light scattered by refractive index fluctuations inside the fluid. We acquired sets of N = 2000 images I_n of 512 × 512 pixels at a frame rate of 25 Hz.

In Fig. 1 we show: (a) a typical shadowgraph image I_n, (b) a typical image difference (I_{n−m} − I_n) with enhanced contrast to make the tiny density fluctuations visible, and (c) its bidimensional power spectrum |F_xy(I_{n−m} − I_n)|² displayed in logarithmic scale. The two images considered for the difference were taken 4 s apart, so that the signal is uncorrelated for most of the wave vectors and the structure function has already reached its maximum. In panel (d), the azimuthal averages d(m)_φ of the structure functions are shown for many delay times m. The inset of panel (d) shows the structure function as a function of the delay time for three selected wave vectors. The strong oscillations as a function of the wave vector are related to the shadowgraph transfer function, as described in the literature [9].
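A minimal numpy rendition of this classical procedure, under our own naming and on synthetic data, could look as follows; the azimuthal averaging described later in Appendix A is omitted here.

import numpy as np

def structure_function_naive(images: np.ndarray, m: int) -> np.ndarray:
    """Classical DDA, Eq. (1): average power spectra of image differences at delay m."""
    N = images.shape[0]
    acc = np.zeros(images.shape[1:])
    for n in range(m, N):
        diff = images[n] - images[n - m]
        acc += np.abs(np.fft.fft2(diff)) ** 2
    return acc / (N - m)

# Example on synthetic data: N = 100 frames of 64 x 64 pixels.
rng = np.random.default_rng(1)
images = rng.normal(size=(100, 64, 64))
d5 = structure_function_naive(images, m=5)
print(d5.shape)  # (64, 64)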
The structure function increases as a function of the delay time at any wave vector and can be analyzed to investigate the diffusive behavior of concentration fluctuations during the diffusion process. The structure function is modeled as detailed in the literature [5], by providing a suitable model for the intermediate scattering function, which in the present case is a single exponential decay. The fitting procedure thus provides a measurement of the time decay τ(q) for any wave vector q, as shown in panel (e). The right part of the plot shows the typical τ(q) = 1/(Dq²) behavior of concentration non-equilibrium fluctuations, from which one can extract the value of the mass diffusion coefficient, as has been done for thermodiffusion experiments [5]. The left part of the plot shows the effect of gravity on the decay times of concentration non-equilibrium fluctuations, already reported in several ground-based experiments [8]. The latter part of such analysis is out of the scope of the present paper and will be published in a separate work.

Different approaches to the structure function

The calculation of the structure function involves evaluating differences, FFTs and averages that can be performed efficiently on a GPU as parallel operations [22]. This approach can be optimized by exploiting the linearity of the FFT and the available hardware memory, as described in ref. [25]. The calculation of Eq. 1 can be approached via a two-step algorithm. First, all FFTs Ĩ_n = F_xy I_n of the images I_n are calculated and stored in the local memory. Second, each matrix d(m) is evaluated by averaging differences of the FFTs of images (Ĩ_{n−m} − Ĩ_n) rather than FFTs of image differences F_xy(I_{n−m} − I_n), exploiting the linearity of the FFT operation. This approach reduces the number of operations to be performed because the matrices Ĩ_n can be used several times for different d(m). Thus, for N images, the number of FFTs to be computed is reduced from O(N×N) to O(N). While this optimization allows reducing the number of FFTs, the overall algorithm has a global computational complexity of O(N×N). We see this from Eq. 1 because there are as many time delays m as images, and for each m the matrix d(m) is obtained via a sum over (N − m) images.

In this work, we present a new approach to reduce the global computational complexity of the algorithm to O(N×log₂(N)) by using the Wiener-Khinchin theorem [26,27], which states that, for a stationary random process, the autocorrelation function can be calculated from the power spectrum (in time) of the process. We expand the square modulus operation of Eq. 1 in the following way:

d(m) = (1/(N − m)) Σ_{n=m}^{N−1} [ |Ĩ_{n−m}|² + |Ĩ_n|² − 2 Re(Ĩ*_{n−m} Ĩ_n) ],   (2)

where the symbol "*" indicates complex conjugation. In the sum, the first term |Ĩ_{n−m}|² is the average of the first (N − m) spatial power spectra, while the second term |Ĩ_n|² is the average of the last (N − m) spatial power spectra. Both terms have a computational complexity of O(N). The last term, identified by the product Ĩ*_{n−m} Ĩ_n, is the autocorrelation function of the image FFTs. The autocorrelation is the only term in Eq. 2 which has a computational complexity of O(N×N). By applying the Wiener-Khinchin theorem [26,27], the autocorrelation function can be evaluated via the power spectrum in the temporal frequency Fourier space.
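The equivalence of Eqs. (1) and (2) can be verified numerically. A minimal numpy sketch (our own variable names, synthetic data) that computes d(m) both from differences of the stored FFTs and from the three expanded terms:

import numpy as np

rng = np.random.default_rng(0)
N, H, W = 32, 16, 16
images = rng.normal(size=(N, H, W))
ffts = np.fft.fft2(images)                      # precomputed spatial FFTs

def d_direct(m):
    diffs = ffts[m:] - ffts[:N - m]             # linearity: FFTs of differences
    return np.mean(np.abs(diffs) ** 2, axis=0)

def d_expanded(m):
    t1 = np.mean(np.abs(ffts[:N - m]) ** 2, axis=0)
    t2 = np.mean(np.abs(ffts[m:]) ** 2, axis=0)
    t3 = np.mean(np.conj(ffts[:N - m]) * ffts[m:], axis=0)
    return t1 + t2 - 2.0 * t3.real

m = 5
print(np.allclose(d_direct(m), d_expanded(m)))  # True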
The advantage of computing the autocorrelation function via the Fourier transform in time is given by the speed-up provided by the FFT algorithm, allowing one to reduce the computational complexity from O(N×N) to O(N×log₂(N)).

Performance analysis

To compare the new algorithm with other available reference software [25,30-32], we developed a new software program that implements the algorithm described in ref. [25] and the new algorithm on CPU and GPU hardware, for a total of four execution modes. Further comparisons with other software are presented in "Appendix C", showing that the method reported in [25] was already one of the fastest approaches to calculate the structure function before the present work. To distinguish the two algorithms, we will refer to the method reported in ref. [25] as WITHOUT_FT and to the technique discussed in this article as WITH_FT, where the label FT stands for Fourier transform in time.

Both methods calculate the final result in two steps. The first step is common and consists of calculating and storing the FFTs of the images in the available free memory: RAM for the CPU versions and G-RAM (global RAM) for the GPU implementations. In the second step, the wave vectors are analyzed independently according to the different schemes. If the wave vector data exceed the capacity of the available memory, both algorithms split the job into several groups at the price of recalculating the image FFTs several times (see "Appendix D" for more details).

The program is written in C++11 and CUDA v.10.2 with graphical support of the OpenCV 3.0 library. We tested the program with the Fourier transform libraries CUFFT (version provided in CUDA v.10.2) for GPU execution and FFTW 3.3.3 [33] for the CPU implementations. The code was compiled with MS compiler v120 and the compiler of CUDA v.10.2 in Visual Studio 2019. The program was executed on a machine with the following specifications:
- CPU: Intel Core i9-9880H,
- 32 GB DDR4 RAM,
- Graphics card: NVIDIA Quadro RTX 4000 with 8 GB of dedicated G-RAM memory.

Test images of 512 × 512 pixels were taken from the experiment described in Sect. 2. For other sizes, synthetic images were generated with n × n pixels having 16 bit depth, similar to the real images. In our tests, we considered image sets composed of a maximum of 2^14 = 16384 images, and we limited the execution time of the program to less than 10^5 s.

In the first test, we ran all the algorithms on CPU and GPU with images composed of 512×512 pixels. For comparison, we made use of 8 GB of RAM for executing the program on the CPU. In this way, the CPU and the GPU could access the same amount of RAM and G-RAM, respectively. The execution times of the program are presented in Fig. 2, in which the times for all four execution modes are plotted as a function of the number of images used for the test. As expected from the results reported in ref. [25], the WITHOUT_FT algorithm executes more than 30 times faster on GPU than on CPU. The GPU hardware is also faster than the CPU in executing the WITH_FT algorithm, but the speed-up factor never exceeds a factor of two. We see that the WITH_FT scheme is faster than the WITHOUT_FT method when the number of images processed in one run of the program is larger than ∼ 1000. If the condition N ≳ 1000 is met, both the CPU and GPU versions of the WITH_FT algorithm execute quicker than the GPU-WITHOUT_FT implementation, reaching a maximum speed-up factor of 10-12 for 16384 images.
Figure 3 presents the fractional time spent by the program in the four modes to compute the images' FFTs (step one), process the time sequences (step two) and perform memory IO operations (disk and host-device). The IO operations named host-device include the data transfers between the RAM and the G-RAM, and they exist only in the GPU implementations.

Fig. 3: Fractional execution times. In this test, we used images of 512×512 pixels. The first row (graphs a, b) presents the fractional times spent in CPU mode and the second row (graphs c, d) in GPU mode. The first column shows the fractional times of the WITH_FT algorithm (graphs a, c) and the second column of the WITHOUT_FT algorithm (graphs b, d). Data of the CPU-WITHOUT_FT version for 16384 images are not reported because the total execution time exceeded 10^5 s.

In the figure, we normalized the fractional times by the total execution time to highlight the different workloads for executing each part of the program. As a function of an increasing number of images, the workload of step two compared to the other operations remains balanced in the CPU-WITH_FT implementation, and it reduces in the GPU-WITH_FT implementation. Conversely, the WITHOUT_FT algorithm spends more fractional time during the second step as the number of images increases, both in the CPU and the GPU modes. Combining the information of Figs. 2 and 3, we see the advantage of the new implementation applied to the problem of calculating the structure function. The WITH_FT algorithm is faster than the WITHOUT_FT scheme for a large number of images as a consequence of the reduction in computational complexity in processing the time sequences of the wave vectors.

In a second test, we analyzed the execution performance of the GPU-WITH_FT and GPU-WITHOUT_FT algorithms for squared images of different sizes. Figure 4 presents the ratio of execution times between the GPU-WITH_FT execution and the GPU-WITHOUT_FT execution for a different number of images and different image sizes. In analogy to the 512 × 512 pixels example, the WITH_FT method is faster than the WITHOUT_FT technique for more than ∼ 500−1000 images.

Fig. 4: Ratio of execution times on GPU of the WITHOUT_FT against the WITH_FT algorithm as a function of different numbers and sizes of images. The transparent red plane marks the condition in which both algorithms process the images within the same time.

The red plane in the figure marks the condition in which both algorithms complete execution in the same amount of time. We notice that small image sizes obtain a larger speed-up gain as compared to large images. For example, images composed of 128 × 128 pixels obtain up to a ∼ 100 speed-up gain in the execution time, against only ∼ 4 obtained with images composed of 1024×1024 pixels. In fact, the number of pixels per image affects the load of data transfer operations and FFT of the images in two ways. First, calculating the bidimensional FFT requires more time for images composed of many pixels. Second, the FFTs are calculated several times if the wave vector components of all the images exceed the available memory. Therefore, when processing images composed of many pixels, both the WITH_FT and the WITHOUT_FT algorithm must spend a large fraction of time preparing the time sequences before their analysis.
Considering for example the WITH_FT algorithm processing 16384 images, the first step and memory IO operations occupy 44% of the execution time with images composed of 1024 × 1024 pixels, and they occupy 62% of the execution time for images composed of 2048 × 2048 pixels. The performance loss caused by large datasets can be partially mitigated by adopting larger memory areas to store the image FFTs. As a final test, we processed 16384 images of 512 × 512 pixels with the CPU-WITH_FT algorithm, releasing to the program 23 GB of RAM. In this configuration, we obtained a speed-up of a factor of 2 compared to the previous tests in which the RAM was limited to 8 GB, thanks to the larger available memory area. In fact, the images' FFTs are recalculated six times when using 8 GB of RAM, but only two times when using 23 GB of RAM. We describe this test in more detail in "Appendix D".

Conclusion

In this article, we presented a new algorithm to calculate the structure function for image sets obtained by means of suitable optical techniques, like dynamic shadowgraphy, dynamic Schlieren, or differential dynamic microscopy. The algorithm is based on the temporal FFT of the image 2D spatial FFTs, rather than on differences of the latter. The software developed to implement the new algorithm has been tested against several other available software programs and it outperforms all of them by different factors depending on the image size and number. In particular, we tested the new software against the one we developed a few years ago [25]. While the old approach executes ∼ 30 times faster in the GPU mode as compared to the CPU mode, the new method executes all the calculations within a similar amount of time on the GPU and the CPU. This result can be a valuable one for all the scientists that are not equipped with GPU hardware.

The increased performance in terms of time-saving is in itself a non-negligible advantage. However, the main reason for developing more performing software is to try to achieve real-time analysis of the images, so that the scientist can judge the quality of the measurement and thus modify the experimental parameters during the measurement itself. Analyzing with a delay of some hours means that the experiment must be performed again if the resulting data are not good in terms of signal-to-noise ratio or are affected by other experimental issues.

The source code of the program developed in this work, which executes the algorithm both on CPU and GPU, is released under the GNU General Public License v.3 [34] and is freely available for download at [35]. The program can readily be used for calculating the structure function from an arbitrary set of images. A more efficient version of the code (about 10 times faster on GPU) is currently under development and will be made commercially available in the near future.

This work was supported by the "Investissements d'Avenir" French program managed by ANR (ANR-16-IDEX-0002). This work received funding from the French space agency CNES and the European Space Agency (ESA) within the ESA MAP Technes project. It also benefited from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement No 801110 and the Austrian Federal Ministry of Education, Science and Research (BMBWF). It reflects only the authors' view; the EU Agency is not responsible for any use that may be made of the information it contains. We thank the "Institutional open access agreements" between Springer-Verlag and the University of Innsbruck that allowed this publication as open access.
Author contribution statement

GC designed the algorithm. MN and GC developed the program. MN acquired and analyzed the data. MC tested the software against the other ones available in the literature. All the authors were involved in preparing the manuscript. All the authors have read and approved the final manuscript.

Data Availability Statement This manuscript has associated data in a data repository. [Authors' comment: The source code of the program developed in this work is released under the GNU General Public License v.3 [34] and is freely available for download at [35]. The datasets analyzed during the current study are available from the corresponding author on reasonable request.]

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix A: Azimuthal average

In this appendix, we describe how the azimuthal average of the structure function is calculated in our software program. Each d(m) matrix is reduced by the average to a vector d(m)_φ = f_k(m), where k is an integer number that indicates the amplitude of the wave vector. The average is performed over the pixels located at different circular sectors in the 2D-spatial-FFT plane, as depicted in Fig. 5. We perform the average with an antialiasing algorithm that splits each pixel into an 8 × 8 matrix. Each sub-pixel is assigned its fractional position (kx, ky) inside the matrix, so that its value can be averaged at the wave vector

k = [ √(kx² + ky²) ],

where "[...]" indicates the rounding operation.

Appendix B: Analysis of the time sequences

In this appendix, we describe how we implemented the computation of the structure function on a single time sequence by using Eq. 2 in order to obtain the final result. We split the calculation on the time sequence in two parts, d(m) = da(m) + dc(m), where:

da(m) = (1/(N − m)) Σ_{n=m}^{N−1} ( |Ĩ_{n−m}|² + |Ĩ_n|² ),   (B.1)
dc(m) = −(2/(N − m)) Σ_{n=m}^{N−1} Re( Ĩ*_{n−m} Ĩ_n ).   (B.2)

The term da(m), i.e., the average of the 2D spatial power spectra of the images, with a computational complexity of O(N), is calculated with an iterative formula that accumulates the power spectra, where the index n is in the range [0, N − 1]. The term dc(m) expresses the autocorrelation function of the time sequence of image FFTs. This second term is calculated by using the FFT in time, taking advantage of the Wiener-Khinchin theorem. In this process we consider two requirements. First, the maximum performance gain obtainable by using the FFT algorithm is expected if the support points of the time sequences are a power of two. Second, the summation over n of Eq. B.2 takes into account only N − m pairs of Ĩ_n functions. These two requirements are incompatible with evaluating the FFT directly on N support points. The incompatibility emerges because the number N is not, in some cases, a power of two, and because the FFT algorithm imposes periodical boundary conditions on the time sequence.
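A sketch of the azimuthal average of Appendix A with subpixel antialiasing is given below; the convention for the FFT frequency grid is our own assumption, while the 8×8 subdivision follows the text.

import numpy as np

def azimuthal_average(d: np.ndarray, sub: int = 8) -> np.ndarray:
    """Average the matrix d(m) over circular sectors of integer wave vector k.
    Each pixel is split into sub x sub subpixels to antialias the sector
    boundaries, as described in Appendix A."""
    H, W = d.shape
    fy = np.fft.fftfreq(H) * H          # integer FFT frequencies (our convention)
    fx = np.fft.fftfreq(W) * W
    off = (np.arange(sub) + 0.5) / sub - 0.5
    kmax = int(np.ceil(np.hypot(np.abs(fy).max(), np.abs(fx).max())))
    num = np.zeros(kmax + 1)
    den = np.zeros(kmax + 1)
    for oy in off:
        for ox in off:
            kr = np.rint(np.hypot(fy[:, None] + oy, fx[None, :] + ox)).astype(int)
            np.add.at(num, kr, d)       # accumulate subpixel contributions
            np.add.at(den, kr, 1.0)
    return num / np.maximum(den, 1.0)

rng = np.random.default_rng(2)
d = np.abs(np.fft.fft2(rng.normal(size=(64, 64)))) ** 2
print(azimuthal_average(d)[:5])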
To meet both requirements, we zero-padded the time sequences to N₂ support points, where N₂ is given by

N₂ = 2^⌈log₂(2N)⌉,

where "⌈...⌉" denotes the ceiling operation. This padding operation allows us to take advantage of the FFT speed-up and to calculate exactly Eq. 2, without any influence caused by the periodical boundary conditions. The calculation of dc(m) for a single time sequence can be broken down into the following operations: the time sequence is zero-padded to N₂ complex support points; the FFT in time of the padded sequence is evaluated; its squared modulus is computed; and the inverse FFT in time yields the autocorrelation sums of Eq. B.2, which are normalized by the number of pairs (N − m). Finally, da(m) and dc(m) are added together to obtain the structure function d(m).

Fig. 5: A few pixels of the d(m) matrix are drawn as solid black squares on the plane. All the pixels that intersect a generic circular sector k are considered to compute the azimuthal average of a d(m) matrix at the wave vector k. A generic pixel is enlarged to describe the antialiasing method adopted in the analysis. The subpixels having different colors are assigned to different wave vectors k in the azimuthal average, as indicated by the color-coded labels. In the figure, we depict the configuration in which the pixel is subdivided in a 5×5 matrix, but we used an 8×8 subdivision in the final analysis.

Appendix C: Comparison with other software

In this appendix, we compare the execution time of our algorithm with other software programs that calculate the structure function. For the comparison, we used the programs specified in refs. [25,30-32]. The program of ref. [25] has already been introduced in the main text with the name GPU-WITHOUT_FT. We will refer to the software reported in [30] as Soft_1, the software reported in [31] as Soft_2 and the software reported in [32] as Soft_3. Soft_1 and Soft_2 are written in Matlab and Soft_3 is written in Python. We benchmarked the execution time of all the programs against each other by analyzing images of 512 × 512 pixels. In Table 1, we present the total execution time of the different software programs to complete the calculation of the structure function as a function of the image number. We see that the GPU-WITHOUT_FT algorithm executes faster than, or within comparable times to, Soft_1-3 and was thus selected as the software for comparison in the main text. We see that, compared to Soft_1-3, the new program speeds up the calculation by a factor larger than 10 while processing more than 1024 images. For example, the WITH_FT algorithm is about 415, 12 and 18 times faster than Soft_1, Soft_2 and Soft_3, respectively, at processing 2048 images, for both the CPU and GPU versions, and about 3 times faster than the GPU-WITHOUT_FT.

Table 1: Execution times (in seconds) of the programs Soft_1, Soft_2, Soft_3, GPU-WITHOUT_FT (label "NO_FT"), GPU-WITH_FT (label "GPU") and CPU-WITH_FT (label "CPU"). The label "N" indicates the number of images. For the comparison, we used images composed of 512 × 512 pixels.

N     Soft_1  Soft_2  Soft_3  NO_FT  GPU  CPU
64    24      9       6       <1     1    2
128   70      17      11      1      14   2
256   246     35      51      3      16   4
512   923     80      113     9      20   8
1024  3516    181     215     31     27   17
2048  14494   430     630     113    38   35
4096  51343   1022    1370    446    86   117

Appendix D: Group execution

The program described in this work splits the calculations into groups if the data of all wave vector components for all the images exceed the available storage memory. The method WITHOUT_FT uses a first-in-first-out (FIFO) memory scheme already described in ref. [25]. This approach aims to calculate groups of complete d(m) matrices.

Fig. 6: Execution time of the WITH_FT algorithm on CPU hardware on images of 512 × 512 pixels and limited RAM of 23 GB.
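The sketch below illustrates, in numpy and with our own naming, how the correlation sums of Eq. B.2 can be obtained for a single wave vector's time sequence via the Wiener-Khinchin theorem; the padding to a power of two of at least 2N is our reading of the requirements stated above.

import numpy as np

def corr_term_fft(seq: np.ndarray) -> np.ndarray:
    """Autocorrelation sums c(m) = sum_n conj(seq[n-m]) * seq[n], m = 0..N-1,
    computed in O(N log N) via the Wiener-Khinchin theorem with zero padding."""
    N = seq.size
    N2 = 1 << int(np.ceil(np.log2(2 * N)))   # next power of two >= 2N (assumption)
    padded = np.zeros(N2, dtype=complex)
    padded[:N] = seq
    spec = np.fft.fft(padded)
    corr = np.fft.ifft(np.abs(spec) ** 2)    # linear correlation thanks to padding
    return corr[:N]

# Check against the direct O(N^2) sum on one random complex time sequence.
rng = np.random.default_rng(3)
seq = rng.normal(size=200) + 1j * rng.normal(size=200)
direct = np.array([np.sum(np.conj(seq[:200 - m]) * seq[m:]) for m in range(200)])
print(np.allclose(corr_term_fft(seq), direct))  # True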
The vertical line marks the crossing point at which the algorithm divides the execution from one into two groups. The labels "Step 1" and "Step 2" refer to the steps of the algorithm described in the main text, and the label "Disk" refers to the memory I/O operations.

The WITH_FT algorithm, instead, operates sequentially on different groups of wave vectors for all the d(m) and saves the partial results of each group on the hard drive. The partial results are merged at the final stage of the program. In practice, in both algorithms, the images are loaded and Fourier transformed one time for each group, because only a part of the FFT data can be saved in the local memory. The impact of repeating these operations on the entire execution time is presented in Fig. 6, in which we present the execution time as a function of the number of images to process. In this test, we executed the WITH_FT algorithm on CPU hardware with images of 512 × 512 pixels by releasing to the program 23 GB of RAM. In the figure, the vertical red line marks the crossing point from one-group to two-group execution. We see that the time spent by the program in memory operations and step one suddenly doubles on crossing the line. This happens because the bidimensional FFTs of the images and the corresponding I/O operations must be executed two times instead of only one.

Appendix E: Number of threads

The WITHOUT_FT algorithm executes efficiently with a parallel computing scheme on GPU hardware which makes use of all the available CUDA threads. This is not the case for the WITH_FT scheme. To analyze the influence of parallel computing on the execution time of the WITH_FT algorithm, we implemented the WITH_FT method with a user-configurable number of threads, both in the CPU mode and in the GPU mode. The number of threads in the CPU mode refers to the number of threads spawned to execute a particular task, such as the FFT operations. In the GPU mode, the number of threads selects the number of CUDA threads of each CUDA kernel. In both CPU and GPU modes, the number of threads also determines the number of time sequences that are processed in parallel. Figure 7 presents the total execution times of the program as a function of the number of threads for 8192 and 16384 images. In the test, we selected images composed of 512 × 512 pixels. Parallel computing achieves a minimal or even detrimental impact on the speed-up factor in the CPU mode. In the GPU mode, the performance gain saturates at around 32 threads, with a peak performance at 256 GPU threads. Based on this result, we selected the optimal number of two CPU threads and 256 GPU threads for all the other tests presented in this work.

Appendix F: Numerical deviations

In this appendix, we compare the numerical discrepancies of the program emerging from the execution of the DDM analysis by using different algorithms and hardware platforms. The deviations in the calculated structure functions are caused by the different numerical approaches and adopted libraries, each of which introduces different numerical errors. To quantify these discrepancies, we compared the analysis results of the data presented in Fig. 1 obtained by the four execution modes of the program. To quantify the deviations, we calculated the relative deviation δ as a function of the wave vector k for the azimuthal averages of the structure function.
We define the relative deviation of the structure functions in terms of the azimuthal averages f_k(m) and g_k(m) of the d(m) matrices obtained using different algorithms, where m refers to the time delay in the range [1, 2000]. Figure 8 shows the value of δ as a function of the wave vector k. The relative deviation between GPU and CPU in the case of WITHOUT_FT is always less than 10^−14, and it decreases for increasing wave vectors. A similar decreasing trend as a function of the wave vector is also visible for the WITH_FT algorithm, even though the relative deviations at small wave vectors are approximately 10^−7. We also note that the relative uncertainty shows an oscillatory trend, similar to the one visible in the structure function, but with an opposite phase.
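A sketch of such a comparison is given below, with an assumed definition of the relative deviation (the worst pointwise relative difference over all delays; the paper's exact estimator may differ).

import numpy as np

def relative_deviation(f: np.ndarray, g: np.ndarray) -> np.ndarray:
    """delta(k): worst relative difference over all delays m between two
    azimuthally averaged structure functions f[k, m] and g[k, m].
    NOTE: assumed estimator, for illustration only."""
    return np.max(np.abs(f - g) / np.abs(f), axis=1)

# Example: g mimics a second implementation differing at the 1e-7 level.
rng = np.random.default_rng(4)
f = 1.0 + rng.random((100, 2000))          # 100 wave vectors, 2000 delays
g = f * (1.0 + 1e-7 * rng.standard_normal(f.shape))
print(relative_deviation(f, g).mean())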
Evidence for the maximally entangled low $x$ proton in Deep Inelastic Scattering from H1 data

We investigate the proposal by Kharzeev and Levin of a maximally entangled proton wave function in Deep Inelastic Scattering at low $x$ and the proposed relation between parton number and final state hadron multiplicity. Contrary to the original formulation, we determine partonic entropy from the sum of gluon and quark distribution functions at low $x$, which we obtain from an unintegrated gluon distribution subject to next-to-leading order Balitsky-Fadin-Kuraev-Lipatov evolution. We find for this framework very good agreement with H1 data. We furthermore provide a comparison based on NNPDF parton distribution functions at both next-to-next-to-leading order and next-to-next-to-leading order with small $x$ resummation, where the latter provides an acceptable description of data.

Entanglement entropy

The proton is a coherent quantum state with zero von Neumann entropy. However, it has been argued in [1,2] that when the proton wave function is observed in Deep Inelastic Scattering (DIS) of electrons and protons, this is no longer true. In DIS, the virtual photon, with momentum q and virtuality q² = −Q², probes only parts of the proton wave function, which gives rise to entanglement entropy between observed and unobserved parts of the proton wave function, through tracing out inaccessible degrees of freedom of the density matrix. The resulting entanglement is then a measure of the degree to which the probabilities in the two subsystems are correlated; for other approaches where thermodynamical and momentum space entanglement entropy have been studied see [3-11]; for studies on Wehrl entropy see [12] and on jet entropy see [13]. Based on explicit studies of this entanglement entropy, both within a 1+1 dimensional toy model and leading order (LO) Balitsky-Kovchegov evolution [14-16], as well as entanglement entropy in conformal field theory, the authors of [1] conclude that DIS probes in the perturbative low x limit a maximally entangled state. With x = Q²/(2p · q) and p the proton momentum, the low x limit corresponds to the perturbative high energy limit, where Q² defines the hard scale of the reaction and sets the scale of the strong running coupling constant, α_s(Q²) ≪ 1. The perturbative low x limit of [1] corresponds then to the scenario where parton densities are high, but not yet saturated, and non-linear terms in the QCD evolution equations are therefore sub-leading. This is precisely the kinematic regime where perturbative low x evolution of the proton is described through Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution, which resums terms [α_s ln(1/x)]^n to all orders in α_s; it is this kinematic regime to which the results of [1] are supposed to apply at first. The proposal that DIS probes in the low x limit a maximally entangled state is closely related to the emergence of an exponentially large number of partonic micro-states which occur with equal probabilities P_n(Y) = 1/⟨n⟩, with ⟨n(Y, Q)⟩ the average number of partons at Y = ln(1/x) and photon virtuality Q. Entropy is then directly obtained as

S(x, Q²) = ln⟨n(Y, Q)⟩.   (1)

Assuming that the second law of thermodynamics holds for this entanglement entropy, the above expression yields a lower bound on the entropy of final state hadrons S_h through S_h ≥ S(x, Q²) [1].
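The statement that equiprobable micro-states maximize the entropy can be checked numerically: for P_n = 1/⟨n⟩ over ⟨n⟩ states, the Shannon/von Neumann entropy −Σ P_n ln P_n reduces to ln⟨n⟩. A small sketch of ours, purely illustrative:

import numpy as np

def shannon_entropy(p: np.ndarray) -> float:
    """S = -sum_n P_n ln P_n (natural log), ignoring zero-probability states."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

n_avg = 1000                      # number of equally likely partonic micro-states
flat = np.full(n_avg, 1.0 / n_avg)
print(shannon_entropy(flat), np.log(n_avg))     # identical: maximal entanglement

# Any non-flat distribution over the same states carries less entropy.
rng = np.random.default_rng(5)
skewed = rng.random(n_avg); skewed /= skewed.sum()
print(shannon_entropy(skewed) < np.log(n_avg))  # True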
"Local parton-hadron duality" [17] and the "parton liberation" picture [18] then suggest that partonic entropy coincides with the entropy of final state hadrons in DIS, see also the discussion for hadron-hadron collisions in [3]. The hadronic entropy can be further related to the multiplicity distribution of DIS final state hadrons. The latter has been obtained from HERA data in [3], which allows for a direct comparison of Eq. (1) to experimental data. Confirmation of Eq. (1) is of high interest, since it links hadron structure to final state multiplicities through entropy. If confirmed, it provides an additional constraint on parton distribution functions (PDFs). Moreover, entropy is defined non-perturbatively and the proposed relation is therefore not necessarily limited to perturbative events, unlike PDFs. Last but not least, entropy is subject to quantum bounds [20][21][22] and through Eq. (1) such bounds translate directly on bounds on the number of partons in the proton [1]. This is of particular interest for the search for a saturated gluon state commonly called the Color Glass Condensate at collider facilities such as the Large Hadron Collider and the future Electron Ion Collider. The explicit model calculations of [1] were based on solutions of purely gluonic LO low x evolution, where quarks appear only as a next-to-leading order (NLO) correction; it is therefore natural to assume that at first the total numbers of partons agrees with the number of gluons. In the following we find that for the kinematic regime explored at HERA, quarks are indeed sub-leading, but nevertheless numerically relevant for a correct description of data. We therefore propose in this letter that the average number of partons in Eq. (1) should be interpreted as the sum of the number of all partonic degrees of freedom, i.e. of quarks and gluons. Our description is based on the NLO BFKL fit [23,24] (HSS). Initial conditions of the HSS unintegrated gluon distribution have been fitted to HERA data on the proton structure function F 2 and the HSS fit provides therefore a natural framework to verify the validity of Eq. (1) and its conjectured relation to the final state hadron multiplicity. Moreover, the HSS fit is directly subject to NLO BFKL evolution [25] and therefore provides a direct implementation of linear QCD low x evolution. Results To compare the HSS unintegrated gluon distribution to data, we need to determine first PDFs, which will yield the total number of partons through where g(x, µ F ) (Σ(x, Q)) denotes the gluon (seaquark) distribution function at the factorization scale µ F . To this end we use the Catani-Hautmann procedure [26] for the determination of high energy resummed PDFs. At leading order, the prescription is straightforward for the gluon distribution function, which is obtained as where µ F denotes the factorization scale which we identify for the current study with the photon virtuality Q, and F(x, k 2 ) the unintegrated gluon distribution, subject to BFKL evolution. To obtain the seaquark distribution, we require a transverse momentum dependent splitting function [26] , where k denotes the gluon momentum and ∆ = q − zk with q the t-channel quark transverse momentum; T F = 1/2. Note that this splitting function reduces in the collinear limit k → 0 to the conventional leading order DGLAP splitting function P qg (z) = The integrated seaquark distribution is then obtained as [26] xΣ Note that in [27,28] a corresponding off-shell gluon-to-gluon splitting function has been determined. 
Within the current setup, this would allow in principle for the determination of the gluon distribution at next-to-leading order. The use of this splitting function for the determination of the gluon distribution function at NLO has however not been worked out completely so far. Moreover, the HSS fit is based on leading order virtual photon impact factors, which suggests the use of the leading order prescription Eq. (3) also for this study. The HSS unintegrated gluon density F(x, k², Q) is given in [29] as a Mellin integral in γ space, with an overall factor 1/k², where ĝ is an operator in γ space, with ᾱ_s = α_s N_c/π, N_c the number of colors, and χ(γ, Q, Q̄) the next-to-leading logarithmic (NLL) BFKL kernel, which includes a resummation of both collinear enhanced terms as well as a resummation of large terms proportional to the first coefficient of the QCD beta function; see App. A for details. Eq. (3) and Eq. (12) are now used to calculate, through Eq. (10), the partonic entropy of Eq. (1); the result is then compared to H1 data [3]. To calculate entropy for the H1 Q² bins, we employ the averaging procedure of Eq. (11). The results of our study are shown in Fig. 2, where we evaluate all expressions for n_f = 4 flavors. We find that the partonic entropy obtained from the total number of partons gives a very good description of H1 data [3] in the case of the HSS fit. As anticipated in [1], the purely gluonic contribution is clearly dominant and amounts to approximately 80% of the total contribution; nevertheless, the seaquark contribution is needed for an accurate description of H1 data. Given the approximations taken in the derivation of Eq. (10), as well as the possibility that sub-leading corrections are relevant for the determination of hadronic entropy from the multiplicity distribution, we believe that the above result provides an impressive confirmation of Eq. (10) and the results of [1] in general.

In [3] the data shown in Fig. 2 have been compared to Eqs. (1) and (10). Based on the original proposal of [1], only the gluon PDF has been used, for which the LO gluon distribution of the HERAPDF 2.0 set [30] has been chosen. While the use of a LO gluon PDF is somehow natural, since Eq. (1) does at the moment clearly not address questions related to collinear factorization at NLO and beyond, it is well known that the convergence of the gluon distribution is rather poor in the low x region; differences between the LO and NLO gluon amount to up to 100% in the low x region, see e.g. Fig. 26 of [30]. While there are still noticeable differences between the NLO and NNLO gluon distributions (of the order of 30% at x = 10⁻⁴), one can nevertheless argue that the gluon distribution starts to converge beyond leading order, and the values provided by the NLO gluon might be taken as a more realistic reflection of the true gluon distribution. To substantiate this point, we show in Fig. 2 also results based on an evaluation of Eqs. (1) and (10) with NNPDF collinear PDFs at NNLO [31]. We further show results obtained using NNLO NNPDF with next-to-leading logarithmic (NLL) low x resummation [32]. In both cases we assume µ_F = Q. While both PDF sets allow for an approximate description of data and may therefore serve as an additional confirmation of the correctness of Eq.
(10), a satisfactory description of the x-dependence is only possible using the low x resummed NNPDF set, which provides a very good description of the shape, with a slight offset in normalization. A different description of these data has been provided in [33], which uses the sea quark distribution only. The authors use however for their LO BFKL description the quark-to-gluon splitting function instead of the required gluon-to-quark splitting. The former is enhanced in the low x limit and yields an incorrect sea quark distribution, which is presumably of the order of the gluon distribution. We also could not reproduce the description which is based on the collinear NNLO sea quark distribution.

Conclusions

In this letter we followed the proposal of [1] to treat the low x proton in Deep Inelastic Scattering as a maximally entangled state, with an entanglement entropy given as the logarithm of the average number of partons in the proton. Unlike [1,33], we interpret the total number of partons as the sum of quark and gluon numbers, determined through the corresponding PDFs. While we agree with [1,3,9] that the quark distribution is sub-leading in the low x limit, we find that the seaquark distribution provides a numerically relevant contribution of the order of 20%. Our description is based on the determination of PDFs from an unintegrated low x gluon distribution, subject to BFKL evolution. For the numerical study, the HSS unintegrated gluon, which follows NLO BFKL evolution, has been used. Comparing our result with the final state hadron entropy extracted by the H1 collaboration [3], we find very good agreement with data if the total number of partons is taken as the sum of gluons and sea quarks. We also provided a comparison based on NNLO PDF sets by the NNPDF collaboration. While purely NNLO DGLAP PDFs provide only an approximate description of data, we find that NNLO DGLAP PDFs with NLL low x resummation provide a reasonable description of the slope of H1 data, which emphasizes again the role of low x dynamics for the determination of the proton as a maximally entangled state of partons. Note that such an agreement is not obtained if the comparison is based on leading order collinear PDFs, as used by the H1 collaboration. This clearly hints at the need to further refine the underlying theoretical framework, in particular to clarify in a systematic way the relation between entropy and PDFs within the framework of collinear and/or high energy factorization. This need is immediately apparent if Eq. (10) is evaluated using PDFs beyond leading order, which immediately implies a scheme dependence of the extracted parton number; strictly speaking, the latter can therefore no longer be related to the hadron multiplicity, which is a physical observable and therefore scheme independent. The description based on NNLO PDFs and NNLO low x resummed PDFs is therefore an approximation at best. Note that a similar limitation does not apply to the description based on the HSS fit, since the resulting PDFs are leading order from the point of view of collinear factorization and therefore scheme independent, while similar issues arise due to the use of high energy factorization beyond leading order in that case. Moreover, the relation to other frameworks, as studied in [3-9], needs to be clarified.
Furthermore, it will be interesting to explore possible deviations from this framework at lower values of Q and x due to the onset of nonlinear low x evolution, in particular effects due to saturated parton densities [34,35]. Appendix: Here M is a scale of the order of the hard scale of the process, while M̄ sets the scale of the running coupling constant. For the current study we set M = M̄ = Q and n_f = 4 with Λ_QCD = 0.21 GeV. The parameters Q₀ = 0.28 GeV and δ = 6.5 have been determined from a fit to the F₂ structure function in [23]. In this fit the overall running coupling constant has been evaluated at the renormalization scale µ² = Q·Q₀, with Q the photon virtuality. For the construction of parton distribution functions, µ² = Q² is however a more natural choice. We therefore reevaluated the underlying fit and found that the data on the proton structure function F₂ [37] are equally well described if we use µ² = Q² for the photon impact factor, with a normalization C = 4.31. It is this convention which we use in this study. Figure 2: Partonic entropy versus Bjorken x, as given by Eq. (11) and Eq. (10). We further show results based on the gluon distribution only, as well as on quarks and gluons together. Results are compared to the final state hadron entropy derived from the multiplicity distributions measured at H1 [3]. Erratum: There was a mistake in the scale choice of the running coupling in the gluon density that was used in the paper [1]; the mistake has already been corrected in [2]. The mistake was difficult to spot since the formulas that we used did not account for the fact that only the charged hadrons were measured. The numerical factors approximately canceled, and the net result is only slightly changed. The number of partons in the corrected formulas is [2]

⟨N⟩ = (2/3) [ x g(x, µ_F) + x Σ(x, Q) ],

where g(x, µ_F) (Σ(x, Q)) denotes the gluon (sea quark) distribution function at the factorization scale µ_F and, as described above, the factor 2/3 takes into account the fact that only charged hadrons were measured; see also the more detailed discussion in [2]. To calculate the entropy for the H1 Q² bins, we employ the same averaging procedure as above. The corrected results are shown in Fig. 2. There was also a typo in Eq. (5) of [1], which yields the formula for our determination of the sea quark distribution: the argument of the unintegrated gluon distribution had been given as x instead of x/z on the LHS of this equation. The corrected formula, which was actually used in the calculation, evaluates the unintegrated gluon density at x/z.
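To make the numerical procedure above concrete, the following is a minimal sketch (our illustration, not the authors' code) of how the corrected partonic entropy S = ln⟨N⟩, with ⟨N⟩ = (2/3)[x g + x Σ] as given above, can be evaluated from a collinear PDF set through the LHAPDF Python bindings. The PDF set name and the n_f = 4 flavor sum are assumptions made for illustration; the Q² bin averaging and the low x resummed set would be handled analogously.

```python
import math
import lhapdf  # LHAPDF Python bindings; the PDF set must be installed locally

# Assumed NNLO set for illustration; any LHAPDF set name can be substituted.
pdf = lhapdf.mkPDF("NNPDF31_nnlo_as_0118", 0)

def partonic_entropy(x, Q2):
    """S = ln<N> with <N> = (2/3)[x g + x Sigma] (charged-particle factor 2/3)."""
    xg = pdf.xfxQ2(21, x, Q2)  # xfxQ2 returns x * f(x, Q^2); PID 21 = gluon
    # Flavor-singlet quark sum Sigma: q + qbar for d, u, s, c (n_f = 4)
    x_sigma = sum(pdf.xfxQ2(pid, x, Q2) + pdf.xfxQ2(-pid, x, Q2)
                  for pid in (1, 2, 3, 4))
    return math.log(2.0 / 3.0 * (xg + x_sigma))

for x in (1e-4, 1e-3, 1e-2):
    print(f"x = {x:.0e}:  S = {partonic_entropy(x, Q2=10.0):.3f}")
```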
3,784.2
2021-10-12T00:00:00.000
[ "Physics" ]
Implementation of Legendre Neural Network to Solve Time-Varying Singular Bilinear Systems : Bilinear singular systems can be used in the investigation of different types of engineering systems. In the past decade, considerable attention has been paid to analyzing and synthesizing singular bilinear systems. Their importance lies in their real-world applications, such as economic, ecological, and socioeconomic processes. They are also applied in several biological processes, such as the population dynamics of biological species, water balance, temperature regulation in the human body, carbon dioxide control in the lungs, blood pressure, the immune system, cardiac regulation, etc. Bilinear singular systems naturally represent different physical processes, such as the fundamental law of mass action, the DC motor, induction motor drives, mechanical brake systems, aerial combat between two aircraft, the missile intercept problem, and the modeling and control of small furnaces and hydraulic rotary multi-motor systems. The current research work discusses the implementation of the Legendre Neural Network to evaluate time-varying singular bilinear systems and find the exact solution. Reference results were obtained from two methods, namely the RK-Butcher algorithm and the Runge-Kutta Arithmetic Mean (RKAM) method. Compared with these, the results attained from the Legendre Neural Network Method for time-varying singular bilinear systems proved to be accurate. As such, this research article established that the proposed Legendre Neural Network can be easily implemented in MATLAB. One can obtain the solution for any length of time from this method for time-varying singular bilinear systems. Introduction Differential Equations (DEs) are relations between functions and their derivatives. These DEs are the backbone of any sort of physical system. Partial differential equations (PDEs) and ordinary differential equations (ODEs) are the basis upon which most models in chemistry, physics, mathematics, engineering, etc., are built. In most cases, it is not simple to get an analytical solution for DEs. Therefore, researchers started considering new and dynamic numerical methods to approximate their solutions. Numerical methods have a few limitations, for instance high computational cost. However, they are widely used for resolving DEs, and they have evolved ever since the first differential equation was derived. Finite differences, finite elements, finite volumes, and spectral methods are some of the conventional methods available for the spatial discretization of Partial Differential Equations (PDEs) [1]. For discretizing Ordinary Differential Equations (ODEs), conventional methods such as the Euler Method, the Runge-Kutta Method, the RK-Gill Method [2], and the RK-Butcher Algorithm [3] are applied. Artificial Intelligence (AI) has experienced rapid development in recent years as researchers shifted their attention towards neural network methods [4]. Artificial Neural Networks (ANNs) are applied in a wide range of domains, such as control systems [5], image processing techniques [6], and pattern recognition [7], since they produce promising output. With this proven track record, neural network methods, and especially their function approximation capabilities, are applied to solve DEs through neural network models. A Legendre Neural Network was leveraged in the study conducted by Mall et al. [8], in which a novel method was proposed as a solution for ODEs.
Liu et al. [10] proposed a Legendre Neural Network to solve two classes of DEs, namely Delay Differential-Algebraic Equations with linear coefficients and Singularly Perturbed DEs [9]. Yang et al. [11] used a Legendre Neural Network-based algorithm for elliptic partial DEs. In the research conducted by Chen et al. [12], the researchers used a Block Trigonometric Exponential Neural Network to find a probable solution for a Continuous-Time Model. A new algorithm based on Artificial Neural Networks was proposed by Toni Schneidereit et al. to resolve ODEs [13]. In the current research paper, the author proposes a novel approach to resolve time-varying singular bilinear systems with the help of the highly accurate Legendre Neural Network method [14]. Legendre Neural Networks A single-layer Legendre Neural Network consists of two components, an input node and an output node [15]. Its functional expansion depends on Legendre polynomials. Legendre polynomials constitute a set of orthogonal polynomials which are obtained as the solution of the Legendre differential equation. They are denoted L_n(u), in which n is the order of the polynomial, whereas u lies between −1 and 1. Fig. 1 shows the structure of the Legendre Neural Network. Having its functional expansion based on the Legendre polynomials P_n(x), the single-layer Legendre Neural Network has one input and one output. The mathematical model of the Legendre Neural Network with N nodes of the polynomial P_n(x) is as follows:

y_A(x) = Σ_{j=1}^{N} α_j P_j(w_j x + b_j). (1)

Here, the network's input value is denoted by x, the output is denoted by y_A, the weight of the input node of the j-th hidden node is denoted by w_j, b_j corresponds to the threshold of the j-th hidden node, and finally, the output weight of the j-th hidden node is denoted by α_j. To simplify Eq. (1), let us take w_j = 1 and b_j = 0; then the model in Eq. (1) becomes

y_A(x) = Σ_{j=1}^{N} α_j P_j(x). (2)

As per the universal approximation theorem, y_A(x) can approximate the analytical solution y(x) of the differential equation. The interval is discretized into collocation points, which include the boundary points. The weights α_j can then be solved as given herewith. This can be described simply as follows: the matrix H, the left-hand term in Eq. (4), corresponds to the neural network's output matrix after applying the linear operator L, and B_f is the right-hand side of Eq. (4). To mitigate the error between the proper solution y(x) and the approximate solution y_A(x), the optimization is done using the Extreme Learning Machine (ELM) algorithm [16] (a small numerical sketch of this training step is given at the end of this section). Time-Varying Singular Bilinear Systems Here, the first-order time-varying singular system is considered:

K ẋ(t) = A x(t) + B u(t), x(0) = x₀.

In this equation, K corresponds to an n × n singular matrix, whereas the n × n matrix is denoted by A and the n × r matrix is denoted by B. The n-state vector is denoted by x(t), while the r-input vector corresponds to u(t). Based on the above discussion, the time-varying singular bilinear system is rewritten in the form of Eq. (9) given below, where E(t) ∈ R^{n×n} denotes the singular matrix, x(t) ∈ R^n the state, and u(t) the control input. It is challenging to solve a time-varying singular bilinear system compared to its counterpart, i.e., the time-invariant singular bilinear system [17]. Therefore, various researchers have attempted different transformation methods to overcome this challenge.
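As flagged above, here is a small numerical sketch of the ELM-style training step. It is our illustration in Python/NumPy rather than the paper's MATLAB implementation, applied to the toy problem y' + y = 0, y(0) = 1 (exact solution e^{-x}); the collocation grid and N = 10 basis terms are assumptions made for demonstration.

```python
import numpy as np
from numpy.polynomial import legendre as L

N = 10                         # number of Legendre basis terms (hidden nodes)
x = np.linspace(0.0, 1.0, 40)  # collocation points on [0, 1]
t = 2.0 * x - 1.0              # map to the natural Legendre domain [-1, 1]

# Basis matrix P[i, j] = P_j(t_i) and its x-derivative (chain-rule factor 2).
I = np.eye(N)
P  = np.stack([L.legval(t, I[j]) for j in range(N)], axis=1)
dP = np.stack([L.legval(t, L.legder(I[j])) for j in range(N)], axis=1) * 2.0

# Collocation rows enforce y' + y = 0; one extra row enforces y(0) = 1.
H  = np.vstack([dP + P, P[:1]])
Bf = np.concatenate([np.zeros(len(x)), [1.0]])

# ELM-style training: the output weights alpha solve H alpha = Bf in the
# least-squares sense; no gradient-based iteration is needed.
alpha, *_ = np.linalg.lstsq(H, Bf, rcond=None)

print(np.max(np.abs(P @ alpha - np.exp(-x))))  # max deviation from exp(-x)
```

The same pattern, building H by applying the differential operator to the basis, appending boundary rows, and solving for α by least squares, carries over to the systems considered in this paper.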
The current research study leveraged the Legendre Neural Network to find a highly accurate solution for a time-varying singular bilinear system [18]. Simulation Example In this research work, the author considered a time-varying singular bilinear system proposed earlier [19,20]. Solving Eq. (5) yields the exact solution for x(t). The Legendre Neural Network was used to further assess the discrete solution of Eq. (10), and in this stage a step size of 0.25 was used. The results attained from the two reference methods, the RK-Butcher algorithm and the Runge-Kutta Arithmetic Mean Method (RKAM), were compared with the solution attained from the Legendre Neural Network. Tabs. 1 and 2 show these results together with the analytic solution determined using Eq. (12). The tables further show the error between the analytic solution and the discrete solutions. Conclusions The Legendre Neural Network obtained highly accurate discrete solutions compared to the other methods, the RK-Butcher algorithm and the Runge-Kutta Arithmetic Mean Method (RKAM). It can be observed from Tabs. 1-2 that the Legendre Neural Network Method attained only minimal absolute error, whereas the RKAM and RK-Butcher algorithms produced considerably larger errors. To conclude, the current study results established that the Legendre Neural Network is a promising candidate for evaluating time-varying singular bilinear systems.
1,827
2021-01-01T00:00:00.000
[ "Mathematics" ]
Deep Learning Spectroscopy: Neural Networks for Molecular Excitation Spectra : Deep learning methods for the prediction of molecular excitation spectra are presented. For the example of the electronic density of states of 132k organic molecules, three different neural network architectures, namely the multilayer perceptron (MLP), the convolutional neural network (CNN), and the deep tensor neural network (DTNN), are trained and assessed. The inputs for the neural networks are the coordinates and charges of the constituent atoms of each molecule. Already the MLP is able to learn spectra, but the root mean square error (RMSE) is still as high as 0.3 eV. The learning quality improves significantly for the CNN (RMSE = 0.23 eV) and reaches its best performance for the DTNN (RMSE = 0.19 eV). Both the CNN and the DTNN capture even small nuances in the spectral shape. In a showcase application of this method, the structures of 10k previously unseen organic molecules are scanned and instant spectra predictions are obtained to identify molecules for potential applications. Introduction Spectroscopy is central to the natural sciences and engineering as one of the primary methods to investigate the real world, study the laws of nature, discover new phenomena, and characterize the properties of substances or materials. Spectroscopic materials properties must be known to design novel applications. For example, bandgaps are critical for solar cells, optical spectra for organic electronics, vibrational spectra to discover new thermoelectrics for waste heat recovery, X-ray spectra for better medical diagnostic materials, or conductivity spectra for light-weight batteries with high storage capacity. Different spectroscopic techniques reveal different properties, and every material is characterized by a variety of spectra. Current spectroscopic methods, such as absorption, emission, scanning tunneling, Raman, or electron paramagnetic resonance, are well established. However, experiments are often time-consuming and sometimes require large, multi-million-Euro facilities, such as synchrotrons. Complementary theoretical spectroscopy methods based on quantum-mechanical first principles are similarly time-consuming and require large-scale, high-performance computing facilities. Spectroscopy has seen many technical advances in individual spectroscopic methods, but no recent paradigm shift that would overcome the time-cost conundrum. Here we show that artificial intelligence (AI) has the potential to trigger such a conceptual breakthrough toward data-driven spectroscopy. We present the first step toward building an AI-spectroscopist to harvest the wealth of already available spectroscopic data. The AI-spectroscopist is based on custom-made deep neural networks that learn spectra of organic molecules. Our neural networks predict the peak positions of molecular ionization spectra with an average error as low as 0.19 eV and the spectral weight to within 3%. This accuracy is already sufficient for our example application on photoemission spectra, which typically have an experimental resolution of several tenths of an eV and theoretical error bars of 0.1-0.3 eV. Once trained, the AI-spectroscopist can make predictions of spectra instantly and at no further cost to the end-user. In this new paradigm, deep learning spectroscopy would complement conventional theoretical and experimental spectroscopy to greatly accelerate the spectroscopic analysis of materials, make predictions for novel and hitherto uncharacterized materials, and discover entirely new molecules or materials.
We demonstrate this by using our AI-spectroscopist to make predictions for a new dataset of organic molecules that was not used in training the deep neural networks. At no further computational cost, we make spectra predictions for the 10 000 molecules of the diastereomers dataset of Ramakrishnan et al. [1,2] This gives us an overview of the spectral characteristics of the new dataset and helps us to identify interesting molecules for further analysis. In the future, we could extend this quick screening application to large numbers of organic molecules whose spectra have not been measured or computed, but are required for developing an application or analyzing an experiment. Previous Machine Learning Attempts for Spectral Properties AI methods, which encompass machine learning methods, are gaining traction in the natural sciences and in materials science. However, previous work has focused on scalar quantities such as bandgaps and ionization potentials. For solids, only bandgap values and densities of states at the Fermi level have been learned with kernel ridge regression, [18,26,27] support vector machines, [28] reduced-error pruning trees and rotation forests, [19] gradient boosted decision trees, [25] and Bayesian optimization. [23] For molecules, kernel ridge regression [29] and neural networks [5,24] have been applied to learn ionization potentials and electron affinities, or nuclear magnetic resonance (NMR) chemical shifts. [30] Both bandgaps and ionization potentials are single target values. The learning of continuous curves, such as spectra, is not frequently attempted. In this study we compare the performance of three deep neural network architectures to evaluate the effect of model choice on the learning quality. We perform both training and testing on consistently computed (theoretical) spectral data to exclusively quantify AI performance and eliminate other discrepancies, unlike an early study [31,32] which compared predictions from theory-trained neural networks against experimental data. In further contrast with early work, [31] we probe model performance with dataset size by utilizing spectra for 10^5-10^6 organic molecules, sizes increasingly available from modern database resources.
Molecular Representation In this work we approach molecules from an atomistic perspective, in which the atomic structure, that is, the coordinates of all the constituent atoms, is known precisely. This atomistic representation is natural to theoretical spectroscopy, as the spectral properties can then directly be calculated from approximations to the Hamiltonian of each molecule. In general, representation (or feature engineering) is an important aspect in machine learning. How to best present molecules and materials to an AI for optimal learning, prediction, and inference has been a pressing question in chemistry and materials science for the last few years, [33] and several different representations have been tried. [3,4,9,21,25,27,29,31,[33][34][35][36][37] To represent the molecules to two of our three neural networks, we use the Coulomb matrix

C_{ij} = 0.5 Z_i^{2.4} for i = j, and C_{ij} = Z_i Z_j / |R_i − R_j| for i ≠ j,

where Z_i is the atomic number (nuclear charge) of atom i and R_i its position. The diagonal elements i = j have been fit to the total energies of atoms (0.5 Z_i^{2.4}). [4] A typical Coulomb matrix is shown in Figure 1 for the N-methyl-N-(2,2,2-trifluoroethyl)formamide molecule. The Coulomb matrix is appealing due to its simplicity and efficiency. We will show here that it provides sufficient input for learning molecular spectra, if the neural network architecture is sophisticated enough (a minimal construction sketch is given at the end of this article). Method: Neural Network Architectures In this work, we chose neural networks due to their ability to learn complex mappings between input and target spaces (such as the Hamiltonian in quantum mechanics). Neural network models have surged in popularity recently, since they can express complex function mappings using inputs with very little or no feature engineering. Here we explored three neural network architectures, illustrated in Figure 2: a) the multilayer perceptron (MLP), which is one of the simplest architectures and accepts vectors as input; b) the convolutional neural network (CNN), which accepts tensors as input; and c) the deep tensor neural network (DTNN), a custom design for molecular data by Schütt et al. [22] Each of the above is a deep network architecture. The depth, for example in an MLP, arises from stacking multiple hidden layers. Each hidden layer accepts the output from the previous layer as input and returns a nonlinear affine transformation as the output. The MLP was chosen because of its architectural simplicity and also because a similar network was used earlier [5] to predict fourteen different molecular properties simultaneously. Conversely, the CNN is the neural network of choice in image recognition. Much like an image, which is a matrix (or tensor) representation of a real-world object, the Coulomb matrix is a matrix representation of a real molecule containing spatially repeating patterns, so we expect the CNN to perform well. Making another conceptual leap, we adopt the DTNN architecture [22] that has been motivated by previous architectures for text and speech recognition [38,39] and has recently been used to predict atomization energies of molecules. [22] In the DTNN, the atoms are embedded in each molecule like words in a text. The interactions between atoms and their surroundings are represented by an interaction tensor (the red block in Figure 2c) which is learned iteratively. Each atom in the molecule has its own interaction tensor, which in the first interaction pass encodes interatomic distances.
In the second interaction pass the tensors learn angles between three different atoms, and in subsequent passes higher-order interatomic relations (e.g., dihedral angles). The DTNN encodes local atomic environments in a similar fashion as the many-body tensor representation (MBTR) recently proposed by Huo and Rupp. [21] However, the DTNN is designed to learn this representation rather than to expect it as input. Training and Hyperparameter Optimization The hyperparameters of each neural network (e.g., the number of hidden layers and the nodes within them) are determined with Bayesian optimization for each dataset. This is a critical step, since it has been shown [40] that effectively tuned network hyperparameters can achieve higher prediction accuracy than manually chosen ones. We used 90% of each dataset for training, and the rest was split equally between validation and test sets. The networks were trained by backpropagation with the Adam [41] update scheme. Root mean square errors (RMSE) and squared correlations R² were evaluated for the test set of molecules that the neural networks had not "seen" before. We take R² as a quality measure for the learning success of our neural networks, whereas the RMSE quantifies the predictive accuracy for excitation energies. We refer the reader to the Supporting Information for details on the DNN architecture, hyperparameters, and training algorithm. Datasets We use the QM7b [5,42] and QM9 [2,43] datasets of organic molecules to train the AI-spectroscopist. We optimized the structures of all molecules with the Perdew-Burke-Ernzerhof (PBE) [44] density functional augmented with Tkatchenko-Scheffler van der Waals corrections (PBE+vdW) [45] as implemented in the Fritz Haber Institute ab initio molecular simulations (FHI-aims) code. [46,47] After discarding molecules with fewer than sixteen occupied energy levels, we were left with 5883 and 132531 molecules, which are henceforth referred to as the 6k and 132k datasets, respectively. In each set we collect the highest 16 occupied PBE+vdW eigenvalues as excitation energies for each molecule. The molecular spectra are then computed by Gaussian broadening (0.5 eV) of these eigenvalues into the occupied density of states. The resulting curve was discretized with 300 points between −30 and 0 eV. Level broadening encompasses vibrational effects, finite lifetimes, and spectrometer resolution; we discuss our dataset choices in relation to our findings further on. For the application test, we use the 10k diastereomers dataset of Ramakrishnan et al. [1,2] It contains 9868 "additional" diastereoisomers of 6095 parent C7H10O2 isomers from the 134k dataset. [2] The molecules in this 10k set are not part of the 134k set and were used by Ramakrishnan et al. to validate their delta-learning approach. [1] We here use only the molecular coordinates from the 10k set and obtain the corresponding spectra with the trained deep learning framework. Results First we discuss the simultaneous prediction of the 16 molecular eigenvalues in our datasets. Figure 3 shows the RMSE and R² values for the three different neural network architectures and the 6k and 132k datasets. We observe that only the DTNN 132k performs uniformly well across all 16 states. For the other networks the predictions of the deeper levels have the highest R² values and are therefore learned "best" regardless of the model and the dataset size. However, the predictive accuracy is still relatively low (high RMSE) for some networks.
Figure 2: Green circles to the left represent the molecular input and yellow circles to the right the output (here 16 excitation energies or the molecular excitation spectrum). The gray blocks are schematics for fully connected hidden layers, convolutional blocks, pooling layers, and state vectors. Nodes corresponding to atom types in the DTNN are represented as blue squares and the distance matrix between different atoms as pink squares. Parameter tensors (red squares) project the vectors encoding atom types and the interatomic distance matrix into a vector with the same dimensions as the atom type encodings. The DTNN is evaluated iteratively, building up more complex interactions between atoms with each iteration. This seemingly contradictory behavior likely arises because the lower energy levels (from 11 to 15) of smaller molecules correspond to electronic core states, which have a significantly higher absolute energy than valence states. While the core states are easily learned, predictions with a low relative error at this end of the spectrum can result in absolute errors of several tens of eV and give rise to high RMSE values. The learning quality then decreases gradually (the R² value decreases) the closer the state is to the highest occupied molecular orbital (state number 0) and then rises again from state 3 to 0. Interestingly, the RMSE exhibits an inverse correlation: it first improves and then rises again for the last 4 states. The best predictions are given by the DTNN 132k and have an RMSE of only 0.16 eV, with an average RMSE of 0.186 eV (see Table 1). Next we consider the spectra predictions for the CNN and DTNN trained on the 132k set, as shown in Figure 4. For spectra we calculate the relative difference (or relative spectral error, RSE) between the predicted and the reference spectrum. The first column of Figure 4 shows RSE histograms for 13 000 test molecules from the 132k dataset. The RSE distribution is narrow and the typical error is around 4% for the CNN and 3% for the DTNN; very low for both neural networks. To understand the spectra predictions better, we picked three spectra that are representative of the best, average, and worst predictions made by the CNN and DTNN and plotted them in Figure 4 with the corresponding reference spectrum. We observe that the best predictions are able to capture all features of the reference spectrum. The average predictions of the CNN miss spectral features, but capture the average shape of the spectrum. The worst CNN predictions do not represent the reference spectrum well. The DTNN does much better in both categories: it captures most spectral features, but still averages through some. Table 1 provides a performance summary for the neural networks we have tested. It confirms our observation that both the amount of training data and the complexity level of the neural network improve the predictive power. The DTNN is our best-performing network, with an average error of 0.19 eV for energy levels and 3% for spectral intensity. Application To showcase the power of our deep spectra learning method, we present a first application of the AI-spectroscopist. For the 10k dataset we have information on the structure of each molecule, but no spectra. Computing the spectra with DFT would take considerable computational effort and time. With the AI-spectroscopist, we gain an immediate overview of the spectral content of the dataset. A summary of the prediction is shown in Figure 5.
Panel a shows a histogram of the number of molecules that have spectral intensity (above a 0.1 threshold) at a given energy. It tells us that the spectral intensity in this dataset is uniformly distributed between −18 and −2 eV for all molecules. Only four molecules have peaks below this range. The average spectrum, obtained by summing up all predicted spectra and dividing by the number of molecules, is shown in Figure 5c. This is the typical spectrum to expect from this dataset. The spectral scan in Figure 5a also allows us to quickly detect molecules of interest in a large collection of compounds. The four molecules with spectral intensity below the main region and the molecules with the highest ionization energy can be easily identified, as illustrated in Figure 5b. Various molecules of interest, e.g., structures with peaks in particular regions of the spectrum, could then be further investigated with electronic structure methods or experiments to determine their functional properties. In this fashion, the fast spectra prediction mode of our AI-spectroscopist could be applied to the inverse mapping problem. Here, we seek to learn the structures of molecules or materials that exhibit certain properties. Inferring the atomic structure from a measured spectrum can be achieved with generative models, [48] where AIs exposed to certain content are trained to produce similar content. However, most machine learning research to date has focused on generative models for continuous data like images and audio, and not on the more difficult problem of generating discrete data such as molecules or materials. For solid clusters, simple inverse relations have recently been established between X-ray absorption spectroscopy (XAS) [49][50][51] and the coordination shells of atoms. Table 1. Summary of the RMSE for the 16 excitations and the RSE for spectra for the 6k and the 132k datasets. The results are averages over 5 runs, except for the spectra predictions of the 132k dataset, which were averaged over 3 runs. The resulting statistical error is at most ±0.003 and has therefore been omitted from the table. For molecules,
As expected, their respective accuracies increase with the number of data points. However, the DTNN trained on only the 6k dataset almost outperforms the CNN trained on the 132k set. This illustrates that a purpose designed NN architecture can learn from fewer data points. Discussion Regarding spectra predictions, even the worst predictions of the DTNN might still look good to a spectroscopist, as the overall shape and peak positions of the spectrum are captured well. The main differences between the DTNN prediction and the reference spectrum are slight peak shifts and an overall spectral weight reduction. Slight peak shifts lead to a large intensity difference, but only small difference in the peak energies, which is the more important observable in spectroscopy. Our current spectral metric is very sensitive to peak positions. This is in principle desirable, since it forces the neural networks to prioritize on peak positions (and thus excitation energies). However, for many complex spectra, peaks due to individual excitations merge into a broader spectral structure. In such cases, it might be more suitable to adjust future metrics to better capture spectral shapes. In X-ray diffraction (XRD) and low energy electron diffraction (LEED) studies the same problem arises, as theoretical spectra computed for model structures are compared to experimental spectra to find the best structural model. We will investigate the cosine or Pearson correlation coefficient and the Jensen-Shannon divergence measure [54] as well as the Pendry R-factor [55] in the future. This will also help us to prevent negative peaks in the predicted spectral functions. In this work we used the Kohn-Sham spectrum for simplicity. While Kohn-Sham eigenvalues do not correctly represent molecular excitation energies, they provide us with a convenient and large approximate dataset for developing and testing the AI-spectroscopist. In the future, we will extend our study to photoemission spectra computed with the GW method. [56,57] Due to the much higher computational expense, we will always have more data from lower fidelity methods such as DFT-PBE. To reconcile datasets at different fidelity levels, we are considering Δ-learning techniques [10] that would learn the difference between two different fidelity levels (here PBE and GW or co-kriging techniques [23,58] that learn different fidelity levels simultaneously. Our deep learning schemes are fully transferable to better accuracy computational datasets, but also to experimental spectra. We chose a relatively large broadening of computed electronic levels to mimic the resolution of common photoemission experiments, which produce broad and often fairly featureless molecular spectra. Future studies will address the effect of this broadening on the learning success, but our current findings indicate good quality predictions on broad spectral curves. Conclusion In summary, we demonstrated that deep neural networks can learn spectra to 97% accuracy and peak positions to within 0.19 eV. Our neural networks infer the spectra directly from the molecular structure and do not require auxiliary input. We also show that, contrary to popular belief, neural networks can indeed work well will smaller datasets, if the network architecture is sufficiently sophisticated. The predictions made by the neural networks are fast (a few milliseconds for a single molecule), which facilitates applications to large databases and high throughput screening. 
Our proof-of-principle work can now be extended to build more versatile AI-spectroscopists. Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
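As a compact illustration of the data pipeline described above, the following minimal sketch (our reconstruction for illustration, not the authors' released code) builds the Coulomb-matrix input for one molecule and turns a set of occupied eigenvalues into the Gaussian-broadened (0.5 eV) target spectrum on 300 grid points between −30 and 0 eV. The water-like geometry and its units are illustrative assumptions; in practice the Coulomb matrix is also padded to the size of the largest molecule in the dataset to obtain a fixed input dimension.

```python
import numpy as np

def coulomb_matrix(Z, R):
    """Coulomb matrix: 0.5 * Z_i**2.4 on the diagonal, Z_i*Z_j/|R_i - R_j| off it."""
    Z, R = np.asarray(Z, dtype=float), np.asarray(R, dtype=float)
    D = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    np.fill_diagonal(D, 1.0)               # placeholder to avoid division by zero
    C = np.outer(Z, Z) / D
    np.fill_diagonal(C, 0.5 * Z ** 2.4)    # diagonal fitted to atomic total energies
    return C

def broadened_dos(eigenvalues_eV, sigma=0.5, grid=np.linspace(-30.0, 0.0, 300)):
    """Occupied density of states: sum of unit-area Gaussians at each eigenvalue."""
    e = np.asarray(eigenvalues_eV, dtype=float)[:, None]
    g = np.exp(-0.5 * ((grid[None, :] - e) / sigma) ** 2)
    return g.sum(axis=0) / (sigma * np.sqrt(2.0 * np.pi))

# Illustrative example: a water-like geometry (coordinates assumed, not from the paper).
Z = [8, 1, 1]
R = [[0.00, 0.00, 0.12], [0.00, 0.76, -0.47], [0.00, -0.76, -0.47]]
print(coulomb_matrix(Z, R).round(2))
print(broadened_dos([-25.0, -13.0, -9.0, -7.0]).shape)  # (300,) target vector
```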
5,241.6
2019-01-29T00:00:00.000
[ "Chemistry", "Computer Science" ]
Anti-Aging Properties of Chitosan-Based Hydrogels Rich in Bilberry Fruit Extract : Photoaging is a process related to an increased level of reactive oxygen species (ROS). Polyphenols can scavenge free radicals in the body, which can delay skin aging. Therefore, our work aimed to prepare a biologically active extract from the dry fruits of Vaccinium myrtillus or Vaccinium corymbosum and use it for the preparation of hydrogels for topical application. To this end, eight different extracts (using V. myrtillus and V. corymbosum and different extraction mixtures: methanol, methanol-water 1:1, water, acetone-water 1:1) were prepared, and their phytochemical properties (total polyphenolic content, total flavonoid content, total anthocyanin content) and biological properties (antioxidant, anti-hyaluronidase, and anti-tyrosinase activity) were assessed. Cytotoxicity towards HaCaT keratinocytes was also determined. Based on the results, the acetone-water extract from V. myrtillus was selected for further study. Using the Design of Experiments approach, chitosan-based hydrogels with bilberry fruit extract were prepared. The contents of extract and chitosan were selected as independent factors. The activity of the hydrogels depended on the extract content; however, the enzyme-inhibiting (anti-hyaluronidase and anti-tyrosinase) activity resulted from the presence of both the extract and chitosan. An increased concentration of chitosan in the hydrogel base led to an increased viscosity of the hydrogel and, consequently, a slower release of active compounds. To obtain optimal hydrogel characteristics, 1% extract and 2.5% medium molecular weight (MMW) chitosan were utilized. The research suggests the validity of using bilberry fruit extracts in topical preparations with anti-aging properties. Introduction Skin aging is a natural physiological process that depends on intrinsic factors such as genetics, cell damage, and changes in the intercellular matrix. Repeated excessive exposure to UV solar radiation and other harmful environmental factors induces skin cell alterations similar to those observed during the aging process [1]. Thus, UV radiation absorbed by the skin leads to an increase in the concentration of reactive oxygen species (ROS) in the tissues and, as a consequence, may cause lipid peroxidation, DNA damage, and the modification of proteins and genes [2,3]. Moreover, the presence of ROS may stimulate the activity of some enzymes, including hyaluronidase, which destroys hyaluronic acid, one of the factors involved in maintaining the proper configuration of elastin and collagen fibers in the skin and in ensuring proper skin hydration [4]. Therefore, a reduced amount of hyaluronic acid decreases skin elasticity and firmness [4]. Because of that, current strategies to decrease the skin aging process include the inhibition of enzymes that can destroy the structural integrity of the skin, e.g., hyaluronidase [5].
Optimization of the Extraction Process of Vaccinium myrtillus and Vaccinium corymbosum Fruits Regarding Their Biological Activity The first part of the study was dedicated to choosing the most promising extract from the dry fruits of one of the Vaccinium species. For this purpose, the extracts were prepared using four solvents in the first step. Subsequently, phytochemical characterization (the total content of polyphenols, flavonoids, and anthocyanins) was performed using spectrophotometric analysis. The screening of the biological potential of the obtained extracts was carried out by in vitro methods. Antioxidant properties were determined using the DPPH assay; the ability to inhibit hyaluronidase and tyrosinase was also tested. Finally, the cytotoxicity of the extracts towards human keratinocytes (HaCaT) was determined using the MTT method. Extracts Preparation To obtain extracts with different phytochemical and biological characteristics, 5.0 g of finely ground dry fruits of V. myrtillus or V. corymbosum was extracted using an ultrasonic bath (40 °C). The extraction process was repeated four times, each for 20 min, using fresh portions (50 mL) of the appropriate solvent (methanol, water, methanol-water 1:1 v/v, or acetone-water 1:1 v/v). The obtained extracts were filtered and concentrated using a rotary evaporator to a volume of 50 mL (0.1 g fruit dry weight/mL) and kept at −20 °C until further studies. Total Polyphenols Content, Total Flavonoids Content, Total Anthocyanins Content Evaluation, and Content of Active Compound The total polyphenolic content (TPC) was examined using the Folin-Ciocalteu method [21]. The gallic acid calibration curve was used to calculate the content of polyphenolic compounds, and the obtained results were presented as mg gallic acid equivalent/g of fruit dry weight (mg GAE/g of FDW). The average from n = 6 measurements was presented. The total flavonoid content (TFC) was determined using the AlCl₃ method [21]. The quercetin calibration curve was used to calculate the content of flavonoids, and the obtained results were presented as mg quercetin equivalent/g of fruit dry weight (mg QE/g of FDW). The average from n = 6 measurements was presented. For the evaluation of the total anthocyanin content (TAC) in V. myrtillus and V. corymbosum dry fruits, the spectrophotometric method from the Polish Pharmacopoeia, which takes into account the European Pharmacopoeia 9.0 regulations, was performed [22]. The extract was filtered, and a dilution in 0.1% v/v HCl in methanol was used to measure the absorbance. The results were presented as a percentage of cyanidin 3-glucoside. The average from n = 2 measurements was presented. The chlorogenic acid content in the prepared extracts was determined with the previously described HPLC method [23] at a detection wavelength of 325 nm (Figure S1, Supplementary Material). The method was validated for chlorogenic acid, and the validation parameters are presented in Table S1 (Supplementary Material). Biological Activity Evaluation Antioxidant activity was tested using methods assessing the ability of the extracts to scavenge free radicals (DPPH assay). The DPPH analysis was performed using the protocol described previously by Studzińska-Sroka et al.
[21]. The fluid extracts were tested, and the concentration of the tested extract samples ranged from 12.5 mg/mL to 0.391 mg/mL (the concentrations of the examined samples in the reaction mixture were 1.56 µg/mL to 0.05 µg/mL). The blanks and controls contained the solvent of the examined extracts instead of the extract in the tested sample. The results were expressed as IC50 (mg DW/mL). The average from n = 2 measurements was presented. The enzyme inhibitory assays were performed in vitro using hyaluronidase and tyrosinase. The anti-hyaluronidase activity was measured using the procedure described by Studzińska-Sroka et al. [24]. The fluid extracts were tested, and the concentration of the tested extract samples was 50 mg/mL (the concentration of the examined samples in the reaction mixture was 5.0 mg/mL). The controls and blanks contained the solvent of the examined extracts instead of the sample. Absorbance was measured at 600 nm (Multiskan GO 1510, Thermo Fisher Scientific, Vantaa, Finland). The results were expressed as % of enzyme inhibition. The average from n = 5 measurements was presented. The anti-tyrosinase activity was evaluated using the protocol described previously [24] with some modifications. The fluid extracts were tested, and the concentration of the tested extract sample was 10 mg/mL (the concentration of the examined samples in the reaction mixture was 0.25 mg/mL). Accordingly, the concentrations of several reagents were adjusted (a 4 mM L-DOPA solution was used), and the incubation times during the experiment were 10 min and 25 min. The controls and blanks contained the solvent of the examined extracts instead of the sample. Absorbance was measured at 475 nm (Multiskan GO 1510, Thermo Fisher Scientific, Vantaa, Finland). The results were expressed as % of enzyme inhibition. The average from n = 3 measurements was presented. Cell Viability Assay The evaluation of the effect of the studied extracts was performed using the MTT assay, as previously described [25]. Briefly, HaCaT immortalized keratinocytes (Cell Lines Service, Eppelheim, Germany) were grown in DMEM supplemented with 10% FBS and 1% antibiotics solution. Cells were seeded into 96-well plates, and after 24 h of pre-incubation, fresh medium containing increasing concentrations of the studied extracts was added into the wells, and the cells were incubated for a subsequent 72 h. Next, the wells were rinsed with warm PBS buffer, and fresh medium supplemented with 0.5% MTT salt was added into the wells. Absorbance (570 nm, Infinite M200 plate reader, Tecan, Austria) was measured after 4 h of incubation, followed by dissolving the formazan crystals using acidic isopropanol. Experiments were repeated four times with at least four technical replicates per assay. The impact on viability was calculated as a percentage of the results obtained for cells treated with the appropriate vehicle only.
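Since several of the assays above report percentages relative to controls, the underlying arithmetic can be summarized in a short sketch; the conventions (blank subtraction, vehicle normalization) and the numerical values are assumptions for illustration, not data from this study.

```python
def percent_inhibition(a_control, a_sample, a_blank=0.0):
    """Enzyme inhibition (%) relative to an uninhibited control reaction."""
    return 100.0 * (1.0 - (a_sample - a_blank) / (a_control - a_blank))

def percent_viability(a_treated, a_vehicle, a_blank=0.0):
    """MTT viability (%) relative to vehicle-treated cells."""
    return 100.0 * (a_treated - a_blank) / (a_vehicle - a_blank)

print(percent_inhibition(a_control=0.82, a_sample=0.31))  # illustrative absorbances
print(percent_viability(a_treated=0.55, a_vehicle=0.60))
```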
Obtaining Hydrogels Containing Blueberry Fruit Extract Using the Design of Experiments (DoE) approach, chitosan-based hydrogels were prepared with evaporated acetone-water (1:1 v/v) extracts from dried blueberry fruits. The contents of extract and chitosan (medium molecular weight) were selected as independent factors. To select the optimal composition of the hydrogel, a two-level fractional factorial plan was proposed (Statistica 13.3 software, TIBCO Software Inc., Palo Alto, CA, USA); it is presented in Table 1. The following parameters were chosen to assess the hydrogel properties: antioxidant activity (using the DPPH scavenging assay), hyaluronidase inhibition, tyrosinase inhibition, the dissolution profiles of standards from the hydrogels, and their viscosity. To prepare the hydrogels, water was weighed into a beaker, and the appropriate amount of extract was added and stirred for 5 min. Chitosan was then added and dissolved by adding 1% acetic acid. The hydrogel was stirred for 24 h before testing. Biological Activity Evaluation Antioxidant, anti-hyaluronidase, and anti-tyrosinase activities were measured using the methods described above. Release of Active Compound Vertical Franz cells (PermeGear, Inc., Hellertown, PA, USA) holding 5 mL of acceptor solution (phosphate buffer, pH = 5.5) were used for the hydrogel in vitro release tests. Membranes made of regenerated cellulose (Nalo® Cellulose, Kalle GmbH, Wiesbaden, Germany) with pore diameters of about 25 Å were installed in the cells. Prior to the experiment, the membranes were kept in the acceptor fluid at 37.0 ± 0.5 °C for a full day. Gel samples (1.0 mL) were placed on the artificial membrane's surface in the donor compartment and distributed uniformly. The cells in use have an effective diffusion area of 0.64 cm². During the test, the temperature of the receptor fluid was maintained at 37.0 ± 0.5 °C, and it was stirred at 400 rpm. At the designated intervals, 2.0 mL samples were removed from the acceptor compartment and quickly replaced with an equivalent volume of fresh acceptor fluid (the corresponding cumulative-release bookkeeping is sketched at the end of this methods section). The chlorogenic acid concentrations in the collected samples were determined with the HPLC method described in Section 2.3.2. Bioadhesive Properties Evaluation A viscometric method was used to predict the bioadhesive properties [26]. The AMETEK Brookfield DV2T viscometer (Middleborough, MA, USA) was used to measure the viscosity of the prepared hydrogels. Statistical Analysis The obtained data were expressed as means ± SD. Statistical analysis was performed using a one-way analysis of variance (ANOVA), and statistical differences (using Duncan's post hoc tests) with a significance threshold of p < 0.05 were determined using Statistica 13.3 software (Statsoft, Krakow, Poland). Correlations were examined using principal component analysis (PCA) with PQStat Software version 1.8.4.142 and Statistica 13.3.
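As noted in the release-test description above, withdrawing 2.0 mL and replacing it with fresh fluid dilutes the acceptor compartment, so the cumulative amount released is usually computed with a sampling correction. The following is a minimal sketch of that bookkeeping under stated assumptions (concentrations in µg/mL; values illustrative, not measured data):

```python
V_CELL = 5.0     # acceptor volume, mL
V_SAMPLE = 2.0   # volume withdrawn and replaced at each time point, mL
AREA = 0.64      # effective diffusion area, cm^2

def cumulative_release(concentrations):
    """Cumulative chlorogenic acid released per unit area (ug/cm^2).

    concentrations: HPLC readings (ug/mL) at successive sampling times.
    Each step adds back the analyte removed by earlier sampling."""
    released, withdrawn = [], 0.0
    for c in concentrations:
        q_total = c * V_CELL + withdrawn   # amount in cell + amount already removed
        released.append(q_total / AREA)
        withdrawn += c * V_SAMPLE
    return released

print(cumulative_release([0.8, 1.5, 2.1, 2.6]))  # illustrative time series
```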
Optimization of the Extraction Process of Vaccinium myrtillus and Vaccinium corymbosum Fruits Regarding Their Biological Activity Polyphenols (including flavonoids and anthocyanins) determine the pleiotropic bioactivity of plant products. The most frequently described biological properties are antioxidant and anti-inflammatory potential. Moreover, their chemopreventive importance is also indicated [13]. Since ROS, generated both during the physiological aging process and under the influence of external factors (e.g., UV solar radiation), play an important role in the skin aging process [1], the polyphenol content may be one of the criteria for assessing the biological potential of natural raw materials. Considering that some of the more important plant polyphenols are flavonoids and anthocyanins, we evaluated the TPC, TFC, and TAC. Our analyses prove that the extracts prepared from both Vaccinium species contain compounds with a polyphenol structure, among which a certain pool consists of substances with the chemical structure of flavonoids and anthocyanins. Our results show that the MeOH-H₂O, H₂O, and Ace-H₂O extracts from V. myrtillus contain a higher polyphenol content than the extracts from V. corymbosum (Table 2). Moreover, the V. myrtillus extracts have a higher flavonoid and total anthocyanin content than the V. corymbosum ones. Among all the tested extracts, the H₂O ones usually had the lowest concentration of polyphenolic substances. To the best of our knowledge, no previous studies compare extracts prepared from the dry fruits of V. myrtillus and V. corymbosum. However, it is worth noting that the anthocyanin content results are consistent with earlier reports, which indicated 3-4 times higher anthocyanin contents in frozen bilberry than in blueberry fruits [14]. Further, the content of chlorogenic acid in the four types of prepared V. myrtillus extracts was assessed (Table 3). The highest content was determined in the Ace-H₂O extract. Oxidative stress caused by excessive ROS generation contributes to the skin aging process. It disrupts the existing homeostasis of biological processes, leading to the peroxidation of the lipids of the plasma membranes and organelles in cells [13]. All of these processes contribute to the loss of the primary properties of the skin. Thus, the ability to scavenge free radicals was measured to evaluate the antioxidant potential of the V. myrtillus and V. corymbosum extracts. The results indicate that the Ace-H₂O extract from V. myrtillus, as well as the MeOH and MeOH-H₂O extracts, strongly scavenged free radicals in the DPPH test (Figure 1a). We showed that the antioxidant activity of the V. corymbosum extracts, among which the Ace-H₂O extract had the highest antioxidant activity, was lower than that of V. myrtillus. However, our results demonstrate the interesting antioxidant activity of Vaccinium dry fruits, which corroborates previous reports highlighting the powerful antioxidant effect of some Vaccinium species extracts evaluated with different antioxidant tests [27,28].
The proper hydration of the skin protects against the appearance of wrinkles and the harmful effects of the environment. Hyaluronic acid determines proper hydration; hence, the higher its content in the intercellular matrix, the better the skin's hydration. Moreover, the literature data indicate that low-chain fragments of hyaluronic acid correlate with tissue inflammation [29]. Since the presence of high-chain hyaluronic acid molecules in the skin determines the maintenance of the skin in good condition, inhibiting the breakdown of hyaluronic acid by blocking the activity of hyaluronidase has become the subject of research on the activity of bilberry and blueberry fruit extracts. We have proven that the extracts with the highest ability to inhibit hyaluronidase, regardless of the tested species, are the Ace-H₂O extracts (Figure 1b). Thus, at the tested concentration (0.05 g/mL), they were characterized by a very high hyaluronidase inhibitory activity (>90%). The methanol-water extracts also had a strong effect. On the other hand, the water and methanol extracts from V. corymbosum did not inhibit the enzyme at all. Data on the hyaluronidase inhibitory properties of extracts from the dried fruits of Vaccinium sp. are limited and concern species other than V. myrtillus and V. corymbosum. Tyrosinase is a key enzyme that controls the production of melanin in the skin. Because the aging process is related to a higher activity of tyrosinase, substances which inhibit this enzyme can prevent aging pigmentation [30]. Therefore, one of our goals was the evaluation of the inhibitory effect of Vaccinium spp. extracts on tyrosinase activity. The results (Figure 1c) indicate that, at the tested concentration, all of the V. myrtillus extracts blocked the activity of the enzyme. On the other hand, the activity of the V. corymbosum extracts was more varied, and some of the extracts were inactive (MeOH and H₂O). Moreover, a higher inhibition of tyrosinase was detected for the Ace-H₂O extracts. Likewise, some Vaccinium spp. preparations have been reported as tyrosinase inhibitors [31,32]. To assess the potentially cytotoxic effect of the extracts obtained from V. myrtillus and V. corymbosum on skin cells, we performed the MTT test (Figure 2). The experiment was conducted in a model with the HaCaT cell line (normal human keratinocytes), which is currently most frequently used in skin research. Our results proved that, in the concentrations used, the extracts from blueberry dry fruits are not cytotoxic (HaCaT viability > 90%). On the other hand, the MeOH, Ace-H₂O, and H₂O extracts from V.
myrtillus show an HaCaT viability of 80-90%, which proves their very low cytotoxicity at a very high concentration (600 µg/mL). The obtained results suggest that these extracts have a high safety profile. This is consistent with the data of other authors, who assessed the toxicity of blueberries towards normal HaCaT cell lines as low or not disturbing cell viability [33,34]. The low cytotoxicity we detected, especially for V. myrtillus (revealed only at very high concentrations), may result from the substantial content of polyphenolic compounds (including flavonoids and anthocyanins) in the tested extracts. The content of anthocyanins in bilberry extracts is higher, which can contribute to stronger effects on cell viability. Indeed, the activity of berry juices could be attributed to the total anthocyanin content or to specific anthocyanin compounds [35]. Polyphenols are a group of substances with health-promoting significance, including a high antioxidant potential [36] and a beneficial effect on the regulation of redox homeostasis in cells. However, the literature indicates that under certain conditions (e.g., at high concentrations), polyphenols may have a pro-oxidant effect [33], which may impair cell viability. A recent study has shown that bilberry extract may show pro-apoptotic activity through a redox-sensitive, caspase 3 activation-related mechanism [37]. From the results obtained in this preliminary section, taken together, we conclude that the Ace-H₂O V. myrtillus extract exerts a high antioxidant potential, and its inhibitory activity on hyaluronidase and tyrosinase is the highest among all the tested extracts at the tested concentration. Importantly, the content of chlorogenic acid is also the highest in the Ace-H₂O extract. Chlorogenic acid is a compound with antioxidant and anti-inflammatory potential [38], as well as the ability to stimulate the upregulation of skin barrier genes [39]. This indicates that the Ace-H₂O extract is an interesting material for our further studies. In addition, the V. myrtillus and V. corymbosum extracts are characterized by low cytotoxicity in the MTT assay. Considering all the obtained results, the Ace-H₂O V. myrtillus extract was chosen for the subsequent studies.
Obtaining Hydrogels Containing Blueberry Fruit Extract

Based on the results of the cytotoxicity assessment of the extract, the maximum concentration of the extract in the hydrogel was set at 1%. This concentration does not cause any toxic effect.

The preparation of all hydrogels was carried out without any problems. All the samples converted to gels at room temperature in <10 min. The aim of the study was to assess the influence of the composition of the hydrogels on the biological and pharmaceutical properties of the systems. The morphological structure was not in focus; instead, the antioxidant, anti-hyaluronidase, and anti-tyrosinase properties were assessed first (Table 4).

The antioxidant activity of the hydrogels results primarily from the presence of the extract and increases with an increase in its concentration in the hydrogel (Figure S2, Supplementary Material). As previously shown, the antioxidant activity of chitosan (CS) is negligible, which is due to an insufficient number of H-atom donors [40]. On the other hand, the analysis of the data on anti-hyaluronidase activity indicates that the activity of CS is much higher than that of the extract. It has been shown that the inhibition of hyaluronidase increases with an increase in the concentration of CS in the hydrogel and with a decrease in the extract concentration (Figure S3, Supplementary Material). The inhibition of hyaluronidase activity could be associated with the interaction of an increasing number of -NH2 groups of CS, perhaps resulting in modifications to the enzyme's secondary structure [40]. Finally, the anti-tyrosinase activity was assessed, which showed relationships similar to those for the anti-hyaluronidase activity: with an increase in CS concentration the activity of the hydrogel increases, and at the same time it increases with a decrease in extract concentration (Figure S4, Supplementary Material). The literature data suggest that CS can inhibit tyrosinase activity by binding the copper ion in the active site of the enzyme and by binding to the active site itself, rather than by changing the tertiary structure of tyrosinase [41,42]. The next aspect was the assessment of the dissolution behavior of chlorogenic acid, as one of the main active compounds of the extract (Figure S4, Supplementary Material). The release rate of chlorogenic acid from the hydrogels was estimated by in vitro release tests (Figure 3). After 24 h, the release was only 26-36%, depending on the formulation. In this case, the cumulative amount of released chlorogenic acid should also be taken into account. The greatest amount of chlorogenic acid was released from formulation H2. Evidently, in this case, the most important factor influencing the amount of chlorogenic acid released was the extract
concentration (Figure S6, Supplementary Material). Moreover, in both cases, the release of the substance is influenced by the concentration of chitosan. An increase in CS concentration is correlated with an increase in hydrogel viscosity, which effectively slows down the release of chlorogenic acid (Figures S5 and S6, Supplementary Material). Comparing the similarity of the profiles, it was noticed that the profiles for H1 and H2, and for H3 and H4, are similar, which again shows the strong influence of CS: in the first two hydrogels the CS concentration is 2%, and in the next two it is 3%, indicating that the CS concentration largely determines the percentage of substance released (Table S2, Supplementary Material).

The release profiles indicate that the drug release rate is independent of concentration and that, for each formulation, the process is satisfactorily described by the zero-order kinetics model (Table S3, Supplementary Material). This confirms the literature reports on the usefulness of chitosan for constructing systems for the controlled delivery of active substances [43,44].
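For reference, the profile-comparison factors f1 and f2 used for Table S2 and the zero-order fit can be computed as in the following minimal sketch. The formulas are the standard pharmacopoeial definitions; the time points and release values below are hypothetical placeholders, not the measured H1-H4 data.

```python
import numpy as np

def f1_f2(reference, test):
    """Difference factor f1 and similarity factor f2 for two cumulative
    release profiles (%) sampled at the same time points."""
    R, T = np.asarray(reference, float), np.asarray(test, float)
    n = len(R)
    f1 = 100.0 * np.abs(R - T).sum() / R.sum()
    f2 = 50.0 * np.log10(100.0 / np.sqrt(1.0 + ((R - T) ** 2).sum() / n))
    return f1, f2   # profiles are conventionally similar if f1 < 15 and f2 > 50

def zero_order_k(t, Q):
    """Slope k0 of the zero-order model Q(t) = k0 * t (fit through the origin)."""
    t, Q = np.asarray(t, float), np.asarray(Q, float)
    return (t @ Q) / (t @ t)

t_h = np.array([1, 2, 4, 8, 24])       # sampling times, h (hypothetical)
H1 = np.array([3, 6, 11, 19, 30])      # cumulative release, % (hypothetical)
H2 = np.array([4, 7, 13, 21, 33])
print(f1_f2(H1, H2), zero_order_k(t_h, H1))
```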
One of the key factors influencing a system's ability to be bioadhesive is its viscosity, so the viscosity of the systems was examined (Table 5). Previous work assessed the impact of both the molecular weight (MW) and the degree of deacetylation on the viscosity of the resulting systems and showed that the viscosity increases with increasing MW. Therefore, only CS MMW (medium molecular weight), which shows optimal parameters, was used in this work. As expected, as the CS concentration increases, the viscosity increases, which can easily be seen on the Pareto chart (Figure S7, Supplementary Material). The creation of mutually entangled chains between two contacting interfaces and charge interactions are the primary causes of chitosan hydrogels' capacity for tissue adhesion [45]. Note that unmodified CS may not adhere as well as modified forms, which are created by conjugating various adhesive agents to the CS backbone [46]. To evaluate the correlations among all the experimental results, a PCA analysis was performed (Figure 4; Table S4, Supplementary Material). A strong negative correlation was found between the viscosity of the hydrogels and the release of chlorogenic acid, which confirms the previously obtained results. Additionally, a negative correlation was demonstrated between the viscosity and the anti-hyaluronidase and anti-tyrosinase activities, which indicates a weakening of the anti-aging effect with an increase in the viscosity of the system, mainly resulting from an increase in the concentration of chitosan. Using the experimental data and statistical analysis, it was possible to estimate the model and determine the hydrogel's optimal composition (Figure 5). To obtain optimal hydrogel characteristics, 1% extract and 2.5% MMW chitosan were used.
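A minimal sketch of this PCA/correlation step is given below. The property matrix is a hypothetical stand-in for the measured H1-H4 values (the column names and numbers are illustrative only); it is included merely to show the standardization, loadings, and correlation-matrix outputs that underlie Figure 4 and Table S4.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

props = ["viscosity", "antioxidant", "anti_hyaluronidase",
         "anti_tyrosinase", "release_24h"]
X = np.array([[1200., 60., 70., 55., 36.],   # H1  (hypothetical values)
              [1150., 72., 65., 50., 34.],   # H2
              [2100., 58., 80., 62., 28.],   # H3
              [2050., 70., 76., 58., 26.]])  # H4

Z = StandardScaler().fit_transform(X)        # standardize before PCA
pca = PCA(n_components=2).fit(Z)
print("explained variance:", np.round(pca.explained_variance_ratio_, 2))
for i, pc in enumerate(pca.components_, 1):  # loadings per property
    print(f"PC{i}:", dict(zip(props, np.round(pc, 2))))
print("correlation matrix:\n", np.round(np.corrcoef(Z.T), 2))
```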
Conclusions

In conclusion, our research confirms that V. myrtillus and V. corymbosum dry fruit extracts possess important phytochemical and biological potential that can differ considerably depending on the species and the solvents used for extraction. This diversity allowed us to select the most valuable extract for further research. We also proved that the DoE approach can be successfully used to determine the optimal parameters for preparing hydrogels with interesting biological and physicochemical properties. The parameters of the hydrogels, along with their proven antioxidant, anti-hyaluronidase, and anti-tyrosinase properties, suggest the anti-aging potential of the prepared V. myrtillus extract in a topical preparation.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/antiox13010105/s1. Figure S1: Chromatogram of acetone-water extract at a concentration of 0.1 g/mL; Table S1: Validation parameters of the HPLC method; Figure S2: Statistical analysis for antioxidant activity: (a) Pareto plot of standardized effects; (b) Response surface plots presenting the dependence of extract concentration and chitosan concentration on hydrogel antioxidant activity; Figure S3: Statistical analysis for anti-hyaluronidase activity: (a) Pareto plot of standardized effects; (b) Response surface plots presenting the dependence of extract concentration and chitosan concentration on hydrogel anti-hyaluronidase activity; Figure S4: Statistical analysis for anti-tyrosinase activity: (a) Pareto plot of standardized effects; (b) Response surface plots presenting the dependence of extract concentration and chitosan concentration on hydrogel anti-tyrosinase activity; Figure S5: Statistical analysis for release studies: (a) Pareto plot of standardized effects for chlorogenic acid release (in %); (b) Response surface plots presenting the dependence of extract concentration and chitosan concentration on the chlorogenic acid release from hydrogels (in %); Figure S6: Statistical analysis for release studies: (a) Pareto plot of standardized effects for chlorogenic acid release (in mg/cm²); (b) Response surface plots presenting the dependence of extract concentration and chitosan concentration on the chlorogenic acid release from hydrogels (in mg/cm²); Figure S7: Statistical analysis for hydrogels' viscosity: (a) Pareto plot of standardized effects; (b) Response surface plots presenting the dependence of extract concentration and chitosan concentration on hydrogels' viscosity; Table S2: Comparison of chlorogenic acid release profiles (expressed in %) from H1-H4 hydrogels using factors f1 and f2; Table S3: Parameters of mathematical models fitted to the release profiles (expressed in mg/cm²) of formulations H1-H4; Table S4: Correlation matrix.

Figure 1. The biological activity of Vaccinium myrtillus and Vaccinium corymbosum extracts: the antioxidant potential (a), the anti-hyaluronidase potential (b), and the anti-tyrosinase potential (c); na, not active.

Figure 2. The effect of Vaccinium myrtillus (a) and Vaccinium corymbosum (b) extracts on the viability of HaCaT cells.

Figure 4. The relationship of hydrogel properties on the factorial plane formed by the first two principal components (a), and principal component analysis (PCA) showing the factor loading plot (b).

Figure 5. Prediction of the optimization model for obtaining hydrogels: effects with a positive sign (antioxidant activity, dissolution, and viscosity) (a) and with a negative sign (anti-hyaluronidase and anti-tyrosinase activities) (b).

Table 2. Total polyphenol content (TPC) and total flavonoid content (TFC) in Vaccinium myrtillus and Vaccinium corymbosum extracts. Mean values within a column with the same letter are not significantly different at p < 0.05 using Duncan's test; the first letter of the alphabet is used for the highest values, the next letters for statistically significantly decreasing values. mg GAE/g DW, mg gallic acid equivalent/g of fruit dry weight; mg QE/g DW, mg quercetin equivalent/g of fruit dry weight.

Table 4. Biological activities of hydrogels.
7,123.2
2024-01-01T00:00:00.000
[ "Materials Science", "Medicine", "Environmental Science" ]
Trip purpose imputation using GPS trajectories with machine learning : We studied trip purpose imputation using data mining and machine learning techniques based on a dataset of GPS-based trajectories gathered in Switzerland. With a large number of labeled activities in 8 categories, we explored location information using hierarchical clustering and achieved a classification accuracy of 86.7% using a random forest approach as a baseline. The contribution of this study is summarized below. Firstly, using information from GPS trajectories exclusively, without personal information, shows a negligible decrease in accuracy (0.9%), which indicates the good performance of our data mining steps and the wide applicability of our imputation scheme in cases of limited information availability. Secondly, the dependence of model performance on the geographical location, the number of participants, and the duration of the survey is investigated to provide a reference when comparing classification accuracies. Furthermore, we show the ensemble filter to be an excellent tool in this research field, not only because of the increased accuracy (93.6%), especially for minority classes, but also because of the reduced uncertainty in blindly trusting the labeling of activities by participants, which is vulnerable to class noise due to the large survey response burden. Finally, the trip purpose derivation accuracy across participants reaches 74.8%, which is significant and suggests the possibility of effectively applying a model trained on the GPS trajectories of a small subset of citizens to a larger GPS trajectory sample.

Introduction

Trip purpose imputation is an important part of constructing travel diaries of individuals and has attracted the attention of many researchers due to its significance […] imputation precision. Generally, the socio-demographic characteristics of participants are gathered together with GPS trajectories and are taken to be important supplementary information [1]. Land use data and POI could be used to indicate possible activities for a stopping point on GPS trajectories [8]. In addition, the popularity of POI inferred from social media data (e.g., Twitter) [5], travel and tourism statistics [9], and mobile phone billing data [10] have also been utilized to derive travel purpose. Data pre-processing, which has been intensively investigated in the data mining field [11], receives much less discussion than it deserves in trip purpose imputation research; therefore, we discuss the issue in depth below. García et al. [12] summarized the three most influential data pre-processing requirements to improve data mining efficiency and performance, i.e., imperfect data handling, data reduction, and imbalanced data pre-processing.
An important aspect of imperfect data handling is noise filtering [13], which aims at detecting attribute noise and the more harmful class noise [14]. For class noise removal, the ensemble filters proposed by Brodley and Friedl [15,16] have been widely applied as an excellent tool. Ensemble filters adopt an ensemble of classifiers to eliminate the mislabeled training data that cannot be correctly classified by all or part of the classifiers under n-fold cross-validation. To avoid treating an exception that is specific to one algorithm as noise, multiple algorithms are used. Basically, there are two strategies for implementing ensemble filters: majority vote filters, in which the instances that cannot be correctly classified by more than half of the algorithms are treated as mislabeled; and conservative consensus filters, in which only the instances that cannot be correctly classified by any of the algorithms are treated as noise. Majority vote filters are sometimes preferred to conservative consensus filters, as retaining bad data is more harmful than discarding good data, especially when there are ample training data [16]. Nevertheless, we chose conservative consensus filters, the results of the two strategies being similar.

Missing data is another typical problem in transport research that normally involves survey processes. The first step in handling missing data should be understanding the sources of "unknownness" [17]: values might be lost, uncollected, or unidentifiable within the existing categories. Besides omitting the instances or features with missing values, which is usually not recommended, approaches for missing data inference can be classified into two groups [18]: data-driven, e.g., mean or mode imputation; and model-based, e.g., k-nearest neighbors (kNN). kNN has gained popularity because of its simplicity and good performance in dealing with both numerical and nominal values [19].

Attribute selection, as a classic part of data reduction, is conducive to generating a simpler and more accurate model and avoiding over-fitting risks [12,20]. For feature selection, the feature importance measured by the mean decrease in the Gini coefficient in the random forest approach can be used as a reference [21]. However, such a rank-based measure cannot take feature interactions into account and might suffer from stochastic effects [22]. Conventionally, feature selection techniques can be grouped into two categories: filter methods, i.e., variable ranking techniques; and wrapper methods, which involve classifiers and become an NP-hard problem [20]. One of the most popular algorithms for feature selection is minimum redundancy maximum relevance, based on mutual information [23], which was initially designed as a filter and was later developed into a wrapper as well [12]. Another popular wrapper algorithm, designed for the random forest, is provided in the R package Boruta [22]; it aims at identifying all relevant features rather than an optimal subset and is employed for our analysis.
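To make the conservative consensus strategy concrete, here is a minimal sketch (not the authors' code; the classifier choices and variable names are illustrative) of an ensemble filter under n-fold cross-validation: an instance is flagged as class noise only if every classifier misclassifies it.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict

def consensus_filter(X, y, n_folds=10):
    """Return a boolean mask of instances to KEEP.
    X: numeric feature matrix (n, d); y: label array (n,)."""
    classifiers = [
        RandomForestClassifier(n_estimators=200, random_state=0),
        LogisticRegression(max_iter=1000),
        KNeighborsClassifier(n_neighbors=5),
    ]
    misclassified_by_all = np.ones(len(y), dtype=bool)
    for clf in classifiers:
        pred = cross_val_predict(clf, X, y, cv=n_folds)
        misclassified_by_all &= (pred != y)   # consensus: wrong for every model
    return ~misclassified_by_all

# keep = consensus_filter(X, y); X_clean, y_clean = X[keep], y[keep]
```

A majority vote filter only requires counting the misclassifications per instance instead of intersecting them.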
An imbalanced distribution of categories might result in unbalanced classification accuracies. This problem has also troubled the machine learning community, where Ling and Li [24] suggested duplicating small-portion classes and Kubat and Matwin [25] tried to downsize large-portion classes. One of the most prevalent ways to cope with imbalanced data is the Synthetic Minority Over-Sampling Technique (SMOTE) introduced by Chawla et al. [26], which formulates new samples as randomized interpolations of minority class samples. SMOTE is widely used because of its simplicity, good performance, and compatibility with any machine learning algorithm [12]. As a variation of SMOTE, the Adaptive Synthetic Sampling Approach (ADASYN) proposed by He et al. [27] puts more weight on minority samples that are harder to learn when selecting samples for interpolation.

The methods used to derive trip purposes can be divided into two main categories [28]: rule-based systems with an accuracy of around 70% [29], which rely predominantly on land use and personal information, as well as the timing, duration, and sequence of activities; and machine learning approaches, which focus more on activities than position and show accuracies varying between 70% and 96% depending on the algorithm, data set, activity categories, and so on [8]. Although manual trip purpose derivation approaches using rules give satisfactory results, there is no standard set of accepted rules for mining travel information, so they rely on researchers' experience. Compared to conventional deterministic approaches, machine learning algorithms like random forests and dynamic Bayesian network models can even rank possible activities, which is particularly helpful when activities are ambiguous [5]. Consequently, we opt for machine learning approaches that have already been widely applied in this area, such as decision trees [30], random forests [28], artificial neural networks [31], and dynamic Bayesian network models [5]. Because of the good performance of random forests compared to other methods, demonstrated by numerous studies [32][33][34], we employed them as a starting point for the analysis. An introduction to random forests is given in Section 3.2.
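The core SMOTE idea described at the beginning of this section, randomized interpolation between a minority sample and one of its nearest minority neighbors, can be sketched as follows. This is a toy version for illustration; production work would typically use imblearn.over_sampling.SMOTE instead.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_like(X_min, n_new, k=5, seed=0):
    """Create n_new synthetic samples from minority-class rows X_min by
    interpolating between a picked sample and one of its k nearest
    minority neighbours."""
    rng = np.random.default_rng(seed)
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nbrs.kneighbors(X_min)            # column 0 is the point itself
    synthetic = np.empty((n_new, X_min.shape[1]))
    for s in range(n_new):
        i = rng.integers(len(X_min))           # random minority sample
        j = idx[i, rng.integers(1, k + 1)]     # one of its k neighbours
        gap = rng.random()                     # random position on the segment
        synthetic[s] = X_min[i] + gap * (X_min[j] - X_min[i])
    return synthetic
```

ADASYN differs only in how the sample index i is drawn: harder-to-learn minority samples are picked with higher probability.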
An advantage of the random forest is that it provides an inherent measure of feature importance using Gini impurity, as shown in Fig. 1, which provides an important reference for feature selection. Among the 21 features, the six most important features are clearly more useful in classification, whereas the personal-based attributes are less relevant: except for "Age", all personal information belongs to the 7 least relevant features. To assess the importance of the three sets of features grouped in Table 2, we conducted three additional experiments, leaving one set of features out at a time, and present the results in Fig. 2. When leaving all the personal information unused, the overall accuracy decreased by around 0.9%. Although the Boruta method [22] shows that all features are relevant, which indicates a good result of our preliminary feature selection, we omit the personal information from further analysis for the following reasons: this indicates the strength and applicability of our method even when no personal information is available, i.e., we can undertake trip information enrichment at high accuracy using only GPS trajectories; and the inclusion of socio-demographic data might lead to overfitting of the models to the current participants and limit the applicability of the models to the GPS trajectories of other users. While the elimination of activity information gives similar results, the removal of cluster-based information leads to a dramatic decrease in model performance, which strongly suggests the effectiveness of our usage of hierarchical clustering algorithms.

2.4. Model Performance Assessment

Model performance can be assessed in various ways, and this assessment acts as an important component of model development. Although reported trip information might suffer from memory recall errors or other issues, it is probably the best candidate as ground truth for model validation and assessment [35]. Innovatively, Li et al. [36] used the visualized spatial distribution of recognized trip purposes to validate simulation outputs. Albeit classification models might be used to generate travel diaries for citizens that are not in the training dataset, Montini et al. [32] found that the accuracy of trip purpose detection is participant-dependent. As the proportion and categories of trip purposes have a significant influence on the accuracy of classification [9], high-frequency activities should be treated with special care.

In this study, we analyzed GPS trajectories collected from 3689 Swiss participants from September 2019 to September 2020 through the "Catch-my-day" GPS tracking app, developed by Motion Tag. Considering solely the 91% of all activities that lie within Switzerland, this amounts to 1.82 million activities above a time threshold of 5 minutes, of which 43% are labeled by participants. Although a threshold of 5 minutes for extracting activities from GPS trajectories might ignore some short activities, we use it as a simplification for the current study. As a GPS-integrated mobile phone has a position error of 1 to 50 meters with a mean of 6.5 meters, as shown by Garnett and Stewart [37], this is taken into account when conducting the spatial clustering of activities. More details about the study design and research scope can be found in Molloy et al. [38] and Molloy et al. [39]. Based on the "Mobility and Transport Microcensus 2015" in Switzerland, we grouped activities into eight categories, as shown in […]
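The leave-one-feature-set-out comparison reported above (Fig. 2) can be sketched as follows. The feature grouping and column indices are hypothetical placeholders, not the paper's exact Table 2, and X, y stand for the prepared feature matrix and activity labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FEATURE_GROUPS = {                 # column indices per group (hypothetical)
    "personal":      [0, 1, 2],
    "activity":      [3, 4, 5, 6],
    "cluster_based": [7, 8, 9, 10],
}

def leave_one_group_out(X, y, cv=5):
    rf = RandomForestClassifier(n_estimators=300, random_state=0)
    base = cross_val_score(rf, X, y, cv=cv).mean()
    print(f"all features: {base:.3f}")
    for name, cols in FEATURE_GROUPS.items():
        keep = [c for c in range(X.shape[1]) if c not in cols]
        acc = cross_val_score(rf, X[:, keep], y, cv=cv).mean()
        print(f"without {name}: {acc:.3f} ({acc - base:+.3f})")
```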
As a classification method, kNN [42] has also been shown to be a good missing value imputation technique [12,19]; a short introduction to the kNN algorithm follows. Given a training set T = {U, V}, where U are predictors and V are labels, we can estimate the distance between a test object w0 = {u0, v0} and all training objects w = {u, v} ∈ {U, V} to find its k nearest neighbors. The label v0 for this test object w0 is then determined as the median of the v values of its k nearest neighbors in the case of numerical variables, and as the mode in the case of categorical variables. The Gower distance computation between u0 and u, which is applicable to both categorical and continuous variables, is described in Kowarik and Templ [43]. Two issues might affect the performance of kNN: one is the choice of k, where a small value of k can be noise-sensitive and a large value of k might include redundant information; the other is that an arithmetic average ignores distance-dependent characteristics, where closer objects have higher similarities. These two issues can be addressed by weighting the vote of each nearest neighbor by its distance, i.e., weighted kNN. Missing value imputation for the personal-related information in this work is conducted using the R package "VIM", developed by Kowarik and Templ [43], which also provides weighted kNN methods for better performance.

To explore the implicit information contained in the data, data mining techniques like clustering can be employed [28]. Using the hierarchical clustering method introduced by Ward Jr [44], we grouped the spatial locations of activities for each participant to make use of the repetitive patterns of human behavior. Hierarchical clustering optimizes the route by which groups are obtained [45], so it might not give the best clustering result for a specified number of groups [44]. However, compared to the other widely known technique, k-means clustering, hierarchical clustering allows us to define the distance used for grouping rather than the number of groups. The basic steps of hierarchical clustering are as follows: 1) treat the initial x objects as individual clusters; 2) group the pair of most "similar" clusters; 3) repeat step 2 until a single cluster containing all objects is obtained. To define the "similarity" between two clusters, [45] summarized six strategies, from which we selected the "group-average" strategy, as it is more reasonable and conservative than its alternatives. In our case, the similarity between two activities is defined as the Euclidean distance between their geographical locations. Next, we use two general activity clusters X and Y to illustrate the estimation of their average distance. Assume there are m and l activities in clusters X and Y, respectively, and let i and j denote single elements of the m and l activities, respectively. We use d_ij to represent the distance between activities i and j, and d_XY the distance between clusters X and Y. Then we can calculate d_XY as

d_XY = (1/(m·l)) Σ_{i=1}^{m} Σ_{j=1}^{l} d_{ij}.

Through the process of hierarchical clustering, d_XY will increase gradually. Therefore, we can define an appropriate threshold to stop the process and obtain intermediate clustering results. In our study, a threshold of 30 meters is chosen to restrict the size of each cluster, considering the GPS accuracy [37]; this results in a radius of fewer than 30 meters for each cluster.

A random forest is an ensemble of classification and regression trees [46]. Since its introduction, the classification and regression tree (CART) has been an important tool and has received much attention in different research fields [42]. A detailed description of CART can be found in Song and Ying [47]. As a further development of CART, Breiman […]
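The weighted-kNN imputation with a Gower-type distance described above was done with the R package "VIM"; the following self-contained Python sketch is only illustrative (the distance definition and inverse-distance weighting follow the textbook form, not VIM's exact implementation).

```python
import numpy as np

def gower_distances(row, data, numeric_cols, ranges):
    """Mean per-column Gower distance between one row and every row of data.
    Numeric columns: range-normalized absolute difference; categorical
    columns: 0/1 mismatch. data is a 2-D object array; ranges maps each
    numeric column index to its value range."""
    d = np.zeros(len(data))
    for c in range(data.shape[1]):
        if c in numeric_cols:
            d += np.abs(data[:, c].astype(float) - float(row[c])) / ranges[c]
        else:
            d += (data[:, c] != row[c]).astype(float)
    return d / data.shape[1]

def weighted_knn_impute(row, col, complete, numeric_cols, ranges, k=5):
    """Impute row[col] from the k nearest complete cases, weighting each
    neighbour's vote by its inverse distance."""
    d = gower_distances(row, complete, numeric_cols, ranges)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)                 # closer neighbours weigh more
    values = complete[nearest, col]
    if col in numeric_cols:                       # weighted mean for numeric
        return np.average(values.astype(float), weights=w)
    cats, inv = np.unique(values, return_inverse=True)
    return cats[np.bincount(inv, weights=w).argmax()]   # weighted mode
```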
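The clustering procedure above maps directly onto standard group-average hierarchical clustering cut at a 30 m distance threshold. A minimal sketch, assuming one participant's coordinates are already projected to meters (e.g., a national grid such as CH1903+; this projection step is not shown), could look like this:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_activities(xy_m, threshold_m=30.0):
    """Group one participant's activity locations with group-average
    ("average") linkage, cutting the dendrogram at threshold_m meters."""
    Z = linkage(xy_m, method="average", metric="euclidean")
    return fcluster(Z, t=threshold_m, criterion="distance")

xy = np.array([[0.0, 0.0], [5.0, 3.0], [8.0, 1.0],   # one frequent place (toy data)
               [500.0, 500.0], [505.0, 498.0]])      # another place
print(cluster_activities(xy))    # e.g. [1 1 1 2 2]: two location clusters
```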
Figure 1. Feature importance in trip purpose imputation, measured by the mean decrease in Gini in random forests.

Figure 2. The model performance for each activity category and the overall accuracy in four experiments, in which we use all features or leave one set of features unused to measure the significance of each set of features.

Figure 3 shows the spatial distribution of labeled activities and the accuracy rate using […]

Figure 3. The spatial distribution of the number of labeled activities (a) and the accuracy rate (b), using grids with an area of 4 km² in Switzerland. The exponential scale in (a) is used to account for the unevenly distributed activities.

To investigate the dependence of classification performance on the number of participants […]

Figure 4. The impact of the number of participants and the duration of the survey on model performance.

Figure 5. The model performance on the original data and on the ensemble-filtered data for four classification algorithms.

Moreover, POI information from the Google Places API, as adopted in Ermagun et al. [3], was investigated in a pilot study and not considered further, due to the large monetary cost for large datasets such as the one used here and its comparatively minor benefits. Residential zoning information in Switzerland, used as land use information, was also tested, with very little effect on trip purpose derivation accuracy, and was hence excluded from the final models.

3.2. Methods […]

5. Discussion and Conclusions

As a baseline, we achieved an overall accuracy rate of 86.7% for eight activity categories using the highly heterogeneous data (3689 participants) with random forests. Through feature importance analysis, using the inherent measure of the mean decrease in Gini of random forests and the Boruta method, we verified that the current features are of high relevance and that the features extracted with hierarchical clustering are crucial for model performance. Additional experiments that leave out the set of personal-related features reveal the possibility of trip purpose imputation with only GPS trajectories. Thanks to the innovative application of hierarchical clustering in extracting relevant features, the answer to the first research question becomes obvious: the required data sources for a satisfactory model performance are minimized to GPS trajectories. Although many researchers have managed to achieve better performance by incorporating various data sources, we advocate that considering limited data availability on a larger scale, where collecting personal information along with GPS trajectories is impossible or the quality of data sources varies considerably, is vital to generalize our results. In this context, it is important to note that it is misleading to compare accuracy rates among papers due to the different sample sizes (persons and length of observation […]

[…] although it could still be improved for wider applicability in transport management, where the possibility might exist of including other data sources. While the division of activity categories is primarily subject to practical applications, its effects on model performance could be quantified in further analysis. In addition, the complexity of specific activities like […]

Table 2. Selected features for trip purpose imputation. Categorical features are indicated by *, while m() and std() denote "mean of" and "standard deviation of", respectively.

Table 3. Confusion matrix of labeled versus predicted trip purposes using random forests (overall accuracy: 86.7%).

Table 4. Classification accuracy of multiple algorithms with the ensemble filter and for across-participants imputation.
4.2. Ensemble Filter with Multiple Classification Algorithms

A large data set is more vulnerable to class noise than smaller ones because of the heavier and longer survey response burden on participants. This is a challenging topic that has not previously been considered in the context of trip purpose imputation.
4,247.6
2020-11-01T00:00:00.000
[ "Computer Science" ]
Endothelin-1 induces LIMK2-mediated programmed necrotic neuronal death independent of NOS activity

Background: Recently, we have reported that LIM kinase 2 (LIMK2) is involved in the programmed necrotic neuronal death induced by aberrant cyclin D1 expression following status epilepticus (SE). Up-regulation of LIMK2 expression induces neuronal necrosis through impairment of dynamin-related protein 1 (DRP1)-mediated mitochondrial fission. However, we could not elucidate the upstream effector for LIMK2-mediated neuronal death. Thus, we investigated the role of endothelin-1 (ET-1) in LIMK2-mediated neuronal necrosis, since ET-1 is involved in neuronal death via various pathways.

Results: Following SE, the ET-1 concentration and its mRNA were significantly increased in the hippocampus, with up-regulation of ETB receptor expression. BQ788 (an ETB receptor antagonist) effectively attenuated SE-induced neuronal damage as well as reduction in LIMK2 mRNA/protein expression. In addition, BQ788 alleviated the up-regulation of Rho kinase 1 (ROCK1) expression and the impairment of DRP1-mediated mitochondrial fission in CA1 neurons following SE. BQ788 also attenuated the neuronal death and up-regulation of LIMK2 expression induced by exogenous ET-1 injection.

Conclusion: These findings suggest that ET-1 may be one of the upstream effectors for programmed neuronal necrosis, acting through abnormal LIMK2 over-expression mediated by ROCK1.

Background

Necrosis and apoptosis are the two major cell death patterns: necrosis is a passive cell death, while apoptosis is a highly controlled process [1,2]. Interestingly, some necrotic processes can be mediated by receptor-interacting protein kinase 1 (RIP1), which is termed programmed necrosis or necroptosis [3][4][5][6]. Recently, we have reported that aberrant cyclin D1 expression induced by up-regulation of LIMK2 (one of the F-actin regulators) expression evokes programmed necrotic neuronal death following SE (prolonged seizure activity) [7]. Briefly, SE down-regulates p27Kip1 expression by ROCK activation, which induces cyclin D1/cyclin-dependent kinase 4 (CDK4) expression in neurons vulnerable to SE and subsequently increases LIMK2 expression independent of RIP1 and caspase-3 activity. In turn, up-regulated LIMK2 impairs DRP1-mediated mitochondrial fission, which finally provokes programmed necrotic death. Indeed, LIMK2 knockdown and rescue of mitochondrial fission attenuate this programmed necrotic neuronal death. However, we could not elucidate the upstream effector for LIMK2-mediated programmed necrotic neuronal death.

ET-1 is one of the vasoactive peptides that may be responsible for maintaining the tone of the cerebral vasculature. ET-1 exerts various actions by binding to two specific G-protein-coupled receptor subtypes, the ETA and ETB receptors. ETB receptors are predominantly expressed in the brain parenchyma. In contrast, ETA receptors are localized in vascular smooth muscle within the brain parenchyma [8]. ETB receptor activation elevates the intracellular Ca2+ concentration in cultured neurons and hippocampal slices in an autocrine signaling mode [9][10][11]. This intracellular mobilization of Ca2+ rapidly leads to Ca2+-dependent NO synthesis. NO reacts with the superoxide anion to form the peroxynitrite anion (ONOO−), a highly reactive oxidizing agent capable of causing tissue damage [12] and regulating mitochondrial length [13]. ETB receptor activation also stimulates cyclin D1 expression, which coordinates mitochondrial bioenergetics and provokes dysfunction of mitochondrial fission [7,14,15].
These events all participate in the neuronal damage in various neurological diseases. Indeed, exogenous ET-1 injection into the brain parenchyma results in pan-necrosis [16]. Therefore, it is likely that ET-1 may be involved in the LIMK2-mediated impairment of mitochondrial dynamics during neuronal death, in an ETB receptor-mediated, NOS activation-independent or -dependent manner. To test this hypothesis, we investigated whether ET-1 is involved in LIMK2-mediated neuronal death. Here, we describe a novel action of ET-1 in LIMK2-mediated neuronal death. Following SE, ET-1 up-regulated ROCK1 and LIMK2 expression in neurons vulnerable to SE via ETB receptor activation, independent of NO production. In addition, exogenous ET-1 injection impaired mitochondrial fission, resulting in LIMK2-mediated neuronal necrosis. Therefore, our findings suggest that ET-1 may be one of the inducing factors for LIMK2-mediated programmed necrosis following SE.

SE rapidly releases ET-1 and induces ETB receptor expression in neurons

In the present study, microdialysis analysis revealed that the big ET-1 concentration in the hippocampus was 6.1 ± 0.9 pg/ml under basal conditions. The big ET-1 concentration was elevated to 18.1 ± 3.9 pg/ml at 4 h after SE (Fig. 1). ET-1 mRNA in the hippocampus was also increased to 3.12-fold of the non-SE level following SE (Fig. 1). ETB receptor expression was weakly detected in a few CA1 pyramidal neurons of non-SE animals (Fig. 2a-c). In animals 6 h to 1 day post-SE, ETB receptor expression was markedly elevated in CA1 pyramidal cells (Fig. 2a-e, p < 0.05 vs. non-SE animals). At this time point, ETB receptor expression was also elevated in astrocytes (Fig. 2e, g and h, p < 0.05 vs. non-SE animals). Three days after SE, ETB receptor expression was significantly reduced in CA1 neurons due to massive neuronal loss, while its expression was enhanced in astrocytes (Fig. 2f). These findings indicate that SE may increase ET-1 synthesis and up-regulate ETB receptor expression in neurons as well as astrocytes.

ETB receptor activation induces neuronal death in a NOS-independent pathway following SE

Since ET-1 triggers signaling cascades for the production of NO [17], we examined whether ET-1-mediated NO production is involved in the neuronal damage induced by SE. The present data demonstrated that the NO product level increased from 213.9 ± 81.1 to 547.6 ± 94.9 nM at 4 h after SE (Fig. 3a). Consistent with our previous study [18], vasogenic edema and a reduction in SMI-71 (a BBB marker) were detected in the hippocampus 1 day after SE (p < 0.05 vs. non-SE animals, Fig. 3b, c and e). Both BQ788 and Cav-1 peptide (a NOS inhibitor) treatments effectively attenuated the vasogenic edema and BBB breakdown induced by SE (p < 0.05 vs. vehicle, Fig. 3a-c and e). However, Cav-1 peptide infusion did not affect SE-induced neuronal damage, while BQ788 infusion attenuated it at 3 days after SE (p < 0.05 vs. vehicle, Fig. 3d and f). These findings indicate that ET-1 may be involved in neuronal death via ETB receptor-mediated pathways, independent of NOS, following SE.

Blockade of ETB receptor function prevents SE-mediated LIMK2 induction

Next, we tested whether ETB receptor activity influences SE-induced LIMK2 induction. Similar to our previous study [7], the western blot study showed up-regulation of LIMK2 expression at 3 days after SE (p < 0.05 vs. non-SE animals, Fig. 4a and b). LIMK2 mRNA was also increased to 4.35-fold of the non-SE level at this time point (p < 0.05, Fig. 4d).
BQ788 infusion effectively inhibited the up-regulation of LIMK2 mRNA/protein expression at 3 days after SE (p < 0.05 vs. vehicle, Fig. 4a and b). However, Cav-1 peptide treatment did not affect LIMK2 mRNA/protein expression at this time point (Fig. 4a and b). Immunofluorescence data also showed up-regulated LIMK2 expression in CA1 pyramidal cells following SE (p < 0.05 vs. non-SE), and only BQ788 attenuated this up-regulation of LIMK2 expression induced by SE (p < 0.05 vs. vehicle, Fig. 4c-g). Taken together, the present data indicate that ETB receptor activation may play an important role in SE-induced LIMK2 induction, independent of NO production.

[Fig. 4 caption, continued: quantitative values (mean ± S.E.M.) of LIMK2 mRNA/protein expression level in the hippocampus (n = 10 per group); significant differences from vehicle, *p < 0.05. c-f Representative photographs of LIMK2 and NeuN in the CA1 pyramidal cells; as compared to vehicle, BQ788 inhibits LIMK2 induction following SE; bar = 50 μm. g Quantitative values (mean ± S.E.M.) of LIMK2 and NeuN in the hippocampus (n = 10 per group); significant differences from vehicle, *p < 0.05.]

ETB receptor-mediated ROCK1 expression induces neuronal death following SE

Since ROCK is involved in SE-induced neuronal death through LIMK2 induction [7], we validated the effect of BQ788 on ROCK1 expression in SE-induced neuronal death. Three days after SE, up-regulated ROCK1 expression was observed in CA1 pyramidal cells (p < 0.05 vs. non-SE, Fig. 5), which was alleviated by BQ788 infusion (p < 0.05 vs. vehicle, Fig. 5). These findings indicate that ETB receptor activation may result in ROCK1-mediated LIMK2 induction following SE.

Recently, we have reported that impairment of LIMK2-mediated mitochondrial dynamics may participate in the neuronal necrosis following SE [7]. Since DRP1 S616 phosphorylation accelerates mitochondrial fission, whereas S637 phosphorylation increases the detachment of DRP1 from mitochondria, resulting in inhibition of mitochondrial fission [19], we investigated whether ETB receptor activation is related to the impairment of mitochondrial dynamics following SE. Consistent with our previous study [7], SE reduced DRP1 expression and the DRP1 S616/S637 phosphorylation ratio (Fig. 6a-c), but induced mitochondrial elongation and sphere formation in CA1 neurons (p < 0.05 vs. non-SE animals, Fig. 6d-h). Both BQ788 and Y-27632 (a ROCK inhibitor) attenuated the reductions in the DRP1 S616/S637 phosphorylation ratio and DRP1 expression (p < 0.05 vs. vehicle, Fig. 6a-c), and inhibited mitochondrial elongation and sphere formation following SE (p < 0.05 vs. vehicle, Fig. 6d-h). These findings indicate that ETB receptor activation may be involved in the LIMK2-DRP1-mediated impairment of mitochondrial fission during programmed necrotic cell death.

Exogenous ET-1 injection induces LIMK2-mediated neuronal death in the hippocampus

To investigate the direct role of ET-1 in LIMK2-mediated neuronal death, we injected ET-1 into the hippocampus of normal rats.

Fig. 6 Effect of BQ788 and Y-27632 on dysfunction of mitochondrial fission at 3 days after SE. a Western blot images of DRP1, DRP1 S616 and DRP1 S637 in the hippocampus. As compared to vehicle, both BQ788 and Y-27632 (a ROCK inhibitor) attenuate the reductions in DRP1 and DRP1 S616 expression, but increase DRP1 S637 expression. b Quantitative values (mean ± S.E.M.) of DRP1, DRP1 S616, and DRP1 S637 levels (n = 10 per group). Significant differences from vehicle, *p < 0.05.
c Quantitative values (mean ± S.E.M.) of the DRP1 S616/S637 ratio in the hippocampus (n = 10 per group). Significant differences from vehicle, *p < 0.05. d Quantitative values (mean ± S.E.M.) of mitochondrial length in the CA1 neurons (n = 10 per group). Significant differences from non-SE animals, *p < 0.05. e-h Representative photographs of mitochondria and NeuN in the CA1 neurons. SE increases mitochondrial length and sphere formation. Both BQ788 and Y-27632 alleviate mitochondrial elongation and sphere formation induced by SE. Bar = 6.25 μm

As compared to vehicle, ET-1 (40 pmol/μl) increased neuronal LIMK2 expression, accompanied by a reduction in NeuN expression, at 3 days after injection (p < 0.05, Fig. 7a, b and d). Co-treatment of ET-1 and BQ788 attenuated the up-regulation of LIMK2 expression induced by ET-1 at this time point (p < 0.05 vs. vehicle, Fig. 7c and d). ET-1 injection also induced mitochondrial elongation and sphere formation, as compared to vehicle (p < 0.05, Fig. 7e-g). Co-treatment of ET-1 and BQ788 prevented the mitochondrial elongation and sphere formation induced by ET-1 (p < 0.05, Fig. 7e and h). These findings also support that ETB receptor activation may play an important role in the LIMK2-mediated impairment of mitochondrial dynamics during programmed necrotic cell death.

Discussion

The increases in the production and release of ET-1 are involved in various pathological responses of the brain other than vascular constriction [20][21][22][23][24][25]. Indeed, ETB receptor activation plays a substantial role as a proliferative and anti-apoptotic factor [26][27][28]. However, ET-1 also evokes necrotic neuronal damage [13] and causes reactive nitrogen species-mediated tissue injury [12]. In the present study, ETB receptor expression was markedly elevated in CA1 pyramidal cells and astrocytes following SE, accompanied by the rapid release of ET-1. The present study also demonstrates that both BQ788 and Cav-1 peptide effectively inhibited SE-induced vasogenic edema and BBB breakdown. Therefore, it would seem likely that ETB receptor-mediated NOS activation might affect neuronal death via vasogenic edema formation or excessive production of reactive oxidizing species following SE. However, BQ788 infusion attenuated SE-induced neuronal damage, while Cav-1 peptide infusion did not affect it. Therefore, these findings indicate that ET-1 may participate in neuronal death via ETB receptor-mediated pathways following SE, which may be a vasogenic edema- and NOS-independent mechanism.

LIMK2 regulates cofilin activity, which is one of the regulators of actin dynamics. Interestingly, LIMK2 also modulates cyclin D1 repression [7,29]. Recently, we have reported that SE increases LIMK2 expression and impairs DRP1-mediated mitochondrial fission in necrotic neurons, and that LIMK2 knockdown attenuates necrotic neuronal damage by recovery of the impaired mitochondrial fission [7]. Consistent with that previous study, the present data show that SE up-regulated LIMK2 expression in CA1 neurons vulnerable to SE, which was accompanied by impairment of DRP1-mediated mitochondrial fission. Furthermore, exogenous ET-1 injection resulted in LIMK2 over-expression and dysfunction of mitochondrial fission. In addition, BQ788 significantly inhibited SE- and exogenous ET-1-induced LIMK2 expression. These findings indicate that ETB receptor activation may play an important role in SE-induced LIMK2 induction and the dysfunction of mitochondrial fission, independent of NO production.
A dysfunction of mitochondrial fission improperly segregates mitochondria, which decreases ATP levels [19,30]. Furthermore, elongated mitochondria cannot be transported to the proper distal regions in either dendrites or axons, resulting in a local limit of the ATP supply [31,32]. DRP1 deletion also inhibits mitochondrial respiratory function and increases reactive oxygen species production [33,34]. Following DNA damage, DRP1 over-expression increases neuronal viability by restoring the mitochondrial dynamics [35]. Since DRP1 is required for caspase activation during apoptosis [36], it is likely that the LIMK2-mediated reduction in DRP1 expression may preferentially induce necrosis rather than apoptosis. Therefore, these findings suggest that ET-1 may be involved in neuronal necrosis through up-regulation of LIMK2, which provokes the impairment of DRP1-mediated mitochondrial dynamics.

Although the underlying mechanism is still unknown, ROCK inhibitors have neuroprotective effects against various neuronal injuries [37,38]. Recently, we have reported that a ROCK inhibitor down-regulates LIMK2 expression by up-regulation of p27Kip1 expression following SE [7]. Furthermore, ROCK is one of the effectors of the ET-1-mediated signaling pathway [39][40][41]. The present study demonstrates that DRP1 expression, the DRP1 S616/S637 phosphorylation ratio, and mitochondrial fission were reduced with ROCK1 over-expression following SE, and that these changes were inhibited by both BQ788 and Y-27632. These findings demonstrate that ROCK1-induced LIMK2 over-expression may be a novel underlying mechanism for ET-1-induced neuronal death.

ETB receptor activation leads to severe vasogenic edema via the impairment of aquaporin-4 (AQP4, a water channel) in astrocytes within the piriform cortex (PC) following SE [42]. In the present study, up-regulation of ETB receptor expression was observed in astrocytes and CA1 neurons. Furthermore, BQ788 infusion effectively prevented SE-induced vasogenic edema formation as well as neuronal death in the hippocampus. Based on the inhibitory role of ET-1 in astroglial AQP4 functionality [42], these findings suggest that up-regulated ETB receptor expression in astrocytes may contribute to the dysfunction of AQP4 in astrocytes and lead to vasogenic edema in the hippocampus, as in the PC.

Conclusion

In summary, ET-1-mediated signaling is involved in mitochondrial dynamics during neuronal necrosis (Fig. 8). These findings suggest that ET-1 may be involved in SE-induced neuronal necrosis independent of NOS synthesis and BBB disruption. Therefore, the ET-1-mediated signaling pathway may be an important therapeutic target for programmed necrotic neuronal death.

Experimental animals and chemicals

Male Sprague-Dawley (SD) rats were obtained from the Experimental Animal Center, Hallym University, Chunchon, South Korea, and housed in standard rodent cages (3 rats per cage) at 22 ± 2°C and 55 ± 5 % humidity on a 12:12 light/dark cycle. Animals had free access to food and water and were used after at least 1 week of adaptation. Experimental procedures were approved by the Institutional Animal Care and Use Committee of Hallym University (Chunchon, Republic of Korea). All reagents were obtained from Sigma-Aldrich (St. Louis, MO, USA), unless otherwise noted.

Surgery

For microdialysis and ET-1 injection, rats were anesthetized with 1-2 % isoflurane in O2 and placed in a stereotaxic frame.
A microdialysis guide cannula was inserted into the right hippocampus using the following coordinates: 3 mm posterior, 2 mm lateral, and 3.2 mm depth from bregma [43]. Seven days after surgery, the animals were used for microdialysis. In some animals, a cannula (27 G) was inserted by the same method. ET-1 (40 pmol in 1 μl of saline) or a mixture of ET-1 (80 pmol in 0.5 μl of saline) and BQ788 (an ETB receptor antagonist; 6 pmol in 0.5 μl of saline) was infused over a 5-min period using a microinjection pump (0.2 μl/min, KD Scientific, Hollistone, MA, USA). As a control, rats were given 1 μl of saline instead of ET-1. Three days after injection, the animals were used for the immunohistochemical study.

Seizure induction

Three days after surgery, SE was induced by a systemic injection of pilocarpine (380 mg/kg, i.p.). To reduce the peripheral effects of pilocarpine, atropine methylbromide (5 mg/kg, i.p.) was injected 20 min before the single dose of pilocarpine. Animals were maintained in SE for 2 h, after which diazepam (10 mg/kg, i.p.) was administered to terminate seizure activity, and repeated as needed. As controls, age-matched normal rats were treated with saline instead of pilocarpine.

ET-1 and NO assay

One day before SE induction, a microdialysis probe (CMA 12) was inserted into the hippocampus. The microdialysis probe was perfused with Ringer's solution [42]. The perfusion rate was 1 μl/min for 4 h before and after SE induction, and 240 μl of efflux from the microdialysis probe was collected in each period. To measure the ET-1 and NO concentrations in the perfusates, we used an ET-1 ELISA kit (Enzo Life Science) and a nitrate/nitrite assay kit (Cayman Chemical Company, USA), according to the manufacturers' instructions [42].

Tissue processing

Rats were transcardially perfused with phosphate-buffered saline (PBS) followed by 4 % paraformaldehyde in phosphate buffer (PB, 0.1 M, pH 7.4) [42]. Brains were removed and post-fixed in the same fixative for 4 h, then moved to 30 % sucrose solution until saturated, and then frozen and sectioned at 30 μm on a cryostat. Consecutive sections were collected in six-well plates containing PBS [48]. For western blot, hippocampal tissue was homogenized and centrifuged [42], and the supernatant was collected. The total protein concentration was assayed with a Micro BCA Protein Assay Kit (Pierce Chemical, Rockford, IL, USA). For quantitative real-time PCR, total RNA in the hippocampus was obtained using Trizol Reagent, according to the manufacturer's protocol (Ambion, Austin, TX, USA) [42].

Immunohistochemistry

To measure the vasogenic edema lesion, tissue sections were immersed for 10 min in 3 % H2O2 and for 30 min in blocking solution (10 % normal horse serum in PBS). Horse anti-rat IgG (Vector, USA) was applied overnight at 4°C. Immunoreactivity was developed with 3,3′-diaminobenzidine. We analyzed the volume of the vasogenic edema lesion with the modified Cavalieri method [18,49]. Table 1 lists the antibodies used in the double immunofluorescence study.
Sections were incubated in a mixture of antisera overnight at room temperature, and subsequently in a mixture of FITC- and Cy3-conjugated IgG (Jackson ImmunoResearch Laboratories Inc., West Grove, PA, USA; diluted 1:250). To verify the specificity of the antibodies (negative controls), the primary antibody was omitted. Images were taken with an AxioImager M2 microscope. Fluorescent intensity was measured using AxioVision Rel. 4.8 software and the ImageTool program V. 3.0 [42,50].

Fluoro-Jade B staining

Sections mounted on gelatin-coated slides were immersed in 80 % ethanol containing 1 % sodium hydroxide. The tissue sections were then immersed in 70 % ethanol for 2 min and in distilled water for 2 min. After immersion in potassium permanganate for 15 min, tissues were washed with distilled water and then incubated in 0.001 % FJB (Histo-Chem Inc., Jefferson, AR, USA). Next, the slides were rinsed, dehydrated, and finally mounted with DPX. Two different investigators performed cell counts with optical dissector methods [7].

Quantitative real-time PCR

Quantitative real-time PCR was performed using the MyiQ Single Color Real-Time PCR System (Bioneer, Taejon, South Korea). The primers used in the present study were as follows: forward GACCAGCGTCCTTGTTCCAA, reverse TTGCTACCAGCGGATGCAA for rat ET-1; forward CTTCCTGTGTTGTCCGCGCC, reverse AGGCCTCGTTGGCTGTCCTG for rat LIMK2. The reaction procedure was set as one cycle of 95°C for 3 min, followed by 40 cycles of 60°C for 45 s and 95°C for 10 s. GAPDH (forward ACATCAAGAAGGTGGTGAAG; reverse ATACCAGGAAATGAGCTTCA) was used for normalization of the qRT-PCR data. The specificity of the PCR reactions was assessed by analysis of the melting curves for each data point [42].

Data analysis

Student's t-test or one-way ANOVA was applied for the statistical analyses. For post-hoc comparisons, we applied Bonferroni's test. A p-value < 0.05 was considered statistically significant [52,53].
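The paper states only that GAPDH was used for normalization. Assuming the common 2^-ddCt relative quantification (an assumption, not stated by the authors), the fold-change calculation behind values such as the reported 3.12-fold ET-1 mRNA increase would look like this; the Ct values below are hypothetical, not the study's data.

```python
def fold_change(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """2^-ddCt relative expression of a target gene vs. a control group,
    normalized to GAPDH (assumed method; the paper states only GAPDH
    normalization)."""
    dd_ct = (ct_target - ct_gapdh) - (ct_target_ctrl - ct_gapdh_ctrl)
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values, NOT the study's data:
print(fold_change(24.1, 18.0, 26.0, 18.2))   # ~3.2-fold increase vs. control
```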
4,740.4
2015-10-06T00:00:00.000
[ "Biology" ]
On Some Weevil Taxa of the Subfamily Entiminae (Coleoptera, Curculionidae) Described by V. I. Motschulsky from Japan and New Data on the Morphology of the Tribes Cneorhinini and Tanymecini : Lectotypes of Cneorhinus viridimetallicus Motschulsky, 1860, C. cuprescens Motschulsky, 1866, C. nodosus Motschulsky, 1860, and Dermatodes carinulatus Motschulsky, 1866 are designated. A new synonymy, Catapionus viridimetallicus (Motschulsky, 1860) (= Cneorhinus cuprescens Motschulsky, 1866, syn. n.), and a new combination, Scepticus carinulatus (Motschulsky, 1866), comb. n., are established. New data on the morphology of some genera of the tribes Cneorhinini and Tanymecini are presented. The record of Amystax fasciatus Roelofs, 1873 for Kunashir Island is shown to be based on a misidentification of Scepticus carinulatus.

This paper continues the series of publications reporting the results of my study of V.I. Motschulsky's collection kept at the Zoological Museum of Lomonosov Moscow State University (ZMMU) (Savitsky, 2018, 2020). Here I designate the lectotypes of Cneorhinus viridimetallicus Motschulsky, 1860, C. cuprescens Motschulsky, 1866, C. nodosus Motschulsky, 1860, and Dermatodes carinulatus Motschulsky, 1866, establish a new synonymy and clarify the taxonomic position of D. carinulatus, give new morphological data for some genera of the tribes Cneorhinini and Tanymecini, and also demonstrate that the previous records of Amystax fasciatus Roelofs, 1873 for Kunashir Island were erroneous. This work was greatly helped by the publication of the major monograph by K. Morimoto and co-authors (Morimoto et al., 2015) on weevils of the subfamily Entiminae in the fauna of Japan. When labeling the insects from the first batch, Motschulsky indicated the collection locality as "Japan," whereas in the labels of the insects from the second batch the locality was indicated as "Japonia." Besides, the type specimens from the first batch usually have a small additional label "type," while those from the second batch have no such label. This information should be taken into account when deciding whether a particular specimen belongs to the type series, because some species described from Japan by V.I. Motschulsky in his first paper (Motschulsky, 1860) were also represented in the second batch of material.

MATERIALS AND METHODS

This work is based on examination of the material kept in the ZMMU collection and kindly provided by my colleagues. The body length was measured with an eyepiece micrometer, from the anterior margins of the eyes to the tip of the elytra. The type specimens were mounted using the previously described technique (Davidian and Savitsky, 2017). The genitalia and terminalia were studied at magnifications up to ×400 and documented from glycerol preparations, using a Micromed-3 microscope equipped with a ToupCam 9.0 MP digital eyepiece camera. The type localities for the species described from Japan by V.I. Motschulsky were determined based on the historical facts described earlier (Savitsky, 2020). The lectotype designated herein is a male with the following labels (Fig. 1, 6): (1) a tiny yellow square; (2) "type": a small label in V.I. Motschulsky's handwriting on white paper; (3) "Cneorhinus viridimetallicus Motsch Japan" in V.I. Motschulsky's handwriting on yellow paper; (4) "Lectotypus Cneorhinus viridimetallicus Motschulsky, 1860. V. Savitsky des. 2021". The lectotype (Fig. 1; Fig. 2) is remounted onto a rectangular cardboard plate, with the abdominal ventrites glued separately in the back left corner.
The cleaned genitalia and terminalia are kept in a microvial with glycerol. The specimen is complete, with a hole from the mounting pin in the right elytron. The body is 11.4 mm long and 5.2 mm wide. Type locality of Cneorhinus viridimetallicus: Hokkaido Island, environs of Hakodate, or Honshu Island, north of Tokyo. One syntype was studied: Cneorhinus cuprescens Motschulsky, 1866 (ZMMU), a female which is designated here as the lectotype. The specimen has the following labels (Fig. 4, 6). The lectotype (Fig. 3, 1-4, 8; Fig. 4) is remounted onto a rectangular cardboard plate, with the abdominal ventrites glued separately in the back left corner. The cleaned genitalia, terminalia, and proventriculus are kept in a microvial with glycerol. The onychium of the middle right tarsus in the lectotype is missing, the right elytron has a hole from the mounting pin, the genitalia and terminalia are partly damaged (Fig. 3, 2-4), and the internal parts are destroyed by dermestids. The body is 12.2 mm long and 5.9 mm wide. The lectotype of Cneorhinus cuprescens differs from females of Catapionus viridimetallicus only in the color of scales and belongs to a color form of this variable species, as was justly supposed earlier by K. Morimoto and co-authors (Morimoto et al., 2015). On this basis, I establish the above synonymy. The data below were obtained by examining the types of Cneorhinus viridimetallicus and C. cuprescens and also the material from Kunashir and Sakhalin islands. They supplement the previously published description of Catapionus viridimetallicus (Morimoto et al., 2015). Male. Elytra usually merged along suture. Wings strongly reduced, lobe-shaped, 3.3-4.8 times as long as wide, without distinct veins; elytra 1.3-1.9 times as long as wings. Corbels of hind tibiae elongate oval, glabrous or only with several setae and small elongate scales. Abdomen 1.1-1.2 times as long as wide. Intercoxal protrusion of ventrite I 1.05-1.30 times as wide as coxal cavities. Anal ventrite with preapical depression, its apex dorsally with 2 protrusions. Sternite VIII usually with 2 small sclerites in proximal part. Tegmen with long manubrium and with parameres positioned wide apart. Apophyses and endophallus much longer than penis tube. Aggonoporium (flagellum in the terminology of Morimoto et al., 2015) composed of paired acicular sclerites extending along entire length of endophallus, connected basally by bridge, and gradually narrowing toward ostium. Ejaculatory duct developed along entire length of endophallus, positioned between sclerites of aggonoporium; duct walls membranous before endophallus and sclerotized inside it. Ventral wall of endophallus at the level of middle portion of penis tube with U-shaped plate, moderately sclerotized over greater part of its area and with strongly sclerotized inner margins of its lobes ("pair of supporting stripes" in Morimoto et al., 2015). Crop unarmed. Lobes of proventriculus distally with merged plates. Lamella of spiculum ventrale weakly transverse, with quite narrowly rounded apical margin and excurved lateral margins. Manubrium 1.25-1.75 times as long as lamella, narrow; caput quite small. Tergites VII and VIII transverse, with widely incurved lateral margins, covered only with simple setae; apical margin of tergite VII weakly emarginate, that of tergite VIII narrowly rounded or blunted. Distribution. Russia (Sakhalin Island, the Southern Kuril Islands), Japan.
The previous records of C. viridimetallicus for East Siberia, Northeast China, and Korea (Morimoto et al., 2015; Alonso-Zarazaga et al., 2017, etc.) remain to be confirmed; all these records probably refer to the closely related species C. fossulatus Motschulsky, 1860. Morphological Notes on Weevils of the Tribe Cneorhinini All the species of the genus Catapionus studied by me (over 20 species, including the type species C. basilicus Boheman, 1842) have glabrous or sparsely pubescent corbels and glabrous articulatory areas of the hind tibiae, and also a characteristic structure of the aggonoporium and ejaculatory duct, described above for C. viridimetallicus. A similar morphology of the aggonoporium is also observed in species of the genera Dermatoxenus Marshall, 1916 and Eustalida Faust, 1891. In my opinion, these genera are the closest to Catapionus but differ well from the latter in the hind tibiae with corbels completely covered with scales. In some species of Dermatoxenus from India, Vietnam, and Indonesia, the greater part of articulatory areas of the hind tibiae is also densely covered with scales (Marshall, 1916; Mahendiran and Ramamurthy, 2012; my data). In all the species of the genus Dermatodes Schoenherr, 1840 studied by me (7 species from the islands of Java, Sumatra, and Borneo), and also in Dermatodina prosvirovi Savitsky, 2021 and D. holynskiorum (Kania et Stojczew, 2001), the corbels and the greater part of articulatory areas of the hind tibiae are densely covered with scales (Kania and Stojczew, 2001; Savitsky, 2021). There are no exact data on the apical pubescence of the hind tibiae in other species of the genera Dermatodina and Antinia Pascoe, 1871. In all the species of the Madagascan genus Stigmatrachelus Schoenherr, 1840 studied by me (over 10 species, including the type species S. cinctus (Olivier, 1807)), the corbels of the hind tibiae are densely covered with scales while their articulatory areas are glabrous. Thus, the pubescence patterns of the corbels and articulatory areas of the hind tibiae may be useful for diagnostics of supraspecific taxa in the tribe Cneorhinini, even though these features are rarely mentioned in descriptions. It should also be noted that the aggonoporium in species of the genera Dermatodes, Dermatodina, Antinia, and Stigmatrachelus, as well as in most Curculionidae, is much shorter and has a different shape as compared with that in species of the genera Catapionus, Dermatoxenus, and Eustalida (e.g., see Kania and Dąbrowska, 1995; Kania and Stojczew, 2001; Morimoto et al., 2015; Kania and Piwnik, 2017; Savitsky, 2021). Strongly reduced lobe-like wings without distinct veins, as in Catapionus viridimetallicus, are also typical of all the other Catapionus species studied by me in this respect: C. basilicus, C. ballioni Heyden, 1880, C. fossulatus, and C. mopsus Grebennikov, 2016. In males of C. viridimetallicus and C. fossulatus the additional sclerites of sternite VIII mentioned above vary strongly in the degree of development and may be totally absent in some specimens; therefore, additional research is needed to determine the diagnostic value of this character within the tribe Cneorhinini. The ejaculatory duct in most Curculionidae ends with the gonopore at its joining with the endophallus. The endophallus walls surrounding the gonopore are usually reinforced with an aggonoporium.
During copulation the endophallus is completely everted through the ostium, so that the aggonoporium and gonopore get inserted into the vagina or the bursa copulatrix, and the sperm is transferred through the gonopore into the female reproductive tract. As described above, in species of the genus Catapionus the aggonoporium and the ejaculatory duct are developed along the entire length of the endophallus. Together they form a single element that lies freely in the endophallus cavity, while the gonopore opens near the ostium of the aedeagus. Correspondingly, it may be supposed that during copulation of Catapionus weevils the endophallus is not everted but folded inside the penis tube, and only the aggonoporium is completely or partially extended through the ostium. This hypothesis can only be confirmed by thorough observations of the mating process in Catapionus weevils. This functional variant may be less injurious to the membranous walls of the endophallus, as compared with the usual mechanism of complete eversion. One syntype was studied: Cneorhinus nodosus Motschulsky, 1860 (ZMMU), a female which is designated here as the lectotype. The specimen has the following labels (Fig. 5, 6): (1) a tiny yellow square; (2) "type" in V.I. Motschulsky's handwriting on white paper; (3) "Cneorhinus nodosus Motsch Japan" in V.I. Motschulsky's handwriting on yellow paper; (4) "Lectotypus Cneorhinus nodosus Motschulsky, 1860 V. Savitsky des. 2021" in V.Yu. Savitsky's handwriting on red paper. The lectotype (see Fig. 5; Fig. 6) is remounted onto a rectangular cardboard plate, with the abdominal ventrites glued separately in the back left corner. The metathorax and wings are glued onto a separate rectangular plate of transparent plastic; the genitalia, terminalia, and proventriculus are kept in a microvial with glycerol. The right elytron has a hole from the mounting pin. The body is 11.2 mm long and 5.2 mm wide. Type locality of Cneorhinus nodosus: Hokkaido Island, environs of Hakodate, or Honshu Island, north of Tokyo. V.I. Motschulsky's collection also includes a female of Dermatoxenus caesicollis with the following labels: (1) a tiny yellow square; (2) "Dermatodes nodosus Motsch Japonia" in V.I. Motschulsky's handwriting on yellow paper; (3) an inventory label printed on pink paper with the number "№ ZMMU Col 02746." This specimen is not a syntype of Cneorhinus nodosus because it was received by V.I. Motschulsky with the second batch of material from Japan, when the description of C. nodosus had already been published. This is confirmed by the fact that V.I. Motschulsky indicated the collection locality as "Japonia" (see Introduction) and labeled the specimen as Dermatodes, in the same way as in his second paper on the Japanese insects (Motschulsky, 1866). Elytra not merged along suture. Wings reduced, with well-developed cubital vein and shortened apical membrane, 2.7 times as long as wide; elytra 1.2 times as long as wings. Corbels of hind tibiae wide, almost oval, as densely covered with scales as most of body surface; integument completely obscured with scales. Articulatory areas on hind tibiae glabrous. Abdomen approximately 1.2 times as long as wide. Intercoxal protrusion of ventrite I 1.2-1.3 times as wide as coxal cavities. Apex of anal ventrite dorsally with 2 indistinct protrusions. Lamella of spiculum ventrale noticeably longer than wide, with quite narrowly rounded apical margin and excurved lateral margins. Manubrium approximately 1.5 times as long as lamella, narrow; caput quite small.
Tergites VII and VIII weakly transverse, of similar shape, covered only with simple setae; apical margin of tergite VII weakly emarginate, that of tergite VIII broadly blunted. Coxites long, moderately sclerotized; their dorsal surface distally along inner margin with areas of very fine and dense tubercles surrounding much larger, circular tubercles at bases of setae. Styli apical, elongate, dorsally densely covered with setae. Female reproductive tract without sclerotized elements. Vagina longer than coxites, approximately twice as long as bursa copulatrix. Collum of spermatheca much larger than ramus, gradually narrowing toward apex; cornu uniformly curved. Entire surface of spermatheca with reticulate microsculpture. Spermathecal duct not sclerotized, thickened in distal portion. Crop unarmed. Lobes of proventriculus distally with merged plates. Notes on Wing Reduction in Weevils of the Tribes Cneorhinini and Blosyrini According to K. Morimoto and co-authors (Morimoto et al., 2015), the hind wings in species of the tribe Cneorhinini are more or less reduced and non-functional. This is quite true of Dermatoxenus caesicollis as well as of species of the genus Catapionus and Eustalida sp. from Nepal, studied by me. In the latter species, similar to D. caesicollis, the scutellum is concealed in dorsal view; the elytra are not merged along the suture; the wings are 4.5 times as long as wide and have a well-developed cubital vein and a shortened apical membrane; the elytra are 1.15 times as long as the wings. However, many species of the tribe Cneorhinini, which used to be placed in a separate tribe Dermatodini, possess fully developed wings with complete venation and a long apical membrane. For instance, in two Dermatoxenus species from Vietnam the wings are approximately 2.8 times as long as wide and 1.55 and 1.40 times as long as the elytra, respectively. In the species with relatively longer wings the scutellum is concealed in dorsal view, while the other species has a well-developed scutellum, slightly raised above the level of the sutural interstriae. The wings are also fully developed in Dermatodes albarius Faust, 1892, D. dajacus Heller, 1915, and four species of the genus Stigmatrachelus studied by me. It remains unknown if their wings are functional, since there are no published data on the flight capability of these weevils. Marshall (1916) noted that the tribe Blosyrini comprised wingless species, whereas K. Morimoto and co-authors (Morimoto et al., 2015) stated that weevils of this tribe had rudimentary wings. However, wings are well developed at least in specimens of Blosyrus asellus (Olivier, 1807) from Vietnam. These weevils seem to be able to fly, because they were collected in considerable numbers in light traps in sweet potato plantations on the Hawaiian Islands (McQuate et al., 2016). Blosyrus oniscus (Olivier, 1807) and B. ? herthus (Herbst, 1797), similar to B. asellus, possess a well-developed scutellum and non-merged elytra; at the same time, they have strongly reduced, narrow, lobe-like wings with no traces of venation, only 1/2-1/3 as long as the elytra. Thus, in the process of wing reduction in weevils of the subfamily Entiminae the visible part of the scutellum may totally disappear (Dermatoxenus caesicollis) or be preserved (Blosyrus oniscus). In both cases, the elytra do not always merge along the suture.
Of special interest are cases when the visible part of the scutellum disappears completely before the beginning of wing reduction, as in some Dermatoxenus species. On the whole, the degree of wing development in Entiminae weevils should be assessed by examination of the wings themselves and not merely inferred from external features consistent with aptery, such as disappearance of the scutellum and humeri, merging of the elytra along the suture, reduction of the metanotum length, etc. (for details, see Zherikhin and Egorov, 1990). Dermatodes carinulatus One syntype was studied: Dermatodes carinulatus Motschulsky, 1866 (ZMMU), a female which is designated here as the lectotype. The specimen has the following labels (Fig. 7, 6). The lectotype (Fig. 7; Fig. 8, 1-4, 6, 7, 10; Fig. 9, 1) is remounted onto a rectangular cardboard plate, with the abdominal ventrites glued separately in the back left corner. The cleaned genitalia, terminalia, and proventriculus are kept in a microvial with glycerol. Only the first tarsomere is preserved in the fore right and hind left tarsi; the elytra are separated in the apical half; the lateral body pubescence is partly worn out. The body is 7.7 mm long and 3.5 mm wide. Besides the lectotype of D. carinulatus, the ZMMU collection includes the following material of S. carinulatus. The specimens of Scepticus carinulatus studied by me generally correspond to the published descriptions of S. insularis and S. konoi (Morimoto et al., 2015). Some additional characteristics of the female of S. carinulatus are given below. Material. Russia: Sakhalin Province. Pronotum at base 1.05-1.15 times as wide as at apex (1.15 times as wide in the lectotype of Dermatodes carinulatus). Elytra in middle third almost parallel-sided; odd interstriae convex, 9th and 10th interstriae distinctly depressed at the level of hind coxae. Abdomen 1.15-1.22 times as long as wide. Intercoxal protrusion of ventrite I 1.75-2.0 times as wide as coxal cavities. Anal ventrite weakly transversely convex in middle part, its apex dorsally without protrusions. Lamella of spiculum ventrale noticeably longer than wide, with rounded apical margin and widely excurved lateral margins. Manubrium 1.65-2.30 times as long as lamella, narrow; caput quite small. Tergites VII and VIII of strongly different shapes (Fig. 8, 6, 7); tergite VII noticeably wider than long, with broadly blunted or weakly emarginate apical margin; tergite VIII barely wider than long, with broadly incurved lateral margins and relatively narrowly rounded apical margin. Lamella of spiculum ventrale and tergite VIII covered with simple setae; tergite VII covered with simple and pinnate setae (Fig. 8, 10). Coxites long, quite strongly sclerotized, distally with S-curved and strongly sclerotized inner dorsal margin. Styli apical, large, elongate, dorsoventrally flattened, strongly sclerotized, glabrous along nearly their entire length, only near base with dorsal tuft of long setae. Female reproductive tract without sclerotized elements; vagina about as long as coxites, bursa copulatrix very small. Spermatheca usually strongly sclerotized, ramus noticeably larger than collum, cornu almost straight in distal part, with beak-shaped bend only at its very apex (Fig. 9, 1-6). Collum and basal part of ramus with reticulate microsculpture, more distinct in less strongly sclerotized spermathecae; surface of cornu smooth. Spermathecal duct sclerotized along its entire length.
Lobes of proventriculus distally with merged plates; each lobe with club-shaped apical structure bearing denticles directed inwards and backwards (Fig. 8, 5). Sclerotized parts of spicules directed into proventriculus lumen flattened, gradually narrowing toward apex. Comparative notes. Scepticus carinulatus resembles S. konoi in the elytra being almost parallel-sided in their middle third and in the convex odd interstriae. On the contrary, the distinct depression of the 9th and 10th interstriae at the level of the hind coxae corresponds to the description of S. insularis (Morimoto et al., 2015). The ratio of the pronotum width at its base to that at its apex is 1.05 in S. konoi and 1.15 in S. insularis (Morimoto et al., 2015), whereas in S. carinulatus it is 1.05-1.15. Scepticus insularis and S. konoi are widely distributed in Japan, but only S. konoi was recorded for Hokkaido Island (Morimoto et al., 2015). In my opinion, of all the characters proposed by the Japanese authors for differentiating these species, only the structural features of the elytra and aggonoporium ("flagellum" in the cited publication) may be reliable. The two species can hardly be differentiated by the ratio of the basal and apical pronotal widths, judging by the level of variation of this character determined by the measurements of only seven specimens of S. carinulatus. The distinct depression of the 9th and 10th interstriae at the level of the hind coxae most probably constitutes a secondary sexual character in this group. In particular, this depression is present in all the females of several species of the genus Scepticus and also in a very closely related species, Meotiorhynchus querendus Sharp, 1896, studied by me. At the same time, in males of S. tigrinus (Roelofs, 1873), S. noxius (Faust, 1886), and M. querendus the 9th and 10th interstriae are flat or weakly depressed at the level of the hind coxae. Thus, S. konoi is quite possibly a synonym of S. carinulatus (Motschulsky, 1866); however, for the time being I refrain from establishing the new synonymy since I do not have sufficient material of the genus Scepticus from Japan and I have not examined any males of S. carinulatus. The specimen of Scepticus carinulatus collected by M.V. Shestopalov on Kunashir Island has a label "Amystax fasciatus Roel. V. Zherichin det. 91." Before the finding of this specimen, the genus Amystax Roelofs, 1873 was not included in the review of weevils of the Far East of the USSR (Zherikhin and Egorov, 1990). Thus, the records of Amystax fasciatus for Kunashir Island (Egorov et al., 1996; Ren et al., 2013b; Alonso-Zarazaga et al., 2017) were based on misidentification and in fact referred to S. carinulatus. It should also be noted that, according to K. Morimoto and co-authors (Morimoto et al., 2015), species of the genus Amystax are distributed only in Japan, but they are unknown on Hokkaido Island and in the northern half of Honshu Island. Morphological Notes on Weevils of the Tribe Tanymecini My examination of Meotiorhynchus querendus, Scepticus tigrinus, S. noxius (Kyrgyzstan, Trans-Alay Range), and Scepticus sp. (India, Himachal Pradesh, Pir Panjal Range) has revealed generally the same structure of the female terminalia and genitalia, and also of the crop and proventriculus, as that in S. carinulatus. In particular, tergite VII in females of all these species is covered with simple and pinnate setae (as in Fig. 8, 10, 11);
the coxites distally have an S-curved and strongly sclerotized inner dorsal margin; the styli are apical, large, scoop-shaped, and glabrous, with only a basal tuft of setae; the spermatheca has a characteristic shape with the cornu almost straight distally and beak-shaped at the apex (Fig. 9, 7-15) and the duct sclerotized along its entire length; the crop is covered with fine denticles proximally and with narrow spicules distally (as in Fig. 8, 8, 9); the proventricular lobes bear club-shaped apical structures with rather large denticles (as in Fig. 8, 5). To estimate the diagnostic significance of these characters, I have additionally studied species from 15 genera representing all the subtribes of Tanymecini. Species of the tribe Tanymecini vary strongly in the shape and sclerotization of the coxites; yet the exact sclerotization pattern of the inner coxite margins typical of Scepticus and Meotiorhynchus has not been found in any other species of this tribe. Most of the studied species of the tribe Tanymecini differ well from Scepticus and Meotiorhynchus in the spermatheca morphology (compare the drawings in Supare et al., 1990; Poorani and Ramamurthy, 1997; Ren et al., 2007, 2013a; Ramamurthy and Ayri, 2010; Morimoto et al., 2015; Kumar et al., 2016; Song et al., 2017). Only some species of the genera Geotragus Schoenherr, 1845 and Hyperomias have spermathecae of nearly the same shape as those in Scepticus and Meotiorhynchus. The spermathecal duct is more or less sclerotized along its entire length or part of it in most species of the tribe Tanymecini. In the subtribe Piazomiina the sclerotized duct is coiled (e.g., see Song et al., 2017: figs. 19, 22, 40, 41) or sinuous, similar to that of Scepticus sp. and S. noxius. Such species as Phacephorus nebulosus, Megamecus variegatus, Protenomus saisanensis, Diglossotrox mannerheimi, and Aspidiotes cottyi have membranous or barely sclerotized spermathecal ducts. The spermathecal duct of Ph. nebulosus, M. variegatus, P. saisanensis, and D. mannerheimi is considerably shorter than that of other species and thickened distally; the latter feature is also typical of species of the genus Chlorophanus. All the additionally studied species of the tribe Tanymecini differ well from Scepticus and Meotiorhynchus in the crop morphology. In most of them the crop is covered with narrow spicules along its entire length, while in Chlorophanus vittatus and Lepropus sp. from Vietnam the spicules are present only in the proximal part. The proventricular lobes in all the studied species of the tribe Tanymecini bear apical structures with denticles directed backwards. In most species these structures are triangular or trapeziform pads of varying width, covered with fine imbricated denticles (e.g., see Song et al., 2017: fig. 17). Only Xylinophorus sp. from Iran and ?Geotragus sp. from Vietnam have club-shaped structures with large denticles (as in Fig. 8, 5), similar to those in Scepticus and Meotiorhynchus. Thus, most of the specific features of the crop, proventriculus, and the female terminalia and genitalia typical of Scepticus and Meotiorhynchus can also be found in other genera within the tribe Tanymecini. However, the complex of these features very reliably characterizes Scepticus and Meotiorhynchus and differentiates them from other genera of this tribe.
Besides, Meotiorhynchus querendus and all the Scepticus species have a specific structure of the antenna, with the 7th funicle segment strongly enlarged and incorporated into the club (as in Fig. 7, 2), and also a similar morphology of the aedeagus, including the aggonoporium. In addition, it should be noted that the elytra of both M. querendus and the Scepticus species examined by me are usually merged along the suture. The wings of M. querendus are strongly reduced, lobe-shaped, with no trace of veins, approximately 2.25 times as long as wide and 0.28 times as long as the elytra; the wings of Scepticus noxius are totally absent, and the degree of wing development in other Scepticus species is unknown. On the whole, my new data confirm the opinion of K. Morimoto and co-authors (Morimoto et al., 2015) that the genera Meotiorhynchus and Scepticus are taxonomically close, and also demonstrate that Meotiorhynchus querendus and species of the genus Scepticus from both the western and the eastern parts of its disjunct range form a single monophyletic group, distinct from the other genera of the subtribe Tanymecina. COMPLIANCE WITH ETHICAL STANDARDS All the applicable international, national, and/or institutional guidelines for the care and use of animals were followed. All the procedures performed in studies involving animals were in accordance with the ethical standards of the institution or practice at which the studies were conducted.
6,435.8
2021-06-01T00:00:00.000
[ "Biology" ]
A Multi-Lingual Speech Recognition-Based Framework to Human-Drone Interaction : In recent years, human–drone interaction has received increasing interest from the scientific community. When interacting with a drone, humans assume a variety of roles, the nature of which are determined by the drone's application and degree of autonomy. Common methods of controlling drone movements include RF remote control and ground control stations. These devices are often difficult to manipulate and may even require some training. An alternative is to use innovative methods called natural user interfaces that allow users to interact with drones in an intuitive manner using speech. However, using only one language of interaction may limit the number of users, especially if different languages are spoken in the same region. Moreover, environmental and propeller noise makes speech recognition a complicated task. The goal of this work is to use a multilingual speech recognition system that includes English, Arabic, and Amazigh to control the movement of drones. The reason for selecting these languages is that they are widely spoken in many regions, particularly in the Middle East and North Africa (MENA) zone. To achieve this goal, a two-stage approach is proposed. During the first stage, a deep learning-based model for multilingual speech recognition is designed. Then, the developed model is deployed in real settings using a quadrotor UAV. The network was trained using 38,850 records including commands and unknown words mixed with noise to improve robustness. An average class accuracy of more than 93% has been achieved. After that, experiments were conducted involving 16 participants giving voice commands in order to test the efficiency of the designed system. The achieved accuracy is about 93.76% for English recognition, and 88.55% and 82.31% for Arabic and Amazigh, respectively. Finally, a hardware implementation of the designed system on a quadrotor UAV was made. Real-time tests have shown that the approach is very promising as an alternative form of human–drone interaction while offering the benefit of control simplicity. Introduction Unmanned aerial vehicles (UAVs), or drones, are robots that fly autonomously without a human pilot on board [1]. They were originally used solely for military applications, but are now also increasingly being used for civilian purposes, impacting our lives in extremely beneficial ways. Drone technology is steadily becoming more prevalent and its utility has been demonstrated in a number of recent applications in various fields. Drones were used to inspect and monitor compliance with restrictions during the Covid-19 outbreak, as well as to deliver food, medications, and other supplies to isolated places. With the use of drones becoming more common, a disruptive shift in the way we use technology is expected, supported by intensive research activity. One important aspect in the design, development, and integration of such flying machines is how to interact with them. Developing effective and easy-to-use systems for human-drone interaction (HDI) is a challenging task if we consider the technical, social, and regulatory aspects, and the growing range of potential users and activities for which drones are designed. Traditional remote controllers employ two joysticks for flight control, one for pitch and roll and the other for throttle and yaw. This method requires the constant use of both hands, which might be challenging.
The drone's camera is controlled via additional wheels and buttons. Moreover, in tough or stressful situations (such as windy places), the absence of intuitive controls increases the mental burden of the pilot, thereby compromising safety and efficiency. Another difficulty is the difference in position and orientation between the drone and its pilot, which can make it difficult to align the camera feed from the drone with the pilot's surroundings. When the drone is far away or out of direct line of sight, it might be difficult to determine its position and orientation as well as the direction its camera is facing. The type of interaction depends on the application at hand. Methodologies for human-computer interaction (HCI) and human-terrestrial-robot interaction cannot be used directly for HDI because of the flying complexity inherent in drone technology. A plethora of solutions for HDI have been proposed in the literature, including alternative control strategies using gesture and speech. A review of the state of the art in the context of human-drone interaction can be found in [2]. The authors discuss the main open issues and challenges currently highlighted and reported in research projects and papers. The papers were categorized on the basis of the following dimensions: the drone role, the context of use, indoor vs. outdoor usage, and the interaction mode. Speech control problems were addressed by many works [3,4]. However, the use of English as the only language for control limits the number of users. Furthermore, improving speech recognition in noisy environments (particularly under propeller noise) remains a major challenge. This paper presents an experimental framework for UAV control using an automatic multi-lingual speech recognition (AMLSR) system based on a deep learning model. In the Middle East and North Africa (MENA) zone, people speak different languages, basically Arabic and Amazigh. The idea was to use voice commands to interact with a drone using words spoken in English, standard Arabic, and Amazigh. To this end, a convolutional neural network has been designed and built using the Google speech commands dataset for English and customized datasets for Arabic and Amazigh. Then, 18 possible commands interpreted as instructions for the UAV were collected. The commands cover six classes of actions that the UAV can execute, in three different languages. The proposed method contributes to the state of the art by enhancing the English-based speech recognition system. Multilingual recognition is more advantageous because it provides users with additional options and is not limited to a single category. Additionally, the designed method improves multilingual recognition for all users, regardless of their native language. Robustness to environmental and propeller noise is also addressed: in all languages, instruction utterance recognition is unaffected by signal noise. Consequently, high rates of recognition accuracy were attained during tests. Experiments were conducted using a hardware implementation in real time. To this end, a graphical user interface was created to facilitate human-drone interaction. Due to the large training dataset, English-language commands were better understood during testing. Arabic and Amazigh were also detected with a high success rate. In general, the developed framework accommodates users from the MENA region fairly well, regardless of their native language and environmental noise.
Based on the state of the art in relevant literature, this study introduces several contributions: • Faster and more efficient voice control recognition. Other interfaces, such as gesture control, delay the system, thereby limiting its utility. • A multilingual ASR system (English, Arabic, and Amazigh) that enables a broad spectrum of users to interact with the UAV with simplicity. • Voice recognition with high accuracy of over 93% using deep learning. • Quadrotor UAV hardware implementation and real-time testing of the designed system. • A graphical user interface to reduce the user's workload, simplifying the command and interaction with the UAV, and decreasing the impact of background noise on speech recognition. The paper is organized as follows: First, a review of related work is provided in Section 2. After that, Section 3 describes the proposed framework, which comprises two stages, namely the design of a CNN model for AMLSR and the deployment of the model in a physical environment. Experimental results, real-time tests, and discussions can be found in Section 4. Section 5 presents the hardware implementation and the graphical user interface. A comparative study and discussion are described in Section 6. Finally, a summary of the main results and future work are given in Section 7. Related Works Human-drone interaction (HDI) is an active and growing field of study that focuses on the evaluation and understanding of interaction distance as well as the development of novel use cases. The methodology and best practices for conducting user research in the HDI field are presented in [5]. This research proposes a taxonomy for HDI user studies that analyzes the various approaches found in the literature on human-drone interaction. Existing human-robot and human-computer interaction approaches must be adapted for drone research, as the complexity of flight introduces additional elements. To this end, a road map to further examine autonomous drones and their integration in human areas, as well as to investigate future interaction strategies, with the goal of establishing HDI best practices, is presented in [6]. In [7], the authors examine the relationship between autonomy and User Experience (UX) at various perceived workload levels. This work aims to consider both technical and UX-related factors when developing the next generation of assistive flying robots. Nowadays, it is common to see people with no prior knowledge of the subject owning a drone, either to accomplish a specific purpose or for entertainment purposes. As drone technologies became more pervasive and affordable, researchers began to shift interface design towards modern user interfaces that no longer restrict drone control to a remote controller or a ground control station. These innovative techniques, also known as natural user interfaces (NUI), enable users to interact with drones via gesture [8], speech [9], touch [10], and even brain-computer interfaces (BCIs) [11]. These methods have been evaluated in [12]; most findings suggest that implementing a NUI facilitates human-drone interaction. In addition, as the roles and operations of military UAVs have expanded, along with the need for UAV group control, the traditional "mouse-keyboard" single-mode interaction technology has proven incapable of meeting the requirements of future unmanned warfare.
The authors of [13] created a new multi-mode UAV interactive system based on cutting-edge virtual reality and artificial intelligence hardware and software. According to studies on natural user interfaces [12], speech is used as a method of interaction by 38% of American users and 58% of Chinese users. The Natural Language Processing (NLP) task of real-time computational transcription of spoken language is known as automatic speech recognition (ASR). The authors of [14] provide an overview of the various techniques and approaches used to perform the task of speech recognition. The research provides a thorough comparison of cutting-edge techniques currently being used in this field, with a particular emphasis on the various deep learning methods. In [15], the authors present a statistical analysis of the use of deep learning in speech-related applications, whereas the authors of [16] present two cases of successful speech recognition implementations based on Deep Neural Network (DNN) models. The first is a DNN model created by Apple for its personal assistant Siri, and the second is a region-based convolutional recurrent neural network (R-CRNN) designed by Amazon for rare sound detection in home speakers. The work described in [17] provides a conceptual understanding of CNNs, as well as their three most common architectures and learning algorithms. DNNs have shown great promise in speech recognition systems with multiple languages. In [18], the authors propose a deep learning speech recognition algorithm that combines speech features and speech attributes in the context of English speech. In [19], a combination of a deep belief network (DBN) and a Deep Bidirectional Long Short-Term Memory (DBLSTM) network with a Connectionist Temporal Classification (CTC) output layer is proposed to create an acoustic model on the Farsdat Persian speech data set. A contribution to the Amazigh language is also provided in [20]. The paper investigated and implemented an automatic speech recognition system in an Amazigh-Tarift-based environment. In [21], the authors investigate the use of cutting-edge end-to-end deep learning approaches to build a robust diacritised Arabic ASR for the Arabic language. These approaches rely on the Mel-frequency cepstral coefficients and the log Mel-scale filter bank energies as acoustic features. Speech control is widely used in a variety of applications. The work in [22] details the research and application of human-computer interaction technology based on voice control in UAV ground control stations. Moreover, the authors of [23] propose a system for locating and rescuing victims buried beneath debris. A speaker installed on the UAV causes victims to react, and their voices are recorded to detect them. In [24], the authors investigate a speech-based natural language interface for defining UAV trajectories. To determine the effectiveness of this interface, a user study comparing its performance to that of a conventional mouse-based interface is also presented. Previous works were able to control UAVs via speech, but voice recognition accuracy was insufficient. In [9], the authors designed a speech control scheme for UAVs based on a Hidden Markov Model (HMM) and Recurrent Neural Networks (RNN) to address this issue. The HMM is utilized to identify erroneous commands and the RNN is utilized to train the sets of UAV commands, with the subsequent command predicted based on the training result. The recognition rate of incorrect commands is as high as 61.90%, while the overall error rate is reduced to 1.43%.
A further enhancement can be found in [25], where a speech recognition engine based on a convolutional neural network (CNN) serves as a voice command controller for a fixed-wing UAV. According to the classification report, the model achieved a quantitative evaluation with an average of 87% for precision and 83% for recall. An alternative approach to speech recognition for robotics applications, based on a combination of spectrograms, MEL and MFCC features, and a deep neural network-based classification, is presented in [26]. The algorithm's overall validation accuracy is as high as 97%, whereas the testing accuracy of the system is 95.4%. Since this is a classification algorithm, results have been presented using custom datasets for voice classification. In [27], the authors presented a multi-modal evaluation dataset for UAV control, comprising spoken commands and associated images that represent the visual context of what the UAV sees when the pilot utters the command. In the previous works, even those with high accuracy, robustness to noise was not considered. In [28], the authors conducted a discriminant analysis of voice commands in the presence of an unmanned aerial vehicle with four rotating propellers, in addition to measuring background sound levels and speech intelligibility. For male speakers, classification of speech commands based on mel-frequency spectral coefficients showed a promising classification rate of 76.2%. Deep Xi, a deep learning approach to a priori SNR estimation proposed in [29], is capable of producing speech enhancements of higher quality and intelligibility than recent deep learning speech enhancement approaches. The method was evaluated using both real-world nonstationary and colored noise sources, at multiple SNR levels. The problem with previous publications is that voice recognition is performed on board. To receive verbal commands, the drone must be close to the user (no more than two to three meters away). Users are restricted to collocated interaction. Proposed Framework To achieve human-drone interaction using natural language, a two-stage framework is proposed. Overall, as can be seen in the block diagram shown in Figure 1 below, the first stage aims to develop a system that can recognize commands spoken in three different languages, namely Arabic, English, and Amazigh, using a deep learning model. During this stage, various data are first collected from many sources, then they are preprocessed during a data preparation phase. Then a deep learning model is designed, built, and fine-tuned using training and validation sets. The performance of the developed deep learning model for automatic multi-lingual speech recognition (AMLSR) is evaluated using a test set. In the second stage, the developed AMLSR system is deployed to control the movements of a quadrotor UAV. During this stage, input audio commands are acquired, preprocessed, and fed to the AMLSR system. If the command is recognized, the corresponding instruction is sent directly to the quadrotor UAV; otherwise, the command is rejected. Data Collection and Preparation During this first step, data records are collected from the Google Speech Commands dataset for the English language, and from our own records for the Arabic and Amazigh languages. The English dataset contains 65,000 one-second clips for 30 different words spoken by thousands of different subjects, including the desired commands like up and stop, as well as other words like numbers and names categorized as unknown words.
The other language clips were recorded with phones and laptops in realistic environments. Each record has a sampling frequency of 16 kHz, in (.wav) format, covering 12 commands (six for each language) and unknown words. Table 1 summarizes the record characteristics of our own database. Before extracting features, it is important to ensure that the network is robust enough to deal with background noise. To achieve this, background noise has been mixed into the instruction records. A variety of noise types from various sources are employed, and the background samples in the various data sets are highly correlated. Figure 2 shows an example of the signal-to-noise ratio (SNR) of one of the commands (Akker (On-Am)) mixed with white noise. The obtained SNR level is −8.61 dB, showing the noisy conditions in which the experiments are done. In this work, six possible commands in three possible languages, spoken by different speakers under various conditions, are considered. Hence, there is more than one command for a specific action, as summarized in Table 2. For example, the action "up" can be carried out by saying the word "Aala" in Arabic or "Oussawen" in Amazigh. The collected data undergo a preparation phase where audio records represented as speech waveforms are transformed into spectrograms before being used in the training, validation, and testing of the machine learning model, as shown in Figure 3. A minimal sketch of this noise-mixing and spectrogram step is given below.
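The paper does not give the exact spectrogram parameters, so the following Python sketch should be read as an illustration only: the FFT size, hop length, and file name are assumptions, and librosa is used here merely as one convenient way to load audio and compute a short-time Fourier transform (the original pipeline may well have been implemented in MATLAB).

```python
import numpy as np
import librosa  # assumed here for audio loading and spectrogram extraction

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals
    `snr_db` (in dB), then add it to the speech waveform."""
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

def to_spectrogram(wave):
    """One-second, 16 kHz clip -> log-scaled magnitude spectrogram."""
    spec = np.abs(librosa.stft(wave, n_fft=512, hop_length=160))  # sizes assumed
    return np.log(spec + 1e-6)

# Example: augment one command record with white noise at -8.61 dB SNR
speech, sr = librosa.load("akker_on_am.wav", sr=16000)  # hypothetical file name
noisy = mix_at_snr(speech, np.random.randn(len(speech)), snr_db=-8.61)
features = to_spectrogram(noisy)
```

Scaling the noise this way fixes the speech-to-noise power ratio exactly, which is how a target level such as the −8.61 dB example above can be produced deliberately.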
Deep Learning Model for AMLSR The command recognition problem is cast as a multi-class classification task. Various architectures of convolutional neural networks (CNNs) have been designed and implemented to handle this task. Ref. [30] presented LeNet-5 as the first development of CNNs, demonstrating impressive results in solving the handwritten digit recognition problem. AlexNet [31] is another CNN architecture similar to LeNet-5, but with a more complex network and more parameters to learn. Simonyan and Zisserman [32] designed the VGG-16 network, which is well known for its consistent design and has been successful in a variety of fields. Finally, GoogleNet is a model proposed in [33]. Its main goal was to create a model with a lower budget that could reduce the amount of power needed, the number of trainable parameters employed, and the amount of memory used; the model drastically reduced the number of trainable parameters in the network. As shown in Figure 4, the process of developing the AMLSR is divided into two stages: training and testing. First, the database that includes the UAV instructions is collected from records of different users speaking the desired languages. The dataset is then separated into test, validation, and training sets. The neural network is trained using the extracted features of the training data to set the model parameters. Finally, the obtained model is evaluated using the test data. As illustrated in Figure 5, the architecture of the developed CNN includes: • 5 convolution layers: each layer acts as a feature extractor. • 5 batch normalization layers: by normalizing the outputs of intermediary layers during training, batch normalization attempts to reduce the internal covariate shift in a network. This accelerates the training process and enables faster learning rates without increasing the risk of divergence. Batch normalization does this by normalizing the output of the preceding hidden layer using the mean and variance of the batch (mini-batch). For an input mini-batch B = {x_1, ..., x_m}, we learn the scale and shift parameters γ and β via [34]: μ_B = (1/m) Σ_{i=1}^{m} x_i, σ_B² = (1/m) Σ_{i=1}^{m} (x_i − μ_B)², x̂_i = (x_i − μ_B) / √(σ_B² + ε), y_i = γ x̂_i + β. • 5 ReLU layers: the rectified linear unit (ReLU) is a simple, fast activation function typically found in computer vision. The function is a linear threshold, defined as f(x) = max(0, x). • 4 max pooling layers: as the name suggests, the max pooling operation chooses the maximum value of neurons from its inputs and thus contributes to the invariance property. Formally, for a 2D output from the detection stage, a max pooling layer performs the transformation y^l_{i,j} = max_{(p,q) ∈ N(i,j)} h(y^{l−1}_{p,q}) in order to downsample the feature maps (in time and frequency) [34], where the function h is generally known as a kernel or filter transformation, p and q denote the coordinates of a neuron in the local neighborhood N(i,j), and l represents the layer. In k-max pooling, k values are returned instead of a single value in the max pooling operation. • Dropout layers: applying dropout to a network involves applying a random mask sampled from a Bernoulli distribution with a probability P. This mask matrix is applied elementwise (multiplication by 0) during the feed-forward operation. During the backpropagation step, the gradients for the parameters that were masked are set to 0 and the other gradients are scaled up by 1/(1 − P). • Fully connected layer: allows us to perform classification on the dataset. • Softmax layer: placed just after the fully connected layer in order to predict classes. The output unit activation function is the softmax function [35]: y_r = exp(a_r) / Σ_{j=1}^{k} exp(a_j), where 0 ≤ y_r ≤ 1 and Σ_{j=1}^{k} y_j = 1. • Weighted classification layer: for typical classification networks, the classification layer usually follows a softmax layer. In the classification layer, trainNetwork takes the values from the softmax function and assigns each input to one of the K mutually exclusive classes using the cross-entropy function for a 1-of-K coding scheme [35]: E = −(1/N) Σ_{n=1}^{N} Σ_{i=1}^{K} w_i t_{ni} ln(y_{ni}), where N is the number of samples, K is the number of classes, w_i is the weight for class i, t_{ni} is the indicator that the nth sample belongs to the ith class, and y_{ni} is the output for sample n for class i, which in this case is the value from the softmax function. In other words, y_{ni} is the probability that the network associates the nth input with class i. • numC: controls the number of channels in the convolutional layers. It is worth mentioning at this stage that the network's accuracy is linked to its depth. Increasing the number of filters (numC) or adding identical blocks of convolutional, batch normalization, and ReLU layers may improve overall performance. Once the network is created, its training can be started. An illustrative sketch of such a network is given below.
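The paper reports the layer counts and the numC width parameter but not kernel sizes, channel progression, or the dropout probability, and the mention of trainNetwork suggests the original implementation was in MATLAB. The following PyTorch sketch is therefore only a plausible reading of Figure 5: every concrete number other than the five conv/batch-norm/ReLU blocks, the four max-pooling layers, the 20 output classes, and the training settings quoted from the text (mini-batch 128, 30 epochs, learning rate divided by 10 after 20 epochs) is an assumption, including the initial learning rate.

```python
import torch
import torch.nn as nn

class AMLSRNet(nn.Module):
    """CNN sketch: 5 conv/batch-norm/ReLU blocks, 4 max-pooling layers,
    dropout, and a fully connected classifier over 20 classes
    (18 commands + "unknown" + "background")."""
    def __init__(self, num_classes=20, numC=12):
        super().__init__()
        chans = [1, numC, 2 * numC, 4 * numC, 4 * numC, 4 * numC]  # guessed widths
        layers = []
        for i in range(5):
            layers += [nn.Conv2d(chans[i], chans[i + 1], kernel_size=3, padding=1),
                       nn.BatchNorm2d(chans[i + 1]),
                       nn.ReLU()]
            if i < 4:                       # only 4 max-pooling layers
                layers.append(nn.MaxPool2d(2))
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.2),              # dropout probability is a guess
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(chans[-1], num_classes))

    def forward(self, x):                   # x: (batch, 1, freq, time) spectrograms
        return self.classifier(self.features(x))

model = AMLSRNet()
class_weights = torch.ones(20)              # placeholder for the class weights w_i
loss_fn = nn.CrossEntropyLoss(weight=class_weights)  # softmax + weighted cross entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # initial LR assumed
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
# A training loop (not shown) would run 30 epochs with mini-batches of 128 records.
```

Note that nn.CrossEntropyLoss combines the softmax and the weighted cross-entropy classification layer described above into a single operation, which is the idiomatic PyTorch equivalent of that pair of layers.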
Second Stage: AMLSR System Deployment The aim of this stage is to conduct real-time tests of the designed system using a hardware implementation. First, a graphical user interface is designed to simplify the interaction between the users and the UAV. The interface is coupled with the multi-lingual recognition system to detect and recognize user instructions. Next, the instructions are sent to the UAV by means of a communication link to be executed. The transmission/reception system allows the users to control the drone over a range of 1 km. All experiments were made with a quadrotor UAV equipped with all the necessary sensors, as shown in Figure 6. Implementation Setup, Results and Discussion This section includes two main parts. In the first part, the implementation details and results of the AMLSR system's performance evaluation will be presented and discussed. Then, the second part will be devoted to the hardware implementation for the deployment of the system. AMLSR System Testing Results In order to build the CNN, the data set has been split into training, validation, and test sets. The material environment in which the work was done is characterized by the following features: Windows 10 Professional operating system, Intel Core i7-ES1650 3.50 GHz processor, 64 GB DDR4 RAM, and NVIDIA GeForce GTX 1050Ti GPU. Figure 7 represents the training with the English, Arabic, and Amazigh databases. For the training phase, the Adam optimizer is used with a mini-batch size of 128. The network was trained for 30 epochs, with the learning rate reduced by a factor of 10 after 20 epochs. The most well-known and discussed metrics in deep learning are accuracy and loss. The goal of the training process is to reduce the loss. In the training process, loss is frequently used to find the best parameter values for the model. Accuracy is a metric used to assess the performance of a classification model. It is most commonly expressed as a percentage, and it is less difficult to interpret than loss. Most of the time, we would see an increase in accuracy as the loss decreases (as shown in Figure 7), but accuracy and loss have different definitions and measure different things. They frequently appear to be inversely proportional, but no exact mathematical relationship exists between them. The continuous blue line represents the accuracy achieved on the training data, while the dashed black line, which represents the accuracy achieved on the validation data, is updated less frequently. The validation set is used to ensure that the network is sufficiently abstracting what it learns from the training data, giving an indication of whether the training of the model is progressing well. The training error is only about 1% while the validation error reached 9%. The total validation accuracy is 91.31%, which means that the model is highly accurate and can be used for the desired application. Performance Measures and Evaluation The network's performance on test data can also be evaluated using a confusion matrix, as shown in Figure 8. As the target variable is multinomial, we are dealing with a multi-class classification problem. It has 20 levels, 18 of which are related to the spoken words "up", "down", "right", "left", "on" and "off" in the three languages, and two to the classes "unknown" and "background". As a result of the testing phase, a 20 × 20 confusion matrix was derived, from which several performance measures were considered to evaluate the performance of the proposed CNN model for multilingual recognition of spoken commands. The one-vs-all strategy was used to calculate the values of the performance measures, with each level being considered as the positive class at a time and the others as the negative class. The performance measures considered in the experimental study are listed below; a sketch computing them directly from the confusion matrix follows the list. • Recall: this metric assesses how confident we can be that the model will find all spoken words related to the selected positive class. In other words, it indicates the proportion of true positives that are correctly classified. Recall is calculated for each target level (l_i) as follows: Recall(l_i) = TP(l_i) / (TP(l_i) + FN(l_i)). • Precision: this metric helps in assessing how confident we can be that a spoken command identified as having a positive target level actually has that level. In other words, it indicates the proportion of predicted positives that are actually positive: Precision(l_i) = TP(l_i) / (TP(l_i) + FP(l_i)). • F1-measure (or F1 score): the F1 measure, which is the harmonic mean of precision and recall, provides an alternative to assessing the misclassification rate and aids in concentrating on minimizing both false positives and false negatives equally: F1(l_i) = 2 · Precision(l_i) · Recall(l_i) / (Precision(l_i) + Recall(l_i)). • Average class accuracy (ACA): because the data set is imbalanced, using raw classification accuracy to evaluate overall performance can be misleading. Instead, we propose using the harmonic mean to calculate the average class accuracy, as shown in the equation below: ACA = K / Σ_{i=1}^{K} (1 / Recall(l_i)), i.e., the harmonic mean of the per-class recalls.
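These one-vs-all quantities can be computed in a few lines from the 20 × 20 confusion matrix. The sketch below assumes the common convention of rows as true classes and columns as predicted classes; a small epsilon guards against division by zero for classes that were never predicted.

```python
import numpy as np

def one_vs_all_metrics(cm):
    """Per-class recall, precision, and F1 from a KxK confusion matrix
    (rows = true classes, columns = predicted classes), plus the
    harmonic-mean average class accuracy."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                      # correctly classified per class
    fn = cm.sum(axis=1) - tp              # missed instances of each class
    fp = cm.sum(axis=0) - tp              # other classes predicted as this one
    eps = 1e-12
    recall = tp / (tp + fn + eps)
    precision = tp / (tp + fp + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    # Average class accuracy as the harmonic mean of per-class recalls
    aca = len(recall) / np.sum(1.0 / (recall + eps))
    return recall, precision, f1, aca
```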
As indicated in Figure 9, the model's average class accuracy was 0.93, indicating its promising potential. Figure 9 also shows the obtained results in terms of recall, precision, and F1 score for each of the classes under consideration. Precision and recall metrics had values ranging from 0.7 to 1 and above 0.8, respectively. F1 scores, on the other hand, vary from 0.8 to 1. Despite the size of the English dataset, the results show poorer performance for English words as compared to Arabic and Amazigh words. This can be explained by the completely distinct word pronunciations, as well as the presence of a considerable portion of unknown English words. Real Time Tests of the Proposed AMLSR After training and evaluating the network, the test phase of the AMLSR system is performed. First the audio signal is extracted from the users in real time. The spectrogram is generated and transferred to the network in order to recognize the desired command. The data was collected from 16 persons divided into two categories: the first formed of Amazigh native speakers and the second composed of Arabic native speakers. Figure 10 shows the speakers' age and gender distribution. Overall, Figure 11 depicts the AMLSR model's achieved performance following this test phase. The AMLSR system performed reasonably well, with 93.76% for English word recognition, 88.55% for Arabic word recognition, and 82.31% for Amazigh word recognition. This time, English terms were easily identified because no unknown words were introduced in the test set, and non-native speakers, unlike native speakers, have an accentuated and clear pronunciation of these English words. Figure 11. Achieved performance during the test phase for each data set. Hardware Implementation for AMLSR Deployment The UAV used for the real-time tests is a quadrotor drone manufactured in our laboratory for academic purposes. The drone can support an additional payload of 500 g in order to execute different kinds of missions. It is equipped with an Atmel processor on board connected to different sensors, such as the inertial measurement unit for attitude estimation, GPS for positioning, and ultrasound for obstacle detection. Radio communication and data transmission are assured via a TX/RX module with a 1 km transmission range. Furthermore, communication with the interface is granted via USB. The quadrotor drone has a flight autonomy of approximately ten minutes depending on its usage. The main goal of creating the graphical interface is to reduce the user's workload by simplifying command and interaction with the UAV. The interface was created so that a single operator could communicate with the UAV and issue the required movement commands using the proposed languages. In addition, as shown in Figure 12, the following factors were taken into consideration when developing the design: • The interface is able to automatically detect the available UAVs. • The interface displays the information related to the motors' rotation speed as PWM signals.
Real Time Tests of the Proposed AMLSR After training and evaluating the network, the test phase of the AMLSR system was performed. First, the audio signal is captured from the users in real time; the spectrogram is then generated and passed to the network in order to recognize the desired command. The data was collected from 16 persons divided into two categories: the first formed of Amazigh native speakers and the second composed of Arabic native speakers. Figure 10 shows the speakers' age and gender distribution. Overall, Figure 11 depicts the performance achieved by the AMLSR model in this test phase. The AMLSR system performed reasonably well, with 93.76% for English word recognition, 88.55% for Arabic word recognition, and 82.31% for Amazigh word recognition. This time, English terms were easily identified because no unknown words were introduced in the test set, and non-native speakers, unlike native speakers, have an accentuated and clear pronunciation of these English words. Figure 11. Achieved performance during the test phase for each data set. Hardware Implementation for AMLSR Deployment The UAV used for the real-time test is a quadrotor drone manufactured in our laboratory for academic purposes. The drone can carry an additional payload of 500 g in order to execute different kinds of missions. It is equipped with an on-board Atmel processor connected to different sensors, such as an inertial measurement unit for attitude estimation, GPS for positioning and ultrasound for obstacle detection. Radio communication and data transmission are assured via a TX/RX module with a 1 km transmission range, and communication with the interface is provided via USB. The quadrotor drone has a flight autonomy of approximately ten minutes, depending on its usage. The main goal of creating the graphical interface is to reduce the user's workload by simplifying command and interaction with the UAV. The interface was designed so that a single operator can communicate with the UAV and issue the required movement commands using the proposed languages. In addition, as shown in Figure 12, the following factors were taken into consideration in the design: • The interface automatically detects the available UAVs. • The interface displays the information related to the motor rotation speeds as PWM signals; this information is displayed in different-colored text to help the operator avoid overload. • The detected command is displayed as text in the language of the user. • The interface is offered in a variety of languages depending on the user's language; English is suggested by default. • The interface displays visual alerts in reaction to various events that may occur during the system's operation. In the implemented system, once a command is transmitted to the UAV, it is carried out until another command is delivered. As a result, an explicit instruction for the action "stop" is required to stop the UAV's engine. For safety and to avoid collisions, the UAV automatically stops when it descends to 0.1 m above the ground. Figure 13 shows that being a native speaker has an impact on the recognition rate. For example, in category 1, Amazigh words were recognized better than English and Arabic terms, whereas Arabic words were recognized best in category 2. However, because Arabic commands are less complex than Amazigh commands, the results for Arabic words appear better than those for Amazigh terms in both categories. Furthermore, English recognition works well for both categories because the commands are short and easy to pronounce. Because Amazigh commands have many syllables, users must pronounce the words correctly in order to be recognized. The fact that Amazigh is a language with several dialects, so the accent changes from person to person, explains the category 1 errors for Amazigh commands. Finally, there was considerable ambiguity in the multilingual test between the terms "left", "asfel", and "ayfouss". Moreover, even in a noisy environment, the results are very encouraging. Comparative Study and Discussion This work can be considered the first attempt to use the most commonly spoken languages in the MENA region, namely standard Arabic, English, and Amazigh, to develop an AMLSR system for HDI. Table 3 compares our work with similar studies in the ASR literature based on the following criteria: languages used for ASR, speech signal (SS) representation used, average accuracy, robustness, and hardware implementation of the system. Several features can be used to design an ASR system, including Mel-frequency cepstral coefficients (MFCC) and spectrograms. Considering the entire hardware implementation, the prototype's performance was quite promising. However, our system is limited to a small set of commands; a more extensive instruction dictionary is needed. Furthermore, additional instructions to manage the drone's position and camera should also be considered. The developed system must account for communication disruptions and environmental limits in order to avoid collisions and obstacles. Extending the dataset while ensuring good class balance in the target feature distribution is required to improve the ASR system. To avoid the burden of further data collection and labelling, a Self-Supervised Learning (SSL) approach, as described in [37], could be used for this purpose; semi-supervised learning can also be investigated. Furthermore, one of the most recent aspects to be explored is multitask language learning, as it could improve speech interaction with drones [38]. Another component that might be included is explainability, to account for the predictions produced by the algorithms, as stated in [39].
Moreover, the system can be extended to handle audiovisual interaction techniques, in which the drone predicts the user's movements using an integrated camera and background-invariant detection [40]. As the altitude of the drone changes, this last strategy must be supplemented with multiscale object detection [41]. Conclusions The goal of this work was to develop a speech recognition system that can be used to interact with unmanned aerial vehicles using words spoken in English, Arabic, and Amazigh. As a first step, the architecture of a CNN was defined, and the data required for training, validation, and testing was collected and pre-processed; an accuracy of more than 90% was achieved. A quadrotor drone was built and used during the model's deployment in a real environment, and a user interface was designed to make it simple to control the drone. This step of the system evaluation involved a sample of 16 people of various ages and genders. The AMLSR system performed well in terms of recognition accuracy, with 93.76% for English interaction, 88.55% for Arabic interaction, and 82.31% for Amazigh interaction. During the tests, the AMLSR system was able to understand the user's voice instructions with an average accuracy of 88.2%. The proposed prototype can be improved in a variety of ways as part of future work. It would be interesting to investigate other DL models and to expand the database in order to improve efficiency. As suggestions, we propose creating a website where a large number of people can record commands in order to cover other existing dialects. Furthermore, the graphical interface could be improved to show more aspects such as a UAV attitude sphere, battery health, and so on. The command dictionary could even be broadened into a full lexical field including all possible words used for control, and even a navigation tool.
8,050
2022-06-09T00:00:00.000
[ "Computer Science", "Engineering" ]
in selective laser-melted Ti6Al4V: a parametric thermal modelling approach High cooling rates within the selective laser melting (SLM) process can generate large residual stresses within fabricated components. Understanding residual stress development in the process and devising methods for in-situ reduction continues to be a challenge for industrial users of this technology. Computationally efficient FEA models representative of the process dynamics (temperature evolution and associated solidification behaviour) are necessary for understanding the effect of SLM process parameters on the underlying phenomenon of residual stress build-up. The objective of this work is to present a new modelling approach to simulate the temperature distribution during SLM of Ti6Al4V, as well as the resulting melt-pool size, solidification process, associated cooling rates and temperature gradients leading to the residual stress build-up. This work details an isotropic enhanced thermal conductivity model with the SLM laser modelled as a penetrating volumetric heat source. An enhanced laser penetration approach is used to account for heat transfer in the melt-pool due to Marangoni convection. Results show that the developed model was capable of predicting the temperature distribution in the laser/powder interaction zone, the solidification behaviour, the associated cooling rates, the melt-pool width (with 14.5% error) and the melt-pool depth (with 3% error) for SLM Ti6Al4V. The model was capable of predicting the differential solidification behaviour responsible for residual stress build-up in SLM components. The model-predicted trends in cooling rates and temperature gradients for varying SLM parameters correlated with experimentally measured residual stress trends; the model was thus capable of accurately predicting the trends in residual stress for varying SLM parameters. This is the first work based on an enhanced penetrating volumetric heat source combined with an isotropic enhanced thermal conductivity approach. The developed model was validated firstly by comparing FEA melt-pool dimensions with experimental melt-pool dimensions, and secondly by comparing the temperature evolution along the laser scan path with experimentally measured temperatures from published literature. Introduction Additive manufacturing (AM) techniques form three-dimensional components directly from a digital model by joining materials layer by layer [1,2]. The expanded geometric freedom of the process, low material wastage and rapid product development cycles make these technologies attractive to a variety of industries [2]. The AM process of selective laser melting uses a high-power laser to completely melt compositions of metallic feedstock from a powder bed. Due to the rapid heating and cooling cycles of successive layers, large thermal gradients are generated, which in turn can create high residual stresses within fabricated components [3]. The process-induced residual stresses may lead to in-process part failure due to geometric distortion, built-in cracking, or premature failure of parts subjected to alternating loading or corrosive environments [3][4][5][6][7][8][9]. The complex nature of the layer-by-layer building process and thermal cycling requires a robust understanding of the numerous physical phenomena associated with the selective laser melting (SLM) process in order to be able to control residual stress and improve the quality of parts [10].
Using sub-optimal SLM processing parameters can lead to build failure or may result in part properties falling below requirements (e.g. low part density) [11]. Practical experimentation is generally used to determine the optimal manufacturing process parameters for SLM [12][13][14][15][16] and is often supplemented with computer simulations using finite element modelling to increase the understanding of the processing conditions. Several attempts have been made to model the SLM process [8,10,11,[17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35]. Shiomi et al. [21] developed an FE simulation to predict the temperature distribution and the amount of solidified material within metallic powders irradiated by a pulsed laser. The model was validated by comparing the experimentally measured weight of solidified material against model predictions for different combinations of power and exposure time. It was reported that variations in laser power have a stronger effect than variations in exposure time on the maximum temperature reached by the metallic powder. Matsumoto et al. [22] proposed an FE method for estimating the temperature and stress distribution in a single laser-processed solidified layer. Gusarov et al. [26,33] developed a coupled radiation and heat transfer model for estimating the thermal distribution within an SLM powder layer. Roberts et al. [32] considered temperature-dependent material properties and phase changes to develop a three-dimensional model of SLM Ti6Al4V. The model was used to predict the thermal and residual stress distribution resulting from the layer-by-layer processing approach during SLM, using the ABAQUS element birth and death method. Song et al. [28] created a three-dimensional FE simulation to predict optimal SLM processing parameters. The model results were validated by building high-density parts with parameters for which the model had predicted a melt-pool depth of 45 μm using a 50-μm powder layer thickness. Correct modelling of the SLM process is a useful tool for control and optimisation of the process [18]. These studies did not consider the heat flow in the melt-pool due to Marangoni convection or heat loss due to vaporisation, since the powder was assumed to be a homogeneous solid section with the thermo-physical properties of powder. Marangoni convection, or fluid flow, greatly influences the heat transfer within the melt-pool formed by the SLM process [36]. Modelling the SLM process without considering the heat transfer in the melt-pool due to fluid flow can lead to inaccurate (very high) temperature predictions, in the range of 14,000°C, as reported by Fu et al. [17] for a three-dimensional FE model of SLM Ti6Al4V. Khairallah et al. [11] developed a three-dimensional mesoscopic, multi-physics model to demonstrate the effect of the stochastic nature of powder distribution in powder bed systems. It was found that the surface tension of the melt-pool drives the physics of the process and affects the heat transfer and the topology of the solidified melt-pool. Three-dimensional computational fluid dynamics (CFD) was also used to predict the melt-pool geometry and temperature distribution in SLM by Yuan et al. [37]. The heat transfer due to the fluid flow in the melt-pool was modelled using an enhanced anisotropic thermal conductivity approach by Safdar et al. [18], where the thermal conductivity of the material was adjusted to account for the experienced thermal process. It is reported by Safdar et al.
[18] that the geometry and thermal distribution in the melt-pool were predicted accurately without the complexity and/or longer processing time involved in using the CFD modelling approach. However, anisotropic models are expected to be computationally more expensive than the case where all the material properties are assumed to be isotropic. Three-dimensional multiple-layer models of SLM were developed by Cheng et al. [38] and Parry et al. [35], where the laser beam was considered as a volumetric heat source with a known penetration into the material to account for the heat flow in the melt-pool. Parry et al. [35] reported temperatures as high as 12,000°C in the melt-pool, and this peak temperature was termed an isolated singularity above the vaporisation temperature of Ti6Al4V. Understanding the physical phenomena associated with laser processing of materials and predicting the microstructure of SLM components depend on appropriate temperature prediction during the process. This also provides a more realistic view of the temperature gradients and cooling rates associated with the process, which can help in understanding the mechanical properties and residual stress behaviour of SLM components. Lopez et al. [34] recently developed a two-dimensional FE model based on the enhanced anisotropic thermal conductivity approach to simulate the thermal behaviour of SLM AA-2024. The FE model was validated by comparing experimental melt-pool dimensions with model-predicted melt-pool dimensions; the thermal history from the FE model was coupled with a cellular automata model for accurately predicting the microstructure of the material, and the results were validated experimentally. Ti6Al4V is lightweight and possesses high strength at low to moderate temperatures [39]. It also has excellent corrosion resistance, is biocompatible and has good machinability. It has a wide range of applications within the aerospace, automotive and medical sectors and is one of the most commonly processed materials using SLM. Based on this material's extensive usage, this investigation models the melting of Ti6Al4V to understand the thermal behaviour and its effect on residual stress development in SLM components. Increasing the thermal conductivity enhancement factor increases the computation time; therefore, the present research proposes a modelling strategy to simulate the SLM process for Ti6Al4V by modelling the laser beam as a volumetric heat source (implemented using an ABAQUS DFLUX subroutine) with enhanced penetration depth. The enhanced penetration depth is expected to account for part of the heat flow in the melt-pool and thus to require lower thermal conductivity enhancement factors, which is expected to improve the computational efficiency of the FEA model. The proposed model considers temperature-dependent material properties with phase change from powder to liquid to solid (modelled by an ABAQUS USDFLD subroutine). This work also proposes two model reduction approaches which allow the substrate and the surrounding powder to be simulated as a heat sink without the requirement to increase the size of the model. Since the surrounding powder and substrate are modelled as boundary conditions, the model is independent of the platform size. The proposed model is used to estimate the effect of SLM process parameters on cooling rates and temperature gradients, in order to determine the effect of parametric variations on residual stresses in SLM components.
Modelling methodology The modelling approach used within this work is based upon the concept of a moving volumetric heat source combined with enhanced thermal conductivity. The melting behaviour of a single line containing 14 laser spots was simulated. A 1.04 × 0.33 mm powder layer of 50-μm thickness was deposited on a substrate with a thickness of 0.5 mm. A length of 1.04 mm was chosen such that only one laser spot with extra powder is modelled at the beginning and end of the laser scan track; a width of 0.325 mm was chosen such that only two laser spots with extra powder are modelled on either side of the laser scan track. The choice of small extents for the extra powder to be modelled, and the small thickness of the substrate, was made to illustrate the effectiveness of the model reduction approaches. The ABAQUS 8-node linear heat transfer brick element (DC3D8) was used for meshing. A mesh size of 32.5 × 32.5 × 50 μm was used for the powder layer. The substrate mesh was biased, increasing from 50 μm at the top of the model to 100 μm at the bottom, to minimise the number of mesh elements and reduce the computation time. The SLM process uses a localised laser beam to heat and melt feedstock from the powder bed; heat transfer therefore plays an important role in the process. The spatial and temporal distribution of the temperature is governed by the heat conduction equation (Eq. (1)): ρ C_p ∂T/∂t = ∂/∂x (k_xx ∂T/∂x) + ∂/∂y (k_yy ∂T/∂y) + ∂/∂z (k_zz ∂T/∂z) + q (1), where T is temperature; t is time; x, y and z are the spatial coordinates; k_xx, k_yy and k_zz are the thermal conductivities; ρ is the density; C_p is the specific heat and q is the heat source term. Initial conditions Powder was modelled to be deposited with an initial temperature of 25°C. Substrate pre-heating was also applied as an initial temperature condition to the substrate; the value of the temperature applied to the substrate was varied according to the parameters being modelled. Heat source Using an ABAQUS DFLUX subroutine written in FORTRAN, a moving volumetric heat source was programmed to simulate the laser. The volumetric heat source was used to account for the laser penetration effect into the powder, which according to Fischer et al. [25] is 63 μm for commercially pure titanium powder. In order to make the simulation more efficient, the volumetric heat source was applied over the 50-μm powder layer thickness together with a 250-μm depth into the substrate. The variation of laser intensity in the radial direction was modelled using a modified cylindrical laser heat flux (MCHF) model, as explained in refs. [8,40]. Equation (2) shows the MCHF model, where P is the laser power in watts; r_las is the radius of the laser spot on the bed surface, taken as 50 μm for the Renishaw AM250 SLM machine; and η is the laser absorptivity value for Ti6Al4V. An absorptivity value of 0.6 was chosen after a few trials with values around η = 0.77 [41] for pure titanium. In Eq. (3), I_r is the laser intensity in the radial direction used in this research; the correction factor of 2.55, found through trial and error, is necessary for achieving the correct melt-pool size and temperature distribution. Equation (4) shows the variation of laser intensity in the depth direction (Z-axis), modelled as a parabolic relation (see Fig. 1). Equation (5) shows the definition of the heat flux used for simulating the moving heat source in this work.
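A sketch of such a moving volumetric heat source is given below. Since Eqs. (2)-(5) are not reproduced above, the exact functional forms are assumptions consistent with the description: a cylindrical (top-hat) radial profile with the stated correction factor and a parabolic decay over the penetration depth; the normalisation is also an assumption. The actual DFLUX subroutine is written in FORTRAN.

```python
import numpy as np

# Illustrative stand-in for the ABAQUS DFLUX heat source described above.
# The radial top-hat profile, parabolic depth decay and normalisation are
# assumed, not taken from Eqs. (2)-(5).
ETA = 0.6          # absorptivity chosen for Ti6Al4V
CORR = 2.55        # correction factor found through trial and error
R_LAS = 50e-6      # laser spot radius on the bed surface, m
D_PEN = 300e-6     # heated depth: 50 um powder layer + 250 um into the substrate

def volumetric_flux(x, y, z, t, power, scan_speed):
    """Heat input (W/m^3) at point (x, y, z) and time t for a laser
    moving along +x; z is the depth below the powder surface."""
    r = np.hypot(x - scan_speed * t, y)       # distance from the beam centre
    if r > R_LAS or z > D_PEN:
        return 0.0
    base = CORR * ETA * power / (np.pi * R_LAS**2 * D_PEN)  # assumed normalisation
    return base * (1.0 - (z / D_PEN) ** 2)    # parabolic decay along Z
```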
Material properties Material phase change was modelled using a user subroutine (USDFLD) in order to predict the powder-liquid-solid phase change based on the temperature of the laser-irradiated region. Temperature-dependent material properties of solid and powder Ti6Al4V were taken from the work by Roberts [8], except for the thermal conductivity of powder Ti6Al4V, which was taken from the work by Parry et al. [35]. In order to artificially simulate the Marangoni convection responsible for heat flow in the melt-pool, the enhanced thermal conductivity model presented by Safdar et al. [18] was used, but with isotropic rather than anisotropic conductivity [18], as shown in Eq. (6): K′(T) = λ K(T) for the molten material, where K′ is the enhanced isotropic thermal conductivity of the melt-pool, K is the normal isotropic thermal conductivity at a given temperature for the molten material and λ is the thermal conductivity enhancement factor, defined by Eq. (7). According to Safdar et al. [18], the isotropic enhanced thermal conductivity approach has been used by many researchers to simplify and speed up the modelling process to account for melt-pool convection; this work therefore uses the isotropic enhanced thermal conductivity approach to improve the computational efficiency of the FEA model. An isotropic enhancement factor of λ = 4.0 was used in this work, based on trial and error to achieve the desired melt-pool dimensions. Due to the enhanced penetration of the volumetric heat source, the thermal conductivity factor had a more pronounced effect on the width of the melt-pool than on the depth. Heat losses During the SLM process, the majority of heat is lost through conduction to the substrate and surrounding powder. Heat loss also occurs due to convection and radiation from the top surface during the process. For simplicity, radiation heat losses were not considered in this work; according to Polivnikova [29], radiation heat losses are negligible. Convective heat loss from the top surface due to the flow of inert gas in the chamber was modelled with a convective heat transfer coefficient of 20 W/(m²·K). In order to simulate the conductive heat loss to the substrate, a surface film condition was defined on five surfaces of the substrate (Fig. 2a), with the temperature-dependent conductivity of solid Ti6Al4V used as the convective heat transfer coefficient on the selected surfaces: h_1, applied on the four sides and bottom of the modelled small substrate to account for the heat losses into the actual (larger) substrate, is given by k_solid(T), the temperature-dependent thermal conductivity of solid Ti6Al4V adapted from the work by Roberts [8]. In order to simulate the conductive heat loss to the surrounding powder, a surface film condition was defined on the four side surfaces of the powder layer (see Fig. 2b), with the temperature-dependent conductivity of powder Ti6Al4V used as the convective heat transfer coefficient: h_2, applied on the four sides of the modelled small powder layer to account for the heat losses into the surrounding powder, is given by k_powder(T), the temperature-dependent thermal conductivity of powder Ti6Al4V adapted from the work by Parry et al. [35]. These model reduction approaches helped in reducing the model size and thus the computational time.
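The two material-level devices above (melt-pool conductivity enhancement and conductivity-valued film coefficients) can be summarised in a short sketch. The tabulated conductivity values and the melting temperature are illustrative assumptions, not the data of refs. [8] and [35].

```python
import numpy as np

T_MELT = 1650.0   # C, approximate Ti6Al4V melting point (assumption)
LAMBDA = 4.0      # isotropic thermal conductivity enhancement factor

# Placeholder (T, k) tables standing in for the data of refs. [8] and [35].
K_SOLID = ([25.0, 1650.0], [7.0, 33.0])     # W/(m K)
K_POWDER = ([25.0, 1650.0], [0.2, 2.0])     # W/(m K)

def k(table, T):
    """Linear interpolation of a tabulated conductivity (assumption)."""
    return np.interp(T, table[0], table[1])

def k_effective(T):
    """Melt-pool conductivity: enhanced by LAMBDA above the melting point."""
    base = k(K_SOLID, T)
    return LAMBDA * base if T >= T_MELT else base

# Film coefficients used in place of modelling the large substrate/powder:
h1 = lambda T: k(K_SOLID, T)    # four sides + bottom of the small substrate
h2 = lambda T: k(K_POWDER, T)   # four sides of the powder layer
```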
Thermal model validation Three 20-mm-long single lines were melted from a 50-μm layer of Ti6Al4V powder deposited onto a titanium substrate using a Renishaw AM250 machine with optimised (> 99% part density) build parameters (details in ref. [12]) for experimental melt-pool measurement. The substrate was cross-sectioned, mounted, polished and etched for 20 s with Kroll's reagent to reveal the melt-pool. Using an optical microscope, images of the substrate region with the SLM-melted scan lines were acquired, and ImageJ was used to measure the melt-pool dimensions. The simulated melt-pool dimensions were determined by taking a cross-sectional view of the melted line and measuring the melt-pool dimensions. The thermal FEA model was validated by comparing the simulated melt-pool size with the experimentally measured values. Residual stress measurement Three 30 × 30 × 10 mm blocks were designed and manufactured to determine the process-induced residual stresses. The parts were fabricated using a layer thickness (lt) of 75 μm and the parameters (obtained from density optimisation trials) shown in Table 1. Air-abrasive hole drilling according to ASTM E837-13a [42] was used to measure the residual stress on the top surface of the blocks (to a depth of 2 mm into the sample), with an average error of 5-20% in the residual stress values. This is a semi-destructive method capable of measuring bi-axial normal (σ_xx, σ_yy) and shear (τ_xy) stresses [43]. Using the parameters shown in Table 1, the energy density required for nearly fully dense (99.9% dense) SLM Ti6Al4V parts using 75-μm layer thickness was calculated using Eq. (10): the required energy density for 75-μm-layer-thickness SLM Ti6Al4V parts to achieve nearly fully dense parts is 61.5 J/mm³. Validation of the effect of FEA-predicted cooling rate on residual stress Keeping the energy density constant at 61.5 J/mm³, the power was lowered to 150 W and, using Eq. (10), the exposure was calculated to be 160 μs (Table 2). The FEA model predicted a lower cooling rate for the combination of parameters shown in Table 2; therefore, three 30 × 30 × 10 mm blocks were manufactured using a layer thickness of 75 μm and the parameters shown in Table 2. Residual stress was measured using air-abrasive hole drilling based on ASTM E837-13a [42].
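Although Eq. (10) is not reproduced above, the recalculated exposure follows from keeping the deposited energy per unit volume fixed while only power and exposure time change. A check of the arithmetic, assuming the energy density scales with the product of power and exposure time at fixed point distance, hatch spacing and layer thickness:

```latex
% Assumed scaling: E \propto P \, t_{\mathrm{exp}} at fixed point distance,
% hatch spacing and layer thickness (the remaining factors of Eq. (10) cancel).
E = \mathrm{const} \;\Rightarrow\; P_1 t_1 = P_2 t_2
\;\Rightarrow\; t_2 = \frac{200\,\mathrm{W} \times 120\,\mu\mathrm{s}}{150\,\mathrm{W}}
= 160\,\mu\mathrm{s}.
```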
Results and discussion Validation of thermal modelling This section presents the model validation approaches taken in this work. Firstly, the model was validated by comparing experimentally measured melt-pool dimensions against model-predicted melt-pool dimensions. Secondly, the model was validated based on the trend in the temperature evolution history over a scanning length of 325 μm: the FEA-predicted temperature distribution in the XY-plane along the laser scanning direction was compared with experimentally determined values for SLM of Ti64 by Yadroitsev et al. [44]. The experimental measurement of the temperature distribution in the melt-pool was carried out using a single-mode continuous-wave, 1075-nm wavelength ytterbium fibre laser with a 70-μm spot size [44]. In the study by Yadroitsev et al. [44], the melt-pool temperature at the Ti6Al4V substrate without powder was measured at laser powers (P) of 20, 30 and 50 W, in combination with scanning speeds (V) of 0.1, 0.2 and 0.3 m/s, from ten single tracks of 10-mm length. The temperature distribution in the melt-pool was measured by a specially designed coaxial optical system using a 782 × 582 pixel resolution CCD camera [44]. Melt-pool dimensions Experimental melt-pool dimensions from three 20-mm-long line sample cross sections were compared with model-predicted melt-pool dimensions. Figure 3 shows a comparison of the average experimental melt-pool width (186 μm) and depth (169 μm) against the model-predicted melt-pool width (159 μm) and depth (164 μm). A representative optical micrograph of the experimentally acquired melt-pool with average melt-pool dimensions is shown in Fig. 4a: the experimental melt-pool had an average width of 186 μm and an average depth of 169 μm. Figure 4b shows the melt-pool dimensions predicted by the ABAQUS finite element model, using optimised (> 99% part density) SLM build parameters (details in ref. [12]). The FEA model predicted a melt-pool width of 159 μm, which is 14.5% less than the average experimentally measured melt-pool width of 186 μm. It can be seen from Fig. 4b that the FEA model predicted a melt-pool depth of 164 μm, which is 3% less than the average experimentally measured depth of 169 μm. Therefore, based on the comparison of melt-pool dimensions shown in Fig. 4, the FEA model prediction of the melting behaviour of laser-irradiated Ti6Al4V correlates well with experiments. This FEA model was used for studying the parametric dependence of residual stress in SLM Ti6Al4V parts; it was used for estimating the effect that varying SLM process parameters has on cooling rates and temperature gradients within the process. Melt-pool temperature distribution The second use of the FEA model was to estimate the temperature distribution across the melt-pool. Figure 5 shows a comparison of the FEA-predicted temperature distribution in the XY-plane along the laser scanning direction (points of interest highlighted in Fig. 5(b)) with the experimentally determined distribution of the brightness temperature for SLM of Ti64 [44] in the XY-plane along the laser scanning direction. Figure 5 shows a good correlation of the trend in the FEA-predicted temperature distribution with the experimentally measured temperature distribution. In the experimentally determined temperature distribution [44], the material's solidification region commences at approximately 220 μm behind the current position of the laser; Figure 5 shows that the FEA model predicted a similar solidification region. The experimentally determined temperatures are for a solid Ti6Al4V substrate using a laser power of 50 W and a scanning velocity of 0.1 m/s [44], while the FEA-predicted temperature distribution is for a 50-μm Ti6Al4V powder layer on a solid substrate using a laser power of 200 W and a scanning velocity of 0.64 m/s. The FEA-predicted temperatures are higher than the experimentally measured values because the experimental temperatures are brightness temperatures and, according to Yadroitsev et al. [44][45][46], the true melt-pool temperature values should be higher. According to Yadroitsev et al. [44][45][46], the true peak melt-pool temperature for 50-W laser power and 0.1-m/s scanning velocity was calculated to be 2710 K (the corresponding brightness temperature being 2340 K). According to refs. [44][45][46], laser power has a more pronounced effect on the melt-pool peak temperature than scan speed (exposure or irradiation time); Yadroitsev et al. [44] experimentally determined the dependence of melt-pool peak temperature on laser power and irradiation time, concluding that the peak temperature of the melt-pool is more sensitive to laser power. Therefore, the model-predicted temperature should have been much higher than the true experimental temperatures, as the model uses a much higher power.
The reason for not achieving much higher temperatures can probably be attributed to the laser spot size, as the modelled laser spot size (100 μm) is bigger than the experimental laser spot size (70 μm). The results in Fig. 5 show that the trend in the model-predicted temperature evolution over the laser scan path agrees well with the experimental trend and will therefore result in accurate predictions of the cooling rate and temperature gradients. The predicted cooling rate and temperature gradients provide insight into the residual stress build-up. Figure 6a shows the temperature distribution in the XY-plane (top view) along the laser scanning direction. It can be seen from Fig. 6a that the melt-pool has an elongated tail surrounded by recently solidified material; the melt-pool is symmetrical about the line the laser centre traverses. Similar melt-pool shapes have been reported by Cheng et al. [24] from an FEA model of IN718, and Polivnikova [29] reported a similar melt-pool shape for 18Ni(300) maraging steel using Mathematica software. The material starts solidifying around the edges first, with the material in the centre taking longer to solidify. This variation in temperature between the central molten material and the recently solidified material on the sides creates a temperature gradient and thus, according to the temperature gradient mechanism [40,47], will result in residual stress build-up in the SLM components. Figure 6b shows a dimensioned isometric view with the laser scanning direction and the region used for volumetric heat addition. Figure 6c illustrates the temperature and material solidification evolution along the depth, in the ZY-plane (front view) along the laser scanning direction. An important feature to note in Fig. 6c is that the melt-pool starts solidifying from the bottom and moves upward. Thus, the analysis of solidification front movement from Fig. 6a, c suggests the movement of the solidification front indicated by the white arrow in Fig. 6c. The underlying solidified material restricts the shrinkage of the molten material on top and, according to the cool-down phase model [47,48], is responsible for the generation of residual stress in SLM components. Figure 6d shows the temperature distribution across the depth, in the ZX-plane (side view) of the melt-pool. The highest temperature of 2160°C occurs at the top surface of the melt-pool. The temperature distribution spreads out along the X-axis in the substrate region due to the higher conductivity of the solid substrate surrounding the melt-pool compared with the lower-conductivity powder layer. It can also be seen from Fig. 6d that the temperature gradient along the depth (Z-axis) of the melt-pool increases in the substrate region.
Fig. 4 (a) Experimentally measured melt-pool dimensions. (b) Melt-pool dimensions predicted by the ABAQUS finite element thermal model.
Fig. 5 (a) Comparison of the FEA model-predicted temperature in the XY-plane along the laser scanning direction with the experimentally determined distribution of the brightness temperature in the XY-plane along the laser scanning direction; P = 50 W and V = 0.1 m/s, values adapted from ref. [44]. (b) The 325-μm distance with the points considered for the FEA model-predicted temperature in the XY-plane along the laser scanning direction.
This high temperature gradient across the melt-pool depth will result in differential contraction upon cooling and, according to the temperature gradient mechanism [40,47] and the cool-down phase model [47,48], is responsible for the development of residual stress in SLM components. Cooling rate and temperature gradient prediction from FEA: relationship with experimentally determined residual stress The FEA simulation was used to predict the temperature gradients and cooling rates for SLM Ti6Al4V samples built at different bed pre-heat temperatures. Figure 7(a) shows that the temperature gradient between the top surface of the melt-pool and a point 250 μm below the melt-pool top surface (Fig. 7(b) highlights the two points in the cross-sectional view of the model) decreases with increasing powder bed pre-heat temperature. According to the temperature gradient mechanism [40,47], a decrease in temperature gradient should result in lower residual stress, and thus an increase in bed pre-heat temperature should also result in a decrease in residual stresses. According to the residual stress results presented by Ali et al. [12] (shown in Fig. 8), increasing powder bed pre-heat temperatures resulted in lower residual stress. The trend in temperature gradient for varying bed pre-heat temperatures predicted by the FEA simulation therefore correlates with the residual stress values reported by Ali et al. [12]. According to refs. [4,8,48-50], pre-heating is responsible for a reduction in temperature gradients in SLM builds, and the FEA simulation predicted the same effect, as shown in Fig. 7. Another interesting observation from Fig. 7 is that the peak temperature in the melt-pool increases with increasing bed pre-heat temperature up to 470°C, while the peak temperature at 570°C (2073°C) is even lower than at a bed pre-heat temperature of 100°C (2081°C). A reason for this drop in melt-pool peak temperature could be the onset of an endothermic microstructural phase transformation at a pre-heat temperature of 570°C. According to the microstructural analysis presented by Ali et al. [12], nano β-particles started forming inside α-laths at a pre-heat temperature of 570°C, and according to refs. [51,52], the α- to β-phase transformation is an endothermic reaction. Therefore, based on the microstructural results of Ali et al. [12], the start of nano β-particle formation inside α-laths could be responsible for the drop in melt-pool peak temperature at 570°C bed pre-heat temperature. Figure 8 shows the FEA-predicted cooling rates for SLM Ti6Al4V samples built at different bed pre-heat temperatures, along with residual stress data adapted from the work by Ali et al. [12]. Cooling rates for all test cases were calculated by extracting the time-temperature data (for the heating and cooling cycles from the start to the end of the FEA simulation) for the node at the top centre of the second laser spot in the FEA simulation; Microsoft Excel was used to calculate the gradient of the cooling curve for this node. It can be seen from Fig. 8 that both residual stress and cooling rate have an inverse relationship with bed pre-heat temperature, and Fig. 8 shows a correlation between the trends in cooling rate and residual stress with varying bed pre-heat temperature. Therefore, the FEA model can be used with confidence for assessing the effect of SLM process parameters on cooling rates and thus on residual stress.
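The cooling-rate extraction described above (the gradient of a node's time-temperature history) is straightforward to reproduce. The sketch below uses synthetic data in place of the FEA node history and also shows the two-point thermal gradient of Fig. 7; both curves are illustrative stand-ins, not model output.

```python
import numpy as np

# Synthetic stand-in for the time-temperature history of the node at the
# top centre of the second laser spot (the real data comes from the FEA run).
t = np.linspace(0.0, 2e-3, 400)                     # s
T_top = 25.0 + 2100.0 * np.exp(-t / 2e-4)           # C, illustrative cooling curve

# Cooling rate = gradient of the cooling curve (computed in Excel in the paper).
cooling_rate = np.gradient(T_top, t)                # C/s, negative while cooling
print(f"peak cooling rate: {cooling_rate.min():.3e} C/s")

# Temperature gradient between the melt-pool top surface and a point
# 250 um below it (the two points highlighted in Fig. 7(b)).
T_below = 25.0 + 900.0 * np.exp(-t / 2e-4)          # illustrative second node
dT_dz = (T_top - T_below) / 250e-6                  # C/m
```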
Fig. 8 Cooling rates predicted by the FEA simulation for Ti6Al4V SLM samples built at different bed pre-heat temperatures, with residual stress data adapted from ref. [12].
Blocks built with the parameters shown in Table 1, obtained from the density optimisation trials for 75-μm layer thickness, exhibited a residual stress of 78 MPa, as shown in Fig. 9a. Keeping the energy density constant at 61.5 J/mm³ (the optimum energy density for achieving nearly fully dense SLM Ti6Al4V parts with 75-μm layer thickness), the required exposure time was calculated for 150-W power using Eq. (10). The FEA simulation predicted a lower cooling rate for 150-W power and 160-μs exposure time for 75-μm-layer-thickness SLM Ti6Al4V parts, and blocks built with 150-W power and 160-μs exposure time exhibited a residual stress of 55 MPa, as shown in Fig. 9a. The decreasing trend in residual stress correlates with the FEA-predicted trend in cooling rate and thus shows that the FEA simulation is a reliable tool for assessing the effect of SLM parameters on cooling rates and hence on residual stress. The FEA simulation was also used to predict the temperature gradient for both sets of parameters used for creating the 75-μm-layer-thickness SLM Ti6Al4V samples. Figure 9b shows that the temperature gradient between the top of the melt-pool and a point 250 μm below the melt-pool top surface is higher for the 200-W power and 120-μs exposure combination than for the 150-W power and 160-μs exposure combination. According to the temperature gradient mechanism [40,47], the decreasing trend in the FEA-predicted temperature gradient should result in a decreasing trend in residual stress. The decreasing trend in residual stress shown in Fig. 9a agrees with the decreasing trend in temperature gradients (see Fig. 9b) and therefore increases confidence in the results of the FEA simulation. Another important observation from Fig. 9b is that the highest temperature in the melt-pool is lower for the combination of lower power (150 W) and higher exposure (160 μs) than for the combination of higher power (200 W) and lower exposure (120 μs). This trend in peak temperature with laser power is in agreement with the findings of refs. [44][45][46], which reported that an increase in laser power has a more pronounced effect on the melt-pool peak temperature than scan speed (exposure or irradiation time). This further provides evidence for the validity of the FEA simulation as a tool for analysing the effect of SLM process parameters on residual stress.
Fig. 9 (a) Effect of power and exposure combinations, keeping energy density constant, on cooling rate and residual stress. (b) Temperature gradient prediction between the top surface of the melt-pool and a depth of 250 μm below the melt-pool from the FEA simulation for SLM Ti6Al4V samples built with different power and exposure combinations at constant (optimum) energy density.
Conclusions The developed isotropic enhanced thermal conductivity model for SLM Ti6Al4V treated the laser as a penetrating volumetric heat source and was capable of predicting the melt-pool width (with 14.5% error) and melt-pool depth (with 3% error). The model accurately predicted the temperature evolution along the laser scan path, with good correlation to the experimentally determined temperature [44] along the scan path. Accurate prediction of the melt-pool dimensions and of the trend in temperature evolution along the laser scan path, with high correlation to experimental data, validates the modelling approach.
Therefore, considering enhanced laser penetration to account for heat flow in the melt-pool due to Marangoni convection is a valid approach for modelling the melting behaviour of SLM Ti6Al4V. The enhanced penetration depth allowed an isotropic, rather than anisotropic, enhanced thermal conductivity approach to be used and thus made the FEA model computationally efficient. The model was capable of predicting a start of the solidification region along the laser scan path similar to the experimentally determined [44] solidification region. Having accurately predicted the solidification behaviour of the melt-pool, the model was then used as a tool for studying the effect of SLM process parameter variations on residual stress. The trends in model-predicted cooling rates and thermal gradients correlated with the trends in experimentally determined residual stress values: the model's prediction of the effect of SLM process parameter variations on cooling rates and thermal gradients was validated by comparison with the effect of those variations on experimentally determined residual stress. The model clearly showed a reduction in cooling rates and thermal gradients with increasing bed pre-heat temperature and thus provided evidence for the reduction in residual stress with increasing bed temperature. The effect of bed temperature on peak melt-pool temperature was clearly shown by the model temperature estimates: the model showed a drop in peak melt-pool temperature at a bed pre-heat temperature of 570°C, which marks the start of nano-β formation inside α-laths in SLM Ti6Al4V, as shown in the work by Ali et al. [12]. The drop in peak melt-pool temperature at 570°C bed pre-heat temperature is a result of the α- to β-phase transformation being an endothermic process. The model accurately predicted the effect of laser power and exposure on peak melt-pool temperature, corroborating the fact that laser power has a stronger effect on peak melt-pool temperature than exposure time (scan speed). The model predicted cooling rates and temperature gradients for different power and exposure combinations, showing correlation with the trends in experimentally measured residual stress. The model was also helpful in understanding the movement of the solidification front and thus the underlying phenomenon of residual stress build-up. The correlation of results between the developed model and experiments validates the effectiveness of the two proposed model reduction approaches. Using the temperature-dependent conductivity of powder Ti6Al4V as a convective heat transfer coefficient to account for heat loss to the excess surrounding powder reduces the model size, as there is no need to model the excess powder. Similarly, modelling a small substrate and adding a convection boundary condition, using the temperature-dependent conductivity of solid Ti6Al4V as the convection coefficient, accounts for heat loss to the large substrate without the need to model a larger substrate. These model reduction approaches assisted in reducing the model size and thus improving the computational efficiency of the model.
8,092
2018-05-18T00:00:00.000
[ "Engineering", "Materials Science" ]
Intelligent Automatic Traffic Challan on Highways and Payment Through FASTag Card Background/objectives: With the advent of the Internet of Things (IoT), the Government has taken a huge step in the field of traffic management by initiating a hassle-free and convenient way of paying at toll plazas using Radio Frequency Identification (RFID) cards. Methods/statistical analysis: The Indian Government, in association with the National Highway Authority of India (NHAI), understood the pain and introduced electronic toll collection (ETC), christened "FASTag". India's highways are going cashless with FASTag, and vehicles no longer need to stop at toll plazas for cash transactions. Findings: This study deals with the application of the latest FASTag technology, which is beneficial in avoiding traffic hassle at the national toll plazas. With FASTag installed on the front windshield of vehicles, toll payment becomes effortless: the toll charges are deducted automatically from the FASTag account. This technology can also be used for the generation of challans and automatic deduction through the FASTag linked to the vehicle. Improvements/applications: The major applications of this approach are automatic challan generation on the national highways, covering over-speeding, seat-belt violations, wrong-way driving, parking in a "No Parking" zone, driving without a valid permit, and overtaking from the wrong side. Introduction The National Highway Authority of India (NHAI) has come up with an innovative approach to toll collection through FASTag, an electronic toll collection system implemented at more than 350 toll plazas across India. A separate FASTag lane has been provided by the NHAI at the toll plazas, which allows approaching vehicles with a correctly installed FASTag to pass smoothly and hassle-free without stopping; a large banner displaying "FASTag Lane" makes drivers more cautious about it. Any vehicle that does not have a properly installed FASTag system and deliberately passes through this lane will be charged double the toll as a fine. 1 FASTag is based on a Radio Frequency Identification (RFID) tag affixed to the windshield of the vehicle for automatic verification and deduction of tolls on the national highways, directly from the bank account linked to the FASTag. Vehicles can pass through the gateway without any stoppage and avoid any sort of time delay. 2 This is due to the interaction between the RFID scanner present underneath the FASTag banner and the RFID tag attached to the vehicle: the scanner reads the RFID tag on the vehicle, verifies it and makes the deduction from the account of the driver, and the process continues. 3 The issuer agency deducts the relevant toll fee from the customer's prepaid account linked to the FASTag, and this whole process is computerised; this reconciliation is done after the toll transaction. Customers enjoying the benefits of FASTag need to maintain funds in the account linked to the FASTag, known as a top-up. This paper also proposes a novel framework for effective pricing of toll rates. Keywords: FASTag, RFID, Electronic Toll Collection, ETC, National Highway Authority of India, NHAI RFID RFID functions with the help of electromagnetic fields to intelligently scan and track tags attached to devices; the information is stored electronically inside these tags.
Passive tags collect energy from a nearby RFID reader through radio waves, whereas active tags are embedded with a local power source and can function hundreds of meters away from the RFID scanner. RFID is a technology which provides Automatic Identification and Data Capture (AIDC). The government has introduced an effective way of using it by affixing RFID tags on vehicles and using them as toll plaza wallets; this tag is known as FASTag. [4][5][6][7] FASTag is a tag affixed on the vehicle's windshield. It utilises Radio-Frequency Identification (RFID) technology, which uses electromagnetic fields to automatically verify and track the FASTag attached to the vehicle's windshield. 5 The FASTag sticker has an embedded chip that stores the owner's information electronically. Because of this, the scanners at the FASTag toll lanes can easily read the data, and the toll amount is deducted whenever the vehicle approaches the toll plaza lanes. The tag is linked to and verified against a prepaid account of the owner of the vehicle on which the FASTag system is installed, and the toll amount is deducted from this prepaid account. FASTag is the best solution for a fast and traffic-free trip on national highways, especially at the toll plazas. At present, it is functional at more than 350 toll plazas across different states and national highways. With the greater need for technology according to the demands of the future, the day is not far when everybody will be using the FASTag system as the common means of toll transactions across the globe, 6-8 as shown in Figures 1 and 2. Various banks and payment merchants have partnered with NHAI for FASTag transactions; the only necessity is to link the FASTag to one of these banks for toll deductions. Once the ID number is linked to this account, all the applicable tolls are deducted automatically from it. Following are the key benefits of utilizing RFID-based FASTag: • Cashless transactions: with the benefit of cashless transactions, hassle is reduced at the toll plaza. • Online recharge: online recharges can be done using Paytm and various other online payment methods, anytime and anywhere. • Saves fuel and time: automatic deduction is performed effectively when the vehicle approaches the toll plaza. Following are the limitations of the RFID-based FASTag technology: • Technical errors: sometimes, due to technical issues, the toll payer is made to wait for a certain period; this results in the queue piling up and, finally, in more traffic jams at the toll plaza. Materials and Methods These days, there are too many traffic violations, taking many lives on the roads. Looking at the concern of heavy traffic at the toll plazas and the number of deaths, we would like to track the people who violate these rules without being caught by the traffic police, so that they can be charged money as a punishment. People often drive rashly on the highways, meet with accidents and lose their lives. According to figures released by the government, more than 140,000 people died due to road accidents in 2017. It was also stated that nearly 80% of accidents were caused by drivers, with 62% of those blamed on speeding.
Over-Speeding On highways, speed radars and number plate recognition (NPR) cameras are present to capture the speed of a vehicle; if the speed crosses the limit, the NPR camera captures the number plate of the car. A message is sent to the mobile number registered with the registration certificate of that car, indicating that the driver has exceeded the speed limit at a given location, with a time stamp. Accordingly, a challan fee is added to the FASTag and is deducted as the car passes through the next FASTag lane at a national toll plaza, as shown in Figures 3 and 4. No Seat Belt Driving without a fastened seat belt is caught by the surveillance camera placed on the toll booth, which results in a fine, and a challan message is delivered to the mobile number registered with the registration certificate of that vehicle, as shown in Figure 5. Wrong-way Driving Driving on the wrong side of the road is considered a traffic violation under Section 184 of the Motor Vehicles Act. For the first offence, an offender may be booked for a six-month prison term or fined Rs 1000; on the second instance, the period of imprisonment is increased to two years or/and the fine increases to Rs 2000. Parking in a "No Parking" Zone If a vehicle is parked in a no-parking zone, a traffic challan is generated according to 15 (2) RRR 177 of the Motor Vehicle Act, as shown in Figure 6. Driving Without a Valid Permit For commercial vehicles, the validity of the permit is checked against the details fetched from the FASTag; if the validity check fails, a penalty of up to INR 5000 and no less than INR 2000 is charged under 130 r/w 177 of the Motor Vehicle Act. Overtaking from the Wrong Side If a vehicle overtakes from the wrong side, a challan is generated according to Section 177 of the Motor Vehicle Act. Refund for Travelling Less Distance on a Toll Road Under the current rules and regulations followed by the NHAI, the toll amount at the toll plazas is expensive relative to the distance travelled between two toll plazas in two different cities. So, to avoid a loss to the users, we can charge users for the distance they actually travelled after check-in at the toll plaza, i.e., charge per kilometre, as shown in Figure 7. Car 1 and car 2 pass a toll plaza and pay the same amount, suppose Rs 100; this toll has been paid for the full toll road. As shown in the figure, car 1 passes the next toll plaza after travelling Y km, whereas car 2 returns after travelling X km but has paid the amount for the full toll road; that is, it is paying more than its usage. So, to benefit the customer, a refund should be made, with the deduction made according to the distance travelled on the toll road, as shown in Table 1. Check-in Again Within 24 h in the Range of a Toll Plaza: Let the Vehicle Go Without Payment of Toll The location of the car is tracked and monitored for the next 24 h. If the vehicle first passes the toll plaza at 4:00 pm and comes back into the range of the toll plaza before 4:00 pm on the next day, then even if the time exceeds 4:00 pm while the vehicle is standing in the queue, no toll charge will be made; if it is deducted, a refund will be processed to the FASTag card within 24 h. The zone of the toll plaza is established by adding an RFID reader about 500-1000 m before the toll plaza and detecting the vehicle, so that the person is freed from paying the toll again. A sketch of this challan, refund and re-entry logic is given below.
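The following sketch illustrates the challan queueing, per-kilometre refund and 24-hour re-entry rules described above. The per-kilometre rate, road length and data structures are illustrative assumptions (the actual slabs of Table 1 are not reproduced), and no real FASTag or NHAI interface is implied.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the proposed rules; names, rates and structures
# are assumptions, not an actual FASTag/NHAI API.
FULL_TOLL = 100.0        # Rs, toll for the full toll road (example from the text)
ROAD_LENGTH_KM = 50.0    # assumed full toll-road length
REENTRY_WINDOW_H = 24.0  # free re-entry window, hours

@dataclass
class Vehicle:
    reg_no: str
    balance: float
    pending_challans: list = field(default_factory=list)
    last_checkin_h: float | None = None   # hours since some epoch

def issue_challan(v: Vehicle, amount: float, reason: str):
    """Queue a challan; it is deducted at the next FASTag lane."""
    v.pending_challans.append((reason, amount))

def pass_toll_lane(v: Vehicle, now_h: float):
    """Deduct the toll (waived within the re-entry window) plus pending challans."""
    toll = 0.0 if (v.last_checkin_h is not None and
                   now_h - v.last_checkin_h <= REENTRY_WINDOW_H) else FULL_TOLL
    fines = sum(amount for _, amount in v.pending_challans)
    v.balance -= toll + fines
    v.pending_challans.clear()
    v.last_checkin_h = now_h

def refund_for_partial_use(v: Vehicle, km_travelled: float):
    """Refund the unused portion of a full-road toll, pro rata per kilometre."""
    unused_fraction = max(0.0, 1.0 - km_travelled / ROAD_LENGTH_KM)
    v.balance += FULL_TOLL * unused_fraction

car = Vehicle("DL3SAU1214", balance=500.0)
issue_challan(car, 400.0, "over-speeding on NH44")   # message example in the text
pass_toll_lane(car, now_h=16.0)                       # toll + challan deducted
refund_for_partial_use(car, km_travelled=20.0)        # car 2 returns after X km
```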
Results and Discussion This study deals with the application of the latest FASTag technology, which is beneficial in avoiding traffic hassle at the national toll plazas. Using the proposed method, FASTag automatically deducts the toll charges from the designated FASTag account. The proposed methodology can also be used for the generation of challans and automatic deduction through the FASTag linked to the vehicle, irrespective of whether it is a light or heavy motor vehicle. An example of the challan message sent to the registered mobile number: "Dear Customer, you have exceeded the speed limit of your vehicle Reg. no. DL3SAU1214 at NH44, Mathura Road, Faridabad, Haryana, India. A traffic challan of Rs 400 has been generated and will be deducted as you pass the next National Toll Plaza."
Figure 7. Toll road.
Conclusion This study deals with the application of the latest FASTag technology, which is beneficial in avoiding traffic hassle at the national toll plazas. With FASTag installed on the front windshield of vehicles, toll payment becomes effortless: the toll charges are deducted automatically from the FASTag account. This technology can also be used for the generation of challans and automatic deduction through the FASTag linked to the vehicle. A solution has been given for the many people who suffer from being delayed by 1-2 min in long queues at the toll plaza, and from paying the toll again.
2,713.2
2019-11-30T00:00:00.000
[ "Computer Science" ]
The positive feedback between Snail and DAB2IP regulates EMT, invasion and metastasis in colorectal cancer DAB2IP has been identified as a tumor suppressor in several cancers, but its role and the mechanisms regulating its transcription in the progression of colorectal carcinoma (CRC) remain unknown. In this study, DAB2IP was down-regulated in CRC tissues and was a valuable prognostic marker for the survival of CRC patients, especially in the late stage. Moreover, DAB2IP was sufficient to suppress proliferation, epithelial-mesenchymal transition (EMT), invasion and metastasis in CRC. Mechanistically, the linear complex of EZH2/HDAC1/Snail contributed to DAB2IP silencing in CRC cells. The study further proved that a positive feedback loop between Snail and DAB2IP exists in CRC cells and that DAB2IP is required for Snail-induced aggressive cell behaviors. Finally, DAB2IP correlated negatively with Snail and EZH2 expression in CRC tissues. Our findings reveal the suppressive role and a novel regulatory mechanism of DAB2IP expression in the progression of CRC. DAB2IP may be a potential novel therapeutic and prognostic target for clinical CRC patients. Patients This study was conducted on a total of 200 paraffin-embedded CRC samples collected from 2004 to 2005, which were histologically and clinically diagnosed at the Department of Pathology, Nanfang Hospital affiliated with Southern Medical University. Patients were not pretreated with radiotherapy or chemotherapy prior to surgery; they were followed up for 5 years and their complete clinical data were collected. For the use of these clinical materials for research purposes, prior patient consent and approval from the Institute Research Ethics Committee were obtained. Clinical information on the samples is described in detail in Table 1. Patients included 111 males and 89 females, of ages ranging from 31 to 88 years (mean, 62 years). No cases had metastasis at original presentation with CRC; the tables on metastasis pertain to its presence at any time during follow-up. A total of 75 (37.5%) patients died during follow-up and 46 (23%) patients experienced distant metastasis. Fresh CRC tissues and the corresponding normal tissues were collected from 20 patients who underwent CRC resection without prior radiotherapy or chemotherapy at the Department of General Surgery in Nanfang Hospital in 2008. These samples were collected immediately after resection, snap-frozen in liquid nitrogen, and then stored at −80°C until needed. Real-Time RT-PCR Total RNA was extracted using Trizol reagent (Invitrogen) and cDNA was synthesized using an Access RT system (Promega). Real-time PCR was performed using an Mx3000P real-time PCR system (Stratagene) and SYBR Premix Ex Taq (TaKaRa). The primers were as follows: for DAB2IP, forward 5′-TGG ACG ATG TGC TCT ATG CC-3′, reverse 5′-GGA TGG TGA TGG TTT GGT AG-3′. The PCR conditions were 95°C for 30 s, followed by 40 cycles of amplification (95°C for 5 s, 60°C for 34 s, 72°C for 34 s). Comparative quantification was determined using the 2^(−ΔΔCt) method. Each sample was tested three times.
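As an illustration of the comparative quantification used here, the 2^(−ΔΔCt) calculation can be written as below. The Ct values and the choice of a reference gene are hypothetical examples for illustration, not data from this study.

```python
# Relative expression by the 2^(-ddCt) method.
# All Ct values below are invented; the reference gene (e.g., GAPDH)
# is an assumption, not stated in the excerpt.
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample    # normalise to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                # relative to control tissue
    return 2.0 ** (-dd_ct)

# DAB2IP in tumour vs matched normal tissue (illustrative numbers):
fold = relative_expression(ct_target_sample=29.5, ct_ref_sample=18.0,
                           ct_target_control=26.0, ct_ref_control=18.2)
print(f"fold change: {fold:.2f}")   # a value < 1 indicates down-regulation
```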
After blocking with 5% normal goat serum, primary anti-DAB2IP polyclonal antibody (1:400, Abcam, Cambridge, UK), anti-Ezh2 monoclonal antibody (1:200, Cell Signaling Technology, USA) and anti-Snail monoclonal antibody (Abcam, Cambridge, UK) were applied and the slides were incubated at 4°C overnight. Following incubation with biotinylated secondary antisera, the streptavidin-biotin complex/horseradish peroxidase was applied. Finally, the visualization signal was developed with DAB staining, and the slides were counterstained with hematoxylin. The stained tissue sections were reviewed and scored separately by two pathologists blinded to the clinical parameters. The staining intensity was scored as 0 (negative), 1 (weak), 2 (medium) and 3 (strong). The extent of staining was scored as 0 (0%), 1 (1-25%), 2 (26-50%), 3 (51-75%) and 4 (76-100%), according to the percentage of positively stained area relative to the entire carcinoma-involved area, or the entire section for the normal samples. The sum of the intensity and extent scores was used as the final staining score (0-7) for DAB2IP or Ezh2. The staining of DAB2IP or Ezh2 was assessed as follows: (−) denotes a final staining score of <3; (+) a final staining score of 3; (++) a final staining score of 4; and (+++) a final staining score of ≥5. Tumors with a final staining score of 3 or higher were considered positive. This relatively simple, reproducible scoring method gives highly concordant results between independent evaluators and has been used in previous studies [1,2]. Cutoff values for DAB2IP were chosen based on a measure of heterogeneity using log-rank statistical analysis with respect to overall survival. An optimal cutoff value was identified: tumors with a final staining score of 0 to + were classified as tumors with low expression of DAB2IP, and tumors with a final staining score of ++ to +++ were classified as tumors with high expression of DAB2IP. Chromatin immunoprecipitation assay (ChIP) Cells were lysed using SDS lysis buffer and DNA was sheared by sonication to lengths between 200 and 1000 base pairs. Protein-DNA complexes were precipitated with anti-Snail antibody (Abcam, Cambridge, UK), then recovered using protein G agarose beads, washed, and eluted. Crosslinks in protein-DNA complexes were then reversed with NaCl. The immunoprecipitated DNA was amplified by PCR for specific sequences containing Snail-binding sites.
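The two quantitative conventions above, the 2^−ΔΔCt relative-expression calculation and the additive IHC staining score with its cutoffs, can be sketched in a few lines of Python. This is an illustrative reading of the text, not the authors' code; the function names and example values are invented.

```python
# Comparative 2^(-ΔΔCt) method for real-time RT-PCR:
# ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(sample) - ΔCt(calibrator).
def fold_change(ct_target_sample, ct_ref_sample, ct_target_calib, ct_ref_calib):
    ddct = (ct_target_sample - ct_ref_sample) - (ct_target_calib - ct_ref_calib)
    return 2 ** (-ddct)

# IHC final staining score: intensity (0-3) plus extent (0-4), giving 0-7.
def staining_grade(intensity, extent):
    score = intensity + extent
    if score < 3:
        return "-"       # negative
    if score == 3:
        return "+"
    if score == 4:
        return "++"
    return "+++"         # score of 5 or more

def dab2ip_expression(intensity, extent):
    # Tumors graded 0 to + are classed as low DAB2IP expression,
    # ++ to +++ as high expression (the survival-based cutoff in the text).
    return "high" if staining_grade(intensity, extent) in ("++", "+++") else "low"

print(fold_change(24.1, 18.0, 22.0, 18.2))        # ~0.20, i.e. down-regulated
print(dab2ip_expression(intensity=2, extent=3))   # score 5 -> "+++" -> "high"
```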
1,248.8
2015-08-21T00:00:00.000
[ "Biology", "Medicine" ]
A scoping review of the use of minimally important difference of EQ-5D utility index and EQ-VAS scores in health technology assessment Objectives Estimates of minimally important differences (MID) can assist interpretation of data collected using patient-reported outcomes (PRO), but variability exists in the emphasis placed on MIDs in health technology assessment (HTA) guidelines. This study aimed to identify to what extent information on the MID of a commonly used PRO, the EQ-5D, is required and utilised by selected HTA agencies. Methods Technology appraisal (TA) documents from HTA agencies in England, France, Germany, and the US between 2019 and 2021 were reviewed to identify documents which discussed MID of EQ-5D data as a clinical outcome assessment (COA) endpoint. Results Of 151 TAs utilising EQ-5D as a COA endpoint, 58 (38%) discussed MID of EQ-5D data. Discussion of MID was most frequent in Germany, in 75% (n = 12/16) of Gemeinsamer Bundesausschuss (G-BA) and 44% (n = 34/78) of Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen (IQWiG) TAs. MID was predominantly applied to the EQ-VAS (n = 50), most frequently using a threshold of > 7 or > 10 points (n = 13). G-BA and IQWiG frequently criticised MID analyses, particularly the sources of MID thresholds for the EQ-VAS, as they were perceived as being unsuitable for assessing the validity of MID. Conclusion MID of the EQ-5D was not frequently discussed outside of Germany, and this did not appear to negatively impact decision-making of these HTA agencies. While MID thresholds were often applied to EQ-VAS data in German TAs, analyses were frequently rejected in benefit assessments due to concerns with their validity. Companies should pre-specify analyses of continuous data in statistical analysis plans to be considered for treatment benefit assessment in Germany.
Introduction Patient-reported outcome (PRO) measures, which assess patients' perceived health-related quality of life (HRQoL) or health status, are increasingly included in clinical trials to support clinical efficacy and safety endpoints [1]. The EQ-5D is a generic PRO measure, comprising five health dimensions (mobility, self-care, usual activities, pain/discomfort, and anxiety/depression) and a visual analogue scale (VAS) [2], and is the most frequently preferred choice of instrument in health technology assessment (HTA) guidelines [3]. Two versions of the EQ-5D are available: the 3-level EQ-5D (EQ-5D-3L) with three severity levels for each dimension and the 5-level EQ-5D (EQ-5D-5L) with five severity levels [2]. In order to interpret PROs such as the EQ-5D, minimally important difference (MID) thresholds can be applied to determine whether change in scores translates into markers of clinical improvement, or via defining responders to treatments [4]. MID has been defined as "the smallest difference in score in the domain of interest that patients perceive as important, either beneficial or harmful, and which would lead the clinician to consider a change in the patient's management" [5]. Terminology relating to MID can be confusing, with multiple terms that differ in definition, which has led to inconsistency in the terminology used [6,7]. Further, there are differences in methods for estimating MID and minimal important change (MIC), which vary in methodological robustness [7]. De Vet and Terwee (2010) highlight that while MIC and MID are frequently used interchangeably, the authors prefer the use of MIC instead of MID, in order to differentiate changes from differences [8]. Some guidance on the use of MID has been provided by regulatory agencies, such as the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) [9-11]. In order for both agencies to accept the clinical relevance of PRO data to support labeling claims, thresholds must be justified by the sponsor and defined a priori in the study protocol and statistical analysis plan [9-11]. Furthermore, while some leading HTA bodies such as the National Institute for Health and Care Excellence (NICE) and the Institute for Clinical and Economic Review (ICER) have not included information on the adoption of MID in their methods for health technology evaluations [12,13], other agencies have incorporated it into their guidance. The Haute Autorité de Santé (HAS) recognises that MIDs can be used to overcome challenges of interpreting HRQoL data; however, data must be subject to rigorous methodology, with at least one clinical relevance threshold specified in study protocols, for assessment by the Commission de la Transparence [14]. More recently, the Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen (IQWiG) updated its General Methods in November 2020 and stated that responder analyses using an MID would be used for assessment, providing that the analyses were pre-specified in study protocols and the response criterion corresponds to at least 15% of the scale range of the PRO used [15]. MID threshold estimates can be derived through several approaches (e.g., anchor-based, distribution-based) and there is no consensus on an MID to use for the EQ-5D utility index and EQ-VAS [16]. Estimated thresholds can vary by patient population, clinical context, sociodemographic factors, and at the group level, depending on whether patients' health status improves or deteriorates [6].
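To make the IQWiG criterion above concrete, the sketch below computes a response threshold as 15% of an instrument's scale range and applies it in a simple responder analysis. It is an illustration only; the change scores are invented and the helper names are not from any guideline.

```python
def iqwig_min_response(scale_min, scale_max):
    """IQWiG criterion: response threshold of at least 15% of the scale range."""
    return 0.15 * (scale_max - scale_min)

def responder_rate(change_scores, threshold):
    """Proportion of patients whose improvement meets or exceeds the threshold."""
    responders = [c for c in change_scores if c >= threshold]
    return len(responders) / len(change_scores)

# The EQ-VAS runs from 0 to 100, so the criterion equates to >= 15 points.
threshold = iqwig_min_response(0, 100)      # 15.0
vas_changes = [3, 18, -5, 22, 9, 15]        # hypothetical per-patient changes
print(threshold, responder_rate(vas_changes, threshold))  # 15.0 0.5
```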
As part of a broader study to review the extent to which EQ-5D is used as a clinical outcome assessment endpoint in health technology assessment (HTA) decisions, regulatory labeling claims, and published literature [17], the objective of this study was to identify to what extent information on the MID of EQ-5D utility index and EQ-VAS scores is required and utilised by HTA agencies. Study selection All retrieved documents which included EQ-5D-related terminology were reviewed by one analyst, and 10% were reviewed by a second analyst. Records were included or excluded according to pre-specified eligibility criteria. Inclusion criteria for the broader HTA review included drug technologies intended for human use, and EQ-5D data (utility index and/or VAS) presented outside of the context of economic evaluation in guidance documents and supporting material. Appraisal documents which described non-drug technologies (e.g., medical devices, procedures, diagnostics, or digital applications), referred to EQ-5D data only in the context of economic evaluation, presented EQ-5D-Y data only, or those related to minor modifications of the marketing authorisation which did not provide additional data (e.g., 'demande de renouvellement d'inscription' or 'application for renewal of registration' reviews conducted by HAS) were excluded. Any disagreements between analysts were resolved through discussion until a consensus was reached. Data extraction and synthesis Data from the included TAs were extracted by one analyst and quality checked by a second analyst. Data were extracted from guidance documents, and additional data (e.g., further analyses) were extracted from supporting documents (e.g., NICE committee papers or G-BA Tragende Gründe zum Beschluss and Zusammenfassende Dokumentation), where available. As G-BA TAs were identified at a later date, abbreviated data extractions were performed for G-BA TAs, whereby only differences between data reported in linked G-BA and IQWiG documents (i.e., reporting the same product and indication) were extracted, to avoid duplication of data. Extracted data included drug assessment details, source and type of EQ-5D data, whether MID was discussed, the level of MID applied and its source, and HTA agency comments about the application of MID. Where outcome data were missing, they were extracted as "not reported". Data were presented descriptively, using a combination of narrative synthesis and summary tables to present frequencies of MID use. MID values were grouped into pre-specified thresholds, based on MID estimates for the EQ-5D utility index (UK scores) and EQ-VAS for cancer [18]. No statistical comparative analyses were performed. Differences between data reported in linked IQWiG and G-BA documents for German HTA submissions were also presented descriptively. Literature search A detailed breakdown of the flow of studies in the HTA review has been described previously [17]. In summary, 1329 HTA decision and supporting documents from 1072 technology appraisals were identified in the literature search. After screening for eligibility, 298 documents from 195 TAs met the inclusion criteria (G-BA n = 60, HAS n = 11, ICER n = 3, IQWiG n = 78, NICE n = 43). However, only 16 of the 60 G-BA TAs meeting the inclusion criteria provided additional EQ-5D data to linked IQWiG TAs and were extracted. Therefore, 151 TAs were considered for MID data.
Of those which mentioned the MID, a greater proportion discussed the MID for the EQ-VAS (86%) than the EQ-5D utility index (5%), or the utility index and EQ-VAS in combination (5%; see Table 2). Forty-six of the 53 (87%) appraisals which discussed the MID for the EQ-VAS were German. EQ-5D MID thresholds reported Reported MID thresholds stratified by HTA agency are summarised in Table 3. Of the 58 appraisals which mentioned MID, 50 (86%) reported the threshold utilised (thresholds were reported for both the EQ-5D utility index and EQ-VAS in 1 NICE [28] and in 1 HAS TA [29]). Differences in MID between G-BA and IQWiG appraisals of the same product When MIDs were compared between G-BA and IQWiG appraisals of the same product and indication (linked appraisals), 4 (25%) G-BA appraisals which presented additional EQ-5D data reported different MID usage [21,22,26,47] (Table 5). In all cases, the MID threshold was reported in G-BA but not in IQWiG TAs. For 31% of G-BA TAs, MID thresholds were not reported in either one or both of the linked IQWiG and G-BA documents. Acceptability of EQ-5D MID data In 34 appraisals, HTA agency comments were provided about the acceptability of the MID source and/or thresholds applied by the submitting companies, almost all of which were from Germany (G-BA n = 12, IQWiG n = 19, NICE n = 3). In 2 NICE TAs [30,69], it was noted that there was a lack of clarity about the MID thresholds applied, and in another it was noted that results should be interpreted cautiously due to small patient sample sizes [32]. In a fourth, the Evidence Review Group stated it was "satisfied that the company's approach to analysing patient-reported outcomes was pre-specified" (including applying an MID of ≥ 0.08 to the EQ-5D-5L utility index) and that the approach was appropriate [31]. However, German HTA agencies were more critical of MID data analyses, particularly in reference to a lack of pre-specification of the MIDs utilised [37,68,70] and of their source. In 13 TAs, IQWiG criticised the use of Pickard et al. 2007 [18] as the source of MID thresholds for the EQ-VAS, as it was perceived as being unsuitable for assessing the validity of the MID [33, 34, 36, 39, 41, 42, 48, 49, 56-58, 62, 63]. Consequently, MID analyses were excluded from the benefit assessment. Similarly, in the assessment of daratumumab (Darzalex, Janssen-Cilag International NV) [68], analyses of EQ-VAS data based on MIDs estimated by Hurst et al. 1997 [67] were also considered to be inappropriate and excluded from the benefit assessment, as it was noted that a MID for the EQ-VAS was not examined in Hurst et al. 1997 [67]. The G-BA echoed the opinion of IQWiG that the MID from Pickard et al.
2007 [18] was unsuitable, as the MID was not derived from a longitudinal study [19-27]. Furthermore, the G-BA stated that the Eastern Cooperative Oncology Group Performance Scale (ECOG-PS) and Functional Assessment of Cancer Therapy - General (FACT-G) total score anchors used in the study were also not considered by IQWiG to be suitable for deriving a MID; however, the reasoning for this was not provided [19-21, 26]. In several cases, IQWiG utilised continuous analyses of EQ-VAS data (e.g., standardised mean differences [a summary statistic where standard deviations are used to standardise results of studies to a single, weighted scale [71]] in EQ-VAS score, expressed as Hedges' g [an effect size measure representing the standardised difference between means [72]]) instead of responder analyses (the proportion of patients achieving a pre-defined level of improvement [73]) based on a MID [19-24, 26, 27, 70]. Nevertheless, the G-BA differed from IQWiG and considered responder analyses using the EQ-VAS in its decision making, citing that responder analyses based on a MID for clinical evaluation of effects have advantages over analyses of standardised mean value differences [19-26, 47, 70]. Discussion In the context of HTA decision making, this study highlighted that estimates of MID are infrequently used to analyse and interpret EQ-5D data outside of Germany. Overall, 38% of included records (n = 58/151) discussed MID of the EQ-5D in some context, 79% (n = 46/58) of which were from Germany. Considering we found in the broader HTA review that 100% of IQWiG and 94% of G-BA TAs reporting EQ-5D data for COA were for the EQ-VAS only [17], it was perhaps unsurprising that 86% of all TAs and 100% of German TAs mentioning MID were for the EQ-VAS. Due to the small proportion of TAs discussing MID for the EQ-5D utility index (n = 5, 9%), limited conclusions can be drawn from the data. Thresholds were reported in 1 HAS and 4 NICE TAs and sources were provided for 2 TAs, none of which were duplicated. However, NICE did note in 1 TA that the approach used to analyse EQ-5D utility index data was appropriate [31]. Pickard et al. 2007 [18] was the most frequently cited source of MID, in 88% (n = 35/40) of TAs which reported the source, and was exclusively used for the EQ-VAS in German submissions. In this reference, Pickard et al.
estimated cancer-specific MIDs for EQ-VAS scores ranging from 7 to 10, when MIDs were averaged across the anchor-based categories derived using FACT-G quintiles. In our review, we found 10 different variations in MID around the 7 and/or 10-point threshold from TAs quoting this source, with scores greater than 7 or greater than 10 points as the most frequently reported MIDs. We also found that of the TAs which reported the source of MID (n = 40), almost all applied thresholds to patient populations with the same indication as the source (95%, n = 38). While HAS recognises the benefits of using MIDs in its guidance [14], currently there are no recommended MID thresholds for NICE, HAS, or ICER. However, in November 2020, IQWiG introduced a value of at least 15% of the scale range of the generic or disease-specific instrument used, which was derived from the findings of a systematic literature review of MIDs in 8 therapeutic areas [15]. As there is no universal MID estimate to use for each PRO, and MIDs can be highly variable, IQWiG adopted this approach to ensure that suitable response thresholds are used in responder analyses for benefit assessments and to minimise selective outcome reporting, which could arise from selecting one of many available MIDs. As the EQ-VAS is predominantly used in Germany, and the scale ranges from 0 to 100 points, this criterion equates to an improvement in responses of 15 points or above. In this review, we found that no TAs reported using thresholds starting at or above 15 points for the EQ-VAS. The highest threshold utilised was 12 points in a NICE TA of gilteritinib for treating relapsed or refractory acute myeloid leukaemia [74]. Furthermore, despite the availability of MID estimates for the EQ-VAS in disease areas such as chronic obstructive pulmonary disease, oncology, osteoarthritis, and Crohn's disease [18, 66, 75-78], we were unable to identify MID estimates that meet IQWiG's recommendations. It is therefore possible that this new MID requirement could be unrealistically large for the EQ-VAS and could result in fewer products gaining added value benefit based on PRO data. Further research is required to identify whether a 15% improvement in the EQ-VAS is a minimally meaningful change as perceived by patients. Discussion of the acceptability of EQ-5D MID data varied between HTA agencies. There was no mention of it in the included appraisals by HAS and ICER. Four NICE TAs included agency comments related to the MID of EQ-5D data, one of which was favourable, and all except 1 drug were recommended. Given that these HTA agencies have not published recommendations on MID thresholds to use (or even discussed MID in guidance documents), the low frequency of TAs discussing MID does not appear to have negatively impacted the final decision making on drug technologies by HAS, ICER, or NICE. Conversely, the acceptability of EQ-5D MID data was frequently discussed in German TAs, including 12 G-BA and 19 IQWiG TAs. Key criticisms referred to a lack of pre-specification of MID analyses in study protocols and the validity of the thresholds used. Principally, IQWiG did not utilise EQ-VAS responder analyses in submissions citing Pickard et al. 2007 [18], as this source was not deemed suitable to demonstrate validity of the EQ-5D MID. In agreement, the G-BA further elaborated that the main concern related to the cross-sectional design of the study underpinning Pickard et al.
's MID analyses. Concerns were also expressed about the choice of anchors. In these cases, the G-BA noted that IQWiG utilised continuous analyses of EQ-VAS (e.g., standardised mean differences in EQ-VAS score, expressed as Hedges' g) instead of responder analyses based on a MID. However, contrary to these criticisms, the G-BA still considered responder analyses in its decision making, preferring process consistency and recognising the advantages of responder analyses based on a MID compared with analyses of standardised mean value differences. Since the searches were performed in this literature review, the G-BA has adopted the mandatory requirement to use the 15% threshold suggested by IQWiG [79] to define the MID threshold used in responder analyses. Therefore, in future, we anticipate the exclusion of EQ-VAS responder analyses from benefit assessments in a greater number of TAs where the chosen MIDs do not meet the 15% threshold. Pharmaceutical companies should consider PRO requirements that are relevant for HTA decision-making when designing clinical trials. Until MIDs meeting a 15% threshold for the EQ-VAS are available, companies should include the pre-specification of analyses of continuous data (i.e., standardised mean differences expressed as Hedges' g) in statistical analysis plans in order to be considered for treatment benefit assessment in Germany. Strengths, limitations, and scope for further work This study incorporated appraisals from multiple HTA agencies from the same time period, which allowed for direct comparison of EQ-5D MID data across different markets. Five agencies were chosen for review, as they are leading global HTA bodies which release publicly available and transparent documents for each technology. However, they may not necessarily reflect the use of EQ-5D amongst other agencies. Further investigation across additional HTA agencies could help expand the context of the results detailed here. It is also important to note that searching of G-BA documents was added at a later date; therefore, data are not presented in the same way as for other agencies. This is because abbreviated extractions were performed which focused on data beyond what was reported in the linked IQWiG documents, so as not to introduce duplicated data. Another limitation surrounds the chosen two-year timeframe in the search strategy. As the searches were not limited by disease area or drug technology, there was a large volume of articles to be screened. While this approach allowed exploration of trends between HTA agencies as part of the broader literature review, there were relatively low numbers of included TAs which mentioned MID for some HTA agencies. Furthermore, searches were conducted two months after IQWiG updated its guidance on the use of MID for analysing PRO data. Further research is warranted to identify longitudinal trends in MID usage, and whether these guidelines have affected the proportion of drug assessments with accepted PROs and benefit ratings affected by PROs since coming into effect.
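For reference, the continuous analyses repeatedly mentioned above (standardised mean differences expressed as Hedges' g) amount to a short calculation. The sketch below implements the standard formula with its small-sample correction; it is a generic illustration with invented data, not code from IQWiG or the G-BA.

```python
import math

def hedges_g(group_a, group_b):
    """Standardised mean difference with Hedges' small-sample correction."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    d = (ma - mb) / pooled_sd                  # Cohen's d
    correction = 1 - 3 / (4 * (na + nb) - 9)   # Hedges' correction factor J
    return d * correction

# Hypothetical EQ-VAS change scores for treatment vs. comparator arms.
treatment = [12, 8, 15, 10, 6, 14]
comparator = [4, 7, 2, 9, 5, 3]
print(round(hedges_g(treatment, comparator), 2))  # ~1.75 for these toy data
```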
Conclusions The MIDs of EQ-5D outcomes were not frequently discussed in HTA dossiers outside of Germany, and this did not appear to negatively impact the decision-making of HTA agencies. While MID thresholds were often applied to EQ-VAS data in German TAs, these analyses were frequently rejected from benefit assessments due to concerns with the validity of their source. Furthermore, although most thresholds for the EQ-VAS were greater than 7 or 10 points, no thresholds started at or above IQWiG's recommended threshold of 15 points. Companies should carefully consider utilising appropriate MID thresholds according to HTA agency requirements during clinical trial design, to demonstrate product value. Specifically for Germany, until MIDs meeting a 15% threshold for the EQ-VAS are available, study sponsors should include the pre-specification of analyses of continuous data (i.e., standardised mean differences expressed as Hedges' g) in statistical analysis plans to be considered for treatment benefit assessment. Table 1 Discussion of minimally important difference, stratified by HTA agency. Abbreviations: G-BA, Gemeinsamer Bundesausschuss; HAS, Haute Autorité de Santé; HTA, health technology assessment; ICER, Institute for Clinical and Economic Review; IQWiG, Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen; MID, minimally important difference; NICE, National Institute for Health and Care Excellence. Table 2 Discussion of minimally important difference, stratified by EQ-5D measure and HTA agency. Abbreviations: G-BA, Gemeinsamer Bundesausschuss; HAS, Haute Autorité de Santé; HTA, health technology assessment; ICER, Institute for Clinical and Economic Review; IQWiG, Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen; NICE, National Institute for Health and Care Excellence; NR, not reported; VAS, visual analogue scale. Table 3 EQ-5D MID thresholds reported, stratified by HTA agency. Table 4 Source of MID thresholds for EQ-5D utility index and EQ-VAS.
4,764.2
2024-08-13T00:00:00.000
[ "Medicine", "Economics" ]
Intelligent pH Indicative Film from Plant-Based Extract for Active Biodegradable Smart Food Packing Centre for Drug Discovery and Development, Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India Department of Biotechnology, School of Bio and Chemical Engineering, Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India Department of Biotechnology, Guru Nanak College (Autonomous), Velachery, Chennai, Tamil Nadu, India Department of Zoology, Government Art’s College (A), Karur, Tamil Nadu, India Department of Mechanical and Process Engineering, Hochschule Offenburg, Offenburg University, Germany Department of Zoology, Annamalai University, Tamil Nadu 608002, India Department of Zoology, Arignar Anna Govt Art’s College Cheyyar, Tamil Nadu, India Department of Biology, School of Natural Science, Madda Walabu University, Oromiya Region, Ethiopia AMITY University Mumbai, Maharashtra, India Hydrogel Hydrogels are defined as hydrophilic, three-dimensional cross-linked polymers with a water absorption capacity on the order of 10 g/g [1]. Hydrogels are in wide demand in various sectors such as agriculture [2], drug discovery, and water purification [3,4]. Bashir et al. were the first to write on the use of hydrogels in 1960 [5]. A hydrogel is a three-dimensional polymer network that swells when exposed to water; it contains covalent bonds formed by the reaction of one or more monomers, physical interactions such as van der Waals forces, and hydrogen bonds between chains. Hydrogels can take the form of rigid shaped articles (soft contact lenses), pressed powders (pills or tablets for oral ingestion), small particles (as bioadhesive carriers or for wound treatment), coatings (on implants or catheters), membranes or sheets, and encapsulated solids and liquids [6]. The present analysis is based on making a food packaging film from sustainable, biodegradable materials. To this end, poly(vinyl alcohol) (PVA) is partially replaced by chitosan (CS) in the packaging film. In addition, anthocyanin (ATH), a widely used food colorant extracted here from red cabbage, is added to the cast film as a pH indicator [7]. Chitosan (CS) is a natural polymer derived from chitin and is abundant in crab and shrimp shells [8]. Owing to its abundant amino and hydroxyl functional groups, chitosan has unique traits such as a polycationic nature, chelating properties, and film-forming potential. It also shows biological activities such as antimicrobial action and biodegradability [8]. For further processing of polymer films, chitosan is often blended with other polymers with more flexible chains, such as poly(vinyl alcohol) (PVA) [9]. PVA is an emulsifiable, nontoxic synthetic polymer with elasticity, high tensile strength, and resilience, as well as low permeability to gases such as O2 and CO2. At elevated temperatures, PVA shows mild water solubility [10]. Due to its excellent film-forming properties, PVA has been widely used in thin-film applications such as food packaging and medicine. Anthocyanins are water-soluble pigments found in a variety of plants, including red cabbage, blueberries, eggplants, and flowers. Because their color depends on pH, they can be used as color indicators. The pigment obtained from red cabbage in particular can range from red to purple to blue at various pH values [11,12], making it easier to identify food quality in terms of pH.
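The pH-dependent colour behaviour described above lends itself to a simple lookup. The sketch below is illustrative only: the exact breakpoints of a red-cabbage anthocyanin extract vary with its composition, and the values used here are assumptions, not measurements from this paper.

```python
def anthocyanin_colour(ph):
    """Approximate colour of a red-cabbage anthocyanin extract at a given pH.
    Breakpoints are assumed for illustration; real transitions are gradual
    and depend on the extract's anthocyanin profile."""
    if ph < 3:
        return "red"
    if ph < 7:
        return "purple"
    return "blue"

# A wrapped food whose surface pH rises during spoilage would shift the
# film's colour accordingly, signalling the change to the consumer.
for ph in (2.0, 5.0, 8.0):
    print(ph, anthocyanin_colour(ph))
```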
To increase the mechanical strength of cast films, cross-linking agents such as sodium tripolyphosphate (STPP), glyoxal, and glutaraldehyde must be added during the processing of thin polymer films. In this work, pH indicative films were successfully synthesized from hydrogels made by combining 1% poly(vinyl alcohol) (PVA) and 1% chitosan (CS) with anthocyanin (ATH) and sodium tripolyphosphate (STPP). ATH extracted from red cabbage was used as the pH indicator, and STPP was used as the cross-linking agent to improve the mechanical properties of the cast films. The presence of the ATH in the cast films was verified by FTIR spectra. The cast film tensile strength, elongation-at-break, and swelling indices were also determined. The compositions of PVA/CS and the STPP dosage applied in the hydrogels had a significant impact on the properties of the pH indicative films. When 35 percent of the PVA in the hydrogel was replaced with CS, the tensile strength dropped from 43.27 MPa (film cast from pure PVA hydrogel) to 29.89 MPa. The cast films were used as a food wrap whose color change allows the quality of the wrapped food to be tracked visually as its pH changes. In practice, a sequential color shift was observed on pH-indicative films partially wrapping pork belly, signaling the meat's spoilage [7]. Polymer-Based pH Sensitivity pH-sensitive polymers are polyelectrolytes with weak acidic or basic groups in their structure that accept or release protons in response to changes in pH. These acidic or basic polyelectrolyte groups can be ionized in the same way as the acidic or basic groups of monoacids or monobases; however, complete ionization of these structures is more difficult owing to their larger size and the electrostatic effects of other nearby ionized groups. The pendant groups are weakly acidic (e.g., carboxylic) or weakly basic (e.g., amino), and in response to the pH of the environment they release or accept protons. Under such pH conditions, the functional groups along the backbone and side chains ionize, causing the polymer to swell or dissolve [13]. pH-sensitive anionic polymers are based on poly(acrylic acid) (PAA) or its derivatives. These systems typically carry anionic charge at pH levels above their pKa, which may attract positively charged therapeutic agents [14]. Polymers that respond to pH can be linear, branched, or networked. Depending on their architecture, they can respond differently to solution conditions and show different self-assembly behaviors. A pH change induces swelling or deswelling in hydrogel and dendrimer-like structures, while surfaces modified with such polymers develop ionic character and thin or thick layers in response to pH changes; in short, pH modifications cause changes in polymers of various architectures [15]. These pH-sensitive polymeric systems' intelligent properties are appealing for application in the life sciences and the chemical industry, with possible applications in controlled drug delivery, personal care, industrial coatings, oil exploration, and water remediation, among others [16]. Several popular polymerization methods can be used to make pH-sensitive polymers. Depending on the type of polymerization, functional groups may need to be masked to prevent them from reacting; the masking is removed after polymerization to restore the pH-sensitive functionality. Living polymerization is commonly used to create pH-sensitive polymers because of the narrow molecular weight distribution it affords.
Some examples include group transfer polymerization (GTP), atom transfer radical polymerization (ATRP), and reversible addition-fragmentation chain transfer (RAFT) polymerization. Graft polymers, which have a backbone with branches, are a commonly synthesized form; the branch structure can be modified to achieve various properties [15]. Food Packing Food packaging is one of the most important aspects of a commodity from the consumer's viewpoint and in modern commercial trade [17,18], since it ensures food quality and protection, aids in transportation, allows safe storage, avoids product harm and loss, reduces economic losses, aids in product marketing, and indirectly preserves consumers' health [19]. Typical, traditional food packaging, composed of petroleum-derived compounds, is a protective device: it protects the food product from microbial and physicochemical deterioration, as well as from ambient conditions and external stimuli, extending its shelf life [20]. Biopolymers such as proteins, polysaccharides, and their derivatives are natural polymers that degrade in the environment through natural physical, chemical, and biological processes, especially microbial metabolism. They can be obtained from a variety of natural resources. Natural polysaccharides such as chitosan, starch, cellulose, alginate, agar, carrageenan, pectin, and various gums are commonly used [21,22]. While one of the functions of food packaging is to ensure food quality and safety, modern packaging must also inform the customer about the food's quality and suitability for consumption. For this reason, several smart packaging systems based on colorimetric indicators have been developed to provide consumers with real-time quality monitoring of food items through quality sensors/indicators [23]. Due to their nontoxicity, eco-friendliness, easy preparation, biodegradability, low cost, availability, sustainability, and pollution-free properties, smart packaging based on natural colorants and biodegradable films has recently emerged as an attractive alternative among the various freshness indicators and sensors applied in food systems. These natural colorants, embedded in the biopolymeric film matrix, change color as the physiological condition of the food changes during spoilage, thereby informing the customer of the packaged food's quality and suitability for consumption [24]. Plant Pigment Pigments are found in every organism in the world, and plants are the major producers; pigments are responsible for the colors we see every day. Leaves, fruits, vegetables, and flowers contain them, as do the skin, eyes, and other animal tissues, and microbes and fungi (Figure 1). Medicines, foods, clothing, furniture, cosmetics, and other products employ natural and synthetic pigments [25]. Pigments are chemical substances that absorb light in the visible wavelength range. The color is produced by a molecule-specific structure called a chromophore. This structure absorbs the energy, causing an electron to jump from an exterior orbital to a higher orbital. The nonabsorbed energy is reflected and/or refracted to be captured by the eye, and the resulting neural impulses are transferred to the brain, where they can be interpreted as a color [26]. Natural, synthetic, and inorganic pigments are classified according to their origin. Living species such as plants, animals, fungi, and bacteria produce natural pigments.
Laboratory-made synthetic pigments are also available. Organic pigments may be natural or man-made. Inorganic pigments are found in nature or can be made synthetically [27]. To provide the desired functional qualities of a pH indicator, natural colorants such as anthocyanins can be incorporated into biodegradable starch films. Anthocyanins are secondary metabolites found in a wide range of fruits and vegetables (e.g., red cabbage, sweet potato, bean husk, and grapes), making them a viable source of natural pH indicators [28]. Red Cabbage Pigment The red cabbage (Brassica oleracea L. var. capitata f. rubra) is a cabbage recognized by its color. Its leaves are somewhat blue-purple rather than red in tone, and the head is orbicular and firmly wrapped with waxy leaves. Red cabbage is currently grown and traded worldwide as a colorant in the food business; in addition, it has been used for medicinal purposes [29]. Red cabbage is rich in phenolic compounds, among which anthocyanins are the most significant [30]. Anthocyanins are produced via the phenylpropanoid pathway and are classified as flavonoids [31]. Red cabbage is a rich source of anthocyanins, which can be used for natural coloration; their color is pH-dependent, shifting from red to blue over a broad pH range [32]. There are 24 different types of anthocyanins in red cabbage, as well as aromatic and aliphatic acids [33]. Antihypertensive, antioxidant, hepatoprotective, and antihyperglycemic effects are some of the physiological properties of anthocyanins [34]. Red cabbage's most common anthocyanin is cyanidin-3-diglucoside-5-glucoside, which can be nonacylated, monoacylated, or diacylated with caffeic, p-coumaric, sinapic, and ferulic acids [35]. Anthocyanins are the most abundant water-soluble colorants, responsible for the blue, purple, and red hues found in many flowers, fruits, and vegetables. In this way, red cabbage can be used as a distinctive colorant in the food industry [36]. Red cabbage is abundant in minerals, nutrients, oligosaccharides, and bioactive chemicals such as anthocyanins, flavonols, and glucosinolates, all of which are beneficial [37]. Red cabbage is also valued by consumers for its flavour and as a source of a deep red color that enhances the food's taste. As a result, red cabbage is a popular and frequently used vegetable as a fresh-cut salad component. Red cabbage also keeps well over long periods, meaning it can be easily stored and made available fresh throughout the year [38]. Pigment, Gel, and Film Red cabbage (Brassica oleracea L.) is an edible source of anthocyanins with high content and prospective production per unit area [39]. Anthocyanin extraction from red cabbage is known to yield high levels of mono- or diacylated cyanidin anthocyanins [40]. Anthocyanin type and acylation are two important factors that determine their color properties at different pH levels [41]. Depending on the pH of the environment, red cabbage anthocyanin extracts can display a wide spectrum of color from orange to red to purple and blue owing to their anthocyanin structures. Anthocyanin acylation affects their antioxidant qualities as well as their stability in food [42]. Anthocyanins make up the largest group of water-soluble colorants and are responsible for the blue, purple, and red hues found in many fruits and vegetables.
Red cabbage could be used as a flavouring agent in food production and as a pH indicator [43]. Because of the formation of covalent bonds, ionic interactions, hydrogen bonding, and hydrophobic interactions, the three-dimensional networks of hydrophilic polymers in hydrogels have attracted attention for incorporating a greater number of drugs for application in slow, sustained, and controlled drug delivery [44]. Chitosan (CS) is a natural polymer derived from chitin and found abundantly in the shells of crabs and shrimps [8]. Chitosan alone is generally not ideal in polymer processing for film production owing to its mediocre mechanical properties, such as low tensile strength, low elongation-at-break, and low Young's modulus. Poly(vinyl alcohol) (PVA) is a water-soluble, biodegradable synthetic polymer with high tensile strength, and composites blended from PVA and chitosan are known to have improved stability, biocompatibility, and mechanical strength relative to those of pure PVA and pure CS polymers [45]. Solvent casting and hot-melt extrusion are currently two of the most prominently applied processing methods for producing thin polymer films. The hot-melt extrusion process typically requires high energy input, for example mechanical energy in the form of very high shear stress as well as thermal energy, to keep the processed polymer in the molten state. As such, this technique is not appropriate for processing polymers with heat- and/or shear-sensitive components. Conversely, solvent casting is the preferred method for manufacturing films containing temperature-sensitive ingredients such as anthocyanin, because the temperature needed to evaporate the solvent is typically lower than the process temperature of hot-melt extrusion [46]. Food Packing-pH Film In recent years, public concern over the disposal of traditional synthetic plastics has grown, particularly because their degradation time is long. As a result, biodegradable film research has seen a boom in interest. Many studies have investigated biodegradable films, such as edible films and coatings manufactured from edible ingredients, in the hope of increasing food quality and extending shelf life [47]. Chitosan is a natural cationic polysaccharide and a deacetylated chitin derivative derived primarily from shellfish processing waste. Chitosan has antimicrobial properties, which means it can stop fungi, yeast, and bacteria from growing. Chitosan film has been found to have high mechanical strength, flexibility, biodegradability, and antibacterial properties [48]. The food industry has invested in smart packaging in response to customer demand for fresher items with a longer shelf life (Figure 2). Smart packaging with pH indicators was created with the aim of establishing a deliberate interaction between food and packaging in order to improve quality monitoring [49]. As food deterioration is linked to pH changes in the product, consumers can detect these changes in the food simply by observing the color of the packaging [50]. Since plastic is commonly used in many applications, including food packaging, pH indicator packaging can be developed from either fossil or renewable sources.
Plastic, on the other hand, has a detrimental effect on the environment because it pollutes and is not biodegradable. Biodegradable plastics based on starch, alginate, and natural fibers, for example, can be used as food coatings due to their degradability and preservation ability, and represent a promising solution to this problem aimed at environmental preservation and partial replacement of traditional polymers derived from crude oil [51]. Haghighi et al. [51] explained that the recent sharp rise in environmental concern arising from plastic packaging has sparked interest in more environmentally friendly packaging materials. This trend encourages commercialization through the use of chitosan-based films. Because of its unusual biological and functional properties, chitosan has been extensively researched and used. However, inherent flaws such as poor mechanical properties and high susceptibility to humidity restrict its industrial applications, including food packaging. The scientific literature addressing chitosan-based films for their potential application in the food packaging industry has been extensively reviewed in the current research. The paper summarizes the various techniques used to resolve inherent flaws in chitosan-based films and enhance their properties, with a focus on blending with natural and synthetic biopolymers [52]. Wang et al. explained that chitosan and poly(vinyl alcohol) (PVA) were used to create a semi-interpenetrating polymeric network cross-linked with glutaraldehyde. The chitosan had a molecular weight of 612 kDa and a degree of deacetylation of 72 percent. The chemical bonds formed by the cross-linking reaction were investigated, as well as their transformation in different pH media. The mechanical properties of the hydrogel and the gelation behavior of the chitosan-PVA gel solution were investigated. The formation of Schiff's base (C=N) and -NH3+ was suggested by the FTIR spectra of the hydrogel before and after swelling at pH 3 and pH 7. They also demonstrated the pH-induced transformation of C=N to C-N and -NH3+ to -NH2, as well as the instability of the Schiff's base. Chitosan is required for hydrogel formation owing to the Schiff's base reaction between the chitosan amino groups and the glutaraldehyde aldehyde groups. The addition of PVA enhanced the hydrogel's mechanical properties. PVA, on the other hand, appears to leach out in the acidic medium during longer swelling periods due to hydrolysis of the gel network's Schiff's base [53]. Alizadeh-Sani et al. explained that the recent rise in awareness of safe food and changing consumer attitudes has driven innovation in packaging technology. Consumers are increasingly requesting natural food colorants such as carotenoids, betalains, anthocyanins, and chlorophylls instead of synthetic dyes for food applications. Accordingly, smart packaging based on natural colorants and biopolymers has been introduced as the latest innovation in the food packaging field. Smart products not only shield food from environmental risks but also convey real-time messages (colorimetric, chemical, or electrical) to consumers about changes in the packaging environment and food quality [21]. Carvalho et al. explained that anthocyanin pigments are suitable as natural colorants for food, cosmetics, and dietary supplements, owing to the demand for healthier products and to their antioxidant properties.
This work aimed to extract anthocyanin pigments from red cabbage and to separate them from solution by adsorption onto chitosan films. The anthocyanins were extracted from red cabbage in hot water at 90°C for 15 min. Chitosan was obtained from shrimp waste, and its films were produced by the casting technique (tensile strength of 25.1 ± 1.9 MPa, elongation of 10 ± 3.5%, and thickness of 103.1 ± 1.3 μm). The anthocyanin adsorption tests were performed in batch mode, and the highest adsorption capacity was around 140 mg g−1 [30]. Pereira and colleagues elaborate on food packaging with time-temperature indicators, one of the so-called intelligent packaging types. These use a device that monitors the state of food in real time, demonstrating the overall impact of temperature on food quality. The goal of this study was to develop and characterize a time-temperature indicator (TTI) based on a PVA/chitosan polymeric matrix doped with anthocyanins that may be utilized to detect changes in the pH of packaged foods exposed to incorrect storage temperatures. To manufacture the TTI, chitosan, PVA, and anthocyanins extracted from Brassica oleracea var. capitata (red cabbage) were used. TG-DSC, FTIR, UV-Vis, and swelling index (Si) methods were used to characterize the TTI. The color variation following activation at various pH values was calculated using the CIELAB scale. The mechanical parameters of the TTI were determined using stress/strain tests. Despite having a lower modulus of elasticity than commercial polymers used in food packaging, the produced TTI has physicochemical qualities that make it desirable for use in intelligent food packaging. The TTI presented there was supported by an activation test on pasteurised milk with obvious changes in the coloring of the film, which is crucial for signaling to consumers that the food has been subjected to changes in its chemical composition [54]. Castillo et al. explained that chitosan and starch are biodegradable polymers with excellent film-forming capabilities and a wide range of food-related applications, including active and smart packaging that can track and inform customers about food conditions in real time. Accordingly, they present a pH monitoring system based on chitosan, corn starch, and red cabbage extract, all of which are inexpensive and renewable. The device, made from cornstarch, a medium-molecular-weight chitosan, and phytochemical extract of Brassica oleracea var. capitata (red cabbage), was characterized by TG-DSC, FTIR, water vapour transmission rate, and light microscopy. The color variation following activation in various pH ranges was calculated using the CIELAB approach. To confirm the device's utility as a fish spoilage detection sensor, application tests using fish fillets were conducted. According to these findings, the device exhibits strong optical and morphological features and is particularly sensitive to pH changes. During the application test, the device visually displayed pH changes, responding quickly to changes in sample pH. It might therefore be utilized as a visual indicator of how food has been stored and whether it is fit for consumption [55]. Balbinot-Alfaro et al. noted that intelligent packaging can emit a signal (electric, colorimetric, etc.) in real time in response to any change in the initial packaging conditions and food quality, in addition to acting as a food safety barrier. The colorimetric sensor in pH indicators or pH sensors is normally made up of two parts: a solid support and a dye that are sensitive to pH change.
The dyes are derived from a variety of fruits and vegetables, as well as from synthetics. The pH of food changes at the start of the degradation process; this transition is one of the measures of product quality. Packaging with a pH indicator is a safety measure that can indicate the quality of the food at the time of purchase, prior to consumption. The aim of this research is to improve the characteristics and applicability of this indicator. This review paper covers studies on pigments, polymers, food, and packaging solutions, as well as an overview of the materials/technologies used in production and the perspectives/challenges that this new technology brings [24]. Hydrogels consisting of cellulose are hydrophilic materials that can absorb and hold a considerable quantity of water in their interstitial spaces. They comprise a variety of organic biopolymers such as cellulose, chitin, and chitosan. These polymers exhibit a wide range of outstanding features, including responsiveness to pH, time, temperature, chemical species, and biological conditions, as well as a high propensity to absorb water. Biopolymer hydrogels can be modified and designed for a wide range of uses, prompting a recent increase in scientific research. Researchers all over the world are focusing on naturally derived hydrogels in response to increasing environmental challenges and demand, because of their biocompatibility, biodegradability, and availability. Biocompatible materials, such as cellulose hydrogels, can be utilized in medical devices to treat, complement, or replace any tissue, organ, or biological function. These hydrogels could also be employed in agriculture, as smart materials, and in a variety of other applications. This review summarizes recent and ongoing research on the physicochemical properties of cellulose-based hydrogels, as well as their uses in biomedical fields like drug delivery, tissue engineering, and wound healing, healthcare and hygiene products, agriculture, textiles, and industrial applications as smart materials [56]. Fernandez et al. explain that red cabbage is a vegetable known for its enriched bioactive constituents. It is generally used as an ingredient in raw salads or coleslaws, pickles, and boiled and steamed dishes, for its impact on human health and its low-calorie, high-fiber composition. It is widely used in food production to improve the aesthetic value of food and to provide health benefits as a natural colorant in drinks, candies, and gums. It has many health benefits, including protection against cancer and diabetes, as well as strengthening the immune system, aiding in body detoxification, promoting weight loss, improving skin, reducing inflammation, and relieving constipation. Red cabbage's antioxidant content aids in the prevention of chronic illness and the treatment of conditions such as Alzheimer's disease and depression. This paper examines the scientific evidence on red cabbage as well as its pharmacological function [57]. Nanomaterial Food Packaging From better packaging materials with improved mechanical strength, barrier properties, and antimicrobial films, to nanosensing for pathogen detection and alerting consumers to the safety status of food, nano-based "smart" and "active" food packaging offers several advantages over traditional packaging methods.
Nanoparticles are not just employed in antimicrobial food packaging; nanocomposites and nanolaminates have also been used in food packaging to create a barrier against high temperature and mechanical shock, hence prolonging food shelf life. Incorporating nanoparticles into packaging materials provides high-quality food with a longer shelf life. Polymer composites were developed to provide more mechanically robust and thermostable packing materials. To improve polymer composites, a variety of inorganic and organic fillers are used. Conclusion Active and intelligent packaging materials are being developed in the search for environment-friendly packaging solutions. In recent years, extensive research into the creation of novel active packaging technologies with natural pigments has resulted in a wide range of active packaging systems that can be used to increase the shelf life of food products. Data Availability The data used to support the findings of this study are included within the article. Conflicts of Interest The authors have no conflicts of interest to declare.
6,022.8
2022-02-12T00:00:00.000
[ "Environmental Science", "Chemistry", "Materials Science" ]
Preoperative image-guided identification of response to neoadjuvant chemoradiotherapy in esophageal cancer (PRIDE): a multicenter observational study Background Nearly one third of patients undergoing neoadjuvant chemoradiotherapy (nCRT) for locally advanced esophageal cancer have a pathologic complete response (pCR) of the primary tumor upon histopathological evaluation of the resection specimen. The primary aim of this study is to develop a model that predicts the probability of pCR to nCRT in esophageal cancer, based on diffusion-weighted magnetic resonance imaging (DW-MRI), dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and 18F-fluorodeoxyglucose positron emission tomography with computed tomography (18F-FDG PET-CT). Accurate response prediction could lead to a patient-tailored approach with omission of surgery in the future in case of predicted pCR or additional neoadjuvant treatment in case of non-pCR. Methods The PRIDE study is a prospective, single arm, observational multicenter study designed to develop a multimodal prediction model for histopathological response to nCRT for esophageal cancer. A total of 200 patients with locally advanced esophageal cancer - of which at least 130 patients with adenocarcinoma and at least 61 patients with squamous cell carcinoma - scheduled to receive nCRT followed by esophagectomy will be included. The primary modalities to be incorporated in the prediction model are quantitative parameters derived from MRI and 18F-FDG PET-CT scans, which will be acquired at fixed intervals before, during and after nCRT. Secondary modalities include blood samples for analysis of the presence of circulating tumor DNA (ctDNA) at 3 time-points (before, during and after nCRT), and an endoscopy with (random) bite-on-bite biopsies of the primary tumor site and other suspected lesions in the esophagus as well as an endoscopic ultrasonography (EUS) with fine needle aspiration of suspected lymph nodes after finishing nCRT. The main study endpoint is the performance of the model for pCR prediction. Secondary endpoints include progression-free and overall survival. Discussion If the multimodal PRIDE concept provides high predictive performance for pCR, the results of this study will play an important role in accurate identification of esophageal cancer patients with a pCR to nCRT. These patients might benefit from a patient-tailored approach with omission of surgery in the future. Vice versa, patients with non-pCR might benefit from additional neoadjuvant treatment, or ineffective therapy could be stopped. Trial registration The article reports on a health care intervention on human participants and was prospectively registered on March 22, 2018 under ClinicalTrials.gov Identifier: NCT03474341.
Keywords: Esophageal cancer, Neoadjuvant chemoradiotherapy, Pathologic complete response, Image-guided, MRI, DW-MRI, DCE-MRI, PET-CT, ctDNA Background Esophageal cancer is the ninth most common type of cancer and the sixth leading cause of cancer-related death [1]. Surgical resection has long been the standard curative treatment for locally advanced esophageal cancer. However, the poor survival rates of surgery alone prompted many researchers to explore neoadjuvant therapy approaches to improve survival. Randomized clinical trials have demonstrated a consistent survival benefit of neoadjuvant chemotherapy or chemoradiotherapy followed by surgery over surgery alone for locally advanced esophageal cancer [2][3][4]. In the Netherlands, this resulted in the adoption of neoadjuvant chemoradiotherapy (nCRT) according to the CROSS regimen followed by surgery as standard of care [4]. Nearly one third of all esophageal cancer patients (29%) treated with nCRT have no viable tumor cells detected at the primary tumor site at histopathological evaluation of the resection specimen, referred to as pathologic complete response (pCR) [4]. It has been argued that in patients who achieve a pCR, surgery may be omitted without substantially reducing survival outcomes. In fact, as an esophagectomy is associated with substantial morbidity, mortality (up to 3-5%) and impaired quality of life [5][6][7][8][9], it can be speculated that surgery may have a detrimental effect on these patients. Consequently, proper identification of pathologic complete responders prior to surgery could yield an organ-preserving regimen avoiding esophagectomy and its postoperative complications. Conversely, 18% of patients have more than 50% vital residual tumor cells in the primary tumor bed at histopathology after nCRT and surgery, referred to as non-responders [4]. The CROSS regimen is associated with grade ≥ 3 toxicity events according to the Common Terminology Criteria for Adverse Events (CTCAE) in up to 13% of patients [4]. Thus, these non-responders are exposed to the side effects of nCRT, probably without the benefits. Therefore, early identification of the non-responders during nCRT may be beneficial, as alternative treatment strategies could be explored for this group, such as additional neoadjuvant treatment, or ineffective therapy could be stopped. Several diagnostic strategies have been proposed to predict response and ultimately omit surgery in selected patients. Computed tomography (CT) is preferred in the initial staging of esophageal cancer, especially with regard to the presence of distant metastases, but does not allow satisfactory restaging after nCRT (accuracies ranging from 51 to 75%) [10][11][12]. Remaining tumor tissue is difficult to distinguish from therapy-induced peritumoral fibrosis and inflammation. As such, CT tends to overstage the preoperative tumor status. In contrast, promising results for response prediction were obtained using repeated integrated 18F-fluorodeoxyglucose positron emission tomography and computed tomography (18F-FDG PET-CT), with accuracies ranging from 76 to 85% [14,[16][17][18]. The change in 18F-FDG uptake during nCRT, reflecting a change in glucose metabolism by cancer cells, may be used to identify these responders [19].
A systematic review on the value of these quantitative 18F-FDG PET(-CT) measurements, including 20 studies, showed that response could be predicted with sensitivities ranging from 33 to 100% (pooled estimate of 67%) and specificities ranging from 30 to 100% (pooled estimate of 68%) [19]. This supports the concept that functional imaging could play an important role in accurate response prediction. In this light, magnetic resonance imaging (MRI) has recently shown great potential for response prediction to nCRT for esophageal cancer [20][21][22][23]. Diffusion-weighted MRI (DW-MRI) is a functional imaging modality that allows for tissue characterization by deriving image contrast from restriction of the free diffusion (i.e. random mobility or Brownian motion) of water molecules, which is related to microstructural tissue organization. An apparent diffusion coefficient (ADC) map can be derived from the DW-MRI images to quantify the diffusion restriction in a certain volume of interest. The ADC is inversely correlated with tissue cellularity. As chemoradiotherapy can result in the loss of cell membrane integrity, tumor response can be detected as an increase in tumor ADC. In two exploratory studies, the treatment-induced relative change in ADC over time (ΔADC) during nCRT appeared highly predictive of histopathological response [22,24]. Using repeated DW-MRI only, a high area under the receiver operating characteristic curve (AUCROC) was attained for identifying pathologic complete responders [22,24]. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), the acquisition of serial MR images while intravenously administering a contrast agent, provides further insight into tissue properties related to perfusion. Based on these images, quantitative parameters such as the transfer constant (Ktrans) and the blood-normalized initial area under the gadolinium concentration curve (AUC) can be calculated. The AUC reflects blood flow, vascular permeability and the fraction of interstitial space [16]. In a pilot study, all pathologic complete responders showed a decrease in AUC of 25% or more over the entire treatment course (ΔAUC), whereas an increase in AUC during treatment was observed for those patients who did not obtain a pCR (p = 0.003) [18]. In addition to functional imaging, circulating tumor cells and the corresponding circulating tumor DNA (ctDNA) have been proposed as noninvasive and real-time biomarkers for predicting patient prognosis in esophageal carcinomas [25][26][27][28]. Circulating tumor cells and ctDNA are present in the blood vessels adjacent to the tumor, and are subsequently transported throughout the body via the circulation [27]. As such, ctDNA reflects the presence of disease and could provide valuable information on the response to treatment. Since ctDNA can be detected from regular peripheral blood samples, its detection could be a promising, minimally invasive addition to the evaluation of treatment response and prognosis in esophageal cancer patients. Study aim As the aforementioned modalities do not individually fulfill the requirements to justify treatment decision making, the primary aim of the current study is to develop a multimodal prediction model that predicts a patient's individual probability of a pCR after nCRT for esophageal cancer.
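To make the quantitative DW-MRI measures above concrete, the minimal sketch below (Python; all signal values and function names are hypothetical) illustrates the textbook relations: the ADC follows from the mono-exponential DWI signal model S(b) = S0 * exp(-b * ADC), and the predictor ΔADC is the relative change in tumor ADC between scan time points. It illustrates the standard definitions only and is not the study's analysis pipeline.

import numpy as np

# Mono-exponential DWI signal model: S(b) = S0 * exp(-b * ADC).
# With signals at two b-values (e.g. b = 0 and b = 800 s/mm^2),
# the ADC follows from the log-ratio of the two signals.
def adc_two_point(s_low, s_high, b_low=0.0, b_high=800.0):
    return np.log(s_low / s_high) / (b_high - b_low)

# Treatment-induced relative change in tumor ADC (delta-ADC).
def delta_adc(adc_pre, adc_per):
    return (adc_per - adc_pre) / adc_pre

# Hypothetical mean tumor signals before and during nCRT.
adc_pre = adc_two_point(1000.0, 350.0)  # about 1.31e-3 mm^2/s
adc_per = adc_two_point(1000.0, 280.0)  # about 1.59e-3 mm^2/s
print(f"delta-ADC = {delta_adc(adc_pre, adc_per):+.1%}")  # about +21%

A rising ADC during treatment is consistent with the loss of cell membrane integrity described above; in practice the ADC would be fitted per voxel over the full set of acquired b-values rather than from two points.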
Accurate prediction of the response to nCRT could lead to a patient-tailored approach with omission of surgery in the future in case of predicted pCR, potentially improving quality of life and reducing health care costs. Furthermore, additional neoadjuvant treatment could be offered to patients in case of non-pCR. Objectives The primary objective of the study is to develop a multimodal prediction model that predicts a patient's individual probability of a pathologic complete response to nCRT in esophageal cancer by integrating DW-MRI, DCE-MRI and 18F-FDG PET-CT scans acquired prior to, during and after administration of nCRT. The secondary objectives are as follows:
- To evaluate the accuracy of the multimodal prediction model as developed under the primary objective for the prediction of a pathologic good response (i.e. tumor regression grade [TRG] 1 or TRG 2).
- To evaluate the effectiveness and efficacy of an endoscopic and endosonographic assessment after nCRT for the detection of residual disease, in relation to the response classification as predicted by the model developed under the primary objective.
- To evaluate the presence of, and changes in, ctDNA during nCRT as a biomarker for a patient's response to nCRT, the detection of residual disease after nCRT, and progression-free and overall survival.
- To evaluate the accuracy of the multimodal prediction model as developed under the primary objective with the addition of the endoscopic and endosonographic assessment and the ctDNA measurements for the prediction of pCR and pathologic good response (i.e. TRG 1 or TRG 2).
- To evaluate the accuracy of a visual assessment for the detection of residual disease after nCRT based on MRI and 18F-FDG PET-CT.
- To evaluate the performance of MRI and 18F-FDG PET-CT imaging parameters for the prediction of progression-free and overall survival.
Study design The PRIDE study is a prospective, multicenter observational study with participation of 4 high-volume centers in the Netherlands (University Medical Center Utrecht, The Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital, University Medical Center Groningen and Amsterdam University Medical Centers). Patients will be informed and included at the outpatient department of one of these investigational centers. The study has been approved by the Medical Ethics Review Committee of the University Medical Center Utrecht (17-941, NL62881.041.17). All participating hospitals gave their consent after assessment of local feasibility. Written, voluntary, informed consent to participate in the study will be obtained from all patients. Study population In order to be eligible to participate in this study, a patient must be scheduled to receive nCRT for a potentially resectable, locally advanced (cT1b-4aN0-3 M0) esophageal or gastroesophageal junction tumor, either squamous cell carcinoma or adenocarcinoma. Neoadjuvant chemoradiotherapy will be delivered according to the CROSS regimen [4], consisting of weekly administration of carboplatin (doses titrated to achieve an area under the curve of 2 mg per milliliter per minute) and paclitaxel (50 mg per square meter of body-surface area) for 5 weeks with concurrent radiotherapy (41.4 Gy in 23 fractions, delivered 5 days per week on workdays with intensity-modulated radiotherapy), followed by esophagectomy after 8-10 weeks. Primary diagnosis and staging will be based on endoscopy, EUS, 18F-FDG PET-CT and histopathological evaluation of a tumor biopsy.
Patients who meet exclusion criteria for MRI or for intravenous gadolinium-based contrast, patients with a blood plasma glucose concentration > 10 mmol/L or poorly controlled diabetes mellitus, patients with a status after endoscopic mucosal resection (EMR) or endoscopic submucosal dissection (ESD) of the primary tumor prior to the start of nCRT, patients younger than 18 years, and pregnant or breast-feeding patients are not eligible. Study protocol A schematic representation of the study protocol is depicted in Fig. 1. Patients will undergo standard diagnostic work-up and staging for esophageal cancer, including a baseline 18F-FDG PET-CT (PET-CTpre). After informed consent and before the start of nCRT, a baseline MRI (MRIpre) is performed. A second MRI (MRIper) and 18F-FDG PET-CT (PET-CTper) will be performed during the third week of nCRT (after 10-15 fractions of radiotherapy). A third MRI (MRIpost) will be performed 6-8 weeks after the completion of nCRT and within 2 weeks before the intended date of surgery. The third 18F-FDG PET-CT (PET-CTpost) is usual care in all participating centers and will be performed within the same timeframe as the MRIpost. Blood samples will be acquired at 3 time points, i.e. before, during and after nCRT, to evaluate the presence of, and changes in, circulating tumor DNA (ctDNA). Furthermore, patients will be asked to undergo an additional endoscopic assessment after nCRT, PET-CTpost and MRIpost, within 2 weeks prior to surgery. Surgical resection will be performed 8-10 weeks after completion of nCRT. In summary, for study purposes patients will undergo 3 additional MRI scans (MRIpre, MRIper, MRIpost), 1 additional 18F-FDG PET-CT scan (PET-CTper), blood sampling at 3 time points and 1 postchemoradiation endoscopic and endosonographic assessment. The 18F-FDG PET-CT scans before and after nCRT (PET-CTpre and PET-CTpost) are standard of care in all participating centers and will also be used for study purposes. All study-related procedures will take place before surgery. MRI Patients will undergo anatomical (T2-weighted [T2W]) and functional MRI (DWI and DCE) in a single scanning session each time. Two DWI series and one DCE scan will be acquired. The sagittal DWI series (sagittal intravoxel incoherent motion [sIVIM] with 13 b-values: 0, 10, 20, 30, 40, 50, 75, 100, 200, 350, 500, 650 and 800 s/mm²) will be used for quantitative analyses as well as for the visual assessment. The transversal DWI series (high-resolution tDWI, b-values: 0 and 800 s/mm²) will mostly be used for visual assessments. The DCE-MRI scans will be acquired with a temporal resolution of 3 s and the injection of a gadolinium-based contrast agent. ADC and AUC values at the various time points will be used as quantitative measures of the DWI and DCE series, respectively. Extensive effort has been put into the standardization of MRI scan sequences by imaging experts and the exchange of test scans. 18F-FDG PET-CT The PET-CT examinations will be performed according to the EARL guidelines of the European Association of Nuclear Medicine [29]. 18F-FDG is the tracer that will be used for the assessment of abnormal glucose metabolism in the tumor. On the 18F-FDG PET-CT scans, standardized uptake values (SUVmax, SUVmean) and the total lesion glycolysis (TLG) will be measured to quantify changes over time in the glucose metabolism of the tumor.
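As a brief illustration of how these PET-derived quantities relate, the sketch below (Python; all numbers, array contents and function names are hypothetical) uses the common body-weight normalization for SUV and the standard definition of TLG as SUVmean times the metabolic tumor volume. It is a sketch of the textbook formulas, not the EARL-compliant processing used in the study.

import numpy as np

# Body-weight-normalized SUV: tissue activity concentration (Bq/mL)
# scaled by body weight (g) per injected dose (Bq); decay-corrected
# activity is assumed.
def suv(activity_bq_ml, injected_dose_bq, body_weight_g):
    return activity_bq_ml * body_weight_g / injected_dose_bq

# Hypothetical segmented tumor: per-voxel activity and voxel volume.
rng = np.random.default_rng(0)
activity = rng.uniform(8e3, 2.5e4, size=500)   # Bq/mL over 500 voxels
voxel_ml = (4.0 ** 3) / 1000.0                 # 4 mm isotropic voxel, in mL

suv_vox = suv(activity, injected_dose_bq=200e6, body_weight_g=75e3)
suv_max, suv_mean = suv_vox.max(), suv_vox.mean()
mtv_ml = suv_vox.size * voxel_ml               # metabolic tumor volume
tlg = suv_mean * mtv_ml                        # total lesion glycolysis
print(f"SUVmax = {suv_max:.1f}, SUVmean = {suv_mean:.1f}, TLG = {tlg:.0f}")

The change over time (e.g. ΔSUVmax between PET-CTpre and PET-CTper) is then the same relative-difference construction as ΔADC in the earlier sketch.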
Postchemoradiation endoscopic assessment Patients will be asked to undergo a postchemoradiation endoscopic assessment, consisting of an additional endoscopy with (random) bite-on-bite biopsies of the primary tumor site and other suspected lesions in the esophagus, as well as endosonography with fine needle aspiration of suspected lymph nodes after completion of nCRT. This is an optional study procedure and patients can choose to opt out of this additional procedure. The endoscopic reevaluation will be performed by 1 or 2 experts in each of the 4 centers, to ensure high-quality and uniform procedures and to reduce the impact of operator dependency. Furthermore, video recordings of all patients with negative biopsies that showed visual abnormalities of any kind during the endoscopic procedure will be reevaluated by an expert panel blinded to the pathological outcome of the resection specimen, in order to investigate whether a qualitative assessment by an expert team can help to correctly identify residual tumor in patients with a negative biopsy. Blood samples Blood samples will be used to evaluate the presence of ctDNA and changes in ctDNA concentrations, since the release of ctDNA within a patient during the course of chemoradiotherapy has recently been demonstrated to be a dynamic process [30]. To allow molecular analysis of liquid biopsies, blood will be collected in cell-free DNA collection tubes. The plasma will be aliquoted after 2 centrifugation steps and stored at −80 °C [31]. This will allow isolation of ctDNA and subsequent mutation analysis by means of next-generation sequencing (NGS) at a later stage. Surgery A transthoracic or transhiatal esophagectomy will be performed in all patients, depending on patient characteristics, tumor localization and local preference. Open, hybrid and completely minimally invasive techniques are allowed. Resection of the primary tumor and regional lymph nodes will be carried out according to the current requirements for esophageal cancer surgery in the Netherlands [32]. For correct TNM-staging, the resected lymph nodes will be submitted for histopathological assessment. Histopathological assessment The resection specimen will be evaluated meticulously according to a standardized protocol (tumor type and extension, lymph nodes, resection margins) by a dedicated pathologist with a gastrointestinal subspecialty in each center. The pathologist will be blinded to the results of the MRI and PET-CT exams. The most recent edition of the UICC (International Union Against Cancer) protocol will be used for TNM classification and stage grouping [33]. Special attention will be given to reporting the effects of nCRT in the resection specimen. The (estimated) location of the primary lesion plus surrounding areas and other suspected lesions in the esophagus will be embedded in order to adequately judge the presence of residual tumor and treatment effects. The percentage of viable tumor cells will be scored microscopically (ranging from 0 to 100%), which directly corresponds to a stage in either of the two most often used grading systems: 'TRG 1 to 4' [34] or the 'Mandard score 1 to 5' [35]. Therapy effects include necrosis, inflammation with multinucleated giant cells, fibrosis and calcifications. Fibrosis is the most remarkable effect and is used to estimate the extension of the tumor before treatment. Lastly, all resection specimens with TRG 1-2 will be revised by a second expert pathologist. Follow-up Patients will remain in follow-up for 5 years after surgery, according to local follow-up policies.
The general follow-up guideline in the Netherlands consists of routine follow-up visits every 3 months during the first year after surgery. In the second year, follow-up takes place every 6 months, and then yearly until 5 years after surgery. Diagnostic investigations are generally only performed on indication [32]. Study outcomes The primary outcome of this study is the performance of the multimodal prediction model for the correct prediction of a patient's individual probability of a pCR to nCRT, based on DW-MRI, DCE-MRI and 18F-FDG PET-CT scans acquired prior to, during and after administration of nCRT. Secondary outcome parameters include the performance of the model for good response (i.e. TRG 1 and TRG 2), the effectiveness and efficacy of a postchemoradiation endoscopic and endosonographic assessment for pCR prediction, the value of ctDNA as a biomarker for a patient's response to nCRT and for progression-free and overall survival, the performance of the model including results from the endoscopic and endosonographic assessment and ctDNA measurements for pCR and good response prediction, the performance of a visual assessment for the detection of pCR after nCRT based on MRI and 18F-FDG PET-CT, and lastly the performance of the model for the prediction of progression-free and overall survival. Statistical analysis Data analysis of primary study objective The analysis regarding the primary objective of this project will have pCR as the predicted outcome of interest. Statistical analysis and reporting will be performed in accordance with the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement and the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement [36,37]. The assessor(s) of the MRI and PET-CT images will be blinded to the histopathological outcome. A multivariable logistic regression model will be developed with pCR as the dichotomous outcome measure. Many of the imaging parameters are likely highly correlated and provide similar (non-additional) information, particularly within one modality. The most valuable imaging parameters within each imaging modality (i.e. within the DW-MRI, DCE-MRI and PET-CT imaging parameters) will be entered into the model, based on previous knowledge. To determine whether the imaging modalities provide complementary value in the prediction of pCR, models will be compared based on Akaike's Information Criterion (AIC). Model discrimination and calibration will be evaluated for the multivariable logistic regression models using receiver operating characteristic (ROC) curve analysis with area-under-the-curve (AUCROC) estimates and visual inspection of model calibration plots, respectively. Internal validation using the bootstrap method with 1000 repetitions will be carried out to provide insight into potential over-fitting and optimism in model performance. Bootstrapping allows for the calculation of bias-corrected c-indexes of the prediction model, and provides shrinkage factors that can be used to adjust the estimated regression coefficients in the model for overfitting and miscalibration. Sensitivity analyses will be performed excluding one participating center at a time to study the influence of the multicenter study design on the model performance.
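The bootstrap internal-validation idea described above can be summarized in a short sketch (Python; the data, predictor effects and outcome prevalence are entirely simulated, and this stands in for, rather than reproduces, the study's analysis plan): fit the logistic model, compute the apparent AUCROC, then estimate its optimism by refitting on bootstrap resamples and averaging the gap between each refit's resample AUC and that refit's AUC on the original data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Simulated per-patient predictors: delta-ADC, delta-AUC, delta-SUVmax.
n = 130
X = rng.normal(size=(n, 3))
# Simulated pCR outcome, loosely driven by the three predictors.
p = 1 / (1 + np.exp(-(-1.6 + X @ np.array([0.8, -0.6, -0.9]))))
y = rng.binomial(1, p)

model = LogisticRegression().fit(X, y)
auc_apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap optimism: refit on each resample, compare its AUC on the
# resample with its AUC on the original data; the mean difference
# estimates how optimistic the apparent AUC is.
optimism = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    if len(np.unique(y[idx])) < 2:
        continue  # skip degenerate resamples containing a single class
    m = LogisticRegression().fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

print(f"apparent AUC = {auc_apparent:.3f}, "
      f"optimism-corrected AUC = {auc_apparent - np.mean(optimism):.3f}")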
Data analysis of secondary study objectives ROC curve analysis with AUCROC estimates will be used to determine the additional value of the postchemoradiation endoscopic and endosonographic assessment and the ctDNA measurements to the model as developed under the primary objective, as well as the accuracy of the multimodal prediction model for the prediction of pathologic good response (i.e. combined TRG 1 and TRG 2). The performance of a visual assessment for the detection of pCR after nCRT will be analyzed by calculation of diagnostic performance measures such as sensitivity, specificity, positive predictive value, negative predictive value and accuracy (including corresponding 95% confidence intervals). This also applies to the individual performance of the endoscopic and endosonographic assessment for the detection of residual disease, as well as to the performance of the ctDNA measurements. Multivariable Cox regression models will be used to analyze the performance of the prediction model as developed under the primary objective, the MRI and 18F-FDG PET-CT imaging parameters, and the ctDNA measurements for the prediction of progression-free and overall survival. Sample size calculation It is conservatively assumed that adenocarcinoma and squamous cell carcinoma need separate modeling and a priori stratification. Based on 3 independent imaging predictors (e.g. a DW-MRI imaging parameter such as ΔADC, a DCE-MRI imaging parameter such as ΔAUC and an 18F-FDG PET-CT imaging parameter such as ΔSUVmax), this requires 30 events for each histopathological subtype, according to the '1 predictor per ~10 events' rule of thumb in logistic regression analysis [38,39]; the arithmetic is spelled out in the short sketch after this section. The CROSS trial demonstrated a pCR rate of 23% and 49% after nCRT for patients with adenocarcinomas and squamous cell carcinomas, respectively [4]. According to the 1-in-10 rule, this translates into a total accrual of at least 130 adenocarcinoma patients and at least 61 squamous cell carcinoma patients. In case of an unexpected aberrant distribution of patients that leads to decreased pCR rates, the aim is an accrual of 200 patients. Discussion Currently, groups of patients with esophageal cancer fit certain protocolled treatment approaches, but the treatment is rarely a perfect fit for the individual patient. The PRIDE study investigates whether a multimodal image-guided model can be developed that accurately predicts a patient's individual histopathological response to nCRT. Such a model would enable personalized treatment for patients with esophageal cancer. Recent studies indicate that an organ-sparing approach might be feasible in selected patients with esophageal cancer who have a pCR after nCRT [40][41][42]. However, satisfactory diagnostic strategies to select these pathologic complete responders have been lacking up to now. Therefore, surgical resection after nCRT remains the optimal curative treatment in terms of survival in patients with locally advanced esophageal cancer. If the PRIDE concept provides high predictive performance for pCR, this could potentially lead to a new standard of care with direct benefits to esophageal cancer patients. Furthermore, accurate identification of the non-responders may be beneficial, as these patients might benefit from alternative treatment strategies, such as additional neoadjuvant treatment, or ineffective therapy could be stopped in this group.
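As referenced in the sample size calculation above, the events-per-predictor arithmetic behind the accrual numbers is a one-liner per subtype (a trivial Python check; the 23% and 49% pCR rates and the 10-events-per-predictor rule are taken directly from the protocol text):

predictors, events_per_predictor = 3, 10
events_needed = predictors * events_per_predictor          # 30 pCR events

for subtype, pcr_rate in [("adenocarcinoma", 0.23),
                          ("squamous cell carcinoma", 0.49)]:
    n_required = events_needed / pcr_rate
    print(f"{subtype}: {n_required:.1f} -> ~{round(n_required)} patients")
# adenocarcinoma: 130.4 -> ~130; squamous cell carcinoma: 61.2 -> ~61

Rounding these quotients reproduces the protocol's minimum accrual of 130 and 61 patients per subtype.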
In the current study protocol, strict time points are chosen for the MRI and 18F-FDG PET-CT imaging, as well as for the blood samples and the endoscopic assessment. In this way, a homogeneous cohort will be created in which measurement variability is reduced as much as possible. This is also reflected in the extensive effort of the participating centers to standardize the imaging protocols. Since therapy effects continue to develop after treatment, previous studies have underlined that MRI and PET-CT imaging during nCRT as well as after nCRT for esophageal cancer can function as predictors for pCR [18,[43][44][45][46]. As such, the MRIpost and PET-CTpost scans should be as close to the histopathological assessment of the outcome (pCR) as possible, in order to make sure that the findings on the MRIpost and PET-CTpost represent the histopathology accurately. This will likely also prevent false-positive results caused by transient radiation-induced esophagitis, which is known to decrease over time after nCRT. Therefore, the chosen time points in our study include scans before the start of nCRT (MRIpre/PET-CTpre), scans during the third week of nCRT (MRIper/PET-CTper), and scans within 2 weeks before surgery (MRIpost/PET-CTpost). In patients undergoing an additional endoscopic and endosonographic assessment, this assessment is intended after the 18F-FDG PET-CTpost and the MRIpost, but prior to surgery. If an organ-sparing approach is eventually implemented in clinical practice for predicted complete responders, endoscopic confirmation of the absence of residual tumor will most likely be required. Of note, two studies, namely the Dutch SANO trial [47] and the French ESOSTRATE trial (ClinicalTrials.gov identifier NCT02551458), are currently studying active surveillance strategies after nCRT for patients with a clinical complete response. For the SANO trial, a clinical complete response is based on 18F-FDG PET-CT and endoscopy with at least 8 (random) bite-on-bite biopsies. Together, these studies will include a total of 600 patients (300 within each trial), and their primary outcome is survival. In contrast to these trials, the current study involves the careful development of an accurate image-guided response evaluation strategy to predict pCR in an observational study, without the simultaneous implementation of postponed surgical resection in clinical complete responders that might harm the patient. The results of this study will therefore play an important role in the accurate identification of esophageal cancer patients with a pCR to nCRT who could benefit from an organ-sparing approach in the future. Ultimately, the results of the three trials together could lead to a patient-tailored wait-and-see approach with omission of surgery in the appropriate patients.
6,277.6
2018-10-20T00:00:00.000
[ "Medicine", "Engineering" ]