Quantifying Neural Oscillatory Synchronization: A Comparison between Spectral Coherence and Phase-Locking Value Approaches

Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase-locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittently) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing the instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimation of oscillatory synchronization across a wide range of synchronization regimes and better tracking of synchronization-mediated information flow among networks.

A common property of neural network oscillations is that the spike probability of neurons is influenced by oscillation phase [3,22]. Oscillation cycles define periods of higher excitability, where neurons spike and are more sensitive to incoming spikes, and periods of lower excitability, where spike probability is low and incoming spikes have lower impact [18,23,24]. If two networks can align their oscillatory cycles over time such that the excitability periods are in 'good' alignment [18,25], then 'communication' (information flow) will be optimized. Given the potential role of oscillatory phase-locking for understanding neural information processing, it is critical to have a valid and robust approach for experimentally measuring phase-locking between neural oscillations. Phase-locking is defined here as the amount of consistency between the instantaneous phases of oscillations, or, put differently, how 'peaky' (non-uniform) the instantaneous phase-relation distribution is. In the field of neuroscience, phase-locking strength is often estimated by computing spectral coherence. For example, in experimental neuroscience, spectral coherence has been frequently used to detect and quantify coupling between oscillatory signals of two or more brain areas [8,12-15].
Spectral coherence estimates the linear phase-consistency between two frequency-decomposed signals over time windows (or trials) [26]. Specifically, spectral coherence reflects the consistency (mean resultant vector length) of cross-spectral densities between two signals, normalized by their auto-spectral densities. Spectral coherence has a long history [26] and has proven to be a useful method for many scientific questions. Great advantages of spectral coherence are that it is well understood and studied, computationally fast, relatively robust against noise, and provides an easy overview of the relevant coherent frequencies in the data. However, spectral coherence relies on several assumptions regarding the analyzed signal, with the requirements of linearity and stationarity being of prime importance. The assumption of (weak-sense) stationarity means that the autocorrelation structure of a signal does not depend on the reference time point [27]. The Fourier transform assumes that the signal is linear, i.e. that it can be represented by a linear superposition of trigonometric functions [28]. If these assumptions are violated, spectral coherence might give unreliable estimates. Unfortunately, these critical assumptions are rarely tested in neuroscience studies, and it is uncertain how often violations of the stationarity and linearity assumptions affect the reported coherence results. In addition, there is increasing evidence that neural oscillatory signals show properties of non-stationarity that make them almost by definition unsuited for analysis by spectral coherence methods [10,29-34]. There are various alternative approaches to spectral coherence, which do not rely on its strict assumptions and which estimate phase-locking (mean vector length) on the reconstructed instantaneous phases [34,35], referred to as the phase-locking value (PLV) approach [34]. These methods might be more suitable for neural oscillatory synchronization as they can better deal with non-stationary (frequency-varying) dynamics.

The first aim of the present study was therefore to study the validity and accuracy of spectral coherence for estimating phase-locking in a range of oscillatory synchronization regimes, and to test whether it can robustly track phase-locking-dependent changes in information flow between networks. The second aim was to compare spectral coherence with the PLV approach. For the reconstruction of the instantaneous phase we used the Hilbert Transform (HT) preceded by a singular spectrum decomposition (SSD) of the signal into interpretable oscillatory components [36]. To test the behavior of spectral coherence and PLV, we generated testing data by simulating oscillatory signals with synchronization dynamics [7] using interacting phase-oscillators [37] as well as interacting excitatory-inhibitory spiking neural networks [38,39]. The resulting signals were representative of neural signals ordinarily recorded in neurophysiological studies [40]. Using signals generated by the mathematically simpler phase-oscillator model allowed us to understand the behavior of spectral coherence in terms of its mathematical properties. In the model, we manipulated the phase-locking strength between the oscillations by changing their detuning parameter (intrinsic frequency difference). The data generated by the more complex neural network model reproduced the synchronization behaviors observed in the phase-oscillator model.
On the simulated data, we tested to what extent coherence and PLV were able to capture the (instantaneous) phase relationships among simulated oscillations as well as information transfer among spiking neural networks. We found that spectral coherence, used here with settings common in neuroscientific research [41,42], deviated from the expected phase-locking and did not converge to the expected value with increasing signal-to-noise ratio (SNR). The deviation was particularly strong when oscillators were not completely synchronized (phase-locking below 1), either because of detuning (intrinsic frequency difference) or intrinsic noise fluctuations. Moreover, spectral coherence was sensitive to phase-relation-dependent amplitude fluctuations, showing that it is not a pure phase-locking measure (even when the amplitude correlation is 0). These deficiencies of spectral coherence reflect a mismatch between its underlying assumptions and the simulated data, which we suggest is likely to often also affect the usefulness of coherence when applied to experimental data. The PLV approach did not show these drawbacks and it converged to the expected value with increasing SNR. In addition, we used transfer entropy [43] to measure how well spectral coherence and PLV reflected changes in information flow between spiking neural networks. We found that spectral coherence did not robustly reflect changes in information flow between oscillating networks, whereas the PLV did. In summary, these results suggest that spectral coherence should be applied with prudence to neural oscillatory synchronization data, whereas PLV methods relying on the estimation of instantaneous phase appear to provide a more promising approach.

Theory of phase synchronization

Phase synchronization is the process in which oscillators adjust their rhythms [7], a phenomenon first described by Huygens in the 17th century for pendulum clocks [44]. Phase synchronization means that the oscillators have a preferred phase-relation to each other and that the oscillators adjust their phases as a function of their phase difference. The phase adjustment is defined by the 'phase response curve' (PRC), which has been described in various neuroscience domains [45]. The PRC captures the mutual forces that coupled oscillators exert on each other depending on their phases. The PRC thus defines how much a given force exerted by one oscillator at a given phase will delay or advance another oscillator's phase, as a function of the latter oscillator's phase. Hence, the PRC also defines which phase-relations among oscillators occur preferentially, thus representing fixed attractor points in the phase-relations among oscillators. The Theory of Weakly Coupled Oscillators (TWCO) [37,46-48] describes mathematically the phase dynamics among weakly interacting oscillators. 'Weak' means that interactions lead to phase adjustments without strong perturbations of the oscillatory generative mechanism. 'Strong' coupling can lead to chaotic regimes [49,50] or to 'oscillation death/quenching' [7,51]. The TWCO has been applied in many neuroscience fields, including gamma-generating neural network models [52]. To simulate oscillatory synchronization data, we used a basic model of phase-oscillators [37] that is simple, yet exhibits plausible phase synchronization dynamics relevant for neuroscience.
The synchronization properties of two coupled phase-oscillators X and Y (Fig 1A) are governed by two factors: the level of detuning Δω (intrinsic or natural frequency difference, Δω = ω_X − ω_Y) and the coupling strength κ. The detuning Δω determines the phase precession frequency (de-synchronization force) and the coupling κ determines the strength of phase adjustments (synchronization force). Both parameters define a two-dimensional space in which the phase-locking between oscillators can be determined. In this space one can observe inverted triangles that define the phase-locking region in the detuning versus coupling space. Such a triangular phase-locking region looks like a tongue and is referred to as the 'Arnold tongue' [7] (Fig 1B). The triangular shape derives from the fact that oscillators with stronger coupling strength κ can converge to a phase-locking state for larger detuning values Δω. Phase-locking can be formally defined as the constancy of the instantaneous phase-relation between oscillators (Fig 1C), which hence show no phase precession [7]. This means that the instantaneous phase of one oscillator always perfectly predicts the instantaneous phase of the other, in which case phase-locking has a value of 1. A lower phase-locking value represents a lower probability of making this prediction correctly. The mathematical definition of phase-locking is given in Eq 1. The variables n and m are integers and represent the frequency ratios (1:2, 2:3, ..., [7,52]) at which the oscillators can satisfy the condition of phase-locking despite different frequencies ω_X and ω_Y. Therefore one can observe several (higher-order) Arnold tongues. Here, we focused on phase-locking with n = m = 1 (Arnold tongue 1:1), because we were interested in the quantification of phase synchronization between oscillators with nearby frequencies (oscillators within a 'frequency band', e.g. the gamma band). If ω_X ≠ ω_Y (different frequencies) and the coupling strength κ = 0 (no synchronization), then the phase-relation distribution is uniform (full phase precession, see Fig 1D). Yet, phase-locking is often neither completely synchronous nor completely asynchronous. This incomplete phase-locking is called the partially synchronized state [7], in which oscillators exhibit a preference for particular phase-relations intermixed with periods of phase precession (Fig 1E). This implies that phase-locking can be of different magnitudes. This is in line with the idea that biological systems are inherently noisy and time-varying (i.e. have variable frequencies over time) and thus unlikely to engage in complete synchrony. The fact that the partially synchronized state is characterized by incomplete phase-locking, and therefore phase precession, entails that oscillators in this state will traverse all possible phase-relations over time. As we have described above, the PRC defines the phase adjustments in terms of positive or negative delays as a function of phase-relation and time. The rate of phase change over time, which is the time derivative of phase, defines the instantaneous frequency (IF) [7]. Close to the preferred phase-relation, the IF difference between phase-oscillators is minimized (phase precession slows down), whereas at non-preferred phase-relations the IF difference is maximized (phase precession is faster and the IF approaches the intrinsic frequency).
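For reference, the standard way these quantities are formalized in the synchronization literature [7,34] is sketched below; the notation is illustrative and may differ from that used in Eq 1.

```latex
% n:m phase-relation, phase-locking condition, phase-locking strength (mean resultant
% vector length) and instantaneous frequency; standard forms with illustrative notation.
\begin{align}
  \theta_{n,m}(t) &= n\,\phi_X(t) - m\,\phi_Y(t), &
  \big|\theta_{n,m}(t) - \mathrm{const}\big| &< \varepsilon
  \quad \text{(n:m phase-locking)} \\
  \mathrm{PL} &= \Big|\big\langle e^{\,i\,\theta_{n,m}(t)}\big\rangle_{t}\Big|, &
  \mathrm{IF}(t) &= \frac{1}{2\pi}\frac{d\phi(t)}{dt}
  \quad \text{(instantaneous frequency)}
\end{align}
```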
These phase-relation-dependent changes in IF result in systematic IF fluctuations during phase precession, termed here phase-relation-dependent frequency modulations (PrFM) (Fig 1F). The main frequency of PrFM equals the phase precession frequency [7]. In addition to PrFM, there can also be a form of phase-relation-dependent amplitude modulation (PrAM). In our raw simulations, according to the TWCO, the phase-oscillators had unit amplitude, which did not change during the synchronization dynamics. However, we added PrAM post-hoc to the simulated oscillatory signals (Fig 1G), based on our observation of PrAM in simulations of interactions between gamma-oscillation-generating excitatory-inhibitory neural networks during partial synchronization (see below). The phase-relation-dependent frequency modulations (PrFM) in our simulation of coupled phase-oscillators were defined as a function of the phase-relation ϕ between the two oscillators: the modulation function of PrFM is given directly by the PRC, defined here as a sinusoid, and the strength of the fluctuation is scaled by the coupling strength κ. The phase-relation-dependent amplitude modulations (PrAM) were defined as a cosine function of the phase-relation ϕ between the two oscillators (maximal amplitude at phase 0), scaled by the amplitude modulation strength α. Note that α was defined as a percentage modulation of the oscillation amplitude. For example, an α of 20% means that the oscillation amplitude varies by 20% as a function of phase (e.g. for an amplitude of 1 this means a variation between 0.8 and 1.2).

Generative models of oscillatory synchronization

Oscillations generated by the phase-oscillator model. For generating testing data with plausible and well-understood synchronization properties, we used the phase-oscillator model as the underlying generative model. The model is very similar to the well-known Kuramoto model [37]. Here, we simulated two zero-mean oscillatory signals X(t) and Y(t) governed by the phase-oscillator equation. We restricted ourselves for simplicity to the case of unidirectional coupling (X→Y). However, all results can be generalized to the case of mutually coupled phase-oscillators.

Fig 1. The underlying model of this study is the phase synchronization of two coupled phase-oscillators, which could correspond to oscillatory signals measured from separate cortical areas. Phase-locking is the amount of consistency of the instantaneous phases of the two oscillators. Phase-locking results from a synchronization process governed by two principal factors, as described by the Theory of Weakly Coupled Oscillators (TWCO): the intrinsic (natural) frequency ω and the coupling strength κ. The intrinsic frequency difference (detuning Δω) between oscillators determines the phase precession. The coupling strength κ determines the interaction strength, which is a function of the phase-relation (defined by the phase response curve, PRC). (B) The detuning Δω and the coupling strength κ define a two-dimensional space, in which phase-locking (gray shading) occurs within certain ranges. In a noiseless system, full synchrony (phase-locking of 1) occurs in a limited area of detuning and coupling strength that appears as an inverted triangle (Arnold tongue). The stronger the coupling strength, the more detuning is possible while still reaching full synchrony. (C) Full synchrony occurs if the oscillators converge on a common frequency (no phase precession).
The phase-relation distribution then exhibits a strong peak at the attractor phase-relation. (D) Complete asynchrony is only possible when the oscillators are uncoupled. The phase precession is smooth and the phase-difference distribution is uniform. (E) The state of partial synchrony is characterized by phase-locking between 0 and 1. In most regions of the Δω vs. κ space, phase-locking might be close to 0; yet, close to the Arnold tongue the phase-locking might still be relevant. The oscillators do not converge to a common frequency, but exhibit phase precession. The phase precession does not have a smooth trajectory, but is modulated depending on the phase-relation. This leads to a non-uniform phase-difference distribution with a peak at the phase-relation at which the oscillators have the smallest frequency difference. In noisy phase-oscillatory systems, the partially synchronized regime can be the most dominant regime. (F) Because the phase precession (= instantaneous frequency difference) is not smooth and changes as a function of the phase-relation, it implies phase-relation-dependent frequency modulations (PrFM). (G) We also included phase-relation-dependent amplitude modulations (PrAM), based on observations in many of our neural network simulations. We assume here that PrAM, in the ranges included here, did not substantially change the phase trajectories and hence that the TWCO is still an adequate theoretical framework.

The phase evolution ϕ_X(t) of oscillator X, unperturbed by oscillator Y, is defined only by its intrinsic frequency ω_X and an intrinsic phase noise process Np_X (if included). In the case of oscillator Y, the phase evolution ϕ_Y(t) additionally depends on the interaction term (phase response curve, PRC) with oscillator X, scaled by the coupling strength κ and evaluated at the phase-relation θ(t) = ϕ_Y(t) − ϕ_X(t). As the interaction term we used a sinusoidal function with an attractor fixed point (in-phase). It has been shown that the evolution of the phase-relation between the two oscillators without intrinsic noise (Np = 0) can be described with a single equation, referred to as the Adler equation [7], in which the time evolution of the phase-relation θ(t) is a function of the frequency difference Δω(t) at that time as well as the coupling strength κ of the sinusoidal interaction function. In the case of Np = 0 (no intrinsic noise), the phase-locking properties of the two coupled phase-oscillators can be analytically derived using the Adler equation (Methods A in S1 File). As described above, the phase-locking properties are a function of the independent variables detuning Δω and coupling strength κ. This means that, depending on the chosen parameter ranges for Δω and κ, phase-locking can vary between 0 and 1 in a manner that is fully deterministic (as long as the phase-oscillator model is noise-free). Hence, for a given parameter set used for simulating data, we generated the instantaneous phase traces by numerically solving the differential equations (Euler method, step size 1 ms). We generated 500 trials of 3 s simulation with randomized initial phase conditions and intrinsic frequencies in the gamma frequency range. The frequency range is of no importance for the phase-oscillator model, and the results can be generalized to any frequency range.
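As an illustration of this generative model, the sketch below integrates two unidirectionally coupled phase-oscillators with the Euler method and turns them into real-valued signals with optional PrAM. The sinusoidal PRC, the sign convention of the coupling term, and the scaling of κ are illustrative assumptions and may differ from the exact equations used here.

```python
import numpy as np

def simulate_coupled_phase_oscillators(f_x=40.0, f_y=40.5, kappa=0.75, alpha=0.0,
                                        dur=3.0, dt=1e-3, phase_noise_sd=0.0, seed=0):
    """Euler integration of two phase-oscillators with unidirectional coupling X -> Y.

    Assumed model (illustrative):
        dphi_x/dt = 2*pi*f_x + noise
        dphi_y/dt = 2*pi*f_y - 2*pi*kappa*sin(phi_y - phi_x) + noise
    alpha is the PrAM strength (fractional amplitude modulation of Y by the phase-relation).
    """
    rng = np.random.default_rng(seed)
    n = int(dur / dt)
    phi_x = np.zeros(n)
    phi_y = np.zeros(n)
    phi_x[0], phi_y[0] = rng.uniform(-np.pi, np.pi, 2)        # randomized initial phases
    for t in range(n - 1):
        theta = phi_y[t] - phi_x[t]                           # instantaneous phase-relation
        phi_x[t + 1] = phi_x[t] + dt * 2 * np.pi * f_x \
            + np.sqrt(dt) * phase_noise_sd * rng.standard_normal()
        phi_y[t + 1] = phi_y[t] + dt * (2 * np.pi * f_y - 2 * np.pi * kappa * np.sin(theta)) \
            + np.sqrt(dt) * phase_noise_sd * rng.standard_normal()
    theta = np.angle(np.exp(1j * (phi_y - phi_x)))            # phase-relation wrapped to [-pi, pi]
    x = np.cos(phi_x)                                         # zero-mean, unit-amplitude signal X
    y = (1 + alpha * np.cos(theta)) * np.cos(phi_y)           # PrAM: amplitude of Y varies with theta
    return x, y, phi_x, phi_y

# One trial inside the (1:1) Arnold tongue (|detuning| below the coupling strength):
x, y, phi_x, phi_y = simulate_coupled_phase_oscillators(alpha=0.2)
pl = np.abs(np.mean(np.exp(1j * (phi_y - phi_x))))            # expected phase-locking (MRVL)
print(f"phase-locking of this trial: {pl:.2f}")
```

Extrinsic (measurement) noise can then be added to x and y to set the SNR before applying coherence or PLV.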
For each trial the first 2 s were discarded to exclude any transient dynamics and to have a state that is as stable as possible. In one exceptional case we used 200 ms time windows instead of the 1 s time window remaining after the discarded 2 s. The (instantaneous) phase traces were wrapped around ±π. We generated multiple shorter trials instead of a single long trial to permit easy application of spectral coherence. As a next step, we converted the phase traces into zero-mean real-valued signals to be used for testing the phase-locking estimation methods. Under experimental measurement conditions, signals always suffer from added extrinsic (measurement) noise, which is signal unrelated to the process of interest. We therefore added different levels of zero-mean extrinsic white noise. The signal-to-noise ratio (SNR) was defined as

$$\mathrm{SNR}(\omega) = \frac{S_{SN}(\omega)^2}{S_N(\omega)^2}$$

where S_SN(ω)² stands for the spectral power of the noisy oscillatory signal at frequency ω and S_N(ω)² stands for the spectral power of just the noise term. This is also the definition of the relative power ratio. In some conditions, we added phase-relation-dependent amplitude fluctuations (PrAM) to oscillator Y as defined above. The amount is given as the amplitude modulation in % of the mean oscillator amplitude.

Oscillations generated by a spiking excitatory-inhibitory neural network model. To demonstrate that the results from the phase-oscillator equations are generalizable to more biophysically plausible neuronal network oscillations, we simulated two coupled excitatory-inhibitory spiking neural networks generating pyramidal-interneuron gamma (PING, [39]) oscillations. The neural voltage dynamics v were of the Izhikevich type [38]. The coupled differential equations were numerically solved using the Euler method (1 ms step size). We simulated 300 trials of 1.3 s length, from which the first 300 ms were discarded to reduce transient dynamics. The networks were both composed of two types of neurons: 400 regular-spiking neurons (RS; a = 0.02, b = 0.2, c = −65 mV, d = 8) and 100 fast-spiking interneurons (FS; a = 0.1, b = 0.2, c = −65 mV, d = 2). RS were excitatory neurons and FS inhibitory neurons (ratio 4:1). The neural networks were all-to-all synaptically connected. Synapses were modeled as exponentially decaying functions, reset to 1 after the presynaptic neuron fired. Synaptic connection values set the maximum synaptic connection strength (max syn). The synaptic strengths were chosen from a random uniform distribution defined between 0 and the maximal connection strength. The input drive to RS neurons was composed of a fixed input current to each neuron (10 mV), unique Gaussian input noise for a given neuron (SD = 3.5 mV) and a 1/f^1.5 input noise shared among neurons of the same network (SD = 3.5 mV). Each network thus received uncorrelated 1/f^1.5 input noise to its RS neurons, with the effect of inducing instantaneous frequency variation of the network over time (similar to the intrinsic phase noise in the phase-oscillator model). Each FS neuron received a fixed input current (3.5 mV) and Gaussian input noise (SD = 3.5 mV). FS neurons received further excitatory drive from the RS neurons. For reconstructing a population field signal for both networks (resembling LFP signals), we summed the spike trains of all RS neurons of a given network, followed by demeaning and smoothing with a pseudo-Gaussian kernel (SD = 3 ms).
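A compact Python sketch of an Izhikevich-type excitatory-inhibitory (PING) network of the kind described above is given below. The Izhikevich update and reset equations follow their standard published form [38]; the synaptic handling, time constant, maximal weight, and input terms are simplified illustrative assumptions rather than the exact implementation used here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rs, n_fs = 400, 100                        # excitatory (RS) and inhibitory (FS) neurons, ratio 4:1
n = n_rs + n_fs
# Izhikevich parameters: RS (a=0.02, b=0.2, c=-65, d=8), FS (a=0.1, b=0.2, c=-65, d=2)
a = np.r_[np.full(n_rs, 0.02), np.full(n_fs, 0.1)]
b = np.full(n, 0.2)
c = np.full(n, -65.0)
d = np.r_[np.full(n_rs, 8.0), np.full(n_fs, 2.0)]

# All-to-all weights drawn uniformly between 0 and an assumed maximum strength.
w = rng.uniform(0, 0.1, size=(n, n))
w[:, n_rs:] *= -1.0                          # columns from FS neurons act inhibitory

v = np.full(n, -65.0)                        # membrane potential
u = b * v                                    # recovery variable
syn = np.zeros(n)                            # exponentially decaying synaptic traces
tau_syn = 5.0                                # assumed synaptic decay constant (ms)
dt = 1.0                                     # Euler step, 1 ms
lfp = []

for t in range(1300):                        # one 1.3 s trial
    i_drive = np.r_[10.0 + 3.5 * rng.standard_normal(n_rs),   # RS: fixed drive + Gaussian noise
                    3.5 + 3.5 * rng.standard_normal(n_fs)]    # FS: weaker fixed drive + noise
    i_syn = w @ syn                                            # recurrent synaptic input
    fired = v >= 30.0                                          # spike detection (Izhikevich reset rule)
    v[fired] = c[fired]
    u[fired] += d[fired]
    syn = syn * np.exp(-dt / tau_syn)
    syn[fired] = 1.0                                           # reset trace of presynaptic spikers to 1
    # Standard Izhikevich voltage and recovery updates
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + i_drive + i_syn)
    u += dt * (a * (b * v - u))
    lfp.append(fired[:n_rs].sum())                             # RS population spike count as LFP proxy

lfp = np.asarray(lfp, dtype=float)
lfp -= lfp.mean()                             # demean; a ~3 ms Gaussian smoothing would follow here
```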
For computing the expected PLV, we Hilbert-transformed the signals and computed their instantaneous phase-relations, from which the PLVs were derived. For the generation of the final testing data, we added different levels of extrinsic noise (here: 1/f^1.5 noise).

Phase-locking estimation methods

We assumed that the signals X(t) and Y(t), which might represent two cortical regions, contained underlying oscillatory processes with instantaneous phase evolutions ϕ_X(t) and ϕ_Y(t). We were interested in understanding how interdependent or phase-locked the two phase variables were, or, in other terms, how consistent the phase-relations θ(t) = ϕ_X(t) − ϕ_Y(t) were over time. If one measures the occurring phase-relation θ(t) for a period T, one obtains a distribution of phase-relations. For quantifying the consistency of the phase-relation θ(t), we computed the mean resultant vector length (MRVL). Each phase can be represented as a vector in the complex plane. If θ(t) is consistent over time, the vectors have the same angle; therefore the vectors add up and the MRVL will be non-zero (for unit-length vectors the MRVL will then be 1). If θ(t) is not consistent, the vectors will be equally distributed over −π to π and will cancel each other out, and the MRVL will be 0. The MRVL is appropriate if ϕ_X(t) and ϕ_Y(t) are linearly interdependent (assumed here). In this study the MRVL for phase-locking estimation was applied to phase-relation distributions obtained by two very different approaches. The first one was based on the normalized Fourier cross-spectral coefficients, which is referred to as the 'spectral coherence approach' [26]. The second was based on the estimation of the instantaneous phases using the Hilbert Transform or time-frequency representations and is termed the 'phase-locking value (PLV)' approach [34] (see Figure A in S1 File for a schematic illustration of the approaches). Both approaches are currently used in neuroscience. In this study we test explicitly whether the two approaches are appropriate phase-locking estimation methods for oscillatory synchronization. In the following sections we introduce the spectral coherence and the PLV approaches in more detail. Note that we limited ourselves to a description of two specific methods because we did not aim to make a detailed investigation of the performance of different variations of spectral coherence and of the PLV approach (e.g., testing different signal decomposition methods or different approaches for extracting the instantaneous phase). We suggest that the findings we report here for the specific spectral coherence and the specific PLV approach used here will be representative, respectively, of the classes of methods computing phase relations based on spectral methods and those computing phase relations based on instantaneous phase estimations.

Spectral coherence. Spectral phase-locking measures are used in many experimental studies and are also offered by widely used analysis toolboxes in neuroscience as the principal method to quantify phase-locking (Fieldtrip [41], Chronux [42]). Commonly used spectral measures are the coherence index [26] or its modification [53] to increase robustness against amplitude correlation. Here, the time-domain signals of each trial are transformed into the frequency domain and phase-coupling is assessed frequency by frequency.
The advantage is that one can observe, in a computationally efficient manner, frequency-resolved peaks in phase-locking, which yield a quick overview of phase-locking over the relevant frequencies. However, the spectral coherence measure assumes (weak-sense) stationary processes. To estimate the spectral phase, we computed the discrete Fourier transform of the time series of the oscillatory signal X(t,n) (t = 0,...,T−1) of a given trial n with length T:

$$S_x(\omega, n) = \sum_{t=0}^{T-1} X(t, n)\, e^{-i 2\pi \omega t / T}$$

where S_x(ω,n) is the complex-valued Fourier coefficient at integer ω, related by ω/T to the normalized frequency. For simplicity we will call ω just frequency, assuming appropriate rescaling. The power spectrum is defined as

$$S_{xx}(\omega) = E\!\left[ S_x(\omega, n)\, S_x^{*}(\omega, n) \right]$$

where S_xx(ω) is the estimated spectral power of oscillation X at frequency ω, E[] denotes the estimation (expectation over trials) of a function, and * the complex conjugate. S_YY was computed in the same way as S_XX. The spectral coherence between two signals X(t,n) and Y(t,n) is based on the cross-spectral density estimate of S_x(ω,n) and S_y(ω,n), defined as follows:

$$S_{xy}(\omega, n) = S_x(\omega, n)\, S_y^{*}(\omega, n)$$

where S_xy(ω,n) is the estimated complex-valued cross-spectral density at frequency ω for trial n. The cross-spectral density reflects both the mean phase difference and the power correlation between S_x(ω,n) and S_y(ω,n). The spectral (sample) coherence Coh(ω) [53] is the absolute value of the cross-spectral density normalized by the respective power spectra and is defined as follows:

$$\mathrm{Coh}(\omega) = \frac{\left| E\!\left[ S_{xy}(\omega, n) \right] \right|}{\sqrt{ S_{xx}(\omega)\, S_{yy}(\omega) }} \qquad (14)$$

where n is the trial number (n = 1,...,N) used for estimation. A critical point of spectral coherence is that the S_xy(ω) value depends on the phase as well as on the amplitude correlation. Each trial therefore contributes as a function of the amplitude correlation, making spectral coherence sensitive to amplitude correlation values [34,54]. To make spectral coherence insensitive to amplitude correlation, the spectral coherence formula used in the following Results sections was modified [53] as follows:

$$\mathrm{Coh}(\omega) = \left| \frac{1}{N} \sum_{n=1}^{N} \frac{S_{xy}(\omega, n)}{\sqrt{S_{xx}(\omega, n)\, S_{yy}(\omega, n)}} \right| \qquad (15)$$

Each cross-spectral estimate for a given trial n and frequency ω is normalized by the square-root product of the auto-spectra S_xx(ω,n) and S_yy(ω,n). Therefore, different levels of amplitude correlation do not affect the phase-locking measure. It has therefore been assumed that this spectral coherence formula is a pure phase-locking measure [41,53]. It can be seen as applying the MRVL to the angle values of the complex Fourier cross-spectral coefficients. This formulation was used to assure a straightforward comparison to the other phase-locking estimates used in this study. The results shown in this study for spectral coherence as defined by Eq 14 or Eq 15 can be expected to be very similar, because the amplitude correlations were negligible in our simulations (also when PrAM was included). The effect of amplitude correlation on spectral coherence has recently been studied systematically [54]. For a given testing data set we first computed the (fast) Fourier transform (FFT). To simplify the derivation of the analytical coherence value we did not use tapering or padding. Then we computed the spectral coherence spectra. We took the maximum (peak height) of the coherence spectrum as the phase-locking estimate, which was compared to the expected phase-locking. We applied the trial-number correction formula (see below) to the coherence estimates.
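A minimal Python sketch of the trial-normalized coherence of Eq 15, computed across trials with an un-tapered FFT as described, might look as follows; windowing and peak-picking details beyond that are assumptions.

```python
import numpy as np

def spectral_coherence(x_trials, y_trials, fs):
    """Amplitude-insensitive spectral coherence (Eq 15 style) across trials.

    x_trials, y_trials: arrays of shape (n_trials, n_samples); no tapering or padding.
    Returns frequencies and the coherence spectrum Coh(omega).
    """
    sx = np.fft.rfft(x_trials, axis=1)                        # per-trial Fourier coefficients
    sy = np.fft.rfft(y_trials, axis=1)
    sxy = sx * np.conj(sy)                                    # single-trial cross-spectral densities
    sxx = np.abs(sx) ** 2
    syy = np.abs(sy) ** 2
    coh = np.abs(np.mean(sxy / np.sqrt(sxx * syy), axis=0))   # MRVL of cross-spectral phases
    freqs = np.fft.rfftfreq(x_trials.shape[1], d=1.0 / fs)
    return freqs, coh

# Usage: the peak of the coherence spectrum serves as the phase-locking estimate.
# freqs, coh = spectral_coherence(x_trials, y_trials, fs=1000)
# estimate = coh.max()
```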
Estimation based on the instantaneous phase: phase-locking value (PLV). Various methods have been proposed that quantify phase-locking based on the instantaneous phase [55]. These methods deal better with non-stationary dynamics, which are likely to be present in neural signals. The main challenge is to decompose the often complex, multi-component measured brain signal into well-defined oscillatory components (e.g. through filtering or wavelet decomposition techniques) from which the instantaneous phase can be extracted (i.e., after a Hilbert Transform or directly from a time-frequency representation (TFR) [35]). Below we propose an alternative method that is based on singular spectrum decomposition (SSD, see https://project.dke.maastrichtuniversity.nl/ssd/) [36] and the Hilbert Transform. We then applied this method to the simulated signals of the two coupled phase-oscillators as well as to the neural network signals containing intrinsic and extrinsic noise. SSD is a recently proposed method for the decomposition of nonlinear and non-stationary time series [36,56]. In the present work, the method is applied to reduce the influence of noise and to provide a PLV estimate that, unlike spectral coherence, is able to handle non-stationary signals. Additionally, the SSD method is also able to deal with nonlinear signals, unlike wavelet-based approaches. Here, the key ideas underlying SSD are introduced (see [36] for additional details). The method originates from singular spectrum analysis (SSA), a nonparametric spectral estimation method used for the analysis and prediction of time series. The advantage of SSA-derived components over Fourier-derived sines and cosines is that SSA components are model-free (data-driven) and therefore are not necessarily harmonic functions. Being data-driven, SSA components can capture highly non-harmonic oscillatory shapes, making them suitable for the analysis of nonlinear and non-stationary time series. In the SSD method, the choice of the main SSA parameters, the embedding dimension and the selection of the principal components for the representation of a specific component series, has been made fully data-driven and automated. This makes SSD an adaptive decomposition method. Similar to empirical mode decomposition (EMD) [57], the decomposition is based on the extraction of the energy associated with various intrinsic time scales. One advantage of SSD over EMD is that it tends to avoid mixing components of different frequency bands and provides accurate separation between intermittent components at the transition points [36]. The Hilbert Transform can be used on SSD components when interpreting its outcome with caution. Indeed, SSD components contain several frequencies, with no clear indication about how many instantaneous frequencies per time instant may be present [57]. However, the narrow-banded frequency content of each SSD component permits one to consider the results of the Hilbert Transform as sufficiently reliable under most conditions [36]. For both oscillator signals X(t,n) and Y(t,n) we applied SSD for each trial and oscillator separately to extract the oscillatory components (SSD_comp). For deriving the instantaneous phase of an SSD component, we applied the Hilbert transform (HT) to obtain the analytic signal

$$SSD^{a}_{comp}(t) = SSD_{comp}(t) + i\, HT\!\left(SSD_{comp}(t)\right)$$

where HT(SSD_comp) is the Hilbert Transform of the selected SSD component. The HT in essence adds the imaginary component to a real-valued signal to reconstruct the analytic signal. SSD^a_comp is the analytic signal of SSD_comp.
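The sketch below illustrates this step in Python; a simple band-pass filter stands in for the SSD component extraction (a reference SSD implementation is linked above), and only the Hilbert-transform step corresponds directly to the procedure described.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def instantaneous_phase(signal, fs, band=(30.0, 50.0)):
    """Instantaneous phase of one oscillatory component.

    Band-pass filtering stands in here for the SSD component extraction;
    the Hilbert transform then yields the analytic signal and its phase.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    component = filtfilt(b, a, signal)          # narrow-band component (SSD stand-in)
    analytic = hilbert(component)               # component + i * HT(component)
    phase = np.angle(analytic)                  # instantaneous phase, wrapped to [-pi, pi]
    inst_freq = np.diff(np.unwrap(phase)) * fs / (2 * np.pi)   # instantaneous frequency (Hz)
    return phase, inst_freq
```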
From the analytic signal, the instantaneous phase ϕ(t) can then easily be derived as its argument (angle), and the instantaneous frequency ω(t) as the time derivative of the phase. To compute the phase-locking value (PLV), we first computed the instantaneous phase-relations θ(t,n) = ϕ_X(t,n) − ϕ_Y(t,n). We then concatenated the trials to compute the overall distribution of phase-relations and to eliminate the variable n. The PLV was then computed simply as

$$\mathrm{PLV} = \left| \frac{1}{T} \sum_{t=1}^{T} e^{\,i\,\theta(t)} \right| \qquad (18)$$

with T representing here the overall number of sample time points. Eq 18 is the mean resultant vector length (MRVL). Note that, in contrast to spectral coherence, which cannot give single-trial estimates, the estimation of phase-locking for single trials is possible with the PLV approach [34]. This is interesting if one expects that the synchronization properties (e.g. detuning or coupling strength) change over trials. In all of the work presented in this study, we assume that a given state of (partial) synchronization is constant for the duration of the trial or time window used.

Expected phase-locking. To determine the expected phase-locking (PL), we computed (without any extrinsic noise) the instantaneous phase-relation between oscillators X and Y. We concatenated all the trials again to obtain an overall instantaneous phase-relation distribution. From this distribution, we computed the mean resultant vector length. In the case of no intrinsic noise (Np = 0), we also analytically derived the expected phase-locking (Methods A in S1 File). The comparison of the numerically and analytically derived expected PL showed that they matched very closely (mean square error (MSE) = 1.4 × 10^-5).

Trial-number unbiased phase-locking estimates. To compare the estimates of coherence and PLV on simulated data, we relied on adapted formulas for coherence and PLV that are more robust against inflation due to finite numbers of trials for coherence [53] or time points for PLV. The trial correction formula was mainly relevant for the coherence estimates. For clarity, the number of phase estimates was represented by the number of trials N for coherence and the number of time points T for PLV. The unbiased estimator of the squared PLV [58] corrects the squared estimate using the number of sample points T. The same was done for deriving the expected PLV (i.e., the PLV obtained in the absence of any noise). For coherence, the analogous formula uses the number of trials N.

The Transfer Entropy (TE) measure

In addition to the phase-locking estimation, we also quantified the directed information flow between networks X and Y by applying a (delayed) transfer entropy measure (TE) according to [43]. TE measures the directed flow of information between two processes. The TE measure allowed us to demonstrate more concretely the relevance of PLV and coherence for information transmission among neural networks. For a given spike train of neuron I, TE gives the amount of reduced uncertainty when using knowledge of preceding time bins of the spike train of neuron J over using knowledge of preceding time bins of the spike train of neuron I itself. We quantified the directed information flow from RS neurons of network X to RS neurons of network Y. For quantifying TE, we selected 80 RS neurons of each network, defining 80 × 80 combinations. We concatenated the spike trains of all trials to increase sensitivity. TE was then applied to all 80 × 80 combinations, where the transfer entropy for different delay values (delays up to 16 ms were included) was computed and then combined. The TE was averaged over all 80 × 80 combinations.
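A small Python sketch of delayed transfer entropy between two binary spike trains, in the spirit of [43], is given below; it uses a single preceding bin of history for each train, which is an illustrative simplification rather than the exact estimator used here.

```python
import numpy as np
from itertools import product

def transfer_entropy(i_spk, j_spk, d=1):
    """Delayed transfer entropy (bits) from spike train J to spike train I.

    i_spk, j_spk: binary 0/1 arrays of equal length; d: delay in time bins.
    One preceding bin of history per train is used (an illustrative simplification).
    """
    i_spk = np.asarray(i_spk, dtype=int)
    j_spk = np.asarray(j_spk, dtype=int)
    n = len(i_spk)
    i_next = i_spk[d:]             # i_{t+1}
    i_past = i_spk[d - 1:n - 1]    # i_t
    j_past = j_spk[:n - d]         # j_{t+1-d}
    te = 0.0
    for a, b, c in product((0, 1), repeat=3):          # (i_{t+1}, i_t, j_{t+1-d})
        sel = (i_next == a) & (i_past == b) & (j_past == c)
        p_abc = sel.mean()                              # joint probability
        if p_abc == 0:
            continue
        p_bc = ((i_past == b) & (j_past == c)).mean()
        p_ab = ((i_next == a) & (i_past == b)).mean()
        p_b = (i_past == b).mean()
        te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te
```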
The TE of the spike train of neuron I (a binary 0/1 vector where 1 denotes a spike) on the spike train of neuron J was computed following [43], where d is the delay (here in terms of 1 ms time steps) and i_t or j_t is the status of the spike train at time t (0 = no spike, 1 = spike) of neuron I or J, respectively.

Results

In the following, we will first show simulations of coupled phase-oscillators with no intrinsic phase noise. We used the simpler case first, as it allowed us to analyze the generative model analytically, in addition to numerical simulations, to precisely understand the behavior of spectral coherence. In a next step, we added intrinsic phase noise to make the phase evolution more similar to experimental data, and we then also compared the behavior of spectral coherence to the PLV method. As a last step, we extended the results from the phase-oscillator simulations to two mutually interacting gamma-generating spiking neural networks, representing a biophysically plausible model of gamma oscillations.

An evaluation of spectral coherence as a measure of phase-locking between two intrinsically noise-free coupled phase-oscillators as a function of added extrinsic noise

We systematically investigated the behavior of coherence, for different SNRs, as a function of detuning Δω (PrFM) and different levels of PrAM, without intrinsic phase noise. To that aim, we compared in our simulations the analytically derived expected PL² of two interacting phase-oscillators with the analytically derived spectral coh² as well as the numerically estimated coh². We first show two illustrative example conditions. In the first example (Fig 2A), the phase-oscillators were uncoupled and the oscillators X and Y had a detuning Δω of 3 Hz. In the middle panel of Fig 2A, the Fourier power spectra can be observed with two power peaks, one for each oscillator. The right-hand panel in Fig 2A shows the absence of a coherence peak, as expected. In the second example (Fig 2B), oscillator X interacted unidirectionally with oscillator Y with κ = 1. Strikingly, in the power spectrum of oscillator Y, two side peaks can be observed around the central power peak at a distance of ±Δω = 3 Hz. These are so-called modulation sidebands, which have been described in the cross-frequency coupling (CFC) literature [59-61]. They arise because the synchronization force induced by oscillator X on Y (see interaction term) leads to systematic phase-relation-dependent frequency modulations (PrFM) at the 3 Hz detuning frequency Δω. Notice that the left modulation sideband power peak of oscillator Y and the power peak of oscillator X are at the same frequency. To the right, a strong coherence peak can be observed which is much higher than the expected phase-locking value. The coherence peak is at the frequency where oscillator X and the left modulation sideband of oscillator Y have their power peaks. The coherence peak hence reflects the phase-locking between the PrFM of oscillator Y (induced by oscillator X) and oscillator X itself. We then systematically compared the numerically and the analytically derived coh² (Fig 2C) with the true phase-locking, for different SNRs of extrinsic noise, as a function of detuning frequency Δω. The SNR was manipulated by changing the level of extrinsic noise. Oscillator X was unidirectionally coupled to oscillator Y with κ = 0.75 (horizontal line in the left upper panel of Fig 2C). Oscillator X had a frequency of 40 Hz.
At this constant coupling strength, the frequency of oscillator Y was shifted from 40 to 48 Hz in steps of 0.25 Hz (detuning Δω from 0 to 8 Hz; schematically shown in the top right panel of Fig 2C). The values chosen corresponded to a half cross-section of the (1:1) Arnold tongue. In the above-defined conditions, we evaluated the effect of SNRs of 500, 50, 10, 5 and 2 on coherence (Fig 2C, bottom panel). The numerically and analytically derived coh² (red symbols and black line in Fig 2C, bottom panel) matched well for all conditions, and we therefore do not distinguish them further. For low detuning Δω (inside the Arnold tongue) and high SNR, the oscillators were in full synchrony and showed a coh² value of 1, matching the true phase-locking (blue line). For lower SNR, the noise started to affect the coh² estimate more substantially. As the white noise was uncorrelated between oscillators, it tended to decrease the coh². At an SNR of 2, the coh² gave an estimate reduced by 50%. At a particular detuning frequency (Δω = 1.5 Hz, around the edge of the Arnold tongue) the phase-locking between the oscillators started to drop (partially synchronized state). The oscillators were therefore not completely frequency-locked and had disparate frequencies. In this case, the coh² (red symbols in Fig 2C, bottom panel) deviated strongly from the PL² (blue line in Fig 2C, bottom panel), and depended strongly on the SNR. Without any additive white noise (high SNR), the coh² estimates were 1 or approached 1 despite the PL² being very low. The lower the SNR, the more the noise affected the estimate and the more the coh² turned towards 0. Importantly, with higher SNR, the estimates converged not on the true phase-locking (with the exception of perfect locking), but towards a phase-locking of 1. Nevertheless, the coh² estimates reflected to a certain degree the underlying changes in the PL² as a function of detuning if noise was present. This is because the amplitude of the modulation sideband (induced by PrFM) is a function of the PL (see below), and the noise unmasks this dependence: because the amplitude of the modulation sideband decreases with detuning, its SNR decreases as well when noise is present. Because coherence is sensitive to SNR, the coherence value will reflect the SNR changes of the modulation sideband peak. Notice that at very low SNR values (<2) the coh² converged to 0, as the noise was uncorrelated between oscillators. We then investigated the dependence of the coh² estimates on the presence of PrAM. For simplicity, we used conditions in which the phase-oscillators were uncoupled (black dot in the upper panel of Fig 2D) and hence the true phase-locking was 0 for all conditions (blue line in the bottom panel of Fig 2D). The oscillators had a detuning of 3 Hz. We evaluated different levels of PrAM ranging from 0 to 100%. We used the same SNR conditions as before. Again, the numerically and analytically derived coh² estimates matched well. We observed that the coh² estimates deviated from the true phase-locking as a function of both the level of PrAM and the level of SNR (Fig 2B). The higher the PrAM was set, the higher the coh² values became, and hence the more coh² deviated from the true locking. This was because higher PrAM leads to a higher amplitude of the modulation sideband, making it more dominant over the noise. Therefore, similar to the case with PrFM, with increasing SNR the coh² estimates did not converge towards the expected phase-locking of 0, but towards a phase-locking of 1.
These results show that spectral coherence is not a pure phase-locking measure, but also reflects phase-relation-dependent amplitude fluctuations. Note that only one oscillator had amplitude fluctuations; hence the amplitude correlation between oscillators was 0.

Fig 2. In (A) oscillators X and Y had a detuning of 3 Hz and did not interact. The power spectra (middle panel) show the two power peaks of the two oscillators. The coherence spectrum (right panel) was flat, as expected. In (B), the oscillators did interact (κ = 1), where oscillator X influenced the phase trajectory of oscillator Y. The power spectrum of oscillator Y shows two extra power peaks at ± the detuning. These are the so-called modulation sidebands, well described in the cross-frequency coupling literature [62]. Notice that one of the sidebands overlaps with the power peak of oscillator X. The coherence spectrum shows a strong phase-locking estimate, much higher than expected. This is because the coherence estimate mostly reflected the locking between the sideband of oscillator Y and the main power peak of oscillator X, which can be completely unrelated to the actual phase-locking of oscillators X and Y. (C) Rendering of the Arnold tongue, shown with a 1/2 cross-section at the level of a 0.75 coupling strength, for which phase-locking values are plotted as a function of positive, increasing intrinsic frequency differences between oscillators X and Y (Δω). Here, we did not add PrAM to the oscillatory signal. We compared the numerically (red dot) and analytically derived (black line) coherence with the analytically derived true phase-locking (purple line) between the two oscillators as a function of frequency detuning (Δω) and different levels of SNR. We used trial-number-corrected squared coh values to minimize inflation due to a finite number of trials. In the partially synchronized states associated with different Δω values in the selected coupling condition, we observed strong deviations of coherence from the true locking. The coh² values became more inflated with higher SNR. The numerically computed coh² matched the analytically derived coh². (D) The impact of different levels of PrAM is shown for different levels of SNR. The oscillators were uncoupled and hence asynchronous (in the condition indicated by the fat dot at the bottom of the Arnold tongue), and the true locking was therefore 0. The oscillators had a phase precession of 3 Hz (the chosen condition is located off the midline of the Arnold tongue). We observed strong deviations from the true locking with increasing PrAM and SNR. The numerically and analytically derived values matched.

Conceptual and mathematical understanding of the underlying cause of spectral coherence deviations from expected phase-locking. Oscillatory processes that exhibit phase synchronization without complete phase-locking are in a partially synchronized state and will show characteristic PrFM, and likely also PrAM, during phase precession. These systematic modulations of the oscillation frequency and amplitude occur at the frequency of the phase precession (equal to the frequency difference between the oscillators). These types of modulations can be seen as a form of cross-frequency coupling (CFC) [62], here between oscillators of nearby frequencies and with the phase-relation as the modulation variable. The CFC between two oscillations of nearby frequencies (e.g. 42 Hz and 45 Hz) leads to so-called modulation sidebands (SM) that are located ±3 Hz from the main power peak.
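The emergence of such sidebands can be reproduced in a few lines of Python: frequency- or amplitude-modulating a 45 Hz oscillation at a 3 Hz rate produces spectral peaks at 42 Hz and 48 Hz. The modulation depths below are arbitrary illustrative values.

```python
import numpy as np

fs, dur = 1000, 10.0
t = np.arange(0, dur, 1 / fs)
f_carrier, f_mod = 45.0, 3.0                          # oscillation frequency and modulation rate

# Phase-relation-dependent frequency modulation (PrFM-like) and amplitude modulation (PrAM-like)
phase = 2 * np.pi * f_carrier * t + 0.5 * np.sin(2 * np.pi * f_mod * t)   # FM term
amp = 1 + 0.2 * np.cos(2 * np.pi * f_mod * t)                              # 20% AM
y = amp * np.cos(phase)

freqs = np.fft.rfftfreq(len(y), 1 / fs)
power = np.abs(np.fft.rfft(y)) ** 2
for f in (42.0, 45.0, 48.0):                          # sidebands appear at carrier +/- modulation rate
    k = np.argmin(np.abs(freqs - f))
    print(f"{f:.0f} Hz relative power: {power[k] / power.max():.3f}")
```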
These 'modulation sidebands' between lower- and higher-frequency oscillations have been previously described in the CFC literature, with specific reports on the modulation of the amplitude of the higher-frequency oscillation by the phase of the lower-frequency oscillation [59-61]. We will describe below in more detail the underlying causes of SM. Here, we were interested in computing the CFC phase-phase locking between the two oscillatory signals [63]. To that aim, one would need to compute the relationship of the phases at the higher frequency (45 Hz) to the phases at the lower frequency (42 Hz). Spectral coherence was not expected to be applicable here, because the oscillators X and Y did not share power at common frequencies. Yet, when applied, spectral coherence measures yielded inflated values. This was because one of the modulation sidebands (by definition) overlapped with the power peak of the other oscillator. Note that this coherence estimate is bound to be incorrect, because coherence is computed frequency by frequency, whereas here one would need to compute coherence across frequencies. Notice also that, in practice, it might be difficult to discover these modulation sidebands in largely overlapping power spectra of experimental data due to intrinsic and extrinsic noise, yet they might still affect the computation of coherence. In fact, as we show below, this type of CFC interaction occurs even if the power spectra completely overlap (zero mean frequency difference), as long as there is frequency variation from trial to trial. Below, we explain in more detail why the modulation sidebands are induced by PrFM and PrAM and how phase-locking computed with coherence leads to erroneous estimates of phase-locking under these conditions. To understand why spectral coherence gave incorrect estimates, one needs to understand how these modulation sidebands arose and how they affected the coherence estimates. Oscillatory synchronization can lead to systematic PrFM and PrAM, which can both induce modulation sidebands. In Methods A in S1 File we show mathematically how, for the noiseless case, the modulation sideband SM is related to PrFM and PrAM. It can be shown that the amount of modulation sideband induced by PrFM (if PrAM is absent) is a function of the (expected) phase-locking scaled by the oscillation amplitude. Hence, for a given oscillation amplitude, the amount of SM was proportionally related to the PL. For SM induced by PrAM (without PrFM, PL = 0), it can be shown that it is a direct function of the PrAM modulation strength (α) scaled by the oscillation amplitude. The SM induced by either PrFM or PrAM in oscillator Y has a constant phase-relation to the oscillator X that induces the PrAM or PrFM. Hence, in the noiseless case, the coherence peak between the modulation sideband SM of oscillator Y and oscillator X has to be 1 (Methods A in S1 File). So whatever detuning one chooses, or even in the case of no coupling, the coherence peak will be 1 (if ω_X ≠ ω_Y). Adding extrinsic noise to the signals, which was uncorrelated between the oscillators, had the overall effect of decreasing the coherence estimates. Increasing the extrinsic noise level caused the coherence estimates to converge towards 0. However, the effect of adding noise to the signal had more implications than decreasing the coh² values towards 0. Adding extrinsic noise made the coh² related to the true underlying phase-locking changes with detuning.
By 'related' we mean that the coh² and the (expected) PL² values were correlated, yet this does not imply that the values matched exactly. In Fig 2C, for different SNRs, the coh² deviated substantially from the PL² values; yet they shared the property that they overall decreased with detuning. One could raise the question of how this is possible, given that the coherence peak reflected the phase consistency between the modulation sideband of oscillation Y and oscillation X, while this phase consistency was defined as perfect for all coupling and detuning conditions. The reason for this lies in the property of SM_PrFM to change its amplitude as a function of the PL (see Eq 22). The effect of including extrinsic noise can now be understood as follows: in the simulations (Fig 2C), when the detuning decreased, the PL increased. Because the PL increased, the amplitude of SM_PrFM increased, and given a specific fixed noise level the ratio of SM_PrFM amplitude versus noise amplitude also increased, which meant that the SM_PrFM SNR increased. Because the SM_PrFM SNR increased, the coh² values increased. Through this relationship, for a large range of extrinsic noise levels, there was a rough relation between the PL and the coherence estimates. Note that this is only the case for SM_PrFM, because SM_PrAM is unrelated to the underlying PL (see Eq 23).

Deriving analytically the spectral coherence estimates. Adding uncorrelated white noise (simulated 'measurement error') decreased the coherence values in general. The amount of reduction was a function of the ratio between the signal power and the noise power (the SNR). The essential ratio determining the spectral coherence peak is the ratio between the amplitude of the modulation sideband and the white noise amplitude. There are four variables affecting the measured phase difference between the two oscillators: the amplitudes of the two independent white noise processes and their phase values. The power of a white noise process is known to have a chi-square distribution of order 2 [64], with the mean power being equal to its variance. For amplitudes, this corresponds to a chi distribution of order 2 (a Rayleigh distribution), for which the probability density function is given by

$$f(A) = \frac{A}{\sigma^2}\, e^{-A^2 / (2\sigma^2)}, \qquad A \geq 0$$

The phase distribution of a white noise process is the uniform distribution

$$f(\varphi) = \frac{1}{2\pi}, \qquad \varphi \in [-\pi, \pi)$$

The actual phase of each oscillator is given by the complex vector addition of the signal and the noise. The product of these complex values between the two oscillators gives the actual phase difference. The spectral coherence evaluated at the frequency where oscillation X shares power with the modulation sideband of oscillation Y can then be expressed in terms of A_X and A_Y, the amplitudes of the white noise, φ_Xw and φ_Yw, the phases of the white noise of oscillators X and Y respectively, S_X, the amplitude of oscillator X, and S_Y, the amplitude of the modulation sideband of Y. We assumed for simplicity that the characteristics of PrAM and PrFM are constant; therefore oscillator X and the modulation sideband of oscillator Y had a constant phase relationship (ϕ_X − ϕ_Y = constant). The accuracy of these analytical estimates can be seen in Fig 2C and 2D. They fitted the numerically estimated coherence well, demonstrating that the induction of SM by PrAM and PrFM, as well as the SNR, were the underlying determinants of the peaks in the coherence spectra.
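The same quantity can be evaluated numerically instead of in closed form: per trial, the complex Fourier coefficient of each signal at the shared frequency is the sum of a fixed 'signal' phasor (oscillator X, or the modulation sideband of Y) and a noise phasor with Rayleigh-distributed amplitude and uniform phase, and the trial-normalized coherence (Eq 15 style) is then averaged. The sketch below is a Monte Carlo illustration of this idea with arbitrary amplitudes and noise levels, not a reproduction of the analytical derivation in S1 File.

```python
import numpy as np

def expected_sideband_coherence(s_x=1.0, s_y=0.3, noise_sd=0.5, n_trials=5000, seed=0):
    """Monte Carlo estimate of coherence between oscillator X and the modulation
    sideband of Y at their shared frequency, with uncorrelated white noise added.

    s_x: amplitude of oscillator X; s_y: amplitude of Y's modulation sideband;
    noise_sd: scale of the complex Gaussian noise (Rayleigh-distributed amplitude).
    The constant phase-relation between X and the sideband is set to 0 for simplicity.
    """
    rng = np.random.default_rng(seed)
    noise_x = noise_sd * (rng.standard_normal(n_trials) + 1j * rng.standard_normal(n_trials))
    noise_y = noise_sd * (rng.standard_normal(n_trials) + 1j * rng.standard_normal(n_trials))
    fx = s_x + noise_x                       # Fourier coefficient of X at the shared frequency
    fy = s_y + noise_y                       # Fourier coefficient of Y's sideband
    cross = fx * np.conj(fy)
    return np.abs(np.mean(cross / np.abs(cross)))   # trial-normalized (Eq 15 style) coherence

# A larger sideband amplitude relative to the noise pushes the coherence peak towards 1,
# even though the underlying phase-locking between the oscillators may be near 0.
for s_y in (0.1, 0.3, 1.0):
    print(s_y, round(expected_sideband_coherence(s_y=s_y), 3))
```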
Comparing coherence with PLV as measures of phase-locking between two coupled intrinsically noisy phase-oscillators as a function of added extrinsic noise

The previous sections have demonstrated severe limitations of coherence as a measure of phase-locking. In various studies [28,34,35,65] it has been proposed that phase-locking approaches like the phase-locking value (PLV), which are based on the reconstruction of the instantaneous phase (here by applying the Hilbert transform (HT) and singular spectrum decomposition (SSD)), can be a viable alternative to coherence. To make a fair comparison between coherence and PLV, it is necessary to do this on model data that more accurately reflect data as they could be measured empirically. We achieved this by making the phase-oscillators intrinsically noisy. This was not done in the previous sections, as their main aim was to illustrate as clearly as possible the conceptual and mathematical problems of using coherence as a measure of phase relations between oscillators. Here, the aim is to use model data that show noisy characteristics closer to those that could be recorded empirically (e.g., in electrophysiological recordings), but still with the underlying phase relations known, which are then to be estimated by coherence and PLV. In addition to including intrinsic noise, we added different levels of extrinsic (measurement) noise as in the previous simulations to manipulate the SNR. Intrinsic noise was modeled as a noise process Np(t) (pink, scaling factor of 1, SD = 1.5 Hz) added to the phase-oscillator equation, affecting the phase evolution of the oscillator. The properties of the intrinsic noise process did not change over the simulation conditions (a minimal generator for this type of noise is sketched below). It is termed 'noise' because the phase variability is of unknown origin. Notice that including intrinsic noise affects the phase evolution, but this variation is of biological interest and needs to be included in the phase-locking estimation. This contrasts with extrinsic (measurement) noise, which is unrelated to the dynamics of interest and should be ignored. What is the effect of using phase-oscillators that are intrinsically noisy? Without noise, and thus with a fixed intrinsic frequency, the oscillations were very narrow-banded and the frequency distributions of the two coupled oscillations were in many cases non-overlapping. Under these conditions, coherence was shown to be a poor estimator of phase relations. Measured neural oscillations, however (e.g. in the gamma band), have broader spectral power peaks [14,30,66,67]. This indicates that neural oscillations exhibit rapid phase and frequency dynamics, which can be expected from noisy and complex networks [68], of which brain networks are prime examples. Sources of the variability in oscillation frequencies and relative phases include intrinsic noise/instability within a network [66], perturbations from other networks [17] and cross-frequency interactions [69,70]. In Fig 3, we show data where oscillator X was unidirectionally coupled with oscillator Y (κ = 1, X→Y). The pink noise (applied to X and Y) was uncorrelated between the oscillators. Including the pink noise term had two effects. First, it broadened the range of frequencies of each oscillator. Second, because the intrinsic frequency varied due to the noise, the oscillators no longer had a precise position on the detuning dimension; their detuning varied over time. That makes full synchrony very difficult to achieve, because strong noise fluctuations kick the oscillators out of their 'attractor' phase-relation [7].
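A minimal sketch of generating such 1/f^1.5 ('pink-like') noise by spectrally shaping white Gaussian noise is shown below; the FFT-based shaping is an assumption, as the exact noise generator used here is not specified above.

```python
import numpy as np

def one_over_f_noise(n_samples, fs, exponent=1.5, sd=1.5, seed=0):
    """Generate 1/f^exponent noise by spectrally shaping white Gaussian noise.

    Returns a zero-mean series rescaled to the requested standard deviation
    (e.g. sd = 1.5 when used as intrinsic frequency noise in Hz).
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    scale = np.ones_like(freqs)
    scale[1:] = freqs[1:] ** (-exponent / 2.0)   # amplitude ~ f^(-exponent/2) => power ~ 1/f^exponent
    shaped = np.fft.irfft(np.fft.rfft(white) * scale, n=n_samples)
    shaped -= shaped.mean()
    return sd * shaped / shaped.std()
```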
Fig 3. (A) Oscillators X and Y are interacting (κ = 1, X→Y) with a detuning of 2Hz (X = 38Hz, Y = 40Hz). The power spectra (here in the gamma range, although the exact range is irrelevant here) are shown in the middle panel. The power spectra largely overlap and the modulation sidebands cannot be easily observed (but are present in the data). The coherence spectrum (right panel) gave a phase-locking estimate larger than expected, with a peak at 38Hz (where the left modulation sideband of oscillator Y overlaps with the power peak of oscillator X). (B) The detuning was increased to 6Hz (X = 34Hz, Y = 40Hz) with the same coupling conditions. Now the left modulation sideband of oscillator Y can be observed as a small peak at 34Hz. The coherence spectrum gave a phase-locking estimate much larger than expected, reflecting the influence of the modulation sideband. (C) A 1/2 cross-section of the Arnold tongue, similar to Fig 3A, is shown. The continuous lines represent simulations without PrAM and the dashed lines represent simulations with a PrAM of 20%. We compared the coherence to the expected phase-locking. We used the noise-free instantaneous phases of the phase-oscillators to compute the PL², which was a good estimator of the analytically derived true phase-locking. We observed that the coh² values deviated strongly from the expected phase-locking. The exact deviation depended on the detuning frequency and SNR. Including a PrAM of 20% led to a further inflation of the coh² values. Note also the deviations of coh² from the PL² at a zero detuning frequency. (D) The same analysis as in (C) but with PLV values estimated by the SSD-HT method. We observed that for higher SNR the estimate behaved better and remained close to the expected phase locking. At lower SNR the PLV² showed lower than expected values due to the effect of (uncorrelated) noise. Including a PrAM of 20% led to an inflation of PLV² values in the lower SNR only. doi:10.1371/journal.pone.0146443.g003

Even if the mean detuning is Δω = 0, the phase-locking strength might be lower than 1 due to the intrinsic phase noise. In the simulations described in Fig 3, a systematic comparison was made between the ability of coherence to estimate the expected phase locking and the ability of PLV to estimate the true phase locking, for 5 different SNRs (0.8, 5, 10, 23 and 47). Note that SNR refers here to extrinsic (measurement) noise that is uncorrelated between oscillators. Further, we evaluated conditions without any PrAM and conditions with 20% PrAM. Simulated data comprised for each condition 500 trials of 3s length. The first 2s of each trial were discarded to assure that the oscillators reached a stable state. For coherence, we analyzed data using the full 1s window or by splitting the 1s window into 0.2s windows. For these simulations, we did not derive the analytical phase-locking between the two oscillators, but relied on a numerical estimation only. For the same conditions as described above, we then computed PLV and compared it with the expected phase locking PL (i.e. the known phase locking prior to the addition of extrinsic noise sources). Below we describe the simulation results in detail. Two examples are shown illustrating the behavior of coherence under two conditions of detuning. In Fig 3A (left panel) oscillators X and Y (κ = 1, X→Y) had a detuning of Δω = 2Hz. Their power spectra (middle panel) largely overlapped. Note that experimentally reported relative power values can range from ~1 to ~25 (or higher), depending on the oscillation band, the recording method used and the neuronal structure investigated. Hence, the (high SNR) power values simulated here are in line with experimentally possible values. Note furthermore that the modulation sidebands are basically invisible. However, they are still present (e.g. they can be observed in single-trial power spectra) and considerably affect the resulting coherence spectra, as depicted in the right-hand panel of Fig 3A, where the coh² estimate (peak height) was higher than expected. In Fig 3B we used the same configuration, but now with a detuning of Δω = 6Hz.
Notice that now the left modulation sideband of oscillator Y, induced by oscillator X, can be seen (indicated by the arrow) as a small peak coinciding with the peak power frequency of oscillator X. Also in this case, the peak of the coherence spectrum was much larger than expected. In Fig 3C, we systematically modulated the detuning Δω to observe the behavior of spectral coherence estimates. The PL² was at ~0.3 for a mean 0Hz detuning (Fig 3C). Despite the fact that the mean frequencies of the oscillators matched, considerable instantaneous frequency variation was present, centered around 0. Due to the phase noise the oscillators could not reach full synchrony, and instead exhibited phase precession. We then compared coh² estimates computed for a 1sec trial length to the PL² values. The coh² estimates were dependent on the SNR. With higher SNR, coh² did not converge to the PL², but exceeded it. Including a 20% PrAM led to a further inflation of the coh² estimates. Hence, despite the oscillators having a matching mean frequency, coh² estimates showed deviations (Fig 3C, left panel) from the expected value. Increasing the detuning frequency Δω led to a smooth decrease of the expected phase-locking. The coh² estimate of the highest SNR did not decrease at all for a large range of detuning values. Moderate SNR levels (23, 10, 5) showed a very slow decrease of coh² as a function of detuning, but still deviated from the PL². At low SNRs the effect of the extrinsic white noise became dominant and coh² estimates converged to 0 (as the noise was uncorrelated between oscillators). Applying the 20% PrAM led to an overall inflation of the coh² estimates. Note that the inflation by PrAM increased with increasing SNR (as expected from Fig 3C). These results show that also under more realistic oscillatory dynamics, spectral coherence exhibits strong deviations from the PL, thus confirming the analysis of the previous simulations without intrinsic phase noise. We re-analyzed the same simulation data, but restricted the coherence estimation to 0.2sec trial lengths (Fig 3D) by using the time window 2-2.2sec after simulation onset (the first 2 seconds were discarded). We explored the 0.2sec time-window coherence because spectral coherence is also applied in the context of TFR, e.g. using the short-time Fourier transform. We used 0.2sec time-windows also because this is a typical time-scale used for the TFR analysis of higher-frequency oscillations in neuroscience [8,71]. For lower-frequency oscillations, longer windows are usually used. The use of a 0.2sec time-window restricted the frequency resolution to 5Hz. This had an impact on the coherence estimation properties. For detuning conditions below 5Hz, the coh² estimates converged to the expected phase-locking with higher SNR. Starting from a detuning of ~4Hz the coh² estimates lost this property and deviated again, as in Fig 3C (which was obtained with 1sec trial lengths).
Including 20% PrAM affected the coh 2 estimates, yet much less below 4Hz than above 4Hz detuning (Fig 3D), and also much less compared to 1sec trial length coherence (Fig 3C). The particular detuning value at which the coh 2 estimates started to lose the property of converging towards the PL with increasing SNR, was a function of the timewindow length and its associated frequency resolution. This is because the coherence deviation from the PL depends on the separation of the oscillation band and the modulation sideband. If, due to short time-window length, the frequency resolution is low and spectral leakage is large, the phase estimates of the cross-spectral density will largely reflect the oscillation band (and not the modulation sideband). This is because the oscillation band has larger amplitude than the modulation sideband. Comparing Fig 3C and 3D, one could argue that decreasing the time-window for coherence estimation to 0.1sec or lower (generally problematic for lower frequency oscillations) could solve the issue of coherence misestimations induced by modulation sidebands. Using very small time-window for coherence estimation can be seen as approximating instantaneous phase, however with the cost of low frequency resolved coherence spectra. It can be questioned whether spectral coherence was designed for this, and whether it is not better to use methods that were specifically designed for instantaneous phase reconstruction. A further drawback of very short-time window coherence is that, if the signal consists of several components (for example any combination of theta, alpha, beta or gamma components), the coherence estimates might be affected by component mixing due to the very low frequency resolution. In comparison, SSD helps to avoid this problem in a completely data-driven manner [36]. In Fig 3E, we show the PLV 2 estimates based on the same testing dataset as used in Fig 3C. In brief, for the PLV approach, the signals were first decomposed using SSD [36,56]. We chose the SSD oscillatory component with the most power in the expected frequency range. The SSD helped to reduce impact of extrinsic noise on the instantaneous phase estimation and further assured that the signals were mono-component. We applied the Hilbert transform to the extracted components, to extract the instantaneous phases. From the instantaneous phase-relation we then computed PLV 2 . We observed that for increasing SNR the PLV 2 estimates converged to the PL 2 over the whole detuning frequency range (0 to 8Hz). For lower SNRs, the PLV 2 estimates were lower than the PL 2 , because the extrinsic noise started to affect the estimates more substantially. At the lowest SNR, the PLV 2 estimates were very close to 0. The addition of a 20% PrAM affected the lower SNR condition, but not the higher SNR conditions. Because the amplitude (or power) fluctuates with phase-relation, the SNR fluctuates with phase-relation, as the SNR is the ratio of signal amplitude and the (extrinsic) noise level. This leads to an SNR-weighted phase-relation distribution causing the PLV estimate to increase. For higher SNR this effect vanished. Overall, the PLV 2 estimates behaved much more appropriately in the conditions tested compared to coh 2 estimates, particularly in the higher SNR ranges. A drawback of PLV (which is the same for all methods reconstructing instantaneous phase) is that it is less robust against extrinsic noise (e.g. compare blue lines of Fig 3E with Fig 3C). 
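The instantaneous-phase route used for the PLV estimates can be sketched in a few lines. Note that SSD is not part of standard scientific-Python toolboxes, so the sketch below substitutes a zero-phase band-pass filter for the decomposition step; the band edges and filter order are arbitrary assumptions and not the settings used for the analyses reported here.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Generic band-pass + Hilbert + PLV pipeline. A zero-phase Butterworth filter is used
# here as a stand-in for SSD; band edges and filter order are illustrative assumptions.
def plv2(x, y, fs, band=(30.0, 50.0), order=4):
    b, a = butter(order, [band[0] / (fs / 2.0), band[1] / (fs / 2.0)], btype="band")
    xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)     # narrow-band components
    phase_x = np.angle(hilbert(xf))                   # instantaneous phases
    phase_y = np.angle(hilbert(yf))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))) ** 2

# e.g. plv2(x_noisy, y_noisy, fs=1000) on signals such as those simulated above, after
# extrinsic noise has been added ('x_noisy'/'y_noisy' are hypothetical variable names).
```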
Comparing coherence with PLV during manipulations of input drive and connectivity between two coupled gamma-generating excitatoryinhibitory neural networks So far, we have used a rather abstract model of oscillatory phase synchronization. Although it has been shown that the phase-oscillator equations and the theory of weakly coupled oscillators (TWCO) constitute a fruitful and powerful framework to describe and understand oscillatory synchronization in biophysical systems like neural networks [7,37,[45][46][47][48]52,[72][73][74][75][76][77], we will now show that the same principles described in the data generated by two coupled phase-oscillators, hold true in more complex datasets generated by two interacting excitatory-inhibitory spiking neural networks [40,52,72,73]. The networks used here generated gamma oscillations of the so-called 'pyramidal-interneuron gamma' PING type [39]. The overall structure of the network is shown in Fig 4A and the derivation of the network oscillatory signal is described in Fig 4B (see also Methods). In short, each network consisted of excitatory (E-cells) and inhibitory cells (I-cells) which interacted through synaptic AMPA (excitatory) and GABA-A (inhibitory) interactions [40]. The two networks interacted through weak AMPA connections targeting E-cells as well as I-cells of the other network. For each network we derived a population oscillatory signal which was computed as the smoothed combined spiking of all E-cells. For changing the noise-levels we added different amount of 1/f 1.5 noise to the population signals. As with the phase-oscillator generated data, we applied in the same manner spectral coherence and PLV. In addition to the phase-locking estimation, we also quantified the directed information flow between network X and Y by applying transfer entropy measure TE according to [43]. TE measures a directed flow of information between two processes. We quantified the directed information flow from E-cells of network X to the E-cells of network Y (and vice versa). The TE measure allowed us to more concretely demonstrate the implications of coherence misestimations for the understanding of information transmission among neural networks. Below we will show results from network simulations where we manipulated the detuning Δω as done in previously described phase-oscillator simulations. It has been shown that the frequency preference of gamma oscillations shifts as a function of input drive, both in experimental studies [13,14,78,79] and in computational studies of gamma-generating networks [14,23,41]. Therefore, one can manipulate input drive to change the frequency preference of a network and hence the detuning Δω. This relationship was exploited here to replicate the coherence and PLV findings from the phase-oscillator simulations. For manipulating detuning Δω, we altered the mean fixed input current to the E-cells of network Y (Fig 4C) while keeping input current fixed to network X. These manipulations are reported in Fig 4D and 4E. In addition, we will also show results from network simulations in which we changed the cross-network synaptic strengths. For manipulating coupling strength κ, we altered E!E , connection strength or E!I connection strength (both excitatory AMPA type). These manipulations are reported in Fig 5B-5E. Interacting networks receiving different levels of input-drive (detuning). We first manipulated the detuning Δω by changing the input drive for a given fixed bi-directional coupling E-E and E-I coupling values (Fig 4A). 
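Before turning to the results, the transfer-entropy quantity can be made concrete with a minimal plug-in estimator for discretized population spike counts with a one-sample history; the actual analysis follows [43], which uses a more careful estimator and history embedding than this sketch.

```python
import numpy as np

# Minimal plug-in transfer-entropy estimator, TE(X -> Y), for discretized population
# spike counts with a one-sample history (k = l = 1). Shown only to make the quantity
# concrete; the analysis in the text follows [43], which is more elaborate.
def transfer_entropy(x, y, n_levels=2):
    x = np.digitize(x, np.histogram_bin_edges(x, n_levels)[1:-1])
    y = np.digitize(y, np.histogram_bin_edges(y, n_levels)[1:-1])
    y_next, y_past, x_past = y[1:], y[:-1], x[:-1]
    te = 0.0
    for yn in np.unique(y_next):
        for yp in np.unique(y_past):
            for xp in np.unique(x_past):
                joint = (y_next == yn) & (y_past == yp) & (x_past == xp)
                p_joint = joint.mean()
                if p_joint == 0:
                    continue
                p_cond_full = joint.sum() / ((y_past == yp) & (x_past == xp)).sum()
                p_cond_self = ((y_next == yn) & (y_past == yp)).sum() / (y_past == yp).sum()
                te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te  # bits per time step
```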
In Figure B in S1 File we show the uni-directional connectivity case, which gave similar results. The testing data were generated as described above (Fig 4B). To demonstrate that we could shift the oscillation frequency difference by changing the input drive difference, we plotted in Fig 4C the intrinsic frequency difference between network X and Y as a function of the input drive difference. Here, the networks were uncoupled to have an accurate measure of the intrinsic (natural) frequency [39].

Fig 4. Neurons were interconnected with AMPA (excitatory) and GABA-A (inhibitory) connections. The networks generated so-called 'pyramidal-interneuron network gamma' (PING). The two networks were weakly interconnected by E-I and E-E interconnections. The detuning was manipulated by altering the difference in excitatory input drive (to E-cells) between the networks. We used the experimentally and theoretically established observation that the frequency of gamma oscillations is tightly linked to input drive. (B) Generation of test-signals. From each of the 300 1sec trials the 'LFP' (population signal) was extracted from each network by summing and smoothing (pseudo-Gaussian function of 3ms width) the E-cell spikes. Then we added 1/f noise (exponent = 1.5) to manipulate SNR. Compared to the phase-oscillator model, phase-relation dependent amplitude modulations (PrAM) were generated intrinsically in the model. (C) By changing the relative excitatory input drive to E-cells between the networks, we could manipulate the detuning (frequency difference). (D) Coh² (in the gamma frequency range 30-50Hz) and PLV² as a function of input drive difference (ΔE-drive) between networks. Different line colors represent different SNRs (relative power). The black line represents the PLV² with no noise added (PL). (E) Information flow (combined directions), as measured by transfer entropy (TE), as a function of input drive difference (ΔE-drive) between networks. (F) Variance in information flow explained by coh² (black) and PLV² (red) as a function of SNR, derived by computing the Pearson correlation.

In Fig 4D, the coh² and the PLV² estimates are plotted as a function of the excitatory input drive difference (ΔE-drive), where different lines represent different SNR levels (0.3, 2.2, 4.8, 9.1, 36.2). The black line represents the PL (Fig 4D), which decreased rapidly with increasing excitatory input difference (ΔE-drive). The coh² (left panel, Fig 4D) showed strong overestimations, particularly in the high SNR range and at higher ΔE-drive. By contrast, the PLV² (right panel, Fig 4D) converged with higher SNR towards the PL² (computed from the noise-free signals). With low SNR the PLV² underestimated the PL². In Fig 4E, we plot the information flow (TE) between networks X and Y as a function of ΔE-drive. The information flow between the networks declined overall as a function of ΔE-drive in a manner very similar to the dependency of PLV² on ΔE-drive, confirming that information flow can be highly dependent on oscillatory synchronization [18,21]. Note that the changes in information flow were achieved here without changing the synaptic coupling values, and relied purely on shifting the oscillation frequency via the input drive (E-drive) to the networks. Importantly, PLV² tracked the synchrony-dependent information flow changes accurately, whereas coh² appeared rather unrelated to information flow changes, particularly in the higher SNR range.
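The explained-variance quantification used below (Fig 4F) amounts to the squared Pearson correlation, across detuning conditions, between the TE values and the coh² or PLV² values; a minimal sketch with placeholder arrays is given here.

```python
import numpy as np

# Explained variance of information flow by a phase-locking estimate: the squared
# Pearson correlation across conditions. The arguments are placeholders for the
# per-condition (per ΔE-drive) values at one SNR level.
def explained_variance(te_values, estimate_values):
    r = np.corrcoef(te_values, estimate_values)[0, 1]
    return r ** 2

# e.g. r2_coh = explained_variance(te_per_condition, coh2_per_condition)
#      r2_plv = explained_variance(te_per_condition, plv2_per_condition)
```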
We quantified these observations in Fig 4F by plotting the squared correlation coefficient (given an estimate of explained variance) between information flow TE measure and coh 2 (black line) or between TE and PLV 2 (red line), as a function SNR (in log scale). It can be seen that the explained variance in TE measured by coh 2 is lower than by PLV 2 , particulary for the higher SNR ranges. Interacting networks with different strengths of excitatory-to-excitatory and excitatoryto-inhibitory connections. We then explored the behavior of coh 2 , PLV 2 and TE on testing data obtained after systematic and independent manipulation of the cross-network connections of E!I and E!E between network X and Y (Fig 5A). The detuning Δω was 2Hz (ΔEdrive = 2.5mV) for all conditions reported below. The power spectra were largely overlapping. Before describing the effects of the E!I and E!E connectivity manipulation, we will summarize the main findings. Remarkably, we found that E!I connections were powerful in tuning the phase synchronization strength between networks (increasing PrFM), whereas E!E connections only weakly changed phase synchronization strength, but induced robust PrAM in the network signals. This can be understood by considering that the I-cells neurons are thought to be critical in determining the gamma dynamics [39,80], hence E!I connections might be particularly well-placed for inducing synchronization between the networks. On the other hand E!E connections are less powerful for inducing synchronization, yet strongly modulate the spike probability of the receiving neurons. If the network oscillations are in-phase, E!E connections increase the spike-probability and hence the number of spikes in the receiving network, contributing to the gamma wave by giving it a higher amplitude. If the network oscillations are in anti-phase, E!E connections are not very effective in increasing spike-probability and instead they decrease the resulting gamma wave amplitude. Therefore PrAM was an inherent property of the oscillatory interactions compared to the phase-oscillator interactions. Note that despite the presence of PrAM, the phase synchronization dynamics could be well understood within the framework of the phase-oscillator model (TWCO framework) for the parameter ranges used here. We observed in these E!E and E!I manipulations that both types could have, in particular parameter ranges, a desynchronization effect. This is the case because E!E and E!I prefer synchronization at different preferred phase-relations. Hence, if both connections types are strong, there will be interference between the effects exerted by the two types of connectivity leading to an overall desynchronization. A detailed investigation of these phenomena goes beyond the scope of this paper. Of importance here is the question whether coh 2 was able to represent robustly the PL 2 and information flow TE in these different synchronization conditions. To demonstrate the effect of connectivity manipulations between networks X and Y, we first show (in Fig 5B) an E!E connection strength manipulation (0mV-0.02mV) for a fixed E!I connection strength (0.02mV). The top panel shows the information flow (TE), which decreased approximately monotonically with E!E strength. As shown in the middle panel of Fig 5B, the coh 2 estimates increased monotonically with E!E strength. Similarly to TE, the PLV 2 decreased monotonically with E!E strength. 
This striking example shows that in these configurations coh 2 behaved opposite to PLV 2 and information flow (TE). In Fig 5C-5F, we show the behavior of TE, PLV 2 , coh 2 as well as estimated PrAM strength as a function of both E!I and E!E connection strength (SNR of~10). The TE increased for both E!I and E!E, but more strongly for the former (Fig 5C). When both were relatively strong the information flow was low. The same pattern could be observed with PLV 2 (Fig 5D) indicating that phase synchronization to a large extent formed the basis for changes in information flow. By contrast, coh 2 ( Fig 5E) increased for both E!I and E!E, but more strongly in the latter case, hence showing a very different pattern compared to TE and PLV 2 . For all simulation conditions, we estimated PrAM (modulation strength in %, Fig 5F). PrAM clearly increased with increases in E!E strength, but not with increases in E!I strength. These results indicate that coh 2 estimates substantially reflected changes in PrAM (which were less predicative for changes in information flow). We quantified these observations in Fig 5G and 5H. We computed the squared correlation coefficient between changes in PrAM and changes in coh 2 (black line) as well as the between changes in PrAM and PLV 2 (red line) as a function of different SNR (Fig 5G). The results show that coh 2 strongly reflected changes in PrAM induced by E!E connections, with explained variance increasing with SNR. On the other hand PLV 2 estimates did not explain variance in PrAM, with explained variance approaching 0 with increasing SNR. In Fig 5H we computed the squared correlation coefficient between changes in information flow (TE) and changes in coh 2 (black line) as well as between changes in information flow (TE) and PLV 2 (red line), again as a function of SNR. Here the picture was opposite. PLV 2 reflected well the information flow, and even more so with increasing SNR. Coh 2 on the other hand, could hardly explain any changes in information flow with values tending towards 0 when SNR was increased. This shows that spectral coherence in certain parameter regimes poorly reflects changes in oscillatory phase-locking as well as the associated changes in information flow. Discussion In this study, we have demonstrated that spectral coherence exhibited serious problems over a large parameter range in quantifying phase-synchronization and its influence on information flow among oscillating neural networks. As an alternative approach for quantifying phase relations, we explored the behavior of a phase-locking value (PLV) method that is based on the reconstruction of the instantaneous phase. To derive phase information we used singular spectrum decompositions (SSD) followed by the Hilbert Transform. We will now discuss in more details the implications of the results. Can spectral coherence be used for quantifying neural phase synchronization? Spectral coherence or magnitude squared coherence [26] has been and still is a very useful statistical measure for quantifying frequency-resolved linear interdependencies between neural signals. Its robustness against noise, its mathematically well-analyzed statistical properties as well as its availability in neural data analysis software (Fieldtrip [41], Chronux [42]) makes spectral coherence estimation attractive to many neuroscientists. However, the validity of coherence estimation relies on the principal assumptions of weak-sense stationarity and linear interdependency, which need to be fulfilled by the data. 
In neuroscience, spectral coherence is largely used to determine the interdependency or 'phase consistency' of neural oscillations in different frequency bands (e.g., delta, theta, alpha, beta and gamma) between cortical or subcortical regions. The intriguing hypothesis investigated by many neuroscientists is that the amount of phase consistency among synchronizing neural oscillators might have important consequences for information processing and transmission in the brain [17,18,52,[80][81][82]. The fundamental reason why spectral coherence in a large number of conditions cannot robustly estimate the phase locking among synchronizing oscillations lies in the process of synchronization itself. According to the physical definition of synchronization, going back to Huygens' first description of interacting pendulums nearly 350 years ago [7,44], synchronization is a process in which 'oscillators mutually adapt their frequency'. In other words, oscillators synchronize towards a common frequency by influencing the instantaneous frequency (phase derivative) of each other. These mutual influences among oscillators are described by the phase response curve [45]. Hence, phase synchronization goes hand in hand with systematic frequency variations over time, which makes the process non-stationary. Therefore, it violates an essential assumption that is required for computing spectral coherence. The underlying problem of spectral coherence to estimate phase synchronization has its roots in the phase estimation of the signal, rather than in the quantification of the phase-relation distribution. In other words, the problem lies in what the phases represent that are derived from the Fourier cross-spectral density for a giving frequency. Therefore, the spectral coherence behavior reported in this paper can be generalized to any other spectral approach that uses a distribution of phases derived from the Fourier cross-spectral densities. As we have shown in detail, the synchronization process leads to phase-relation dependent frequency modulations (PrFM) as well as potentially to phase-relation amplitude modulations (PrAM). These systematic modulations lead to modulation sideband peaks in the Fourier periodogram. These modulation sidebands lead to high coherence values, because a modulation sideband induced by oscillator X in the Fourier periodogram of oscillator Y shares frequency and consistent phase-relations with oscillation X itself. These modulation sidebands are well known from the cross-frequency coupling CFC literature [59,60], where interdependencies between frequency bands (e.g. between theta 3-7Hz [10,63,83,84] or alpha~8-12Hz [85,86] and gamma~30-80Hz) are investigated. In particular, systematic power modulation of a higher frequency oscillation (e.g. gamma) as a function of the phase of a slower frequency oscillation (e.g. alpha) has been described [62,87]. This is similar to the PrAM effect described in this paper with the difference that PrAMs are induced by as a result of interactions between oscillations within the same frequency band. Further, the modulation is not a function of the phase of an oscillation, but a function of the phase-relation between two given oscillations. In the present study, we show that the modulation sidebands induced by both PrFM and PrAM can strongly affect the spectral coherence estimate, because the Fourier estimated phases not only represent the phase of the oscillation, but also the phase of the modulation sidebands (PrFM, PrAM). 
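The sideband effect itself is elementary to reproduce: systematically modulating the amplitude or the frequency of a carrier oscillation at a slower rate places additional spectral peaks at the carrier frequency plus and minus the modulation rate. A short self-contained check (with arbitrary frequencies and modulation depths) is shown below.

```python
import numpy as np

# Systematically modulating the amplitude or frequency of a 40 Hz carrier at 6 Hz
# puts extra spectral peaks at 34 and 46 Hz. Parameters are arbitrary illustrations.
fs = 1000
t = np.arange(0, 10, 1.0 / fs)
carrier, f_mod = 40.0, 6.0

am = (1 + 0.2 * np.cos(2 * np.pi * f_mod * t)) * np.cos(2 * np.pi * carrier * t)
fm = np.cos(2 * np.pi * carrier * t + 0.5 * np.sin(2 * np.pi * f_mod * t))

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
for name, sig in (("AM", am), ("FM", fm)):
    power = np.abs(np.fft.rfft(sig)) ** 2
    peaks = freqs[np.argsort(power)[-3:]]       # the three largest spectral peaks
    print(name, sorted(np.round(peaks, 1)))     # approx. [34.0, 40.0, 46.0]
```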
Most relevant to experimental neuroscience, when oscillatory data were simulated using not only extrinsic (measurement) noise but also intrinsic (phase) noise, which lead to realistic, broad power spectra, the modulation sidebands still affected the coherence spectra. As a consequence, for each frequency of the Fourier cross-spectral density, the phase might represent the phase of the oscillation and/or the phase of the modulation sideband. In other words, the phase-relation distribution in this case did not reflect the true underlying phaserelation distribution between two given synchronizing oscillations, but a distribution modified by the presence of the modulation sidebands. We have demonstrated this phenomenon in abstract phase-oscillator models (which can be applied to any frequency band) as well as in detailed spiking neural network models (specific for gamma-band oscillations). In the latter neural network simulations, we showed that spectral coherence did not robustly reflect the synchronization-dependent information flow between two oscillating networks when input strength or connectivity strength was manipulated. Particularly disastrous for the idea that coherence is related to information transmission in the brain were the effects of changes in the strength of cross-network E!E synaptic connections, which modulated the magnitude of PrAM. This E!E connectivity manipulation firstly confirmed that spectral coherence is not a pure phase-locking estimate, but instead strongly reflects amplitude fluctuations. This held true even for the modified coherence formula that has been shown to be robust against amplitude correlation [41,53]. Secondly, we found that spectral coherence was weakly related to information flow. We would like to emphasize that the effects of PrAM should not be confused with the known effects of amplitude correlation on coherence [34,54], because PrAM still affects coherence if amplitude correlation is 0, as was the case in our simulations. This is because amplitude fluctuations in the form of PrAM only need to be present in one of two oscillators. The mixture of PrFM and PrAM with the true phase-locking between oscillators makes coherence values difficult to interpret, as shown by the weak relation between coherence and information transmission for certain conditions. Hence, it could be argued that spectral coherence often will not be the preferred method to test particular theories on phase relations and information transmission during neural oscillatory synchronization in brain networks. Generality of simulation and analysis results The question can be asked whether the limitations revealed by our work for the use of spectral coherence for estimating phase locking is due to the use of the particular phase-oscillator model for generating simulated oscillatory data. However, the phase-oscillator model used here is a very general model (theory of weakly coupled oscillators, TWCO) and that the underlying synchronization theory [7] is widely accepted. It describes the core concepts of the phase response curve (PRC) and the Arnold tongue. These concepts are widely used for describing interactions of oscillations of various types including (noisy) limit-cycle as well as chaotic oscillations [7,50]. Moreover, synchronization theory is used in many scientific fields, for example, to describe the synchronization of electrochemical oscillations [88], of molecular circadian rhythms [89] among individual fireflies [90], and of climate oscillations [91]. 
In (theoretical) neuroscience these concepts are also well established [37,45,73,76] and have been used to understand synchronization properties of single neurons [45], the emergence of neural network oscillations [40] and their oscillatory interactions [52] and traveling wave properties [73]. Since neural networks should be seen as dynamical oscillating systems, similar in their basic characteristics to other dynamical systems in nature, we expect that the synchronization theory is a plausible and relevant framework for understanding neural oscillatory interactions. By the same token we expect that the results of the present study has general relevance for any type of neural oscillations in the brain, and perhaps also for the study of oscillations outside neuroscience. However, to further strengthen the methodological investigation of coherence (and the PLV approach), we also used a neural network model that generated gamma oscillations according to known neural principles [40]. This neural network represents a biophysical more plausible implementation of the principles shown with the phase-oscillator model. When using the simulated data from this neural network, we could confirm the results from the phase-oscillator model [40]. Another question that can be asked relates to potentials limitation of our work due to restricting our analysis to continuous network field signals. The effects of the various manipulations in our simulated data on Spike-Field coherence (SFC), which quantifies the locking of spike probability to a particular field oscillation phase, were not investigated. However, it can be expected that estimating the locking of spikes from spike trains in network X to the field signal from network Y will lead to similar problems for SFC as we have described for spectral coherence. This is because also here, the phase estimate from the field signal will be affected by modulation sidebands. These distorted phases will also be problematic for SFC. Further, phaserelation dependent amplitude fluctuations might also affect SFC estimates. Note that in neuroscience, other methods are used that assume weak-sense stationarity, such as frequencyresolved granger directionality analysis [92,93]. We expect that also these techniques might show problems due to the non-stationary nature of oscillatory synchronization. The partial synchronization state: Realistic for neuronal oscillatory data? Using simulated data from both the phase oscillator model as well as neural network models demonstrated severe limitations of spectral coherence to accurately capture phase locking and information transmission across a broad range of conditions. Nevertheless, the question can be asked whether the non-stationary nature of the synchronization process that follows from it is biologically plausible and likely. We have indeed stressed that the synchronization among neural networks (oscillators) is inherently non-stationary, and leads in a broad range of conditions to partial (intermittent) synchronization. In this regime, oscillators do have frequency differences (do phase precess), yet they still have preferred phase-relations that are reflected in non-uniform phase-relation distributions. In a complete or perfect phase-locking state, the phase-relation is constant (no phase precession) and the synchronized oscillators do not have a frequency mismatch. Why would the partially synchronized regime (being problematic for spectral coherence) be the most likely regime for neural oscillatory data? 
Even in the hypothetical case of noiseless empirical data, it would be unlikely (due to detuning) that synchronization would be perfect, and imperfect synchronization by definition leads to a complex state of partial synchronization, characterized by changes in phase relations and frequency among oscillators over time. In real empirical data, noise is inescapably present, and noise will further degrade the imperfect phase locking between oscillators (or neural networks approximated by oscillators) [7,52]. With noise we mean intrinsic (phase) noise that changes the instantaneous frequency of the oscillations. 'Noise' can for example be due to the inherent instability of the oscillation (e.g. low synchrony among neurons generating the network oscillation). Hence, given the noisiness and complexity of cortical networks we believe that the partially synchrony regime can be expected to be the dominant regime. These theoretical considerations are supported by experimental data. A few studies have directly investigated and shown partial (intermittent) synchronization in cortical activity [32,94]. Moreover, there is indirect evidence from studies showing that for cortical gamma oscillations the frequency and amplitude evolution over time is noisy and complex [30], and changes as a function of cortical state [71,85]. Neuronal oscillations in general behave rather like very noisy limit-cycle oscillations or chaotic oscillations, and clearly not like noiseless oscillators [95]. Furthermore, several experimental papers have indicated phase-locking among neural oscillations of slightly different frequencies [8,12,13] indicating that detuning among interacting neural oscillation does exist. Another study on cortical gamma oscillation showed synchronization and phase precession at the same time [17] indicating a state of partial synchronization. Therefore, theoretical as well experimental indications suggest that partial synchrony should be expected as the most likely regime. Testing the underlying assumptions of spectral coherence The concept of systematic frequency variation and frequency differences among oscillations within a 'frequency band' has only recently attracted scientific interest [23,31,78,79,96]. This might explain why the problem of frequency variation of neural oscillatory signals for stationary methods like spectral coherence is still not commonly recognized. It is striking that till today most research studies using spectral coherence are published without including a test of the assumption of weak-sense stationarity (nor linearity), despite that experimental and theoretical work suggests that neuronal oscillations are likely to be non-stationary. Whether one assumes a phase synchronization model or not underlying oscillatory interactions, it is important to validly test for weak-sense stationarity (or at last have good reasons to assume stationarity) before applying spectral coherence on neural signals. A reason this is rarely done might be that appropriate tests of weak-sense stationarity are not well-known in the neuroscientist community and not always available in popular software packages. The weak-sense stationarity assumption states that the auto-correlation function of a process should not systematically change over time. If for example an oscillatory signals has periods of higher frequency followed by periods of lower frequency, the signal violates the weak-sense stationarity assumption. 
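A crude way to probe this, sketched below, is to compare the across-segment variability of spectral power in the data against phase-randomized surrogates, which are stationary by construction. This is only an illustration of the idea and not the formal procedures of the references cited next; the segment length and surrogate count are arbitrary choices.

```python
import numpy as np

# Crude, hypothetical stationarity check: non-stationary frequency content inflates
# the across-segment variability of spectral power relative to phase-randomized
# surrogates (which preserve the power spectrum but are stationary by construction).
def spectral_variability(sig, n_seg=20):
    segs = np.array_split(sig, n_seg)
    n = min(len(s) for s in segs)
    spectra = np.array([np.abs(np.fft.rfft(s[:n])) ** 2 for s in segs])
    return np.mean(np.var(spectra, axis=0) / (np.mean(spectra, axis=0) ** 2 + 1e-12))

def stationarity_check(sig, n_surrogates=200, seed=0):
    rng = np.random.default_rng(seed)
    observed = spectral_variability(sig)
    null = []
    for _ in range(n_surrogates):
        spec = np.fft.rfft(sig)
        phases = rng.uniform(0, 2 * np.pi, spec.size)
        phases[0] = 0.0                               # keep the DC component real
        surrogate = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=sig.size)
        null.append(spectral_variability(surrogate))
    # Small fractions suggest more variability than expected under stationarity.
    frac = (np.sum(np.array(null) >= observed) + 1) / (n_surrogates + 1)
    return observed, frac
```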
Methods have been proposed [27,97] that statistically test the null hypothesis of weak-sense stationarity by quantifying the variability in time-frequency representation of a signal and by testing whether it deviates from expected variation of a stationary random process. The testing of stationarity in empirical data is necessary assuming that the analysis is directed at single trial data. It is only in single trial data that the detailed non-stationary interactions can be appropriately quantified. Yet, it often is common practice to justify the use of spectral coherence based on the stability of the trial-averaged TFR. For example, a stimulus onset trial-averaged TFR [98] shows often transients shortly after stimulus onset and then it looks rather stable. Therefore, the transients are typically excluded from analysis, after which spectral coherence is applied on the 'sustained part' of the (trial-averaged) TFR. Importantly, trial-averaging eliminates all frequency or power variations that are not strictly locked to the stimulusonset (or the event to which the data are aligned). An example of variation lost by trial averaging is the presence of effects induced by saccadic or microsaccadic eye movements during presentation of a stimulus. It is known that saccadic eye-movements strongly affect the oscillatory properties [29,71,99,100] in the 'sustained part' of the trial-averaged TFR of signals in visual cortex, and that these saccadic effects can be useful indicators of perceptual and cognitive states [101]. However, these TFR variations are not locked to stimulus onset, but to saccades which occur at various time points within a trial, and therefore these interesting single-trial TFR variations are removed by classical stimulus-onset triggered trial averaging. Hence, trial-averaged TFR cannot be used to assess whether signals in certain periods are stationary or not. Another condition that needs to be satisfied for spectral coherence methods to be applied in a valid manner is linearity. In the present study, the phase coupling in the phase-oscillator model as well as in the neural network model was linear. This was evident in phase-phase plots of pairs of oscillators, which exhibited clear straight lines indicating that the phases were indeed linearly related. Hence, in our simulated data the linearity assumption for the application of spectral methods was satisfied. However, similarly to the weak-sense stationarity assumption, the nature of phase coupling should be investigated in empirical data before using linear methods such as spectral coherence or the phase-locking value. Moreover, methods derived from information theory are suitable for linear as well non-linear interactions [102] and should be used by default if one does not know whether phase relationships are linear. Alternative phase locking estimation approaches The problem of non-stationarity in neuronal signals has been previously reported [33][34][35]103,104] and several alternative methods, based on instantaneous phase, have been proposed and successfully used. Below, we will give a short overview of possible strategies to estimate instantaneous phase in experimental multi-component signals. It is noteworthy that the concepts of 'instantaneous phase' and particularly 'instantaneous frequency' could be regarded with skepticism, because of the time-frequency uncertainty principle [105], and because instantaneous phase or frequency cannot be easily defined for signals like those observed in the brain. 
It is correct that in a TFR, higher time resolution leads to lower frequency resolution, and vice versa. At the same time, the higher the time resolution, the more the estimated phase will approximate the instantaneous phase, at the cost of low frequency resolution. Hence it is impossible to obtain an arbitrarily accurate estimate of both phase and frequency at the same time. However, the instantaneous frequency as a mathematical concept is nothing other than the derivative of the instantaneous phase, and hence, if the instantaneous phase is known, the instantaneous frequency can be derived from it. Nevertheless, instantaneous phase or frequency can only be meaningfully defined if the signal is mono-componential (e.g., not mixed with two or more oscillations, trends or transients). That means that the signal must be well described by a single peak frequency at each point in time. Because neuronal signals (like the local field potential) usually consist of many components, a decomposition technique of some sort must precede the estimation of instantaneous phase, such that phase estimates can be linked to a well-defined frequency component. Further, instantaneous phase estimations have their own challenges. For example, they can be affected by amplitude variation properties of the signal [106] and are often more sensitive to noise, leading to phase slips [55]. In practice, TFR techniques approximating instantaneous phase might give appropriate estimates in most cases [35]. There are two basic analysis strategies from which the instantaneous phase can be approximated, one based on complex TFRs (derived from wavelet or short-time Fourier transforms) and the other based on the Hilbert Transform (HT) preceded by a decomposition of the signal. The first strategy convolves the signal with, e.g., complex wavelets [35] to estimate the complex wavelet TFR. From each complex value in the time-frequency plane the phase can be estimated. The advantages of this approach are that it is relatively robust against noise and that wavelets are well understood. Disadvantages are the lower time resolution of the phase trace and that the selection of points in the TFR for reconstructing the phase trace is not straightforward. Furthermore, the decomposition is linear [36]. The second strategy uses the Hilbert Transform (HT) [105], which converts a real-valued signal into a complex analytic signal from which the instantaneous phase can be extracted. The instantaneous phase is only well defined if the signal is mono-componential [57,107,108]. Oscillatory brain data are multi-componential and therefore these signals need to be decomposed. One approach is filtering the signal in a predefined frequency range in which most of the oscillatory power is expected to be. This approach is often used in neuroscience, for example for estimating cross-frequency coupling (CFC) [60,62,85,102]. An advantage of filtering is computational efficiency, with the disadvantage of fixed frequency borders, defined by the researcher based on prior knowledge. Another approach is based on empirical mode decomposition (EMD) [57], which decomposes the signal in a data-driven manner into intrinsic mode functions (IMFs), which are well suited for applying the HT. Similarly, singular spectrum decomposition (SSD) decomposes the signal into oscillatory narrow-band components on which the HT can be applied [36,56]. An advantage of SSD is that it has good de-mixing properties [36], assuring that the resulting signals are narrow-banded.
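Whichever decomposition is used, the Hilbert-transform step itself, and the phase-to-frequency relation noted above, are straightforward to verify numerically; the following quick self-check uses a synthetic frequency-modulated tone with arbitrary parameters.

```python
import numpy as np
from scipy.signal import hilbert

# Self-check of the phase-to-frequency relation on a synthetic mono-component signal:
# a 40 Hz tone whose frequency is modulated at 2 Hz (arbitrary parameters).
fs = 1000
t = np.arange(0, 5, 1.0 / fs)
sig = np.cos(2 * np.pi * 40 * t + 0.8 * np.sin(2 * np.pi * 2 * t))

phase = np.unwrap(np.angle(hilbert(sig)))                # instantaneous phase
inst_freq = np.gradient(phase, 1.0 / fs) / (2 * np.pi)   # its derivative, in Hz

core = inst_freq[100:-100]                               # trim Hilbert edge artifacts
print(round(core.min(), 1), round(core.max(), 1))        # roughly 38.4 and 41.6 Hz
```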
Moreover, the SSD algorithm has been optimized to be useable for neurophysiological datasets [56]. In this study, we used the SSD approach in combination with HT to reconstruct the instantaneous phases. Our results showed that PLV estimates based on the instantaneous phase reconstructed by SSD-HT gave more robust estimates of the true underlying phase-locking values and reflected better variations in synchronization-dependent information flow between neural networks compared to spectral coherence. The estimates were robust against amplitude fluctuation in the form of PrAM, although small effects of PrAM could be observed in the low SNR regimes. Although PLV could severely underestimate phase locking for lower SNR, increasing SNR allowed the PLV to approach the expected phase locking value. Moreover, in terms of explained variance of information flow changes between networks, the PLV estimates were superior to coherence for any SNR investigated. We expect similarly superior results for all approaches that are based on reconstruction of the instantaneous phase (based on the use of wavelets or the Hilbert transform preceded by some decomposition technique). Finally, although there are conditions in which spectral coherence might be appropriate, methods that combine appropriate signal decomposition with instantaneous phase reconstruction permit a more detailed and accurate look on time-dependent changes in synchronization properties of neural signals. This provides definite advantages when trying to determine neural network mechanisms underlying perception, cognition, and behavior. Supporting Information S1 File. Supplementary Information and Simulation Codes. The supplementary file contains supplementary methods, including mathematical derivation of phase-locking strength between two-coupled phase-oscillators, a more detailed description of the modulation sidebands in the FFT induced by systematic frequency or amplitude fluctuations and more information about the singular spectrum decomposition (SSD) (Methods A). The file contains two figures. A schematic description of the different phase-locking approaches used in this study ( Figure A). Analysis of unidirectionally interacting gamma-generating neural networks receiving different detuning levels ( Figure B). The file contains also two Matlab simulation codes. Phase-oscillator simulation (Simulation Code A) and simulation of gamma-generating neural network (Simulation Code B). (DOCX)
Engineering monocyte/macrophage-specific glucocerebrosidase expression in human hematopoietic stem cells using genome editing

Gaucher disease is a lysosomal storage disorder caused by insufficient glucocerebrosidase activity. Its hallmark manifestations are attributed to infiltration and inflammation by macrophages. Current therapies for Gaucher disease include life-long intravenous administration of recombinant glucocerebrosidase and orally available glucosylceramide synthase inhibitors. An alternative approach is to engineer the patient’s own hematopoietic system to restore glucocerebrosidase expression, thereby replacing the affected cells, and constituting a potential one-time therapy for this disease. Here, we report an efficient CRISPR/Cas9-based approach that targets glucocerebrosidase expression cassettes with a monocyte/macrophage-specific element to the CCR5 safe-harbor locus in human hematopoietic stem and progenitor cells. The targeted cells generate glucocerebrosidase-expressing macrophages and maintain long-term repopulation and multi-lineage differentiation potential with serial transplantation. The combination of a safe-harbor and a lineage-specific promoter establishes a universal correction strategy and circumvents potential toxicity of ectopic glucocerebrosidase in the stem cells. Furthermore, it constitutes an adaptable platform for other lysosomal enzyme deficiencies.

Gaucher disease (GD) is a genetic disorder caused by mutations in the GBA gene that result in glucocerebrosidase (GCase) deficiency and the accumulation of glycolipids in cell types with a high glycolipid-degradation burden, especially macrophages 1 . GD encompasses a spectrum of clinical findings from a perinatal-lethal form to mildly symptomatic forms. Three major clinical types delineated by the presence (types 2 and 3) or absence (type 1) of central nervous system involvement are commonly used for determining prognosis and management 2 . In western countries, GD type 1 (GD1) is the most common phenotype (~94% of patients) and typically manifests with hepatosplenomegaly, bone disease, cytopenias, and variably with pulmonary disease, as well as elevated risk for malignancies and Parkinson's disease 3,4 . The pathophysiology in GD1 is thought to be driven by glucocerebroside-engorged macrophages that infiltrate the bone marrow, spleen and liver, and promote chronic inflammation, as well as low-grade activation of coagulation and complement cascades [5][6][7] . Current therapies for GD1 include orally available small-molecule inhibitors of glucosylceramide synthase (substrate reduction therapy or SRT) and glucocerebrosidase enzyme replacement (ERT) targeted to macrophages via mannose receptor-mediated uptake 8 . While ameliorative for visceral and skeletal disease manifestations, these therapies are chronically administered, life-long, and costly. Allogeneic hematopoietic stem-cell transplantation (allo-HSCT) has been applied successfully as a one-time treatment for GD1 9 and its therapeutic effect is achieved by supplying graft-derived GCase-competent macrophages. However, because of the significant transplant-related morbidity and mortality of allo-HSCT, ERT and SRT are standard of care for patients with GD1 10,11 . The effectiveness of macrophage-targeted ERT and allo-HSCT for treating GD1 suggests that restoration of GCase function in macrophages alone is sufficient for phenotypic correction in GD1.
Consequently, restoring GCase activity in the patient's own hematopoietic system to establish an autologous approach that averts many of the risks of allo-HSCT could be a safer and potentially curative therapy for this disease. Furthermore, unlike ERT and the best tolerated SRT, it could provide enzyme reconstitution in the brain that could benefit neuronopathic forms of the disease 9 . For these reasons, non-targeted gene addition into human hematopoietic stem and progenitor cells (HSPCs) have been explored, first using retroviruses [12][13][14][15] and later lentiviral vectors, and have yielded promising results in murine GD models [16][17][18] . Nevertheless, concerns remain about the potential for insertional mutagenesis and malignant transformation in viral gene transfer 19,20 stressing the need for the development of targeted gene addition strategies to generate genetically modified HSPCs for human therapy. Modern genome-editing tools can achieve genetic modifications and integrations with single-base pair precision 21 . A highly engineerable platform derived from the bacterial CRISPR/ Cas9 system has been optimized for gene editing in HSPCs [22][23][24] . This platform consists of two main components: (1) a sgRNA/ Cas9 ribonucleoprotein complex (RNP) functioning as an RNAguided endonuclease, and (2) a designed homologous repair template delivered using adeno-associated viral vector serotype six (AAV6). The RNP comprises a 100-bp, chemically modified, synthetically generated, single-guide RNA (sgRNA) complexed with Streptococcus pyogenes Cas9-endonuclase and delivered into the cells by electroporation 25 . In the nucleus, the RNP binds to the target sequence and Cas9 catalyzes a double-stranded break, stimulating one of two repair pathways: (1) non-homologous end joining (NHEJ), in which broken ends are directly ligated, often producing small insertions and deletions (indels); and (2) homology-directed repair (HDR), in which recombination with the supplied homologous repair template is used for precise sequence changes 21 . In human HSPCs, the AAV6 genome is an efficient delivery method for the homologous repair templates containing an experimenter-defined genetic change flanked by homology arms centered at the break site 22 . Accordingly, the HDR pathway can be leveraged not only to achieve single-base pair changes, but also to integrate entire expression cassettes into a non-essential safe harbor locus, thus enabling stable expression of tailorable combinations of regulatory regions, transgenes, and selectable markers 24,26 . One potential safe harbor locus is CCR5. This gene encodes the major co-receptor for HIV-1, and is considered a non-essential locus because of the high prevalence of healthy homozygous CCR5 Δ32 individuals in European populations (>10%) 27 and the observation that homozygous carriers of the Δ32 mutation are resistant to HIV-1 infection 28 . Here, we describe our generation and characterization of GCasetargeted human HSPCs, a crucial step towards establishing autologous transplantation of genome-edited cells for GD. We use the RNP/AAV6 platform to achieve efficient integration of GCase cassettes into the CCR5 safe harbor locus. By leveraging a lineagespecific promoter highly expressed in the monocyte/macrophage lineage, we achieve GCase expression in the affected cell lineages while also minimizing ectopic expression in hematopoietic stem and progenitor compartments. 
GCase-targeted HSPCs demonstrate the capacity for long-term engraftment and multi-lineage differentiation, including the generation of functional macrophages with supraphysiologic GCase expression in vivo. Results Efficient targeting of GCase to the CCR5 locus in human HSPCs. We used the CRISPR/Cas9 and AAV system to target glucocerebrosidase (GCase) expression cassettes to the human CCR5 safe harbor locus (Fig. 1a). The sgRNA targeting the third exon of CCR5 was previously validated for high on-target activity in primary human HSPCs 24,29 and has excellent specificity as prior studies failed to reveal any detectable off-target activity using high-fidelity Cas9 24 . AAV donor repair templates were generated to drive GCase expression by two different promoters: (1) the Spleen Focus-Forming Virus (SFFV) promoter, which drives constitutive supraphysiologic expression; and (2) the CD68S promoter, a shortened derivative of the endogenous human CD68 promoter with expression restricted to the monocyte/macrophage lineage 30,31 (Fig. 1b). This lineage-specific promoter was chosen to minimize potential complications of GCase overexpression in the stem-cell compartment. The Citrinecontaining vectors were designated SFFV-GCase-P2A-Citrine and CD68S-GCase-P2A-Citrine. A third AAV, CD68S-GCase, lacking the reporter protein, was developed as a more clinically relevant vector for in vivo studies (Fig. 1a). The targeting efficiencies achievable for each vector were determined by the percent of Citrine-positive (Citrine+) cells and by the percent of CCR5 alleles with on-target cassette integrations using molecular analysis (giving the cell and allele targeting frequencies, respectively). In the presence of both AAV and RNP, the SFFV-driven cassette resulted in approximately 51.5 ± 9.1% (mean ± SD) Citrine+ HSPCs 48-h post-targeting, while AAV alone produced 5.9 ± 4.2% dim Citrine+ cells, likely reflecting episomal expression (Fig. 1c, d). The fraction of CCR5 alleles with on-target cassette integration in the unselected population was 29 ± 9% as measured by droplet digital PCR (ddPCR) (Fig. 1e and Supplementary Fig. 1a). To verify targeting in Citrine+ cells, these cells were sorted by FACS and the fraction of modified alleles measured ( Fig. 1e and Supplementary Fig. 1a). The allelic modification frequency of HSPCs treated with the SFFV-GCase-P2A-Citrine vector that were Citrine+ (SFFV-GCase-Citrine+) was 65.9 ± 4.9%, corresponding to 69% and 31% mono-allelically and bi-allelically targeted cells, respectively. Genotyping of single-cell-derived colonies corroborated that 98% percent of the Citrine+ HSPCs were targeted and, consistent with the ddPCR data, showed 67% mono-allelic and 33% bi-allelic targeting ( Supplementary Fig. 1b-d). We predicted that because the CD68S promoter should be lineage-specific, Citrine would not be highly expressed in stem and non-myeloid biased progenitor cells and therefore, Citrine expression in HSPCs would not reflect the true editing efficiency of the CD68S-P2A-GCase-Citrine vector (Fig. 1b). Consistent with this, we found that at 48-h post-modification, Citrine expression from HSPCs treated with the CD68S-GCase-P2A-Citrine AAV and RNP was dim (mean fluorescence intensity (MFI) was 24-fold lower than for the SFFV-GCase-Citrine+ cells) and the mean percentage of CD68S-GCase-Citrine+ HSPCs was 27.7 ± 8.5%, significantly lower than for the SSFV-driven construct despite having comparable CCR5 allele targeting frequencies (32.3 ± 9.6%) (Fig. 1c-e). 
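The conversion between the allele-level and cell-level targeting frequencies quoted above (for example, 65.9% targeted alleles corresponding to roughly two-thirds mono-allelic and one-third bi-allelic cells) is simple bookkeeping once one assumes that essentially every Citrine+ cell carries at least one on-target integration, an assumption consistent with the colony genotyping; a short sketch of the arithmetic follows.

```python
# Bookkeeping between allele-level and cell-level targeting frequencies, assuming
# every Citrine+ cell carries at least one on-target integration:
#   allele_freq = (mono + 2 * bi) / 2, with mono + bi = 1 among Citrine+ cells.
def genotype_split(allele_freq):
    bi = 2 * allele_freq - 1      # fraction of Citrine+ cells with both alleles targeted
    mono = 1 - bi                 # fraction with a single targeted allele
    return mono, bi

print([round(v, 3) for v in genotype_split(0.659)])   # [0.682, 0.318], close to ~69%/31%
```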
Most importantly, the allele targeting frequency within the CD68S-GCase-Citrine-negative population (CD68S-GCase-Citrine-) ranged from 11.8 to 36.4%, confirming the presence of targeted cells lacking Citrine expression (Fig. 1e). We reasoned that the subset of Fig. 1d). The allele targeting frequency of the CD68S-GCase vector lacking Citrine was 35.8 ± 7.9% in unselected cell populations, corresponding to ~52% of cells having targeted integrations (Fig. 1e). Generation of human GCase-macrophages from edited HSPCs. One mechanism by which HSCT is therapeutic in Gaucher disease is through the generation of GCase-expressing macrophages. To confirm the development of macrophages from GCase-targeted HSPCs, we first differentiated control human CD34+ HSPCs using a cytokine cocktail including M-CSF, GM-CSF, SCF, IL-3, FLT3 ligand, and IL-6 32 . HSPCs differentiated in this manner exhibited characteristic ameboid morphology as well as expression of the monocyte/macrophage lineage markers CD14 and CD11b, with concurrent loss of the HSPC marker CD34 (Fig. 2a, b and Supplementary Fig. 2a). Following the same differentiation protocol, human HSPCs targeted with the SFFV-GCase-P2A-Citrine and CD68S-GCase-P2A-Citrine constructs produced macrophages that exhibited Citrine expression, characteristic morphology, and normal phagocytosis of pHrodo-labeled E. coli (Fig. 2c). CD14 and CD11b marker expression in mock-treated, Citrine+, and Citrine- populations from these two constructs was comparable to that of unmodified cells in all conditions except CD68S-GCase-Citrine+ cells, which had higher expression in both the standard HSPC and macrophage differentiation conditions (Fig. 2d, e and Supplementary Fig. 2b). These results indicate that GCase-targeted HSPCs can produce functional macrophages in vitro and suggest that CD68S-GCase-Citrine+ HSPCs are already primed for differentiation along this lineage. CCR5 is absent from HSPCs but becomes expressed with monocyte/macrophage differentiation. To examine the effect of our genome editing process on CCR5 expression, we targeted human HSPCs, differentiated them, and quantified CCR5 protein by FACS (Supplementary Fig. 3). In the RNP-alone condition, the efficiency of double-strand DNA break generation by our CCR5 RNP complex was estimated by measuring the frequency of insertions/deletions (indels) at the predicted cut site. The mean indel frequencies in the undifferentiated and differentiated populations were 96.8 ± 1.2% and 96.4 ± 1.6%, respectively, resulting in almost complete knockdown of CCR5 protein expression (Supplementary Fig. 3a). In the presence of both RNP and AAV, cells that successfully underwent HDR (Citrine+) lacked CCR5 expression, consistent with disruption of both CCR5 alleles by either bi-allelic integration of the cassette or mono-allelic integration with indel formation in the second allele (Supplementary Fig. 3b). In the presence of AAV, CCR5+ cells can be found in the population that did not undergo HDR (~20%), suggesting that AAV transduction decreases indel generation or exerts a small negative selection in cells containing both AAV and RNP. CD68S confines expression to the monocyte/macrophage lineage. The CD68S cassettes were designed to selectively express GCase in the monocyte/macrophage lineage in order to prevent potential toxicity to stem cells from ectopic GCase overexpression.
To validate the lineage specificity of the CD68S promoter, CD68S-GCase-Citrine+ and SFFV-GCase-Citrine+ HSPCs were cultured with growth factors that promoted either HSPC maintenance (HSPC) or macrophage differentiation (MΦ), and Citrine expression was monitored for 20 days. As expected for a constitutive promoter, the fraction of SFFV-GCase-Citrine+ cells remained stable over time in both HSPC and MΦ cultures (>95%). An average of 9.2% and 16.3% of SFFV-GCase-Citrine- cells became positive in the HSPC and MΦ cultures, respectively, which was consistent with the presence of targeted CCR5 alleles in this population based on ddPCR (Fig. 3a, b). When cultured long-term, the MFI of SFFV-GCase-Citrine+ cells decreased, but the drop in fluorescence intensity was seen exclusively in a subset of cells with very high Citrine expression (Supplementary Fig. 4a, b). Notably, the allele modification frequency did not differ throughout the culturing process, suggesting that the change in Citrine expression was due to regulation of transcription from the SFFV promoter or of translation, but not to selection against the modified cells (Supplementary Fig. 4c). In contrast, the percentage of CD68S-GCase-Citrine+ cells decreased in the HSPC cultures but was maintained in the MΦ cultures (Fig. 3a, b). Moreover, there was a substantial increase (~30-fold) in Citrine MFI from CD68S-GCase-Citrine+ cells in the MΦ compared to the HSPC culture over the 21-day differentiation (Fig. 3c). As Citrine is only a proxy for GCase cassette expression, we also examined GCase protein expression directly by quantifying its enzymatic activity in HSPC and MΦ culture conditions. In HSPC cultures, SFFV-GCase-Citrine+ and CD68S-GCase-Citrine+ cells showed ~7.7- and 1.3-fold more GCase activity, respectively, compared to unmodified cells (mock-treated). The CD68S-GCase-Citrine- population showed the same activity as unmodified cells (1.0-fold), supporting the idea that there is no leaky GCase expression from the CD68S promoter in more primitive and non-myeloid HSPCs (Fig. 3d). Macrophages derived from CD68S-GCase-Citrine+ and SFFV-GCase-Citrine+ HSPCs expressed ~2-fold higher GCase than macrophages derived from mock-treated cells (Fig. 3e). In all but the SFFV-GCase-Citrine+ population, macrophage differentiation resulted in higher levels of GCase expression. This explains the decrease, upon differentiation, in the fold expression of cells targeted with the SFFV-driven cassette (from 7.7 to 2.3): it reflects the marked increase in endogenous GCase (~4-fold) in the mock cells without a proportional change in exogenous GCase expression from the SFFV expression cassette (Supplementary Fig. 4d). To examine the possibility that differential expression of the GCase cassette was due to changes in the targeted cell populations, we measured the allele targeting frequencies at the time of sorting and post-culture in the HSPC and MΦ cultures using ddPCR (Fig. 3f). We found that the percentage of alleles with on-target cassette integration within Citrine+ and Citrine- populations targeted with both cassettes did not differ between culturing conditions, thus confirming that the changes in expression were attributable to the lineage-specific activity of the CD68S promoter. GCase-targeted HSPCs sustain long-term hematopoiesis. To examine the potential of GCase-HSPCs to become a one-time therapy for GD1, we tested their long-term repopulation capacity.
We first assessed the colony-forming ability of the targeted HSPCs in vitro using the colony-forming unit (CFU) assay. We sorted mock, Citrine+, and Citrine- cells from the SFFV- and CD68S-targeted populations as single cells in 96-well plates 48-h post-targeting and assessed their phenotype 14 days later. Notably, SFFV-GCase-Citrine+ HSPCs produced the fewest colonies of all conditions and exhibited the highest variability in the distribution of colony phenotypes formed, suggesting that supraphysiologic GCase expression or other aspects of SFFV promoter activity may have a toxic effect on HSPCs (Fig. 4a). To test in vivo engraftment potential, GCase-targeted HSPCs were serially transplanted into NOD-scid IL2Rgamma (NSG) mice. Cell doses varied from 2.5 × 10^5 to 2 × 10^6 HSPCs and were dependent on the CD34+ cell yield per human donor. We focused our long-term engraftment experiments on the CD68S-GCase-P2A-Citrine and CD68S-GCase vectors because of the potential detrimental effect of the SFFV promoter, its observed drop in expression, and its barriers to clinical translation. Targeted cells were transplanted without selection intrafemorally or intrahepatically into sublethally irradiated NSG mice. Primary human engraftment was quantified after 16 weeks as the percentage of cells expressing human CD45 within the total hematopoietic cell population. Transplantation of GCase-targeted HSPCs resulted in substantial human cell chimerism. In the bone marrow, the median human cell chimerism was 23.2% (min: 0.17%; max: 91.5%) and 50.6% (0.53%; 91.7%) for CD68S-GCase-targeted and CD68S-GCase-P2A-Citrine-targeted cells, respectively (Fig. 4c). Similar engraftment numbers were seen in the spleen: 20.4% (0.14%; 79.3%) for the cassette lacking Citrine and 35.8% (0.38%; 89.6%) for the cassette containing Citrine (Fig. 4d). To determine the proportion of engrafted cells derived from targeted HSPCs, the targeted allele frequency of the engrafted hCD45+ population in the bone marrow was measured using ddPCR in cell preparations that included mouse and human CD45+ cells, as the ddPCR assay recognizes only human alleles (Fig. 4e and Supplementary Fig. 6a). The median allele targeting frequencies of the engrafted cell populations were 4.4% (min: 0.23%; max: 51.0%) and 4.2% (0.73%; 34.6%) for the CD68S-GCase and CD68S-GCase-P2A-Citrine cassettes, respectively; however, the allele targeting frequency varied widely across human cell donors and mice. The allele targeting frequency of the engrafted cells tended to be lower than that of the transplanted HSPCs, with an observed drop ranging from 1.9- to 12.5-fold (Supplementary Fig. 6b). As transplanted cell doses varied among the mice receiving cells targeted with the Citrine-containing construct, the mice were color-coded and tracked for engraftment and for targeting efficiency in engrafted cells. This suggested a correlation between higher cell dose and higher engraftment of modified cells, a finding that is not surprising as there are likely more targeted long-term stem cells available for engraftment. Serial engraftment studies are the gold standard for determining the self-renewal capacity of hematopoietic stem cells. Secondary transplants were performed by isolating human CD34+ cells from the bone marrow of eight mice at 16 weeks (seven transplanted with CD68S-GCase- and one with CD68S-GCase-P2A-Citrine-targeted cells) and transplanting them (without pooling) into eight NSG recipient mice.
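The targeted-allele fractions reported throughout these experiments come from ddPCR. As a rough illustration of how such a fraction could be derived from raw droplet counts, the sketch below applies the standard Poisson correction to hypothetical positive-droplet counts; the counts, droplet number, and the assumption that the reference amplicon marks every counted allele are illustrative, not values from this study.

```python
import math

def ddpcr_concentration(positive, total):
    """Poisson-corrected mean target copies per droplet from a ddPCR count."""
    return -math.log(1.0 - positive / total)

def targeted_allele_fraction(pos_integration, pos_reference, total_droplets):
    # Fraction of alleles carrying the integrated cassette, assuming the
    # reference amplicon is present once on every allele being counted.
    lam_int = ddpcr_concentration(pos_integration, total_droplets)
    lam_ref = ddpcr_concentration(pos_reference, total_droplets)
    return lam_int / lam_ref

# Hypothetical droplet counts (FAM = integration junction, HEX = reference):
frac = targeted_allele_fraction(pos_integration=1200, pos_reference=9000,
                                total_droplets=18000)
print(f"targeted alleles: {frac:.1%}")  # -> ~10% with these made-up counts
```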
Human engraftment and allele targeting frequency were assessed 16 weeks later (32 weeks post-modification) as previously described (Supplementary Fig. 7). The median human cell chimerism of all transplants was 10% (range: 0.04-48.9%) (Fig. 4f). Droplet digital PCR analysis of the engrafted cells from mice with human cell chimerism >1% (n = 5) showed a median allele targeting frequency of 21.9% (min: 1.3%; max: 40.5%), compared to 6.3% in the cells prior to transplantation (Fig. 4g). We reason that this increase in allelic targeting from pre- to post-transplantation in the secondary transplants reflects the high engraftment potential of targeted HSPCs that had already engrafted in a primary NSG recipient, and it confirms the presence, in the genome-edited population, of long-term repopulating hematopoietic stem cells capable of long-term engraftment in vivo. In vivo differentiation of GCase-targeted HSPCs. To examine the multi-lineage differentiation potential of GCase-targeted HSPCs in vivo, we measured lymphoid and myeloid engraftment by the expression of the cell surface markers hCD19 (B-cells) and hCD33 (pan-myeloid), respectively. We included only mice with human engraftment >1%, as these have sufficient cell numbers to reliably measure myeloid and lymphoid reconstitution. In primary engraftment studies, the median percentage of myeloid cells and B-cells in the bone marrow was 27.4% and 65.9%, respectively, for the mice transplanted with CD68S-GCase-targeted HSPCs, and 19.3% and 70%, respectively, for the mice transplanted with CD68S-GCase-P2A-Citrine-targeted HSPCs (Fig. 5a). In general, B-cell production was higher than myeloid production, consistent with what has been previously reported for unmodified cells 33,34 . We similarly found myeloid and lymphoid cell production in secondary engraftment mice, in the five of eight mice with bone marrow chimerism >1% (Fig. 5b). Mice with low human cell chimerism (<1%) have low cell numbers, making the quantitation of targeted human alleles and human subpopulations less reliable. To assess the lineage specificity of the CD68S promoter in vivo, we compared Citrine expression in the B-lymphoid and myeloid compartments in primary engraftment studies of CD68S-GCase-P2A-Citrine-targeted HSPCs that had robust engraftment of targeted cells (allele modification fraction >10%). As expected, expression of the CD68S-GCase-P2A-Citrine cassette was restricted to the myeloid (CD33+) and monocyte (CD14+) lineages, with more frequent expression seen in monocytes (Fig. 5c, d). Despite robust modification in the bone marrow, three mice did not show Citrine expression in monocytes, which could be due to incomplete differentiation along this lineage, since the human cells lack the appropriate cytokines, or to expression below the threshold of our rigorous gating strategy. As the generation of GCase-expressing macrophages is critical to addressing Gaucher disease pathophysiology, it was also important to verify that engrafted, GCase-targeted HSPCs have the capacity to produce human macrophages with heterologous GCase expression. Towards this end, human CD14+ monocytes were isolated via FACS from the bone marrow of transplanted mice 16 weeks post-transplantation and differentiated by adding human macrophage colony-stimulating factor (M-CSF). This step was performed in vitro because mouse M-CSF, a cytokine required for macrophage differentiation, does not have activity on human cells 35 .
Human macrophages differentiated in this manner showed expression of the lineage marker CD68, as well as Citrine (12.3 ± 4.5% of human CD68+ cells), verifying that engrafted, targeted HSPCs can produce macrophages that express the therapeutic GCase cassette (Fig. 5e and Supplementary Fig. 8). To improve engraftment and differentiation of myeloid lineages from our modified HSPCs in vivo, we performed transplantation experiments in NSG-SGM3 mice. These are NSG mice carrying transgenes for human IL-3, GM-CSF, and SCF that support improved human myeloid development 36,37 ; human cell engraftment in these mice is shown in Fig. 6a. The median allele targeting frequencies of the engrafted cell populations were 15.6% (min: 12%; max: 20%), 20.4% (min: 16%; max: 25%), and 5.0% (min: 2%; max: 29%) across the analyzed tissues (Fig. 6b). The observed drop in modified engrafted cells relative to the pre-transplant level (43%) was 2.7-fold in the bone marrow, consistent with, though at the low end of, the drop observed in NSG mice (Fig. 4e). We observed B-cell, myeloid, and monocyte development, with less preponderance of the B-lymphoid population than in NSG mice. As before, Citrine+ cells were seen exclusively in the myeloid and monocyte compartments (Fig. 6c). Tissue macrophages were extracted from liver and lung using an enzymatic method, and peritoneal macrophages were obtained from peritoneal fluid. We found robust human cell populations that were CD45+ or CD45/CD11b+ as well as Citrine+ in these macrophage cell preparations (Fig. 6d-f). Samples with cell numbers high enough to allow enrichment of live human myeloid Citrine+ cells for enzymatic analysis were sorted and their GCase activity measured. Consistent with our studies of HSPCs differentiated in culture, the Citrine+ cells expressed 2.0- (bone marrow), 2.1- (spleen), and 1.6-fold (lungs) higher GCase than Citrine- cells (Figs. 3e and 6g). Analysis of targeted CCR5 alleles from sorted cell populations, including bone marrow, lung, spleen, liver, and peritoneal macrophages, showed enrichment of targeted alleles in the Citrine+ cells compared to Citrine- cells, confirming that the observed Citrine expression comes from targeted cells (Fig. 6h). Discussion Gaucher disease is currently treated using enzyme replacement therapy (ERT) and substrate reduction therapy (SRT). Both approaches have been shown to be effective at addressing hematological and visceral manifestations 38,39 and can reduce, but not eliminate, bone complications in this disease 40,41 . Neither ERT nor the best-tolerated form of SRT (eliglustat) is expected to impact neuronopathic forms of GD (GD2 and GD3) or the increasingly recognized neurological symptoms in GD1 42,43 . ERT involves life-long, bi-weekly infusions, and the development of antibodies can, in some cases, decrease enzyme bioavailability and impact clinical outcome 44,45 . Approved SRTs (miglustat and eliglustat) also require life-long administration and repeated dosing (three and two times per day, respectively) and, particularly for miglustat, cause significant side effects due to non-specific inhibition of other enzymes 46 . Both modalities are very costly, with estimated annual costs of $300,000 to $450,000 (estimated lifetime cost of $6 to $22 million), limiting their availability worldwide 47,48 . In the past, allo-HSCT was used effectively and led to rapid improvement in hematological and visceral parameters as well as regression of skeletal disease, but given its significant morbidity and mortality, its use has been reserved for individuals with neurologic or progressive disease unresponsive to ERT and SRT [49][50][51][52] .
Specifically, allo-HSCT has shown potential to halt neurological progression in patients with GD type 3 (GD3) when patients are treated at a young age and early in the disease process [53][54][55][56] . Given the potential for HSCT to constitute a one-time therapy for GD1 and its likely beneficial effect in the central nervous system (CNS), improving the safety of HSCT for GD would be a significant development. The use of autologous HSPCs is safer because it eliminates the morbidity of graft-versus-host disease, results in faster engraftment, and can lead to earlier intervention by obviating the need for donor matching. For this reason, non-targeted lentiviral-mediated delivery of constitutively expressed GCase is being explored in HSPCs and has yielded promising results in murine GD models, where transplantation of these cells achieved normalization of GCase levels, reduced Gaucher cell infiltration, and lowered glucocerebroside storage [16][17][18] . However, because of the pseudorandom integration of the viral genomes, concerns remain about its potential for tumorigenicity 19,20 . Genome editing, as a more precise genetic tool, decreases the chance of random integration and ensures more predictable and consistent transgene expression. In addition to the hematopoietic system, the liver has also been considered as a potential enzyme replacement depot, and in vivo liver-directed approaches using zinc finger nucleases have been investigated in mouse models 57 . However, it is not clear that liver-secreted GCase would have the proper glycosylation to cross-correct affected cells or that it could cross into the CNS. Transplantation of ex vivo genome-edited HSPCs can provide direct replacement of pathological cells and leverages the ability of graft-derived macrophages to migrate to the brain 14 and bone. Therefore, autologous transplantation of gene-corrected cells, if coupled with safer conditioning regimens, could be a promising therapy for GD patients regardless of disease subtype. To begin the development of autologous transplantation of genome-edited hematopoietic stem cells, we established an efficient application of CRISPR/Cas9 to target a functional copy of GCase into human CD34+ HSPCs. Here, we use sgRNA/Cas9 and AAV6-mediated template delivery to target GCase to the CCR5 locus, a gene previously used for the insertion and expression of therapeutic genes 24,26 . CCR5 is considered a safe harbor because germline deletions in this gene are common (up to 10% in the Northern European population) and have no overt developmental phenotype 27 . Germline CCR5 loss might even be beneficial, as it provides protection against HIV 28 , and possibly smallpox 58 , although it also appears to reduce protection against influenza 59 and West Nile virus 60 . Compared to genetic correction of the affected locus, the use of a safe harbor provides a universal therapy for all patient mutations and offers greater design flexibility, as regulatory and GCase protein sequences can be engineered with enhanced therapeutic properties. For targeting Gaucher disease specifically, it circumvents the design of genetic tools for the GBA locus, which can be non-specific given the presence of GBAP, a pseudogene with 96% sequence homology to the GBA gene. To express GCase from the CCR5 locus, we used a previously characterized derivative of the CD68 promoter and confirmed through in vitro and in vivo differentiation protocols that it achieves monocyte/macrophage-specific expression of GCase 30,31 .
We reasoned that because the primary manifestations of Gaucher disease are due to pathology in monocyte/macrophage lineage cells, enzyme reconstitution in this lineage should be sufficient to provide phenotypic correction in this disease. Furthermore, our studies with the SFFV promoter did not consistently result in sustained GCase and reporter expression in human HSPCs, suggesting that high and sustained GCase in the stem and progenitor compartment might have detrimental effects. This would not be surprising, as a negative impact on long-term engraftment from lysosomal enzyme overexpression has been seen previously for galactocerebrosidase 61 . Furthermore, transplantation of retrovirally transduced CD34+ HSPCs in humans, where GCase was driven by the LTR promoter, failed to show long-term reconstitution 13 . While several reasons can explain this observation, including insufficient cell dose and lack of conditioning, one explanation is that constitutive GCase expression from the LTR had a detrimental effect on the repopulating stem cells. We examined the ability of the targeted human HSPCs to engraft and differentiate in serial transplantation studies in immunocompromised mice and demonstrate that our approach can modify cells with long-term repopulation potential and preserves multi-lineage differentiation capacity. In primary engraftment studies, we again observed the reduced repopulation capacity of edited HSPC populations reported previously for engineered HSPCs in viral-mediated gene addition and gene-editing contexts 24,62,63 . However, the enhanced allele modification frequencies in the secondary transplants suggest that this initial decreased capacity is due to a reduced number of targeted long-term repopulating stem cells (LT-HSCs) compared to targeted shorter-lived progenitors, and not to a detrimental effect on engraftment per se. Interestingly, the allele targeting frequency of the engrafted cell population increased in some cases, suggesting that the variability in targeted HSPC engraftment may be accounted for by stochastic engraftment dynamics driven by oligoclonal reconstitution 64 . Even though these experiments do not achieve 100% human cell chimerism, transplantation outcomes in humans and mice indicate that low-level chimerism could be sufficient to provide symptomatic relief 65,66 . Specifically, in mice, 7% wild-type cell engraftment was shown to be sufficient to reverse disease pathology 67 . In our primary engraftment studies, the median allele modification frequency of the engrafted cells was ~4%, which corresponds to 4-8% targeted cells (depending on the ratio of bi-allelic to mono-allelic modification in the engrafted cells) and to the GCase output of an 8-16% dose of unmodified cells (given that our cells express twofold more GCase). Future experiments in immunocompromised models of GD that allow engraftment and proliferation of human cells will establish the potential of these cells to correct the phenotype. Regardless of the outcome, future efforts aimed at increasing the permissiveness of long-term HSCs to undergo homology-dependent genome editing will be important for the therapeutic application of these cells. Herein, we report the use of genome editing to target a safe harbor locus to create lineage-specific expression of proteins. This approach is highly flexible and could serve as a platform to restore the expression of lysosomal enzymes and potentially other secreted proteins with therapeutic potential, provided the therapeutic cassettes are within the packaging capacity of AAV.
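The dose-equivalence estimate above can be made explicit with a short calculation. The sketch below assumes two CCR5 alleles per cell and a twofold GCase output per modified cell, and sweeps the (unknown) bi-allelic fraction between its two extremes; the numbers are illustrative only.

```python
# Sketch: translate an engrafted allele-modification frequency into an
# approximate modified-cell fraction and an "unmodified-cell-equivalent"
# GCase dose, assuming 2 CCR5 alleles per cell and 2x GCase output per
# modified cell. The bi-allelic fraction among modified cells is unknown,
# so both extremes are shown.
def equivalent_dose(allele_freq, biallelic_fraction, fold_gcase=2.0):
    alleles_per_modified_cell = 1 + biallelic_fraction
    modified_cells = 2 * allele_freq / alleles_per_modified_cell
    return modified_cells, fold_gcase * modified_cells

for bi in (0.0, 1.0):  # all mono-allelic vs. all bi-allelic
    cells, dose = equivalent_dose(allele_freq=0.04, biallelic_fraction=bi)
    print(f"bi-allelic fraction {bi:.0%}: modified cells ~{cells:.0%}, "
          f"equivalent unmodified dose ~{dose:.0%}")
# -> ~8% cells / ~16% dose (all mono-allelic) down to ~4% / ~8% (all bi-allelic)
```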
These studies exemplify a specific use of this approach for the expression of human glucocerebrosidase as a potential intervention for the definitive treatment of GD and support further preclinical development of this strategy. Methods rAAV vector plasmid construction. The CCR5 donor vectors were constructed by PCR amplification of 500-bp left and right homology arms for the CCR5 locus from human genomic DNA. SFFV and wild-type GBA sequences were amplified from plasmids. The CD68S sequence was obtained from Dahl et al. 68 and was cloned from a gBlocks Gene Fragment (IDT, San Jose, CA, USA). Primers were designed using an online assembly tool (NEBuilder, New England Biolabs, Ipswich, MA, USA) and were ordered from Integrated DNA Technologies (IDT, San Jose, CA, USA). Fragments were Gibson-assembled into the pAAV-MCS plasmid (Agilent Technologies, Santa Clara, CA, USA). Constructs were planned, visualized, and documented using SnapGene 4.2 software. rAAV production. rAAV was produced using a dual-plasmid system as described in Khan et al. 69 . Briefly, HEK293 cells were transfected with plasmids encoding an AAV vector and the AAV rep and cap genes. HEK293 cells were harvested 48-h post-transfection and lysed using three cycles of freeze-thaw. Cellular debris was pelleted by centrifugation at 1350 × g for 20 min and the supernatant collected. Active rAAV particles were purified using iodixanol density gradient ultracentrifugation, dialyzed in phosphate-buffered saline (PBS), and stored in PBS at -80°C. rAAV vectors for in vivo applications were ordered from Vigene Biosciences (Rockville, MD, USA). Viral titers were determined using droplet digital PCR with the following primer/probe combination: F: GGA ACC CCT AGT GAT GGA GTT, R: CGG CCT CAG TGA GCG A, P: /56FAM/CAC TCC CTC/ZEN/TCT GCG CGC TCG/3IABkFQ/. HSPC isolation and culturing. Human CD34+ HSPCs mobilized from peripheral blood were purchased frozen from AllCells (Alameda, CA, USA) and thawed per the manufacturer's instructions. Human cord blood was obtained through the Binns Program for Cord Blood Research and not by the investigators themselves. The Program was approved by Stanford's IRB. Eligible donors were expectant mothers scheduled to deliver at Lucile Packard Children's Hospital who provided informed consent prior to collection. Briefly, mononuclear cells were isolated by density gradient centrifugation using Ficoll-Paque Plus density gradient medium followed by two platelet washes. CD34+ mononuclear cells were positively selected using the CD34 MicroBead Kit UltraPure (Miltenyi Biotec, San Diego, CA, USA) per the manufacturer's instructions. Purity of the isolation was assessed by staining cells with APC-conjugated anti-human CD34 (clone 561; BioLegend, San Jose, CA, USA) and analyzing the fraction of APC+ cells using an Accuri C6 flow cytometer (BD Biosciences, San Jose, CA, USA). Cells were cultured in media consisting of StemSpan SFEM II (Stemcell Technologies, Vancouver, Canada) supplemented with SCF (100 ng/ml), TPO (100 ng/ml), Flt3-Ligand (100 ng/ml), IL-6 (100 ng/ml), UM171 (35 nM), and StemRegenin 1 (0.75 µM). Measurement of cassette integration using ddPCR. Genomic DNA was extracted from selected or unselected cell populations using QuickExtract DNA Extraction Solution and digested using AFIII (New England Biolabs). Two detection probes were used in the assay to simultaneously quantify wild-type CCRL2 reference alleles and gene-targeted CCR5 alleles.
The ratio of detected targeted-CCR5 to CCRL2 reference events gave the fraction of targeted alleles in the original cell population. The CCR5 detection assay was designed as follows: F: 5ʹ-GGG AGG ATT GGG AAG ACA-3ʹ, R: 5ʹ-AGG TGT TCA GGA GAA GGA CA-3ʹ, labeled probe: 5ʹ-FAM/AGC AGG CAT/ZEN/GCT GGG GAT GCG GTG G/3IABkFQ-3ʹ. The reference assay was designed as follows: F: 5ʹ-CCT CCT GGC TGA GAA AAA G-3ʹ, R: 5ʹ-CCT CCT GGC TGA GAA AAA G-3ʹ, and probe: /5HEX/TGT TTC CTC/ZEN/CAG GAT AAG GCA GCT GT/3IABkFQ/. Final primer and probe concentrations were 900 and 250 nM, respectively. Twenty microliters of the PCR reaction was used for droplet generation. Forty microliters of droplets was used in a PCR reaction with the following conditions: 95°C for 10 min; 45 cycles of melting at 94°C for 30 s, annealing at 57°C for 30 s, and extension at 72°C for 2 min; and a final incubation at 98°C for 10 min. All steps were performed with a ramp rate of 2°C/s, and reactions were stored at 4°C protected from light until droplet analysis. Analysis was performed on a QX200 Droplet Reader (Bio-Rad) detecting FAM- and HEX-positive droplets. Control samples included mock (non-modified) genomic DNA and a no-template control. Data analysis was performed using QuantaSoft analysis software v1.4 (Bio-Rad). Phagocytosis assay. pHrodo Red E. coli BioParticles conjugates for phagocytosis were purchased from Thermo Fisher (USA) and reconstituted to 1 mg/ml in 10% FBS-containing media. Reconstituted BioParticles were added at a final concentration of 0.1 mg/ml to HSPC-derived macrophages and incubated at 37°C for 1 h. The cells were then washed and bathed in imaging media (FluoroBrite DMEM, 15 mM HEPES, 5% FBS). Imaging was performed at the appropriate excitation and emission maxima (560 and 585 nm, respectively) using a Keyence BZ-X710 fluorescence microscope. Images were quantified using ImageJ 1.51. Transplantation of CD34+ HSPCs into NSG mice. Targeted HSPCs (unselected) were transplanted 48 h post-targeting into sub-lethally irradiated NSG recipients. Primary transplants were performed by intrahepatic injection into newborn pups or by intrafemoral injection at 6-8 weeks of age. Approximately 1 × 10^6 cells were transplanted into each mouse for all primary transplants. For secondary transplants, human CD34+ HSPCs were isolated from transplanted mice at the time of primary engraftment analysis (16 weeks) using the CD34 MicroBead Kit UltraPure (Miltenyi Biotec, San Diego, CA, USA) and transplanted without pooling into secondary sub-lethally irradiated NSG recipients. Secondary transplants were performed by intrahepatic injection into newborn pups. Glucocerebrosidase activity assay. To facilitate comparisons between different conditions, cells were FACS-sorted prior to quantification of enzyme activity, and cell numbers ranged from 2 × 10^4 to 1 × 10^5 cells. Protein was extracted by lysing cells in 200 µl of deionized water with a probe-equipped Branson sonicator, centrifuging lysates at 17,000 × g for 10 min at 4°C, and collecting the supernatant containing the soluble proteins. Protein concentration in the supernatants was measured using a Bradford assay kit with a BSA standard curve ranging from 0.25 to 0.5 mg/ml (Thermo Scientific). To prepare the GCase assay working reagent, the fluorogenic substrate 4-methylumbelliferyl-β-D-glucopyranoside (Sigma, #M3633) was dissolved to a final concentration of 5 mM in citrate/phosphate buffer (pH 5.5) supplemented with 15% (w/v) sodium taurocholate.
To perform the GCase assay, 25-50 µg of protein extract (50 µL) was mixed with 100 µL of working reagent and incubated for 1 h at 37°C protected from light. Reactions were stopped with 200 µL of stop buffer (0.2 M glycine/carbonate, pH 10.7). Fluorescence of the 4-methylumbelliferone (4MU) liberated by GCase cleavage was measured using a Molecular Devices SpectraMax M3 multi-mode microplate reader with SoftMax Pro 7 software at excitation and emission wavelengths of 355 and 460 nm, respectively (top read). A standard curve for 4MU was established using 4MU sodium salt (Sigma) in assay buffer. Mice. NOD.Cg-Prkdcscid Il2rgtm1Wjl/SzJ (NSG) mice were developed at The Jackson Laboratory. NOD.Cg-Prkdcscid Il2rgtm1Wjl Tg(CMV-IL3,CSF2,KITLG)1Eav/MloySzJ (NSG-SGM3) mice were described in Wunderlich et al. 37 and Billerbeck et al. 36 and obtained from The Jackson Laboratory. Mice were housed in a 12-h dark/light cycle, temperature- and humidity-controlled environment with pressurized individually ventilated caging, sterile bedding, and unlimited access to sterile food and water in the animal barrier facility at Stanford University. All experiments were performed in accordance with National Institutes of Health institutional guidelines and were approved by the University Administrative Panel on Laboratory Animal Care (IACUC 20565 and 33365). Statistical analysis. All statistical tests, including paired and unpaired t-tests and one-way analysis of variance (ANOVA) followed by Tukey's multiple comparisons test, were performed using GraphPad Prism version 7 for Mac OS X (GraphPad Software, La Jolla, California, USA). Data were reported as means when all conditions passed three normality tests (D'Agostino & Pearson, Shapiro-Wilk, and Kolmogorov-Smirnov). Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All flow cytometry datasets in this study are available in FlowRepository, experiment number FR-FCM-Z2LQ. The authors declare that the other data that support the findings of this study are present within the paper and its Supplementary Information files, or are available from the corresponding author upon reasonable request. The source data underlying Figs. 1d-e, 2b, d, 3b-f, 4a-g, and 5a, b, as well as Supplementary Figs. 1d, 3a, 4b-d, 6a-b, and 8b, are provided as a Source Data file.
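As a worked illustration of the glucocerebrosidase activity calculation implied by the assay described in Methods above, the sketch below converts a background-subtracted 4MU fluorescence reading into nmol of 4MU released per hour per mg of protein via a linear standard curve; every numerical value is hypothetical.

```python
# Sketch: compute GCase specific activity (nmol 4MU / h / mg protein) from a
# fluorescence reading, a linear 4MU standard curve, and the protein input.
# All numbers below are hypothetical and for illustration only.
def specific_activity(rfu_sample, rfu_blank, rfu_per_nmol_4mu,
                      protein_mg, incubation_h=1.0):
    nmol_4mu = (rfu_sample - rfu_blank) / rfu_per_nmol_4mu  # from standard curve
    return nmol_4mu / (protein_mg * incubation_h)

activity = specific_activity(rfu_sample=52_000, rfu_blank=2_000,
                             rfu_per_nmol_4mu=1_000,  # slope of 4MU standard curve
                             protein_mg=0.040)        # 40 ug protein input
print(f"GCase activity: {activity:.0f} nmol 4MU / h / mg protein")
# -> 1250 nmol/h/mg with these made-up inputs
```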
8,855
2020-07-03T00:00:00.000
[ "Medicine", "Engineering" ]
Perfect drain for the Maxwell Fish Eye lens Perfect imaging for electromagnetic waves using the Maxwell Fish Eye (MFE) requires a new concept: the perfect drain. From the mathematical point of view, a perfect point drain is just like an ideal point source, except that it drains power from the electromagnetic field instead of generating it. We show here that the perfect drain for the MFE can be seen as a dissipative region the diameter of which tends to zero. The complex permittivity $\varepsilon$ of this region cannot take arbitrary values, however, since it depends on the size of the drain as well as on the frequency. This interpretation of the perfect drain connects well with central concepts of electromagnetic theory. This opens up both the modeling in computer simulations and the experimental verification of the perfect drain. Introduction The possibility of focusing light below the diffraction limit (super-resolution) has been demonstrated in the last decade using left-handed materials [1] [2][3] [4] (that is, materials with negative dielectric and magnetic constants [5]). Recently, a new possibility has been introduced using the Maxwell Fish Eye (MFE) lens, which uses a material with a positive, isotropic, and inhomogeneous refractive index. It is well known that, in the Geometrical Optics (GO) framework, the MFE perfectly focuses rays emitted by an arbitrary point of space onto another point (its image point). Leonhardt [6] has demonstrated that the MFE lens in two dimensions not only perfectly focuses radiation in the GO approximation, but also does so for actual fields of any frequency, a result that has been confirmed via a different approach [7]. This two-dimensional analysis describes TE-polarized light in a cylindrical medium (where the electric-field vector E points orthogonal to the plane), and the electric-field magnitude fulfills the Helmholtz equation. Leonhardt and Philbin have also demonstrated the analogous ideality of a novel impedance-matched spherical MFE for perfect focusing of electromagnetic waves in three dimensions [8]. In the two-dimensional case, the perfect focusing of the MFE in [6] assures that the medium will perfectly transport an outward (monopole) Helmholtz wave field, one generated by a point source, towards an "infinitely-well localized drain" [6] (one that we will call "perfect point drain") located at the desired image point. Note that the perfect point drain must be such that it totally absorbs all incident radiation, with no reflection or scattering by it. Note also that the field around the drain asymptotically coincides with an inward (monopole) wave. We will refer here to such a wave as "Leonhardt's forward wave". Even though the physical significance of a point source as a limiting case seems to be well accepted, that of a passive perfect point drain has been considered very controversial [9] [10] [11] [12]. In reference [6], the drain was not physically described, but only considered as a mathematical entity, leaving no clues as to how to simulate that drain in software. In particular, an analysis of such a drain located at a position different from the image point would help to prove the super-resolution; this could not be done with the information in reference [6]. Recently, however, a candidate for a perfect drain has been proposed for a microwave-frequency MFE [13], wherein a two-dimensional MFE medium has been assembled as a planar waveguide with concentric layers of copper circuit board forming the desired refractive index profile of the MFE.
Also, both source and drain have been built as identical coaxial probes, one to introduce power into the planar waveguide and the other to extract it. The coaxial probe was intended to act as the perfect sink and is completely passive and loaded with the characteristic impedance of the coaxial cable [13]. There is no theoretical proof that such a coaxial probe will actually behave as a perfect drain, but a practical proof is claimed by comparing (Figure 4 in [13]) the measured electric field distribution and the analytical expression of Leonhardt's forward wave ( [6], reviewed here in Section 1.1). Since the measured and analytical values differ significantly (deviations attributed primarily to imperfections in the probes), we think a more detailed analysis is called for in order to clarify whether or not that receiving coaxial probe is acting as the perfect drain as defined here. Nevertheless, this clarification does not affect the main conclusion in [13]: such a probe certainly leads to super-resolution in the conditions of that measurement. In this paper we present a different realization of a passive perfect drain, obtained theoretically from the analytical equations. It consists in the introduction of a certain non-magnetic material inside a circle of radius R enclosing the image point (although not centered upon it, as discussed later). As will be proved in Section 2, the complex permittivity of that non-magnetic material will be such that the field outside that circle coincides exactly with Leonhardt's forward wave and the incoming power will be fully absorbed inside it. In general, this perfect drain can have a finite radius R; the perfect point drain is just the limit case R0. Leonhardt's forward wave The strength of the TE monochromatic field E(x,y)e -it z in a region without sources or drains fulfils the 2D-Helmholtz equation: where 0 0 0 k     and n is the MFE refractive index distribution given by: where 22 xy   . Leonhardt's forward wave is a particular family of solutions of Eq. (1), given by Eq. (12) in [6]: where P  is the Legendre function of the first kind, Here z=x+iy is the complex notation of the point (x,y), and x 0 is an arbitrary real number. Without lack of generality we have located the point source described in [5] at the object point (x 0 , 0). Note that -11 and by the divergence of P  () when -1, E is infinite at ||=1. A wave according to Eq. (3) is generated by the point source located at (x 0 ,0), and it propagates towards the perfect point drain at the image point (1/x 0 , 0). The time evolution of the field, Re(E(x,y)e -it ) is shown in the associated media files Media1a and Media1b for x 0 =-2, k=15. This evolution clearly includes the one-directional propagation of the wave from source to drain, with no reflection or scattering by it. Note that the magnetic field H(x,y) of Leonhardt's forward wave can be easily computed from the field E, Eq.(3), and Eq. (5), as: Current of the source Eq. (1) is only valid in a region without sources or drains, in our case the full plane except for the points (x 0 , 0) and (1/x 0 , 0). The equation valid for the full plane must include the Dirac delta at those points. The amplitude of Eq. (3) was specifically selected in [6] to make it behave as a Green function, i.e., so the weights of those Dirac deltas have unit moduli. It can be written as [6][7] [8]: The right hand side of Eq. (7) Alternative expression for the forward wave There is an alternative way to express Eq. (3) (Eq. 
12 of [6]), which uses the Legendre function of the second kind Q  . We consider here the branch of Q  that is real-valued when Im( )=0 and ||<1 (another complex-valued branch Q  is claimed to be considered in [6]). Taking into account, from [14] (Eq. (15) in p. 144), that: We already know that the Eq. (3)can be alternatively rewritten as: Let us designate the factor inside the parenthesis of (10) as: Interesting asymptotic expressions for function F  are set forth in Appendix 1, which gives comprehensive discussion of Leonhardt's forward wave. Additionally, this alternative expression will greatly simplify some calculations in the perfect sink design of the next section. Perfect sink The theoretically ideal point drain of Leonhardt is located at the point (-1/x 0 , 0). We will design first a finite-area perfect drain, which will comprise a convex region surrounding that point (-1/x 0 , 0), filled with an inhomogeneous, isotropic, non-magnetic material with complex dielectric permittivity (thus, absorptive), such that the field outside that region coincides exactly with Leonhardt's forward wave. The equation satisfied by the field in that region will then be the homogeneous Helmholtz equation. Since the incident wave fields E and H are known, the necessary continuity of E and H on the drain boundary will be forced by particularizing on the boundary the values of E and H in Eq. (3) and Eq. (6), respectively. Selection of the boundary From Eq. (5), it is easy to confirm that the line = d constis a circle with its centre at the point (x c , 0) and its radius R given by: We select the drain region as that containing the points fulfilling  d , with  d fulfilling: and coincide with the GO wavefronts. This property is fulfilled by a more general class of inhomogeneous media, a class analyzed in [7]. The blue lines in Fig.1, which we can call =constant, are also circumferences, and coincide with the GO rays. and are the Poynting vector lines of Leonhardt's forward wave. Both families of lines coincide with coordinate grid lines of the bipolar orthogonal coordinate system (see, for instance, [7]). From Eq. (12) we see that selecting  d close enough to 1 will make the radius R as small as desired. At the limit  d , we obtain the point drain (R0, x c  -1/x 0 ). Inhomogeneous complex refractive index of the drain Inside the drain ( d ), we select = 0 and the refractive index has the following form: where  d is a complex constant and with Im( d )0 to be calculated later (in Section 3). Then, the homogeneous Helmholtz equation in the drain is: Designate 00 dd k      (which is complex). Using the expression for the refractive index n of the MFE (Eq. (2)), we find that the selection made with Eq. (14) fulfills that 0 dd n k nk  , so Eq. (15) can also be written as: This equation is identical to the Helmholtz equation of the MFE, Eq.(1), after substituting the real wave number k 0 by k d (still to be calculated) Ordinary differential equation of the drain One of the boundary conditions on the line  d is the continuity of field E=Ez. As said before, E is constant on that boundary surface, so it would be particularly interesting to express Eq. (16) in the bipolar coordinate system -. That was already done in Section 3.1 of reference [7]. As shown there, the expression of Eq. (16) for solutions depending only on  is the same as that Eq. (9) in [6]. 
Using the change of variables  =(r 2 -1)/(r 2 +1), this equation is: Its general solution [14] can be written as: where ' is given by The three constants A, B and ´ are fixed by three conditions. Two of them are given by the continuity of the E and H fields at the boundary, and the third is that the field E must be bounded (i.e., it cannot diverge), since the Helmholtz Eq. (15) is homogeneous. Consider first the third condition. From the properties of the Legendre functions [14], we know that the function Q ´(  ) diverges when 1, if Im(´)0 and P ' ( ) does not (P ´( )=1). Therefore, this boundary condition imposes B=0, which means that the field inside the drain region ( d ) has the form: A and ´ are calculated forcing the other two conditions i.e., the continuity of E and H at the boundary. E and H outside the drain are taken from the solution in the absence of reversed wave (so the drain is reflection-less) i.e., by using Eq. (10) and Eq. (6)). Electric field and current inside the drain. The conductivity  of the media inside the drain can be calculated as a function of  d from its definition: Using Eq. (23) we can calculate  as a function of ´. The electric field inside the drain is given by Eq. (20). Consequently the current density J= E in the drain is: Fig. 4 shows the modulus of the current density inside the drain for a drain radius=0.2 cm and a total current in the source of 1mA when the drain center is at the point (0.519, 0) cm. Power absorption. The power emitted by the source P can be obtained by integrating the Poynting vector over a surface enclosing the source. The MFE is a lossless system, so this power has to equal both the total power entering the drain and the power it absorbs. This integration has been made as shown in Eq. (26). This surface has cylindrical symmetry along the z-axis, so for the sake of simplicity we take a surface whose projection on the x-y plane is a =constant curve, let's say = (8). The power P depends on the source total current and frequency, but obviously it does not depend on the drain radius. Conclusions. We have found that the perfect drain concept can be modeled as a dissipative region the diameter of which tends to zero. The Leonhardt's forward wave solution inside the MFE can not be obtained for any material used as a drain. We have calculated the complex permittivity of the non-magnetic material forming the perfect drain. When the size of the drain tends to zero, the drain tends to the perfect drain concept Introduced by Leonhardt. Without such a perfect drain, there will be two waves F  () and R  () inside the MFE (forward and backward respectively, and described in Eq.(31)) as response to the point source. This concept of the drain as a small dissipative region can be easily included in electromagnetic modeling software. Both characteristics are desirable to analyze and experimentally verify the super-resolution properties of the MFE, i.e. to check that the power P changes drastically when the source (or the drain) moves to a neighbor point located at a distance much smaller than the wavelength. Appendix. Asymptotic expression when >>1, and the Backward wave. Using Eq. (1) and Eq. (2) in page 162 of [14], functions P  () and Q  (),  = cos, for  << (>0) can be approximated as: to describe a field propagating from image point to object point. 
In analogy to canonical solutions of the Helmholtz Equation (as plan waves, cylindrical waves or spherical waves in free-space), it seems more natural for this MFE problem to define the general solution of Eq. (17) as a superposition of functions F v and R v , that is, to rewrite the field E as : When there is a perfect drain, outside it only the Leonhardt's forward wave F  exists, which implies that D in Eq. (31) is null. Inside the drain, the condition that the field E must be bounded leads to Eq. (20), which in terms of F v´ and R v´ implies the condition C=D. This means that inside the perfect drain there is a standing wave, which is necessary to avoid the singularity at the image point ( =1). This is analogous to the superposition of converging and diverging cylindrical waves, of equal amplitude, in free-space as described by the Hankel functions (1) 00 () Hk  and (2) 00 () Hk  , which results in the bounded Bessel function J 0 (k 0 ). The forward and backward wave defined here for the MFE lens are intimately related to the retarded and advanced field defined in [12] and [13] for the MFE mirror. Using the formulation in [12] and [13], a wave bounded at the image point for the MFE lens can be written: Besides the complexity of expression Eq. (32), it can be easily seen that it is (up to a multiplicative constant) simply equal to P  (which is the bounded expression used above in Eq. (20)). This is obtained by direct computation, after considering that  (-k 0 )= -(k 0 )-1, and then using Eq. (7) and (16)
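The boundedness argument used above (the coefficient B must vanish because Q_ν diverges while P_ν stays finite as its argument approaches 1) can be checked numerically. The sketch below is illustrative only: it uses mpmath's Legendre functions of complex degree, and the degree value is hypothetical rather than the ν′ obtained from the boundary matching.

```python
# Numerical sketch: as zeta -> 1, the Legendre function of the first kind
# P_nu(zeta) stays bounded (P_nu(1) = 1) while the second kind Q_nu(zeta)
# diverges, which is why the bounded-field condition inside the drain
# forces the coefficient B to vanish. The complex degree below is
# illustrative, not the nu' derived in the text.
import mpmath as mp

nu = mp.mpc(3.0, -0.5)  # hypothetical complex degree
for zeta in (0.9, 0.99, 0.999, 0.9999):
    P = mp.legenp(nu, 0, zeta)
    Q = mp.legenq(nu, 0, zeta)
    print(f"zeta = {zeta}:  |P_nu| = {float(abs(P)):.4f}   |Q_nu| = {float(abs(Q)):.4f}")
```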
3,785.6
2010-10-29T00:00:00.000
[ "Physics" ]
Supersymmetric field theories on AdSp × Sq In this paper we study supersymmetric field theories on an AdSp × Sq spacetime that preserves their full supersymmetry. This is an interesting example of supersymmetry on a non-compact curved space. The supersymmetry algebra on such a space is a (p − 1)-dimensional superconformal algebra, and we classify all possible algebras that can arise for p ≥ 3. In some AdS3 cases more than one superconformal algebra can arise from the same field theory. We discuss in detail the special case of four dimensional field theories with N=1\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \mathcal{N}=1 $$\end{document} and N=2\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \mathcal{N}=2 $$\end{document} supersymmetry on AdS3 × S1. Introduction In the last few years, it has been useful to study supersymmetric field theories on curved manifolds. In many cases exact results in such backgrounds (when they preserve supersymmetry) may be computed (following [1]), including partition functions and some expectation values. In some cases these contain information beyond the usual exactly-computable results in flat space. This allows us to learn more about these field theories, even when they are strongly coupled, providing new windows into strongly coupled field theories. So far, this study has been almost exclusively limited to compact curved manifolds. In this paper we take first steps towards a systematic study of supersymmetric field theories on non-compact curved space-times, by looking at the specific example of d-dimensional supersymmetric (SUSY) field theories on AdS d−q × S q . This example is maximally symmetric, and, for an appropriate definition of the theory and of the boundary conditions, can often preserve the full supersymmetry of the d-dimensional field theory. Another reason for being interested in this specific example is that field theories on AdS d−q × S q often arise on various branes and singularities in string theories on anti-de Sitter (AdS) space, and in some cases they can be decoupled from the full gravitational theory. In a specific example, the 6d N = (2, 0) superconformal field theory on AdS 5 × S 1 was recently embedded in string theory [2], and this led to surprising results. Four dimensional supersymmetric theories on AdS 4 were studied in detail in the past, for instance in [3][4][5][6][7][8][9][10][11][12] (this example is particularly interesting since it is related by a conformal transformation to four dimensional theories on a half-space). Supersymmetric theories on AdS 3,4,5 were studied in [13][14][15][16][17][18][19][20][21][22], on AdS 5 in [23], and on AdS d and AdS p × S p in [24]. Other examples did not attract much attention. Another example worth mentioning is the N = 4 super-Yang-Mills theory on AdS 3 × S 1 , that can be embedded in string theory by considering D3-branes on AdS 3 × S 1 in type IIB string theory on AdS 5 × S 5 . We limit our study to anti-de Sitter spaces of at least three dimensions. 
In section 2 we go over all possible UV-complete supersymmetric field theories on AdS d−q × S q , and classify the possible supersymmetry algebras that they can have that preserve the full JHEP04(2016)066 supersymmetry. These algebras always contain the isometries of AdS d−q , so if we start from a field theory with n supercharges, we obtain a superconformal algebra in (d − q − 1) dimensions with n total supercharges, n/2 regular and n/2 conformal supercharges. When (d − q) > 3 such a superconformal algebra is unique, so the only questions are whether we can preserve all the supercharges and obtain this algebra or not. For (d − q) = 3 there are several 2d superconformal algebras with the same total number of supercharges, and we often find that more than one possibility can be realized by the same field theory when we put it on AdS d−q × S q . There are several different methods to perform this analysis. One possibility is to use the formalism of supersymmetry in curved space, by coupling the supersymmetric theory to a background supergravity theory with an appropriate metric (and additional background fields). This formalism is only available for some cases, but when it is available it is straightforward to find that the AdS d−q × S q background can preserve all the supercharges, and to write the supersymmetric actions and Killing spinor equations on AdS d−q × S q . There are actually two variations of this formalism. One can use a "regular" supergravity containing background fields coupled to the usual supersymmetry algebra, or one can use a "conformal" supergravity containing background fields coupling to the superconformal algebra (see, for instance, [13][14][15][16][17][18][19][20][21][22]). When both formalisms are available they are identical, since the "regular" supergravity arises as a particular gauge-fixing of the "conformal" supergravity. In particular they give rise to the same Killing spinor equations, though they may be written in terms of different background fields. In section 3 we use a "regular" supergravity to analyze (following [25]) the case of 4d N = 1 theories on AdS 3 × S 1 , and we use "conformal" supergravity to analyze the case of 4d N = 2 theories on the same space. We analyze these two cases in detail, constructing explicitly the Killing spinors and the supersymmetry transformations. For various different free 4d theories, we analyze in detail the spectrum that we obtain on AdS 3 × S 1 , and the corresponding representations of the 2d superconformal algebra. In section 2, in order to perform the complete classification, we use a completely different method. This is a more general approach, that does not require knowledge of the precise background supergravity that is relevant (and that is not always available); this method was previously used in [24]. To do this we note that AdS d−q × S q (with equal radii for AdS and the sphere) is related to flat space R d by a conformal transformation. Thus, the resulting (d − q − 1)-dimensional superconformal algebra must be a subalgebra of the d-dimensional superconformal algebra, and we can just classify all such subalgebras (that contain half of the fermionic charges of the d-dimensional superconformal algebra, and the isometries of AdS d−q × S q ). In this method the supercharges that we preserve on AdS d−q × S q are combinations of regular supercharges and conformal supercharges, that arise after performing the conformal transformation from flat space. 
This is similar to what we obtain by coupling our theory to conformal supergravity, but here we do not need to use any details of this coupling, and it is clear from the discussion above that the results apply to general supersymmetric field theories on AdS d−q × S q (not necessarily superconformal). The embedding of the (d − q − 1)-dimensional superconformal algebra into the d-dimensional superconformal algebra immediately tells us which d-dimensional JHEP04(2016)066 R-symmetries are required for preserving supersymmetry on AdS d−q × S q . Our analysis is limited to supersymmetric field theories that have a UV-completion as superconformal field theories; we do not discuss in this paper theories with no known field-theoretic UV completion, such as the 6d N = (1, 1) supersymmetric Yang-Mills theory. It is important to emphasize that a supersymmetric field theory on AdS d−q × S q is not equivalent to a (d − q − 1)-dimensional superconformal theory; for instance it does not contain a graviton that would map to the energy-momentum tensor of such a theory. However, such theories do have a natural action of the (d − q − 1)-dimensional superconformal algebra, such that their states and fields sit in representations of this algebra, and they can arise as decoupled subsectors of full-fledged (d − q − 1)-dimensional superconformal theories [2]. One new aspect which arises for theories on non-compact space-times like AdS is the need to specify boundary conditions, in particular in a way that preserves the full supersymmetry. We do not discuss this issue in general here, assuming that such a choice is always possible. In the cases that we discuss in detail we explicitly discuss some boundary conditions which preserve supersymmetry. Often there are many different choices of maximally supersymmetric boundary conditions, in particular for non-Abelian gauge theories [26]. It would be interesting to understand what are the specific quantities that can be computed exactly for supersymmetric field theories on AdS d−q × S q , by localization or other methods. We leave this to future work. 2 Superconformal field theories on AdS d−q × S q In this paper we study supersymmetric field theories in d = 3, 4, 5, 6, that are put on manifolds of the form AdS p × S q (d = p + q, p ≥ 3) in a way that preserves the full supersymmetry (SUSY). While some partial results are available (for example, for 4d N = 1 theories [27]), a general method for analyzing and constructing supersymmetric theories on curved space is not yet available; in some cases one can use a coupling to background supergravity fields for this. However, on space-times that include anti-de Sitter space, we have the advantage that any supersymmetry algebra must include the isometry algebra SO(p − 1, 2), which means that it must be equivalent to a superconformal algebra in (p − 1) dimensions, with the same total number of supercharges. We can use the following trick to analyze all possible SUSY algebras that can arise from (p + q)-dimensional supersymmetric theories on AdS p × S q . When the AdS space and the sphere have equal radii of curvature, the space AdS p × S q is conformally equivalent to (p + q)-dimensional flat space. We can use for AdS p × S q the metric where µ = 0, · · · , p − 1, dΩ 2 q is the metric on a unit S q , and the boundary is at z = 0. Then, multiplying the metric by z 2 /L 2 gives the metric on flat space, where z is a radial coordinate in (q + 1) dimensions. 
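For concreteness, a standard form of the metric described above (equal AdS and sphere radii L) and of its conformal rescaling is

$$ ds^2 \;=\; \frac{L^2}{z^2}\left(dz^2 + dx_\mu dx^\mu\right) \;+\; L^2\, d\Omega_q^2\,, \qquad \frac{z^2}{L^2}\, ds^2 \;=\; dx_\mu dx^\mu + dz^2 + z^2\, d\Omega_q^2\,, $$

where the rescaled metric on the right is the flat metric, with $z$ playing the role of the radial coordinate of the $(q+1)$ dimensions transverse to the $x^\mu$; this is the sense in which multiplying by $z^2/L^2$ maps AdS$_p \times S^q$ to flat space.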
Boundary conditions imposed on AdS space may lead to singularities of various fields on the (p − 1)-dimensional subspace z = 0. JHEP04(2016)066 Suppose we have a (p+q)-dimensional superconformal theory, whose symmetry algebra includes n supercharges and also n superconformal charges. The fact that AdS p × S q is related to flat space by a conformal transformation, means that the supersymmetry algebra on AdS p × S q must be a subalgebra of the superconformal algebra in (p + q)-dimensions. It is clear that the boundary conditions on AdS p must break at least half of the total number of fermionic generators of the superconformal theory. We will be interested in the cases where exactly half of the fermionic generators are broken, leading to a supersymmetry algebra on AdS p × S q with n fermionic generators. Thus, our goal will be to classify all possible (p − 1) dimensional superconformal algebras with n total supercharges (n/2 regular supercharges and n/2 superconformal charges in (p − 1) dimensions) that arise as subalgebras of (p+q)-dimensional superconformal algebras with 2n total supercharges. The superconformal groups in different dimensions on flat space were classified by Nahm [28], and we will use this classification in our analysis. Naively this classification is only relevant for superconformal theories in (p + q) dimensions. However, it is clear that if we have a general supersymmetric theory on AdS p × S q , it cannot preserve a larger supersymmetry than that of a superconformal theory on the same space. Thus, the same classification will give us all supersymmetry algebras that can arise when we put a general UV-complete supersymmetric theory on AdS p × S q in a way that preserves the same number of supercharges as we had in (p + q) dimensions. Similarly, it is clear that when we discuss AdS p and S q spaces with different radii of curvature, the supersymmetry algebra cannot be larger than the one which arises for equal radii, so any supersymmetry algebra that arises for such space-times must also be included in our classification. For q = 1 one can show that the same supersymmetry algebra arises for any ratio of the radii. In this section we will describe this classification of all possible symmetry algebras that preserve all supercharges of a (p + q)-dimensional theory on AdS p × S q , going over all possible p ≥ 3 cases one by one. The results are summarized in table 1. Four dimensional theories on AdS 3 × S 1 In four dimensions we have four consistent superconformal algebras that do not contain higher spin conserved currents: N = 1, N = 2, N = 3 and N = 4. Each algebra contains 4N regular supercharges, and 4N superconformal charges. We would like to analyze the possible algebras that can arise for supersymmetric theories on AdS 3 ×S 1 , preserving all 4N supercharges. As discussed above, such algebras should be two dimensional superconformal algebras with 2N regular supercharges, that arise as subalgebras of the four dimensional superconformal algebras. Thus, they must be (p, 2N − p) superconformal algebras in two dimensions. Throughout this section we will follow the notations of [29]; our conventions may be found in appendix A. N = 1 In this section we describe the algebraic structure of these theories; the construction of some explicit field theories for this case will be discussed in detail in section 3. JHEP04(2016)066 and it should include half of the fermionic operators Q and S, that must close on this subalgebra. 
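(For reference, the superconformal algebras in Nahm's classification that are relevant for the dimensions studied in this paper are listed below; the names and real forms are standard, and this summary is added here for orientation rather than taken from the analysis above:
\[
\begin{array}{llll}
 d=3: & \mathrm{OSp}(\mathcal{N}|4), & \text{R-symmetry } \mathrm{SO}(\mathcal{N}), & 2\mathcal{N}+2\mathcal{N} \text{ fermionic charges},\\
 d=4: & \mathrm{SU}(2,2|\mathcal{N})\ (\mathrm{PSU}(2,2|4)\ \text{for } \mathcal{N}=4), & \mathrm{U}(\mathcal{N})\ (\mathrm{SU}(4)\ \text{for } \mathcal{N}=4), & 4\mathcal{N}+4\mathcal{N},\\
 d=5: & F(4), & \mathrm{SU}(2), & 8+8,\\
 d=6: & \mathrm{OSp}(8^*|2\mathcal{N}), & \mathrm{USp}(2\mathcal{N}), & 8\mathcal{N}+8\mathcal{N},
\end{array}
\]
with no superconformal algebras, absent higher-spin currents, above d = 6. The counting "n + n" refers to regular plus conformal supercharges, matching the statement above that the subalgebra preserved on AdS p × S q keeps half of the fermionic charges.)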
We expect the SO(2) generator of rotations on the circle to include P 3 , and to commute with all other bosonic generators, in particular with the SO(2, 2) generators. The correct choice turns out to be P 3 − c 2 K 3 , where we show in appendix B that the absolute value of c is related to the curvature of AdS 3 by 3) The generator P 3 −c 2 K 3 does not commute with the following eight bosonic generators: P a − c 2 K a , M a3 , D, and P 3 + c 2 K 3 (a = 0, 1, 2), so these generators will not be part of the resulting supersymmetry algebra. Six of the remaining bosonic generators, M ab and P a + c 2 K a , form an SO(2, 2) algebra. In order to see this it is convenient to define the operators and The SO(2, 2) ≃ SO(2, 1) × SO(2, 1) algebra may then be written as: When c > 0, the L a are the right-moving SO(2, 1) charges and theL a are the left-moving SO(2, 1) charges. Without loss of generality we will assume from here on that c > 0; the other sign is related to this by a two dimensional parity transformation. When the SO(2, 2) is embedded into a full Virasoro algebra (which may or may not be the case), these charges are proportional to the L −1,0,1 andL −1,0,1 charges of the Virasoro algebra (the constant c above should not be confused with the central charge of the Virasoro algebra, which can be non-zero when our field theory is embedded into some gravitational theory). The supercharges that close only on the remaining bosonic generators mentioned above are half of all the fermionic generators, that may be written as Q ≡ Q + icγ * γ 3 S. Their algebra is We can identify this as part of the two dimensional right-moving N = 2 superconformal algebra, where we identify (P 3 − c 2 K 3 + 2cT ) as the two dimensional R-charge; note that JHEP04(2016)066 this is a mixture of the isometry of the circle and the 4d R-charge. Together with the other generators written above we have the N = (0, 2) superconformal algebra (when c is positive), with a bosonic subgroup SO(2, 2) × SO(2); in particular In addition to this algebra we have an extra U(1) generator (P 3 − c 2 K 3 − 2cT ) that commutes with all the generators of the superconformal algebra. When this generator is preserved, it gives an extra global U(1) symmetry, but we can also preserve the same amount of supersymmetry on AdS 3 ×S 1 when this generator is broken, and only the specific combination (P 3 − c 2 K 3 + 2cT ) is conserved; we will see examples of both possibilities in section 3. However, the fact that we must preserve the combination (P 3 − c 2 K 3 + 2cT ) which appears on the right-hand side of the supersymmetry algebra on AdS 3 × S 1 means that even when the 4d theory that we start with is not superconformal, it must still have an R-symmetry in order to preserve supersymmetry on AdS 3 × S 1 [25,27]. This will be true also in our subsequent examples -preserving supersymmetry on AdS p × S q requires having in the (p + q)-dimensional field theory all the R-symmetries that appear on the right-hand side of the supersymmetry algebra on AdS p × S q . Note that even though just the counting of supercharges would have allowed also a 2d N = (1, 1) superconformal algebra in this case, we see that this possibility does not arise. We will see also in the other cases below that on AdS 3 × S q (q ≥ 1) we always obtain an even number of left-moving and of right-moving supersymmetry generators. 
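As a point of reference, in standard 2d conventions (which may differ from the paper's by normalizations) the global part of the right-moving N = 2 superconformal algebra identified above, with J 0 proportional to (P 3 − c 2 K 3 + 2cT ) and L 0,±1 built from the L a of (2.4), takes the form
\[
\begin{aligned}
 \{G^{+}_{r}, G^{-}_{s}\} &= 2\,L_{r+s} + (r-s)\,J_{0}\,, \qquad r,s = \pm\tfrac12\,,\\
 [L_{m}, G^{\pm}_{r}] &= \left(\tfrac{m}{2}-r\right) G^{\pm}_{m+r}\,, \qquad
 [J_{0}, G^{\pm}_{r}] = \pm\, G^{\pm}_{r}\,, \qquad
 [L_{m}, L_{n}] = (m-n)\,L_{m+n}\,, \quad m,n = 0,\pm 1\,.
\end{aligned}
\]
No Virasoro central charge appears at this global level, consistent with the remark above that the constant c should not be confused with a 2d central charge; later in the paper (in the appendix B discussion) c is identified as c = ±1/(2L), with L the AdS 3 radius.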
N = 2 In the 4d N = 2 case, we use a convention where we have eight complex Weyl supercharges Q iα and S j β , with i, j = 1, 2, α, β = 1, 2, 3, 4, and a SU(2) × U(1) R-symmetry generated by a traceless 2 × 2 matrix U j i and by T . The position of the R symmetry index i, j, · · · is used to distinguish between left-handed and right-handed supercharges in the following way: In order to connect the results here to our later results in 5d and 6d, we follow [29] and treat the Weyl spinors as four component spinors, with two components vanishing according to (2.9). The spinor indices α, β, · · · can be raised and lowered by the charge conjugation matrix C αβ . In the usual notations of [30], our Q i 's are the Q's, and our Q i 's are the Q's. Together with the other bosonic generators these satisfy the SU(2, 2|2) superconformal algebra As in the N = 1 case, we choose a subgroup of the bosonic generators to give the isometry algebra of AdS 3 × S 1 , and we correlate the ordinary supercharges and the superconformal charges so that we get a consistent superalgebra on AdS 3 × S 1 . In the N = 2 case there are two different ways to do so, which are a straightforward generalization of the N = 1 case (see also [31]). The first option is the diagonal option preserving SU(2) R , where for i = 1, 2 we choose the conserved fermionic generators to be (2.11) These satisfy the algebra Notice that P 3 − c 2 K 3 + 2cT appears on the right-hand side of {Q i , Q j }, but it commutes with all the other generators in the theory. Therefore, k θ ≡ P 3 − c 2 K 3 + 2cT is a central charge and fixed within a representation. This should not be confused with the central charge of the (super)-Virasoro algebra, which does not appear in its global subalgebra that JHEP04(2016)066 we obtain here. We can rewrite the second line of (2.12) in the following way: with L a defined as in (2.4), and U i defined by This supersymmetry algebra (together with the commutators of the bosonic generators) is isomorphic to the 'small' N = (0, 4) superconformal algebra in two dimensions, which contains an SU(2) R-symmetry, together with an extra central charge k θ . Note that this central charge is consistent with the algebra we wrote, but not with its extension to a super-Virasoro algebra; thus if we embed our theory into a theory that has this extended algebra, k θ must vanish. The other possible option for N = 2 theories on AdS 3 × S 1 is It will be convenient to define the following basis for the charges The algebra that includes this specific half of the supercharges is then Notice that we get two separate subalgebras, each satisfying the N = 1 algebra of section 2.1.1, but with opposite chirality. These subalgebras contain two independent U(1) generators JHEP04(2016)066 and each one of them acts on half of the supercharges: This algebra is the N = (2, 2) superconformal algebra in two dimensions, with left-moving and right-moving U(1) R generators T ± . N = 4 In the 4d N = 4 case we have 16 complex Weyl supercharges satisfying the superconformal algebra P SU(2, 2|4). In a similar notation to the previous subsection where now i, j = 1, 2, 3, 4 are SU(4) R indices. As in the previous cases, we define supercharges Q ii ′ ≡ Q i + icγ * γ 3 S i ′ , where we begin with arbitrary and independent i and i ′ . We wish to see which combinations of indices will close on the isometries of AdS 3 × S 1 , chosen as above. 
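Before the detailed N = 4 analysis, it is convenient to display the combinations being tested (a restatement of the definition given just above, with the same γ-matrix factor):
\[
 \mathcal{Q}^{\,i i'} \;\equiv\; Q^{\,i} \;+\; i\,c\,\gamma_{*}\gamma_{3}\, S^{\,i'}\,, \qquad i, i' = 1,\dots,4\,.
\]
The question is for which pairings (i, i′) such supercharges close on the isometries of AdS 3 × S 1 rather than regenerating the full 4d superconformal algebra; as described below, up to permutations only three inequivalent pairings work, leading to the N = (0, 8), N = (2, 6) and small N = (4, 4) algebras.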
Generally, the commutation relations of these supercharges are The algebra will close only if since then we get the specific combinations of the generators P 3 − c 2 K 3 , P a + c 2 K a . Moreover, as discussed above D cannot appear in the algebra, implying another constraint This condition also ensures that is in the algebra. With these constraints the commutator is simplified to Up to permutations, there are three possible solutions to the constraints, and we will analyze each of them separately: The naive analysis of the R-symmetry in the 2d superconformal algebra would be to check what subgroup of the entire R group is consistent with the supercharges Q ii ′ . (1), and SU(2) × SU(2) × U(1) for the cases I, II, III respectively. The naive expectation is that the 2d R-symmetry will be a product of G R with the S 1 isometry (or S q isometries in the general case). As we already saw in the N = 2 analysis, this is not the case. The mixture of the sphere isometries and the R-symmetry generators can modify the symmetry by central charges and U(1) factors. In some cases we will see that not all of the generators appear on the right-hand side of anti-commutators of supercharges, and therefore will not be part of the 2d R-symmetry. For this reason, a more careful analysis needs to be done. I. This case is similar to the N = (0, 4) case above, and the algebra takes the form Notice that all unitary traceless R-symmetry generators U j i appear in the algebra, and therefore the full SU(4) remains as a two dimensional R-symmetry. The additional generator P 3 − c 2 K 3 , which is the generator of the U(1) symmetry on the circle S 1 , also appears as an R-symmetry. This algebra is isomorphic to the N = (0, 8) superconformal algebra, where the N = 8 algebra appearing is the one with a U(4) R-symmetry. II. In this case it will be convenient to write the algebra in terms of generators Q 1 , Q 2 and Q ± = 1 √ 2 (Q 3 ± Q 4 ). We will use indices i, j = 1, 2, and write the ± explicitly. JHEP04(2016)066 The fermionic commutation relations read The bosonic commutation relations of the Lorentz group with Q i and Q i are as in case I, and with Q ± and Q ± as in (2.17). The R-generators that appear in the commutation relations form a SU( is the generator of the U(1) symmetry, and it commutes with the other eight generators. This U(1) generator is diagonal in the Q's with eigenvalues of − 1 2 for Q + and Q i and 3 2 for Q − ; 1 2 for Q + and Q i and − 3 2 for Q − . Together with the additional U(1) generator commutes with Q − , Q − , but not with the rest. To summarize, we get an algebra in which the generators Q 1 , Q 2 , Q + , Q 1 , Q 2 , Q + , together with U(3) R-symmetry generators, obey non-trivial commutation relations among themselves and (anti-)commute with Q − , Q − . The full algebra is isomorphic to the N = (2, 6) superconformal algebra in two dimensions, with R-symmetry group U(1) × U(3). III. In this case it will be convenient to use fermionic generators Q (1) , for which we get the following algebra: with the bosonic commutation relations as in (2.17). We see that the algebra splits into two commuting sectors with individual SU(2) R-symmetry generators. There is also a central charge, , appearing on the right-hand side, which commutes with all the algebra and with the SU(2) + ×SU(2) − R-symmetry. The supercharges Q ±(1) and Q ±(2) are doublets of SU(2) ± respectively. 
This algebra is isomorphic to the two dimensional 'small' N = (4, 4) superconformal algebra, with an additional central charge k θ (that cannot appear in the super-Virasoro extension of this algebra). N = 3 No superconformal field theories with N = 3 that do not have N = 4 are known, but we still include this algebraically consistent case for completeness. By straightforward generalizations of the previous cases, we can get here the N = (0, 6), N = (2, 4), N = (4,2) or N = (6, 0) two dimensional superconformal algebras. Five dimensional theories on AdS 4 × S 1 As classified by Nahm [28], the only possible five dimensional superconformal algebra has N = 1 supersymmetry and is called F (4). We will use the real form F 2 (4) as in [32] to write the algebra: (2.28) In this algebra the supersymmetry generators Q iα and the superconformal generators S iα (i = 1, 2, α = 1, 2, 3, 4) are symplectic Majorana spinors, with a total of eight real components. The U ij generators form a SU(2) R-symmetry algebra, and they are anti-hermitian and symmetric, JHEP04(2016)066 Unlike our conventions in four dimensions, here the indices i, j = 1, 2 are raised and lowered by ǫ ij and ǫ ij which satisfy The charge conjugation matrix C, and also the matrices Cγ a , are anti-symmetric. The algebra (2.28) is quite similar to the N = 2 superconformal algebra in four dimensions, where we saw that the algebra closes on the isometries of AdS d−1 × S 1 for the following choices of supercharges: We may expect to have the same two options here, but it turns out that only one of them is consistent: the twisted choice with As in the four dimensional case it will be convenient to work in the basis in which the algebra takes the form Our choice of supercharges breaks the SU(2) R-symmetry to a U(1) with the generator U 11 − U 22 . As in section 2.1.1, one linear combination of the isometry of S 1 and this unbroken U(1) R appears on the right-hand side algebra and acts as a 3d SO(2) R generator: The other combination is a global symmetry that may or may not be broken. The full algebra that we find is equivalent to the N = 2 three dimensional superconformal algebra OSp(2|2, R) [33]. Indeed, based on the amount of supersymmetry, this is the only possibility. Six dimensional theories on The largest space-time dimension consistent with superconformal symmetry is six [28]. Assuming no higher spin conserved charges, there are two possibilities in this case, which are both chiral: the minimal N = (1, 0) superconformal algebra, and the extended N = (2, 0) superconformal algebra. Putting them on AdS 5 × S 1 should lead to a four dimensional superconformal algebra with half of the number of fermionic generators. JHEP04(2016)066 The 6d algebra in this case is given in terms of symplectic Majorana-Weyl spinors by [34] M µν , where i, j = 1, 2, and α, β = 1, · · · , 8. As in the five dimensional case, the spinor indices are raised and lowered by ǫ ij and ǫ ij , respectively. The U j i 's are generators of the SU(2) R-symmetry. Again, this algebra is very similar in its structure to the N = 2 superconformal algebra in four dimensions. As in the five dimensional case of section 2.2, only the twisted combinations Q 1 = Q 1 + icγ 5 S 2 and Q 2 = Q 2 + icγ 5 S 1 form a consistent algebra. The R-symmetry that is preserved is U(1), and we also have the U(1) isometry of S 1 . The remaining subalgebra is isomorphic to the N = 1 superconformal algebra in four dimensions, SU(2, 2|1). 
One combination of U(1)'s P 5 − c 2 K 5 − 4ic(U 1 2 + U 2 1 ) appears in this algebra as the U(1) R generator, and the other one may or may not be a global symmetry. N = (2, 0) This algebra has an USp(4) ≃ SO(5) R-symmetry, generated by the symmetric 2-form U ij (i, j = 1, 2, 3, 4). The indices are raised and lowered by the antisymmetric invariant tensors Ω ij and Ω ij , which satisfy Ω ij Ω jk = δ i k . In order to perform computations, we choose a specific representation for these matrices: (2.38) The algebra then takes the form (2.39) JHEP04(2016)066 Naively there are three choices of combining the supercharges to get a superconformal algebra on AdS 5 × S 1 , just as in the N = 4 case in four dimensions. But it turns out that only for two of them the algebra closes; these are (2.40) and (2.41) These two options turn out to give the same algebra, in which the 6d R-symmetry breaks to a U(2) symmetry with (2.42) U + , U − and U z satisfy the SU(2) algebra, and T commutes with them. Defining combinations of the Q's of (2.40) as in case I of section 2.1.3, they act on the supercharges in the following way We also have the extra U(1) generator ∂ θ = P 5 − c 2 K 5 . One combination of this with T acts as the U(1) R-generator in the four dimensional superconformal algebra, and the other may or may not be a U(1) global symmetry. The algebra we find is equivalent to the four dimensional N = 2 superconformal algebra SU(2, 2|2); again this is the only possibility based on the counting of supercharges. Theories realizing this construction were discussed in [2]. Field theories on AdS in this case we wish our symmetries to commute with the SO(3) isometry group of the S 2 factor. Using similar manipulations to the AdS d−1 × S 1 cases, we can single out the last two space-time dimensions by using two gamma matrices in the form of the conserved supercharges on This ansatz turns out to give a superconformal algebra that preserves the isometries of AdS d−2 × S 2 . Since we are not studying AdS 2 here, we can choose d = 5 or d = 6; however d = 6 turns out to be impossible (see appendix D) so we only have one case. Five dimensional field theories on AdS 3 × S 2 As in (2.28), we consider the F (4) superconformal algebra. This time we choose as conserved supercharges Q i = Q i + icγ 34 S i , which turns out to be the only consistent choice giving a closed subalgebra. We will denote the coordinates µ = 3, 4 by A, B, · · · and JHEP04(2016)066 µ = 0, 1, 2 by a, b, · · · . The fermionic part of the algebra is The SO(2, 2) = SL(2, R) × SL(2, R) isometry group of AdS 3 is now generated by (2.45) Only the L a appear on the right-hand side of (2.44), therefore the two dimensional superconformal algebra is chiral. P A + c 2 K A and M 34 generate the SO(3) isometry group of the sphere, and U 2 1 , U 1 2 and U 1 1 = −U 2 2 generate an SU(2) R symmetry. In the two dimensional superconformal algebra, the sphere generators join with the R-symmetry generators to form an SU(2) × SU(2) R-symmetry. This algebra turns out to be the 'large' N = (0, 4) superconformal algebra [35]. 1 Field theories on AdS Here the only examples are the six dimensional ones. We use the same conventions for the 6d superconformal algebras as above. Following the previous sections we propose the conserved supercharges to be of the form (2.46) These obey the algebra (2.47) The generators (P A + c 2 K A ) + icǫ ABC M BC and U j i form an SU(2) × SU(2) R-symmetry in the two dimensional superconformal algebra. 
As in the previous case, the full algebra turns out to be the 'large' N = (0, 4) superconformal algebra in two dimensions. The other three S 3 rotation generators P A + c 2 K A − icǫ ABC M AB commute with the supercharges and may or may not be a global symmetry. In this case there are two options to form a consistent algebra. The first case is the diagonal case The R-symmetry in this case consists of an SU(2) subgroup of the sphere SO(4) isometries generated by P A + c 2 K A + icǫ ABC M BC , and of the full 6d R-symmetry USp(4) ∼ SO(5), and we have 8 chiral supercharges. We obtain a N = (0, 8) superconformal algebra that is different from the one we encountered before; this algebra is classified as case (III) in [36]. The other three SO(4) generators again commute with the supercharges and may or may not be a global symmetry. The second option is to split the generators into two pairs. For the representation of Ω that we chose in section 2.3.2 they are (2.49) Here the supercharges close on the entire SO(4) sphere isometries, and our choice preserves an SO(4) subgroup of the USp(4) ≃ SO(5) R-symmetry. Altogether we obtain the 'large' N = (4, 4) superconformal algebra with SO L (4)×SO R (4) R-symmetry. Each one of the Rsymmetry groups SO(4) L/R acts only on left/right-handed supercharges, and is generated by three out of the S 3 isometries and three out of the preserved six dimensional R-symmetry generators. Field theories on AdS d This is the final possibility, which is related by the conformal transformation discussed above to flat space with a codimension one boundary. In this case our ansatz for the conserved supercharges is simply Q = Q + icγ * S. The supersymmetry algebra should be a (d−1)-dimensional superconformal algebra. For the d = 6 N = (2, 0) case it is clear just by counting supercharges that this is not possible, and in fact it is easy to see (essentially by chirality arguments) that one cannot preserve half of the supersymmetry for any 6d theory with chiral supersymmetry on AdS 6 (see appendix D). So, we will analyze the three, four and five dimensional cases. JHEP04(2016)066 The extra U(1) R symmetry that we have in four dimensions is broken by the choice of the combination of supercharges that appears in (2.50); this must happen because a codimension one boundary reflects left-handed fermions into right-handed ones. 2.6.2 Four dimensional N = 2 As we saw in previous similar cases, also here there are two options, the diagonal case Q i = Q i + icγ * S i and the twisted case Q 1 = Q 1 + icγ * S 2 , Q 2 = Q 2 + icγ * S 1 . As can be seen by a change of basis, the two options turn out to be equivalent and obey the algebra This algebra is isomorphic to the three dimensional N = 2 superconformal algebra with an SO(2) R-symmetry, with U 2 1 − U 1 2 as its generator; the other generators of the 4d U(2) R-symmetry are broken. Four dimensional N = 4 Also in this case all three options of combining the supercharges Q i with the superconformal charges S i ′ turn out to give the same algebra (2.52) This is equivalent to the three dimensional N = 4 superconformal algebra with SO(4) R-symmetry. Again the 4d SU(4) R-symmetry is broken to SO(4). Similarly, the 4d N = 3 case leads to a 3d N = 3 superconformal algebra. Five dimensional theories on AdS 5 In the five dimensional case only one out of the two natural options is consistent. This is the twisted choice, Q 1 = Q 1 + icS 2 and Q 2 = Q 2 + icS 1 . 
The algebra is then the N = 1 four dimensional superconformal algebra, with a U(1) R-symmetry generated by U 11 − U 22 . Three dimensional theories on AdS 3 The three dimensional superconformal algebra is Here A ij are SO(N ) generators, i, j = 1, · · · , N . For more conventions see [33]. JHEP04(2016)066 As in previous cases, in order to close the algebra on the AdS 3 isometries, we define the supercharge Q a = Q a + cS a ′ that gives the commutator For some N , we can choose n diagonal and 2m twisted supercharges where n + 2m = N , such that we define Q i = Q i + cS i , i = 1, · · · , n Q a = Q a + cS a+m , a = n + 1, · · · , n + m Q a ′ = Q a+m + cS a , a = n + 1, · · · , n + m. (2.55) The algebra is then The N = (0, n + m) subgroup involves the n + m right handed spinors, the three P µ + c 2 K µ − icǫ µνρ M νρ SL(2) R isometries, and an SO(n + m) R symmetry made out of A i j , (A i a + A i a+m ), and (A a b+m + A a+m b + A a b + A a+m b+m ). The N = (m, 0) subgroup involve the m left handed spinors, the three P µ + c 2 K µ + icǫ µνρ M νρ SL(2) L isometries, and an SO(m) R-symmetry made out of (A a b+m + A a+m b − A a b − A a+m b+m ). The two subgroups (anti-)commute with each other. JHEP04(2016)066 We will show explicitly how to write the actions and transformation rules for different 4d multiplets. For N = 1 we use the simple notations of new minimal supergravity (SUGRA) and the results of [27]. For N = 2, we build the actions and transformation rules using the superconformal approach discussed in section 2, starting from an N = 2 superconformal theory and coupling it to superconformal gravity. The different choices for the supercharges Q i correspond to relations η(ζ), where ζ, η are parameters related to the Q and S transformations, respectively. By starting from superconformal field theories on flat space and plugging in the relations η(ζ), we get the correct Killing spinor equations, action, and transformation rules (see appendix B for more details). We focus on free theories for which we will explicitly construct the action and boundary conditions on AdS 3 × S 1 that preserve all of the supercharges, and study the spectrum of the 2d superconformal algebras. Unlike in the previous section, here we allow for different radii for AdS 3 and S 1 , in order to show that one can still preserve the same supersymmetry algebras also in this case. We also allow for 4d field theories that are not necessarily conformal, though most of our examples will be conformal. Four dimensional N = 1 theories on AdS 3 × S 1 In the previous section we analyzed the supersymmetry of N = 1 theories on AdS 3 × S 1 algebraically. Another general way to study such theories is to couple them to background fields of new minimal supergravity, and to use the results of [27]. We will show explicitly that the two consistent values for the background supergravity fields result in 2d N = (0, 2) and N = (2, 0) superconformal algebras. We use the metric where θ is the coordinate on S 1 with θ ∼ θ + 2π, R and L are the radii of the S 1 and AdS 3 respectively, and for r → 0 we reach the boundary of AdS. The curved space sigma matrices are related to the flat ones by σ t,x,r = L r σ 0,1,2 , σ θ = Rσ 3 . (3.2) For spinors and sigma matrices conventions, we follow [30]. The classification of geometries preserving different numbers of supercharges for four dimensional N = 1 theories on various manifolds can be found in [25,27]. 
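From the frame implicit in (3.2), with σ t,x,r = (L/r)σ 0,1,2 and σ θ = Rσ 3 , the metric (3.1) should take the Poincaré-patch form (our reconstruction, consistent with the statements above):
\[
 ds^{2} \;=\; \frac{L^{2}}{r^{2}}\left(-dt^{2} + dx^{2} + dr^{2}\right) \;+\; R^{2}\, d\theta^{2}\,, \qquad \theta \sim \theta + 2\pi\,,
\]
so that the AdS 3 factor has radius L with conformal boundary at r → 0, and the S 1 has radius R.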
Following their work we couple the theory to the new minimal supergravity multiplet [37] which contains, in addition to the physical graviton g µν and gravitino Ψ α µ , the following auxiliary fields: a U(1) R gauge field A µ and the 1-form V µ = 1 4 ǫ µνρλ ∂ ν B ρλ . The conditions for preserving all four supercharges are given by 3) JHEP04(2016)066 where W µνκλ , R µν are the Weyl and Ricci tensors of the metric g µν , respectively. When these conditions are satisfied, there are four independent solutions to the Killing spinor equations The superalgebra then will be where δ K is the R-covariant Lie derivative along the Killing vector K µ = ζσ µζ , andq is the generator of the U(1) R symmetry. We begin by determining the values of the background fields V µ and A µ . From (3.3) we find In order to preserve the isometries of our spacetime, V µ and A µ must take values in the S 1 direction. We then get two solutions From the Killing spinor equations and the requirement that the spinors should be singlevalued (see appendix C), we find that the allowed values for A µ are The parameter n here corresponds to a large gauge transformation of the background U(1) R field around the circle, which is essentially the same as shifting the momentum generator around the circle (normalized to be an integer) by n times the R-charge. The effect of this is discussed in appendix C. It has no effect on the supersymmetry algebra, so from here on we will set n = 0. Note that this construction only works when we have a U(1) R symmetry; for superconformal theories this is guaranteed, but for other theories it is a necessary condition for preserving all supercharges on AdS 3 × S 1 . We can now solve (3.4) to get an explicit form for the Killing spinors ζ andζ: Here a,ā, b andb are Grassmanian parametrizations of the components of the Killing spinors which correspond to the four independent supercharges, z ≡ x + t andz ≡ x − t are coordinates in the spatial directions of the boundary of AdS 3 , and the subscripts L, R denote left/right-handed solutions. From now on we will focus on the right-handed solution ζ R ,ζ R ; a similar analysis can be done for the left-handed one. Using the explicit form of the spinors, we can compute the Lie derivative δ K acting on different fields. For example, when acting on a scalar (which can have some non-zero R-charge as the eigenvalue ofq), the Lie derivative takes the simple form L K = K µ ∂ µ , resulting in the following commutators of the generators: (3.10) Taking r → 0 we can identify this with the two dimensional N = (0, 2) superconformal algebra, as in section 2.1.1, with generators (3.11) Here ∆ = h L + h R is the sum of the left and right dimensions (which are equal for scalars), andR is the U(1) R generator of the two dimensional superconformal algebra. We can repeat the procedure for higher spin fields and find also the spin, s = h R − h L . We find Thus, we find also in the explicit field theory language the same algebraic structure as in the previous section. Note in particular that, as in section 2.1.1, the 2d R-chargeR is a linear combination of the KK momentum and the 4d R-charge; the specific combination we had in section 2.1.1 arises here for L = R (which we assumed in the previous section). If we choose the opposite sign for V µ , we similarly get the N = (2, 0) superconformal algebra. Free field theory on AdS 3 × S 1 In this section we analyze the spectrum and boundary conditions of different fields on AdS 3 × S 1 . 
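For readers unfamiliar with the new minimal background formalism, the Killing spinor equations being solved here have the standard form (written in the conventions of Dumitrescu, Festuccia and Seiberg, which may differ from those of [27] and of this paper by signs and normalizations):
\[
 \left(\nabla_{\mu} - i A_{\mu}\right)\zeta \;=\; -\,i\,V_{\mu}\,\zeta \;-\; i\,V^{\nu}\sigma_{\mu\nu}\,\zeta\,, \qquad
 \left(\nabla_{\mu} + i A_{\mu}\right)\tilde\zeta \;=\; +\,i\,V_{\mu}\,\tilde\zeta \;+\; i\,V^{\nu}\bar\sigma_{\mu\nu}\,\tilde\zeta\,,
\]
with V µ the conserved auxiliary vector of the new minimal multiplet and A µ the background U(1) R gauge field; on AdS 3 × S 1 both point along the S 1 direction, as stated above.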
For related work see, for example, [38] and references within. We begin with free fields, and later join them into supersymmetry multiplets. Scalar The bulk action for a free massless scalar of R-charge q (which is the bottom component of a chiral multiplet) coupled to the new minimal supergravity auxiliary fields is We now expand φ in Kaluza-Klein (KK) modes around the S 1 by defining (3.14) In terms of these modes, the action (3.12) on AdS 3 takes the form The asymptotic solution near the boundary is given by the standard formula The physical modes must have ∆ > 0. We will fix the other modes on the boundary and find the correct boundary action to yield a well-defined variational principle. JHEP04(2016)066 The variation of the bulk action is where h is the induced metric on the boudnary of AdS space (say with some cutoff on the radial direction), d 2 x denotes an integration over the variables x and t which label this boundary, and n is a vector transverse to the boundary. The bulk terms vanish on shell. In order for the boundary term to vanish, we add a boundary action S bndy,φ to S φ such that the total variation vanishes when the physical boundary conditions are held. If we choose we get the variation The operators n µ g µν ∂ ν ± i R D θ annihilate φ k + ,φ k + for every k. Therefore, by fixing the φ k− modes, δφ k − = δφ k − = 0, the variation of the action vanishes. If, on the other hand, we take the boundary action to be the variation becomes In this case, the variation vanishes when fixing δφ k + = δφ k + = 0. In order to satisfy ∆ > 0 for all the fluctuating modes, we need to have a mixed kdependent boundary action. For the KK modes with 0 < ∆ < 2 we have two possibilities for the boundary action, while otherwise we have just one choice. The full boundary action should then be with k * such that for k < k * , ∆ k − > 0 and for k ≥ k * , ∆ k + > 0 (in some cases there may be more than one possible choice for this k * , giving different theories on AdS 3 × S 1 ). Fermion The bulk action for a fermion in a chiral multiplet is a kinetic term and a term that couples the fermions to the background field, (3.24) Here we took the fermion to have R-charge (q − 1), consistent with putting it in the same multiplet as the scalar with R-charge q. As in the case of the scalar, we expand in KK modes 25) and solve the equations of motion near the boundary. We take the fermion to be periodic around the circle, anticipating that this will be required for preserving supersymmetry. The asymptotic solution is given by As explained in section 3.1, the 2d conformal dimensions of ψ ± satisfy h L − h R = ± 1 2 . The spectrum can be written as (3.27) As before, in order to have a well defined variational principle, we need to split our boundary action. If we take the boundary action to be with S bndy,ψ± = 1 2 d 2 xdθ |h|ψ in µσ µ ± Rσ θ ψ , (3.29) then δS ψ + δS bndy,ψ± = 0, when we keep the modes ψ k,± . Note that because ∆ f,k ± = ∆ s,k ± ± 1 2 , the constraints on k * for the fermion are the same as we found for the scalar. This is of course important for SUSY, as will be shown in the next section. Gauge field The action for a free U(1) gauge field v µ is the Maxwell term When expanded in KK modes, it takes the following form where i, j go over the AdS 3 coordinates. We can choose a gauge where v = 0, for which the action simplifies to i . 
The normalizable modes are a scalar whose dimension in the 2d conformal algebra is ∆ = 2, and a U(1) gauge field on the boundary, while the nonnormalizable modes that couple to them are a scalar of dimension ∆ = 0 and a conserved current. Note that from the point of view of the 2d superconformal algebra this means that (for the action (3.30)) we do not get a conserved current representation, but rather a representation that contains the 2d field strength arising from the value of v µ on the boundary. For the k = 0 modes, we have a complex massive vector field v (k) i . The asymptotic solution to the equations of motion gives the following dimensions for the k'th KK mode of v i , Similar to the previous cases, we demand that the variation of the total action should vanish when the physical boundary conditions are satisfied. The boundary action that we need to add is where The generalization to non-Abelian gauge fields is straightforward. A free N = 1 chiral multiplet The on-shell supersymmetric chiral multiplet consists a complex scalar φ and a Weyl fermion ψ. Following the previous subsections, the action for the free massless chiral multiplet with R-charge q on AdS 3 × S 1 is given by 38) and the covariant derivatives are The SUSY variations of the fields are The full action S chiral accompanied with the boundary conditions specified in section 3.2 is invariant under all four supercharges. Each scalar k ± mode with dimension (3.17) has a superpartner fermion k ± mode with dimension (3.26), such that By comparing to the known N = (0, 2) multiplets, we see that the (φ k + , ψ k + ) form chiral multiplets, and (φ k − , ψ k − ) form Fermi multiplets. BF bound saturation According to Breitenlohner-Freedman [39,40], the minimal mass of a scalar on AdS 3 can be otherwise we get complex dimensions from the point of view of the 2d conformal algebra. Supersymmetry guarantees that m 2 ≥ m 2 BF (see (3.15)), but we should discuss the special case where this bound is saturated. This happens if there exists an integer k such that kL R + q = 1, and then for this k ∆ k± coincide. In this case, the asymptotic solution to the Klein-Gordon equation is (3.43) If we fix the non-normalizable mode φ − on the boundary, the analysis is similar to the one done in the previous sections and all supercharges are preserved. The other boundary condition breaks the conformal symmetry. A free massive chiral multiplet We can add a mass in a supersymmetric way by adding a superpotential W = 1 2 mφ 2 and taking the R-charge of the scalar to be q = 1. After integrating out the auxilliary field F , we get the bulk Lagrangian The spectrum is modified due to the mass. The equation of motion of the scalar is The dimension of the k'th KK mode is given by For the Fermion, we have the coupled equations Asymptotically, we get Plugging one into the other, we get (3.49) and the dimensions with the asymptotic expansions (3.52) JHEP04(2016)066 Now the SUSY transformations mix the fields φ k ,φ −k , ψ k ,ψ −k but they can be diagonalized such that they split into four multiplets with dimensions (3.53) The boundary conditions fix two of them -either the first and fourth, or the second and third -such that only two give operators in the 2d superconformal algebra, one chiral and one Fermi multiplet for every k. 
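As background for the statements about dimensions and the BF bound above, recall the standard AdS 3 relations (our summary; the identification of the effective mass of the k-th KK mode is inferred from the saturation condition quoted above and should be checked against the paper's (3.15)-(3.17)):
\[
 m^{2}_{\rm BF} \;=\; -\,\frac{1}{L^{2}}\,, \qquad
 \Delta_{\pm} \;=\; 1 \pm \sqrt{1 + m^{2}L^{2}}\,, \qquad
 m_{k}^{2} L^{2} \;=\; \left(\frac{kL}{R} + q - 1\right)^{2} - 1 \quad \text{(inferred)}\,.
\]
With this identification, Δ k± = 1 ± |kL/R + q − 1|, so the two roots coincide precisely when kL/R + q = 1 (the BF-saturation case discussed above), and the fermionic dimensions satisfy Δ f,k± = Δ s,k± ± 1/2 as stated in the text.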
Breaking the S 1 isometry As was discussed in section 2, in order to have supersymmetry on AdS p × S q , some mixture of the R-symmetry and the S q isometries that appears on the right-hand side of the supercharges anti-commutator must be preserved. Specifically, for the case of N = 1 on AdS 3 × S 1 , from the algebraic analysis we know that we must preserve a specific combination of the S 1 isometry and the U(1) R generator, but can break each one of them separately. A simple field theory realization for this is to add a θ-dependent mass term to the free chiral multiplet. This is done by adding the superpotential W = me 2i(q−1)Aµx µ φ 2 ,W = me −2i(q−1)Aµx µφ 2 . (3.54) By taking q = 1 we get the regular massive chiral multiplet discussed in the previous section, but we will keep q arbitrary such that the Lagrangian breaks the S 1 isometry and the U(1) R symmetry. The theory is still supersymmetric as before, with the dimensions modified to but it no longer has an extra U(1) global symmetry. A free N = 1 vector multiplet The bulk action for a U(1) gauge multiplet is (3.57) and the transformation rules are The KK modes are defined as above, and we can compute their variations. We will do it explicitly for δ b , δb, which is enough to understand to structure of the multiplet. Some of the transformations are: For k = 0, the − modes are the physical ones. They contain a 2d gauge field v i , a dimension 1 2 , 1 fermion λ − and a dimension (1, 1) scalar v θ . Its dual multiplet (containing the couplings to these operators) contains a conserved current, a dimension 1 2 , 0 fermion and a dimension 0 scalar. For k > 0, the + modes are physical. They contain a vector of dimension 1 + kL 2R , kL 2R , and a fermion of dimension 1 2 + kL 2R , kL 2R . For k < 0, the − modes are physical. They contain a vector of dimension − kL 2R , 1 − kL 2R and a fermion of dimension 3.5 A free N = 2 hypermultiplet in the N = (0, 4) case For N = 2 theories we can either just guess the form of the Killing spinors and supersymmetric actions, based on the results for the N = 1 case and on requiring their consistency with the full N = 2 supersymmetry, or derive them by coupling to a background N = 2 superconformal gravity, as described in appendix B. We begin in this subsection by studying the N = 2 hypermultiplet using the supersymmetry that gives a N = (0, 4) superconformal algebra. Denoting the scalar fields by JHEP04(2016)066 φ 1,2 and the fermions by ψ 1,2 , the supersymmetry transformations of the different fields are (3.59) Here V µ is the same as in the N = 1 case above. The bulk action, equations of motion and Killing spinor equations are with the covariant derivatives defined as and their complex conjugates. A µ here is a background field for the U(1) R-symmetry of the N = 2 theory; this symmetry must exist in order to use this construction, based on our algebraic discussion. We take the R-charge of the scalars in the hypermultiplet to be q. As in our discussion of the N = 1 case in section 3.1, we can perform shifts in the background value of A µ corresponding to large gauge transformations. In our conventions of this section, in order for the Killing spinors to be periodic, we need to perform a specific shift which amounts to setting A µ = 0, and we will use this value below. The µ = θ part of the Killing spinor equations here seems different than the one we used for the N = 1 case. This happened because here we used the conformal supergravity approach, rather than coupling to regular supergravity. 
As discussed in the introduction, the two approaches must give the same answer. This is consistent because the N = 1 Killing spinor equations can be brought to the form we found here by redefinitions of the background fields that have no effect on the physics (they still describe AdS 3 × S 1 with the same parameters). The solution to the Killing spinor equation is The asymptotic dimensions of the fields are Each representation is characterized by the central charge −iD θ = k. At first sight this is confusing because in our discussion of section 2.1.2 we found that the central charge was a combination of the KK momentum and the U(1) R charge -this is related to the conventions we used above for defining A µ , which are equivalent to shifting the KK momentum by the R-charge. The supersymmetry algebra that we find from the transformations above is precisely the N = (0, 4) supersymmetry algebra with this central charge, as in section 2.1.2. We have two kinds of representations, the + and the −. The computations are the same as in the N = (0, 2) case. The + representations form a N = (0, 4) hypermultiplet, and the − representations form a N = (0, 4) Fermi multiplet. In particular, the scalars are charged under the SU(2) R symmetry in the N = (0, 4) superconformal algebra, as can be shown from 2 In this section we discuss the same theory when we put it on AdS 3 ×S 1 with the N = (2, 2) superconformal algebra. As opposed to the N = (0, 4) case, here the Killing spinors can have a relative phase (i.e. different θ-dependence). As discussed in appendix B, we will work with θ-independent Killing spinors, such that the SUSY transformations don't mix different KK modes and we can have a supersymmetric theory for every ratio R L . For that, we choose A µ = 0 and the Killing spinor equations for two spinors of opposite chirality are (3.66) The Lagarangian and the equations of motion are with the transformation rules (3.68) The asymptotic dimensions of the fields are (3.69) JHEP04(2016)066 The modes that sit in the same multiplets are Using (3.69), we find that in the same multiplet we have We see that we can have a consistent supersymmetric theory for any value of R L . For the two signs we find either an N = (2, 2) multiplet that is made out of a N = (0, 2) chiral multiplet and a N = (2, 0) Fermi multiplet, or vice versa. The bulk action is (3.73) The SUSY variations are (3.74) In this case the fermions are charged under SU(2) R , as can be seen for example from The dimensions of the scalar and fermions are This N = (0, 4) multiplet is the combination of N = (0, 2) chiral and vector multiplets. JHEP04(2016)066 3.8 A free N = 2 vector multiplet in the N = (2, 2) case The transformations in this case are for some constants γ 1,2 , and the Killing spinor equations are as before We can fix the constants γ 1,2 from the demand that We have where we used D µ (ζ 1 ζ 2 ) = 2iV ν ζ 1 σ µν ζ 2 . (3.81) The transformation is a pure gauge transformation iff The off-diagonal variations of the fermions are (3.83) Therefore the equations of motion arē JHEP04(2016)066 This is also in agreement with current conservation, and without any other conditions. These imply the equation of motion for the scalar (3.86) and the action (3.87) The dimensions of the fermions and scalar are then For k = 0, the N = (2, 2) multiplets contain a fermion of dimension 3 2 ± kL R , a scalar and a vector of dimension 1 ± kL R , and a fermion of dimension 1 2 ± kL R . For k = 0, the BF bound of φ is saturated. 
The physical multiplet contains a 2d gauge field, two dimension 3 2 fermions and two scalars of dimensions 1 and 2 (The last one comes from v θ ). The logarithmic scalar mode is part of the dual (non physical) multiplet and therefore the physical boundary conditions preserve the full superconformal symmetry. Chern Simons action from AdS 3 × S 1 As we saw in section 3.2.3, the three dimensional action coming from the KK reduction of the four dimensional Maxwell term gives a three dimensional Maxwell term for the k = 0 KK mode. It is well-known [38] that the physical boundary conditions for a gauge field on AdS 3 with just a Maxwell term give a 2d gauge field on the boundary, as we indeed found above. On the other hand, if there is a Chern-Simons term on AdS 3 , the physical boundary conditions will give a conserved current associated with a U(1) global symmetry in the 2d superconformal algebra. Therefore, it is an interesting question whether we can write a four dimensional theory that will give a Chern-Simons action on the three dimensional AdS space. One way to do this (for flat space times a circle) is to note that [41] showed how to couple 4d vector and two-form multiplets in a supersymmetric and parity-odd way. Denote the superfields whereG µ = 1 2 ǫ µνρτ ∂ ν B ρτ , B ρτ is a two form, and V, W a are the regular gauge field and field strength multiplets in the Wess-Zumino gauge. The Maxwell-Chern-Simons action in four dimensions is given by In components, the action reads When reducing the theory on S 1 , we get another (one form) gauge field from w i = B (k=0) iθ . The KK zero mode then contains two vector fields A i , w i with Maxwell terms and a mixed Chern Simons term with coefficient m. Therefore, when putting this theory on AdS 3 × S 1 , the physical boundary conditions will lead to conserved currents and their superconformal partners as part of the two dimensional spectrum. A.2 Spinors and superconformal algebra Here we specify our conventions for the superconformal algebras and spinors in different dimensions, used in section 2. In every dimension, we use the signature (−, d−1 +, . . . , +). In 4 and 6 dimensions, γ * denotes the chiral gamma matrix. For most of the algebras, we use the conventions of [29]. The only exception is the six dimensional N = (2, 0) superconformal algebra, for which we use [42]. The four dimensional N = 1 supercharges are Majorana spinors, while the four dimensional N = 2, 4 supercharges are taken to be chiral Dirac spinors, where the position of the R-symmetry index is used to distinguish between left and right spinors in the following way: The five dimensional supercharges are symplectic Majorana spinors. In this case the R-symmetry indices can be lowered and raised using the anti-symmetric tensor, The six dimensional supercharges are symplectic Majorana-Weyl spinors. They satisfy as in four dimensions For the N = (1, 0) theory, the i, j indices can be raised and lowered using the ǫ tensor as in five dimensions. For the N = (2, 0) theory, the i, j indices can be raised and lowered using the matrix Ω ij appearing in the definition of the symplectic spinors, We use a specific representation of Ω and all the other components are zero. The notations for raising and lowering spinor indices are where C is the charge conjugation matrix. For its properties in the different dimensions and more details, see [29,42]. 
B Construction of Killing spinors and supersymmetry transformations from conformal supergravity From the classification made in section 2, we find the constraints on the different superconformal transformation parameters, such that the transformations close on the desired algebra. Specifically, if we denote by ζ and η the parameters associated with the Q and S transformations respectively, our choice of Q leads to a constraint of the form η = η(ζ). By starting from the well-known superconformal multiplets in 4, 5, and 6 dimensions, and the conformal Killing spinor equations arising in conformal supergravity, and plugging in η(ζ), we find the correct Killing spinor equations and transformation rules of the studied curved spacetime. We will show here explicitly how it works for N = 1, 2 theories in four dimensions, but the procedure is the same for the other cases. The conformal Killing spinor equation in 4d N = 1 conformal supergravity is (see equation (16.10) in [29]) where b µ is the gauge field coupling to dilatations, ω ab µ the spin connection and A µ is the U(1) R gauge field. The AdS 3 × S 1 solution should be obtained by plugging in b µ = 0, η = icγ 3 γ * ζ. The equation becomes we reproduce (3.4) which describes AdS 3 × S 1 in the notations of new minimal supergravity that we used in section 2. The other decomposition rules follow automatically. For example, the supersymmetry variation of Φ µ , the fermionic gauge field associated with S transformations, is (after plugging in η(ζ)) where f a µ is the gauge field coupling to special conformal transformations. The variation vanishes if f a µ = c 2 e a µ for µ = θ, and f a µ = −c 2 e a µ for µ = θ. We see that, as expected, the specific linear combination of Q and S we chose gives the correct relation between the P and K generators in the supersymmetry algebra of section 2.1.1 on AdS 3 × S 1 . A similar computation can be done for N = 2. The relevant part of the N = 2 superconformal Killing spinor equations is 3 JHEP04(2016)066 From the algebraic analysis, we know that we have two options. The first is the diagonal choice in which η i = icγ 3 ζ i , leading to a N = (0, 4) superconformal algebra. We will take U i µ j = 0 such that the SU(2) R symmetry is conserved, and the Killing spinor equations become We can identify as before c = ± 1 2L and redefine A ′ µ = 1 2 A µ + ce µ3 = 1 2 (A µ − V µ ) such that the equations become The second option is the twisted one in which η 1 = icγ 3 ζ 2 , η 2 = icγ 3 ζ 1 , leading to a N = (2, 2) superconformal algebra. If we take U i µ j = 0, the Killing spinor equations are In the basis ζ ± = 1 √ 2 (ζ 1 ± ζ 2 ) the equations take the form By the same identification as before, we find The solutions to these equations admit a relative phase of e iR L θ between the spinors, and is therefore supersymmetric only if the radii satisfy the quantization condition R L ∈ N. The reason is that as explained in appendix C, the supersymmetry transformations are consistent only if the Killing spinors are single valued. In that case, ζ ± can both be single valued at the same time only if the relative phase between them is an integer multiple of θ. If we want the theory to be supersymmetric for general radii, we can turn on a specific U i µj and eliminate the relative phase between the Killing spinors. By doing so the Killing spinor equations take the form D µ ζ ± = ∓iV ρ σ µρ ζ ± . (B.11) In the same way we construct the supersymmetric multiplets from the superconformal multiplets. 
For example, we can take a superconformal hypermultiplet with the transformation rules JHEP04(2016)066 and plug in the diagonal decomposition rule We get the supersymmetry transformation rules on AdS 3 × S 1 (B.14) In the same way, this is done for the other multiplets and the other supersymmetries. C Comments about R-gauge transformations As we saw in section 3, there is a freedom in choosing the background R-symmetry gauge field, which results in the Killing spinor in a phase ζ ∼ e inθ . Above we always chose n = 0, but let us see what happens in the more general case. In the four dimensional N = 1 case on AdS 3 × S 1 , is we use such a more general background A µ and represent some scalar field with 4d R-chargeq using its KK modes around the S 1 , the U(1) R charge of the k'th KK mode in the two dimensional superconformal algebra is kL R +q + nqL R (this can be seen from (3.11)). We wish to understand if different choices for n lead to different physical theories. • Ifq is an integer, n can always be absorbed by shifting the definition of k, and therefore it has no consequences on the physics. • Ifq is irrational, different choices of n will lead to a different spectrum of the theory with different dimensions and two-dimensional R-charges, and thus to a different theory on AdS 3 × S 1 . • Ifq = M N is rational (where M and N are coprime integers), there are N different physical theories with different spectra that can be obtained by changing n, while two theories with n and n ′ = n + N are physically equivalent. The Killing spinors should clearly be periodic in θ for the supersymmetry generators to be well-defined. In particular, if the Killing spinors have a phase ζ ∼ e inθ , the supersymmetry transformations relate different KK modes with k f ermion − k boson = ±n, which exist only if n is an integer. JHEP04(2016)066 D Six dimensional superconformal algebras on AdS 6 and AdS 4 × S 2 We claim that the six dimensional N = (1, 0) and N = (2, 0) superconformal (SC) algebras do not have subalgebras that close on the isometries of AdS 6 and AdS 4 ×S 2 . One argument for this (say in the AdS 6 case) is that AdS 6 is conformally related to flat space with a boundary, and the boundary conditions on a spinor necessarily modify its chirality, which cannot be done in a supersymmetric way when the supercharges are chiral. We can also see this in a purely algebraic way. The six dimensional algebra on AdS 6 must be equivalent to the F (4) SC algebra in 5 dimensions which is the only 5 dimensional superconformal algebra. This algebra includes 16 supercharges and SU(2) as its R-symmetry. The N = (2, 0) 6 dimensional superconformal algebra has 32 supercharges, so the numbers fit (after breaking half) but there is no choice of supercharges that will break the R-symmetry to SU(2) (but to a larger group). If we discuss AdS 4 × S 2 , then by counting supercharges, the 6 dimensional N = (1, 0), (2, 0) should correspond to the 3 dimensional N = 2, 4 superconformal algebras with R-symmetries of SO(2), SO(4) respectively. These R-symmetry groups must be a mixture of the SO(3) S 2 isometries and a subgroup of the six dimensional R-symmetry group. For the two cases this cannot be done. More explicitly, in order to form such a subalgebra, there is a limited amount of options to connect the supercharges Q i and S j . Specifically, for AdS 6 , we can choose either Q i + icS i ′ or Q i + icγ * S i ′ . 
In both cases, the anti-commutators of these charges contain the dilatation operator D, which does not keep us in the bosonic subalgebra we want, but rather brings us back to the full superconformal algebra. The same arguments apply in the case of AdS 4 × S 2 , with the options Q i + icγ 45 S i ′ and Q i + icγ * γ 45 S i ′ . E Super-Virasoro algebra on AdS 3 × S d−3 In the analysis of section 2 we found the possible 2d superconformal algebras arising in field theories on AdS 3 × S d−3 . The conformal generators that are dual to the AdS isometries are just the global ones (which in the language of Virasoro generators are denoted by L ±1,0 , L ±1,0 ), and the superconformal algebras we found contain just the global superconformal algebra and not the full super-Virasoro algebra. As was shown in [43], when there is a fluctuating gravity theory on asymptotically AdS 3 spacetimes, the symmetry group includes the entire Virasoro group. Here we studied curved manifolds without discussing gravity at all, and we can ask whether our field theories can come from some G N → 0 limit of a gravitational theory. One basic requirement is that the superconformal algebras we found can be extended into a super-Virasoro algebra. We encounter problems in two cases, the N = (0, 4) and N = (4, 4) superconformal algebras that come from 4d N = 2, 4 supersymmetric field theories on AdS 3 × S 1 . These algebras contain a central extension beyond the known superconformal algebras. As far as we know, these central charges are inconsistent with the extension to a super-Virasoro algebra. Thus, we claim that such field theories cannot appear as decoupled sectors of gravitational theories on AdS 3 , though they could arise as decoupled sectors of gravitational theories in higher dimensions (e.g. on D3-branes wrapping
17,648.4
2016-04-01T00:00:00.000
[ "Physics" ]
Bridge: a GUI package for genetic risk prediction Background Risk prediction models capitalizing on genetic and environmental information hold great promise for individualized disease prediction and prevention. Nevertheless, linking the genetic and environmental risk predictors into a useful risk prediction model remains a great challenge. To facilitate risk prediction analyses, we have developed a graphical user interface package, Bridge. Results The package is built for both designing and analyzing a risk prediction model. In the design stage, it provides an estimated classification accuracy of the model using essential genetic and environmental information gained from public resources and/or previous studies, and determines the sample size required to verify this accuracy. In the analysis stage, it adopts a robust and powerful algorithm to form the risk prediction model. Conclusions The package is developed based on the optimality theory of the likelihood ratio and therefore theoretically could form a model with high performance. It can be used to handle a relatively large number of genetic and environmental predictors, with consideration of their possible interactions, and so is particularly useful for studying risk prediction models for common complex diseases. Background The translation of human genome discoveries into health practice represents one of the major challenges in the coming decades [1,2]. The use of emerging genetic knowledge for early disease prediction, prevention and pharmacogenetics will advance future genomic medicine and lead to more effective prevention and treatment strategies [3]. Among those, disease prediction based on genetic and environmental information is the first step in translating genomics into health [4]. It assesses an individual's risk of future disease, so that early preventive interventions can be adopted to reduce morbidity and mortality [5]. For this reason, studies to assess the combined role of genetic and environmental information in early disease prediction represent a high priority, as manifested in multiple risk prediction studies now underway [6][7][8][9][10][11][12]. The yield from these studies can be enhanced by adopting powerful and computationally efficient study design and analytic tools [13]. We have previously developed an optimal ROC curve (O-ROC) method to quickly evaluate new genetic and environmental findings for potential clinical practice by designing a new risk prediction model, estimating its classification accuracy, and calculating the sample size needed for evaluating the model [14]. If, in the design stage, a proposed risk prediction model appears to be superior to existing models, or if it reaches a desired accuracy level, it may worth developing further for clinical use. To evaluate the risk prediction model on a study sample, we developed a forward ROC curve (F-ROC) method [15]. F-ROC builds on the optimality theory of the likelihood ratio [16], and is thus powerful for risk prediction analysis. It adopts a stepwise selection algorithm to efficiently deal with a large number of predictors and their possible high-order interactions. To facilitate designing and analyzing risk prediction models, we have implemented the above two methods into the graphical user interface (GUI) software, Bridge. Bridge is comprised of two modules, Test Design and Test Build. The O-ROC approach has been implemented in the Test Design module, for designing a risk prediction model. 
The Test Design module uses the essential information (e.g., allele frequencies) of risk predictors from previously published studies or publicly available resources to design a risk prediction model, calculating its estimated accuracy and the sample size required to further investigate the model. The F-ROC approach has been built into the Test Build module. The Test Build module is developed for risk prediction modeling on known risk predictors, as well as for high-dimensional risk prediction based on a large number of potential risk predictors. Bridge is freely accessible online at https://www.msu.edu/~qlu/Software.html.

Implementation

R is open-source software used for statistical computing and graphics. With many built-in statistical functions and excellent scientific graphing capacity, R is now one of the most widely used statistical software packages. Although R is widely used in statistics and related fields, it has a limited graphical interface, which can make it difficult for new R users. Bridge provides an R-based graphical user interface (GUI), offering an intuitive and interactive visualization experience for users. Instead of writing code in the R console window, which can be less convenient for new users, the user-friendly interface of Bridge allows users to load datasets and run the program easily by simply clicking options in the menu or buttons on the toolbar. Moreover, for users who prefer the R console, Bridge also provides access to its functions through the console. In this paper, we give an overview of the package. A detailed description of the installation and use of the package can be found in the software vignette. Bridge is comprised of two independent modules, Test Design and Test Build, for the design and construction of a risk prediction model, respectively. The Test Design module serves as a tool for designing a risk prediction study. Given the prevalence of the disease of interest and essential information on the known risk predictors (e.g., relative risks) from previous studies and/or public resources, the Test Design module plots an estimated receiver operating characteristic (ROC) curve of the proposed predictive model, so that users can easily visualize the estimated discriminating ability of the model. If the model reaches the desired level of discriminating ability and appears worth further investigation, a power analysis can be conducted to ensure that the study has sufficient power. Given the power and type I error, the required sample size can then be determined by the Test Design module in order to further investigate the proposed model and verify its classification accuracy. At least two strategies can be used to select single-nucleotide polymorphisms (SNPs) for designing a risk prediction model. One strategy is to include only disease-susceptibility SNPs that have been replicated in multiple studies, and the other is to include as many potentially disease-susceptibility SNPs as possible in the model. Each strategy has its own advantages and disadvantages. Given the limited number of SNPs identified for most common complex diseases and their small effect sizes, a risk prediction model formed by the former strategy likely has a low AUC value but could have robust performance across different studies. The latter strategy could result in a risk prediction model with high accuracy, especially when gene-gene interactions exist. Nevertheless, the resulting risk prediction model tends to be less stable.
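To make the design-stage idea concrete, the sketch below estimates the AUC of a single-SNP classifier directly from published-style inputs (disease prevalence, population genotype frequencies, and genotype relative risks), ranking genotypes by their likelihood ratio. This is only a schematic illustration of the principle; it is not the O-ROC algorithm implemented in Bridge, and all numbers are hypothetical.

```python
# Illustrative design-stage AUC estimate for one biallelic SNP (hypothetical inputs).
prevalence = 0.0004
geno_freq = {"AA": 0.49, "Aa": 0.42, "aa": 0.09}   # population genotype frequencies
rel_risk  = {"AA": 1.0,  "Aa": 1.3,  "aa": 1.8}    # genotype relative risks

# Penetrances scaled so that sum_g P(g) * f(g) equals the prevalence.
baseline = prevalence / sum(geno_freq[g] * rel_risk[g] for g in geno_freq)
penetrance = {g: baseline * rel_risk[g] for g in geno_freq}

# Genotype distributions in cases and controls (Bayes' rule).
p_case = {g: geno_freq[g] * penetrance[g] / prevalence for g in geno_freq}
p_ctrl = {g: geno_freq[g] * (1 - penetrance[g]) / (1 - prevalence) for g in geno_freq}

# Rank by likelihood ratio; AUC = P(LR_case > LR_control) + 0.5 * P(tie).
lr = {g: p_case[g] / p_ctrl[g] for g in geno_freq}
auc = 0.0
for g1 in geno_freq:
    for g2 in geno_freq:
        if lr[g1] > lr[g2]:
            auc += p_case[g1] * p_ctrl[g2]
        elif lr[g1] == lr[g2]:
            auc += 0.5 * p_case[g1] * p_ctrl[g2]
print(f"Estimated AUC for this single hypothetical SNP: {auc:.3f}")
```

Ranking by the likelihood ratio is optimal in the Neyman-Pearson sense, which is the intuition behind the likelihood-ratio-based methods cited above.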
If data are collected to investigate the proposed risk prediction model, we can then use the Test Build module of Bridge to form and evaluate the proposed model. The Test Build module can be used to assess the combined effect of known risk predictors (i.e., those identified from previous association studies) in disease prediction, with consideration of possible high-order interactions. In addition to risk prediction on known risk predictors, the Test Build module also allows users to explore a large ensemble of potential risk predictors and their interactions for improved disease prediction. This strategy is particularly useful for complex diseases where a majority of the genetic and environmental risk predictors are unknown. For this strategy, the potentially disease-susceptibility predictors can be chosen based on both biological knowledge and statistical evidence. For instance, we can follow a simple strategy previously used to evaluate different sets of SNPs based on their marginal p-values (i.e., 10 −1 , 10 −2 , …, 10 −8 ) [8,17]. The Test Build module has a built-in forward selection algorithm to handle a large set of predictors. The algorithm is capable of searching for important risk predictors and interactions from a large number of environmental and genetic predictors to further improve the risk prediction model. In addition, the Test Build module has a built-in function for dealing with missing data and provides options for model building and validation (e.g., an option to control the maximum number of risk predictors to be included in the model). The Test Build module uses k-fold cross-validation to provide internal validation, and can also provide external validation if an independent dataset is available. The summary results (e.g., the AUC values) for the risk prediction models built on the training and validation datasets are summarized in the Bridge output window. Users can also view the proposed model via ROC-curve plots and tree structure plots. The detailed selection process is available under the Test Build.Results tab in the Output area.

Results and discussion

We used an empirical study of Crohn's disease (CD) as an example to illustrate how to use Bridge to design and form a risk prediction model.

Use the Test Design module to design a risk prediction model

For simplicity, we used three well-replicated CD genetic variants, rs3828309, rs4613763 and rs11465804, to design a CD risk prediction model. By using the Data input option from the Test Design menu, we entered the disease prevalence (ρ = 0.0004) and the genotype frequencies of the three markers, obtained from a previous study (see Additional file 1). Note that if such information is not available, other information (e.g., relative risk and population frequency) can also be used. By clicking on the Run command from the Test Design menu, the program estimated that the 3-locus CD risk prediction model had an AUC value of 0.61. Suppose that we are interested in knowing whether the accuracy of the model is significantly above the level of 0.60; the Test Design module can also calculate the sample size needed to test this hypothesis. Assuming a type I error of 0.05, a power of 0.95, and an equal number of cases and controls, 8257 cases and 8257 controls were required to verify that the proposed model had an AUC value above 0.60. The detailed results related to this analysis were displayed in the Design Results tab under the Output area.
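For intuition about how such a sample-size number arises, the sketch below runs a generic calculation for a one-sided test of H0: AUC = 0.60 against an anticipated AUC of 0.61, using the Hanley-McNeil variance approximation for an empirical AUC. Bridge's own calculation is based on the O-ROC theory rather than on this approximation, so the result here is not expected to reproduce the 8257 cases and controls quoted above; the function and its defaults are purely illustrative.

```python
from math import sqrt
from scipy.stats import norm

def hanley_mcneil_var(auc, n_cases, n_controls):
    """Hanley & McNeil (1982) variance approximation for an empirical AUC."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    return (auc * (1 - auc)
            + (n_cases - 1) * (q1 - auc ** 2)
            + (n_controls - 1) * (q2 - auc ** 2)) / (n_cases * n_controls)

def required_n_per_group(auc_alt=0.61, auc_null=0.60, alpha=0.05, power=0.95):
    """Smallest n (cases = controls = n) giving the target power for H1: AUC > auc_null."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    n = 10
    while True:
        se0 = sqrt(hanley_mcneil_var(auc_null, n, n))
        se1 = sqrt(hanley_mcneil_var(auc_alt, n, n))
        if z_a * se0 + z_b * se1 <= auc_alt - auc_null:
            return n
        n += 10

print("cases (= controls) needed under this approximation:", required_n_per_group())
```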
The ROC curve for the estimated risk prediction model could also be viewed by clicking on the Plot ROC Curve option from the Test Design menu (Figure 1).

Use the Test Build module to form a risk prediction model

In order to further investigate the proposed 3-locus prediction model, we conducted a risk prediction study using the case-control samples from the Wellcome Trust Crohn's disease genome-wide association study. From the available 500k SNPs, we selected these 3 CD-related SNPs, rs3828309, rs4613763 and rs11465804, and formed a 3-locus model using the Test Build module. We first loaded the training and validation datasets via the Dataset Tables under the Input area. Using the samples from the first dataset (i.e., the training samples), we formed a 3-locus model with a fitted AUC value of 0.60. The model was further validated in the second dataset, where it attained a predicted AUC value of 0.60. To visualize the formed risk prediction models, the ROC curves could be plotted by using the Plot ROC curve option from the Test Build menu. The detailed results of the model selection were summarized in the Test Build.Results tab under the Output area. In the analysis, rs3828309, rs4613763 and rs11465804 were sequentially entered into the model. In the first step, the module selected rs3828309 and split the samples into two distinct risk groups, a high risk group and a low risk group, comprising samples with different genotypes of rs3828309. In subsequent steps, it added new markers into the model and gradually divided the samples into more distinct risk groups. The selection process continued until a 3-locus model had been reached. The details of the model building process could be visualized via the tree structure plot under the Clusters.Plot of Tree area (Figure 2). Note that the risk prediction analysis could also be performed from the R console. A detailed description of using the functions in the R console can be found in the software vignette. The above analysis was limited to 3 well-established CD SNPs. In order to consider additional predictors to further improve the 3-locus model, we extended the analysis to 29 potential CD-related SNPs. Using the Wellcome Trust CD dataset (see Additional files 4 and 5), the Test Build module identified 5 SNPs and formed a five-locus model with an AUC value of 0.63. The five-locus model was further validated in the testing sample with a predicted AUC value of 0.62. By considering 29 potential CD-related SNPs, the Test Build module was able to select 2 additional predictors, rs3764147 and rs4263839, into the model, and further improved the accuracy of the CD risk prediction model.

Conclusion

With increasing genetic findings from large-scale genetic studies, risk prediction studies are being conducted to evaluate the role of potential genetic and environmental predictors in early disease prediction. While there is increasing interest in such risk prediction research, new bioinformatics tools have not been well developed for this emerging area of research. We developed a GUI package, Bridge, to facilitate risk prediction modeling. The software will help an investigator design a study to evaluate a new risk prediction model. It can also be used to form a new risk prediction model based upon multiple genetic and environmental risk predictors, with consideration of possible interactions. Bridge is built around a graphical user interface, which can be easily accessed by basic science and clinical researchers.
Availability and requirements
Project name: Bridge package.
Project home page: https://www.msu.edu/~qlu/Software.html
Operating system(s): Linux, Windows, Mac OS X.
Programming language: R.
Other requirements: R (≥3.0.0).
License: GNU GPL.
Any restrictions to use by non-academics: none except those posed by the license.
3,089.8
2013-12-01T00:00:00.000
[ "Computer Science" ]
Identification of the major synaptojanin-binding proteins in brain. Synaptojanin is a nerve-terminal enriched inositol 5-phosphatase thought to function in synaptic vesicle endocytosis, in part through interactions with the Src homology 3 domain of amphiphysin. We have used synaptojanin purified from Sf9 cells after baculovirus mediated expression in overlay assays to identify two major synaptojanin-binding proteins in rat brain. The first, at 125 kDa, is amphiphysin. The second, at 40 kDa, is the major synaptojanin-binding protein detected, is highly enriched in brain, is concentrated in a soluble synaptic fraction, and co-immunoprecipitates with synaptojanin. The 40-kDa protein does not bind to a synaptojanin construct lacking the proline-rich C terminus, suggesting that its interaction with synaptojanin is mediated through an Src homology 3 domain. The 40-kDa synaptojanin-binding protein was partially purified from rat brain cytosol through a three-step procedure involving ammonium sulfate precipitation, sucrose density gradient centrifugation, and DEAE ion-exchange chromatography. Peptide sequence analysis identified the 40-kDa protein as SH3P4, a member of a novel family of Src homology 3 domain-containing proteins. These data suggest an important role for SH3P4 in synaptic vesicle endocytosis. Synaptic vesicles are specialized organelles that neurons use to secrete nonpeptide neurotransmitters. Following neurotransmitter release, synaptic vesicle membranes are retrieved by a process thought to involve clathrin-coated pits and vesicles (1,2), and recent data suggest that this endocytic mechanism is active at both high and low rates of exocytosis (3). We have identified a 145-kDa protein, referred to as synaptojanin, which is enriched in nerve terminals and appears to function in synaptic vesicle endocytosis (4 -6). Synaptojanin is an inositol 5-phosphatase that dephosphorylates inositol 1,4,5-trisphosphate, inositol 1,3,4,5-tetrakisphosphate, and phosphatidylinositol 4,5-bisphosphate at the 5Ј position of the inositol ring (6). Inositol phosphate metabolism has been implicated in a variety of membrane trafficking events including endocytosis (7). In addition, synaptojanin has an N-terminal domain that is homologous to the cytosolic domain of the yeast SacI protein. SacI mutants show genetic interactions with actin as well as with the yeast secretory mutants sec6, sec9, and sec14 (8,9), and more recently SacIp has been demonstrated to mediate ATP transport into the yeast endoplasmic reticulum (10). Synaptojanin was initially identified based on its ability to bind to the Src homology 3 (SH3) 1 domains of Grb2 (4). Cloning of synaptojanin revealed a 250-amino acid proline-rich domain at its C terminus (6) that contains at least five sequences forming potential SH3 domain-binding sites (11). A second, 170-kDa isoform of synaptojanin is present in a wide variety of tissues including neonate brain but is not detected in adult brain (6,12). The 170-kDa synaptojanin isoform is generated by alternative splicing of the synaptojanin gene leading to the presence of an additional 266-amino acid proline-rich domain with at least three additional SH3 domain-binding consensus sequences as compared with the 145-kDa isoform (12). Synaptojanin also binds to the SH3 domain of amphiphysin (6). Amphiphysin was first identified in chicken synaptic fractions (13) and mammalian amphiphysin, which is concentrated in presynaptic nerve terminals, has been implicated in synaptic vesicle endocytosis (14). 
A role for amphiphysin in endocytosis is supported by studies on its yeast homologues, RVS 161 and RVS 167 (15)(16)(17). Mutations in these genes cause an endocytosis defect characterized in part by an impairment in ␣-factor receptor internalization (18). Further, amphiphysin is known to interact with AP2 (14,19), a component of the plasma membrane clathrin coat (20). Evidence of a role for the SH3 domain of amphiphysin in synaptic vesicle endocytosis is provided by its interaction with dynamin. A role for dynamin in endocytosis was first determined based on its identity with the gene product of the Drosphila shibire mutant (21,22). Mutations in Drosophila dynamin leads to a block in synaptic vesicle endocytosis (23), and recent data suggest that dynamin functions in the nerve terminal by mediating the fission of endocytic vesicles (24,25). Thus, it appears likely that SH3 domain-mediated interactions of amphiphysin with synaptojanin are important to the endocytic function of synaptojanin in vivo. SH3 domain interactions involving Grb2 have also been recently demonstrated to be important for clathrin-mediated endocytosis of the epidermal growth factor receptor in non-neuronal cells (26). Specifically, disruption of Grb2 interactions with the epidermal growth factor receptor blocks receptor endocytosis, and epidermal growth factor can stimulate a transient association of Grb2 with dynamin (26). Thus, SH3 domain-mediated interactions appear to function widely in clathrin-mediated endocytosis. Here, we have used purified synaptojanin in overlay assays to identify its preferred binding targets in brain. In addition to amphiphysin, we identified a 40-kDa synaptojanin-binding protein that is highly enriched in brain, is concentrated in soluble synaptic fractions, and co-immunoprecipitates with synaptojanin. Purification and peptide sequence analysis revealed the 40-kDa protein as SH3P4, a novel SH3 domaincontaining protein that was identified from a mouse library screened with a Src SH3 ligand peptide (27). SH3P4, along with SH3P8 and SH3P13, define a family of similar proteins of unknown function (27). Our data strongly implicate SH3P4, and perhaps other family members, in synaptic vesicle endocytosis. Expression and Purification of Synaptojanin Baculovirus Constructs-Spodoptera frugiperda (Sf9; Invitrogen) cells were grown at 27°C in suspension cultures in Sf-900 II SFM optimized serum-free medium (Life Technologies, Inc.) supplemented with gentamycin. The baculovirus transfer vectors were co-transfected with linear baculovirus into Sf9 cells, and recombinant baculovirus was selected by plaque assay as described (28). Positive colonies were confirmed by protein purification and Western blot, and high titer stocks (10 8 -10 9 plaque forming units/ml) were generated as described (28). For purification of synaptojanin constructs, 200-ml cultures of Sf9 cells (1.5 ϫ 10 6 cells/ml) were infected with ϳ1 ϫ 10 9 plaque forming units of baculovirus. After 72 h of growth, cells were washed with 4°C phosphate-buffered saline (20 mM NaPO 4 monobasic, 0.9% NaCl, pH 7.4) and lysed in 30 ml of buffer A (300 mM NaCl, 0.83 mM benzamidine, 0.23 mM phenylmethylsulfonyl fluoride, 0.5 g/ml aprotinin, 0.5 g/ml leupeptin, 50 mM HEPES-OH, pH 8.0) by homogenization in a glass Teflon homogenizer and two passes through a 255 ⁄8 gauge needle. 
Homogenates were spun at 12,000 ϫ g for 10 min, and Triton X-100 (0.1% final) and imidazole (20 mM final) were added to the supernatant before the addition of 0.5 ml of Ni-NTA-agarose (Qiagen Corp.). The samples were incubated overnight at 4°C and washed three times in 20 ml of ice-cold buffer A with 0.1% Triton X-100 and 20 mM imidazole, and bound proteins were eluted with 4 ϫ 1-ml incubations in buffer A with 0.1% Triton X-100 and 200 mM imidazole. Dynamin was purified from Sf9 cells after baculovirus-induced expression as described (29) and was a generous gift of Dr. Sandra Schmid (Scripps Research Institute). Overlay Assays-Overlay assays using a glutathione S-transferase/ amphiphysin SH3 domain fusion protein were performed as described (4). For synaptojanin overlay assays, protein fractions on nitrocellulose membranes were blocked for 1 h in blotto (phosphate-buffered saline with 5% (w/v) nonfat dry milk), rinsed in water, and incubated overnight at 4°C in buffer B (150 mM NaCl, 3% bovine serum albumin, 0.1% Tween 20, 1 mM dithiothreitol, 0.83 mM benzamidine, 0.23 mM phenylmethylsulfonyl fluoride, 20 mM Tris-Cl, pH 7.4) containing approximately 10 g/ml of purified synaptojanin or the synaptojanin R-2 deletion construct. Transfers were then washed and incubated with affinity purified anti-synaptojanin antibody (Milo) (5) or 1852 (described below) in buffer B without dithiothreitol for 1 h at room temperature. After washing, transfers were incubated in goat anti-rabbit secondary antibody conjugated to horseradish peroxidase for 1 h in blotto and devel-oped using the ECL kit (Amersham Corp.). Preparation of Membrane Fractions-Various tissues were dissected from adult male rats and were homogenized at 1:10 (w/v) in buffer C (0.83 mM benzamidine, 0.23 mM phenylmethylsulfonyl fluoride, 0.5 g/ml aprotinin, 0.5 g/ml leupeptin, 20 mM HEPES-OH, pH 7.4) with a polytron or glass Teflon homogenizer, followed by centrifugation for 5 min at 800 ϫ g max . The supernatant fractions were separated on SDS-PAGE on 5-16% or 3-12% gradient gels. Subcellular fractionation of brain homogenates to generate synaptic fractions was performed as described (5). Immunoprecipitation Analysis-Amphiphysin immunoprecipitations were performed as described (14). For synaptojanin immunoprecipitations, a rat brain was homogenized at 1:10 (w/v) in buffer D (0.3 M sucrose, 0.83 mM benzamidine, 0.23 mM phenylmethylsulfonyl fluoride, 0.5 g/ml aprotinin, 0.5 g/ml leupeptin, 10 mM HEPES-OH, pH 7.4) with a polytron. The homogenate was centrifuged at 800 ϫ g max for 5 min, and the supernatant was then centrifuged at 180,000 ϫ g max for 2 h. The soluble supernatant was precleared with protein G-agarose for 4 h at 4°C and the precleared extracts were incubated overnight at 4°C with a monoclonal antibody against synaptojanin or a control monoclonal antibody raised against p75 LNGFR precoupled to protein G-agarose. Samples were washed five times in 1 ml of buffer D and were eluted with SDS sample buffer. Purification and Identification of the 40-kDa Synaptojanin-binding Protein-Adult rat brains (typically four for each preparation) were homogenized at 1:5 (w/v) in buffer C with a polytron, and the extracts were centrifuged at 180,000 ϫ g max for 1 h. Ammonium sulfate powder was added slowly to the soluble supernatant with stirring until 20% saturation. After 45 min on ice, the sample was centrifuged at 2700 ϫ g max for 30 min, and the supernatant was removed and precipitated with ammonium sulfate to 40% saturation. 
The 20 -40% ammonium sulfate precipitate was resuspended in 16 ml of buffer C and was loaded on four 40-ml 2.5-15% linear sucrose gradients prepared in buffer C. The gradients were centrifuged in a Beckman VTi 50 rotor for 6 h at 45,000 rpm with slow acceleration and no brake. Gradient fractions (20 ϫ 2 ml) were analyzed by synaptojanin overlay assay, and peak 40-kDa synaptojanin-binding protein fractions were pooled and passed over a 5-ml column of DEAE-Sephacel (Pharmacia Biotech Inc.) equilibrated in buffer C. Samples were recirculated over the column at a flow rate of 0.2 ml/min for 16 h, and the column was then eluted into 20 4-ml fractions at 2 ml/min with an 80-ml linear gradient of 0 -0.5 M NaCl prepared in buffer C. Eluted fractions (80 l/fraction) were analyzed for the 40-kDa synaptojanin-binding protein by overlay assay. Alternatively, proteins from eluted fractions (1 ml/fraction) were precipitated with 50% ice-cold trichloroacetic acid with 0.03% sodium deoxycholate as a carrier and analyzed by Coomassie Blue staining of protein gels. The peak 40-kDa synaptojanin-binding protein fraction was concentrated, run on SDS-PAGE, and transferred to PVDF membranes. The 40-kDa protein was excised and subjected to Edman degradation but was found to have a blocked N terminus. Therefore, the sample was treated with cyanogen bromide (CNBr) to affect peptide bond cleavage at the C-terminal side of methionyl residues (30). The PVDF membrane was immersed in 100 l of freshly prepared CNBr cleavage solution (70 mg CNBr/ml in 70% formic acid), flushed with argon, sealed, and left 18 h at room temperature. The cleavage mixture was then dried in a speed vacuum centrifuge, and the PVDF pieces were subjected to sequence analysis in an Applied Biosystems model 470A protein sequencer equipped with an on-line Applied Biosystems model 120A phenylthiohydantoin analyzer (31) according to procedures as recommended by the manufacturer. Sequence analysis revealed multiple sequencing signals which were manually analyzed by overlaying sucessive high pressure liquid chromatography traces. The strengths of the multiple signals were ranked, and probable sequences were searched against protein data bases employing the Blastp algorithm. Antibodies-A polyclonal anti-synaptojanin antibody (1852) was prepared by injection of a rabbit with 200 g of synaptojanin R-2 deletion construct in Titer-Max adjuvant (CytRx Corporation) using standard protocols. Serum was tested for immunoreactivity by Western blot against brain extracts and purified synaptojanin. Antibodies were affinity purified from serum against purified synaptojanin on PVDF membranes as described (5). A polyclonal antibody against synaptojanin purified from rat brain (Milo) was described previously (5). Polyclonal antibodies against amphiphysin were prepared as described (15) and were a generous gift of Drs. Carol David and Pietro De Camilli (Yale University). The monoclonal antibody against synaptojanin was raised against a glutathione S-transferase fusion protein encoding amino acids 1156 -1286 of synaptojanin in the laboratory of Dr. Pietro De Camilli and was a generous gift of Drs. Amy Hudson and Pietro De Camilli (Yale University). The monoclonal antibody against p75 LNGFR was prepared as described (32) and was a generous gift of Dr. Phil Barker (Montreal Neurological Institute) and Dr. Eric Shooter (Stanford University). 
Expression and Purification of Synaptojanin Constructs in Sf9 Cells-To study the SH3 domain-binding properties of synaptojanin, we generated full-length synaptojanin and a synaptojanin deletion construct (synaptojanin R-2) lacking the proline-rich C terminus for expression in Sf9 cells using baculovirus. The constructs had six histidine residues introduced at the N terminus to allow for their purification with nickel-agarose. Purification of nickel-binding proteins from Sf9 cell cultures infected with the full-length synaptojanin construct leads to the isolation of a 145-kDa protein (Coomassie, Fig. 1) that is strongly reactive with a polyclonal antibody against synaptojanin (Milo Western, Fig. 1). Infection of cultures with the synaptojanin R-2 construct leads to the production of a 120-kDa protein (Coomassie, Fig. 1) that does not react with the polyclonal antiserum raised against full-length synaptojanin (5) (Milo Western, Fig. 1), indicating that the antibodies are directed entirely against the last 231 amino acids of the proline-rich C terminus of synaptojanin. However, a rabbit antiserum raised against synaptojanin R-2 reacted strongly with both synaptojanin constructs (1852 Western, Fig. 1). To further characterize the baculovirus expressed synaptojanin, various dilutions of dynamin and synaptojanin, both purified from baculovirus infected Sf9 cells, were run on SDS-PAGE and overlaid (4) with a glutathione S-transferase fusion protein encoding the SH3 domain of amphiphysin. The synaptojanin construct strongly binds the SH3 domain of amphiphysin in this assay. Interestingly, when equal amounts of the two proteins are compared directly, amphiphysin demonstrates a greater relative affinity for synaptojanin than dynamin (Fig. 2). Overlay of Brain Extracts with Synaptojanin-To identify the major binding partners for synaptojanin in brain, we used purified synaptojanin in overlay assays. Nitrocellulose transfers containing proteins from brain extracts were incubated with purified synaptojanin, and bound synaptojanin was detected with the antibody raised against full-length synaptojanin (Milo). Control overlay assays were performed with purified proteins from control infected Sf9 cells. Synaptojanin is seen to bind to two major (stars, Fig. 3) and two minor (arrowheads, Fig. 3) proteins in crude rat brain extracts. In addition, synaptojanin itself is detected (diamond, Fig. 3) and is the only protein seen in control overlays (Fig. 3). Identification of a Major Synaptojanin-binding Protein as Amphiphysin-Based on previous results (6,14), we predicted that amphiphysin would be detected in the synaptojanin overlay assay. One of the major synaptojanin-binding proteins migrates at approximately 125 kDa, consistent with the molecular mass of amphiphysin (13,14). In fact, amphiphysin and the 125-kDa synaptojanin-binding protein have an identical mobility on SDS-PAGE (Fig. 4A). To confirm the identity of this protein, we performed an immunoprecipitation assay using two different amphiphysin antibodies (CD5 and CD6). As seen in Fig. 4B, and in agreement with previous data (14), both amphiphysin antibodies immunoprecipitate amphiphysin from a rat brain extract, although CD5 is much more effective than CD6. A synaptojanin overlay assay of the amphiphysin immunoprecipitates demonstrates that the 125-kDa synaptojanin- FIG. 2. Amphiphysin overlay of synaptojanin and dynamin. 
Purified synaptojanin and purified dynamin (500 -20 ng as indicated) were separated on SDS-PAGE, transferred to nitrocellulose, and overlaid with glutathione S-transferase/amphiphysin SH3 domain (amphiphysin overlay) as described (4). Transfers were stained with ponceau S to ensure even electrophoretic transfer. The arrows on the right indicate the migratory position of the proteins detected on the blots . FIG. 3. Synaptojanin overlay of a rat brain extract. Rat brain post-nuclear supernatant fractions were separated on SDS-PAGE, transferred to nitrocellulose, and overlaid with synaptojanin (synaptojanin overlay) or with protein purified from mock infected Sf9 cells (control overlay). The symbols on the right denote the migratory positions of the two major (stars) and two minor (arrowheads) synaptojaninbinding proteins detected on the blot. The diamond denotes the migratory position of synaptojanin, which is also detected in this assay. FIG. 1. Purification of baculovirus expressed synaptojanin constructs. Sf9 cells were infected with baculovirus encoding fulllength synaptojanin (synaptojanin) or a synaptojanin deletion construct lacking the proline-rich C terminus (synaptojanin R-2), and the synaptojanin proteins were purified with nickel-agarose. Approximately 4 g of protein from each sample were separated on SDS-PAGE and stained with Coomassie Blue (Coomassie) or were transferred to nitrocellulose and blotted with a polyclonal antibody against synaptojanin purified from rat brain (5) (Milo Western) or with a polyclonal antibody raised against synaptojanin R-2 (1852 Western). binding protein is amphiphysin. Characterization of the 40-kDa Synaptojanin-binding Protein-A protein at approximately 40 kDa is the strongest synaptojanin-binding protein in brain (Figs. 3 and 4A). As determined by overlay, the 40-kDa protein is enriched in brain, although it is also detected in extracts from rat testis (Fig. 5A). Synaptojanin is also enriched in adult brain (Fig. 5A) although lower levels are seen in a wide variety of tissues (12). Synaptojanin, which is concentrated in presynaptic nerve terminals (5), is enriched in synaptic membrane fractions (Fig. 5B, LP 2 ). As determined by overlay, the 40-kDa synaptojanin-binding protein is enriched in soluble fractions and is concentrated in the LS 2 fraction that corresponds to cytosol isolated from lysed synaptosomes (Fig. 5B). Co-immunoprecipitation of Synaptojanin and the 40-kDa Synaptojanin-binding Protein-We used a monoclonal antibody against synaptojanin to immunoprecipitate the protein from soluble fractions of rat brain. Synaptojanin is enriched in the precipitated material (Fig. 6). The 125-kDa synaptojaninbinding protein, which we identified as amphiphysin, does not co-immunoprecipitate with synaptojanin ( Fig. 6; see "Discussion"). However, the 40-kDa synaptojanin-binding protein does co-immunoprecipitate with synaptojanin and is enriched in the synaptojanin immunoprecipitate as compared with the starting material (Fig. 6). These data confirm the interaction between synaptojanin and the 40-kDa synaptojanin-binding protein in the brain. Identification of the 40-kDa Synaptojanin-binding Protein-Rat brain cytosol was fractionated using various concentrations of ammonium sulfate, and the 40-kDa synaptojanin-binding protein was found exclusively in the 20 -40% ammonium sulfate precipitate (data not shown). 
This fraction was then subjected to size fractionation on 2.5-15% linear sucrose density gradients, and the 40-kDa protein was found in a narrow peak near the top of the gradient (data not shown). Peak sucrose density gradient fractions were pooled and subjected to anion exchange chromatography on DEAE-Sephacel. The column was eluted with a linear gradient of NaCl from 0 to 0.5 M. A Coomassie Blue-stained gel of the proteins eluted from the DEAE column is shown in Fig. 7A. A band at 40 kDa was apparent that was strongly reactive in the synaptojanin overlay assay (Fig. 7A, synaptojanin overlay). To further characterize the 40-kDa protein, partially purified samples were overlaid with synaptojanin or with the synaptojanin R-2 deletion mutant (Fig. 7B) using the antiserum that recognizes the N-terminal domain of synaptojanin (1852 Western, Fig. 1). Synaptojanin, but not synaptojanin R-2, binds to the 40-kDa synaptojanin-binding protein. This demonstrates that the interaction of the 40-kDa protein with synaptojanin is mediated through synaptojanin's proline-rich C terminus and suggests that the 40-kDa protein contains an SH3 domain. To identify the 40-kDa protein, fraction 14 from the DEAE column elution (Fig. 7A) was concentrated and transferred to PVDF membranes, and the 40-kDa protein band was subjected to peptide sequence analysis. The sample was refractory to automated Edman degradation, suggesting a blocked N terminus. Therefore, the sample was cleaved at methionyl residues with CNBr and resubjected to sequence analysis. The mixture resequencing revealed 2-3 major sets of sequencing signals. The strengths of the multiple signals were ranked, and the best guess sequence (Met 0 -Glu 1 -Val 2 -Phe 3 -Gln 4 -Asn 5 -Phe 6 -Ile 7 -Asp 8 -Pro 9 -Asp 10 -Gln 11 -Asn 12 -Gln 13 -His 14 -His 15 -Ala 16 -Asp 17 -Leu 18 -Arg 19 ) was searched against the protein data base and was found to align to internal sequences of mouse SH3P4, SH3P8, and SH3P13, three members of a novel family of SH3 domain-containing proteins (27) (Fig. 7C). SH3P4 is the likely homologue of the 40-kDa synaptojanin-binding protein because its predicted protein sequence contains the required methionyl residue as well as 15 of the 20 identities found (Fig. 7C). Further, a second major peptide that was sequenced from the 40-kDa protein aligns with a peptide from SH3P4 that contains three of the five mismatches from the best guess sequence in the proper position in relation to the methionyl residue (Fig. 7C). In contrast, SH3P8 and SH3P13 lack the methionyl residue at the start of the sequence, and only 12 and 9, respectively, of the above 20 residues are identical (Fig. 7C). Furthermore, the major signals in the mixture resequencing data are accounted for by CNBr cleavage at Met 96 , Met 121 , Met 133 , Met 201 , and Met 207 of mouse SH3P4 (data not shown), whereas the predicted sequences of the two SH3P4 homologues SH3P8 and SH3P13 (27) each lack one of the required methionyl residues (Met 96 and Met 201 , respectively). On this basis, the 40-kDa protein is the rat homologue of mouse SH3P4.

FIG. 6. A soluble fraction from rat brain was subjected to immunoprecipitation with a monoclonal antibody against p75 LNGFR (anti-p75) or a monoclonal antibody against synaptojanin (anti-synaptojanin). Precipitated proteins were separated on SDS-PAGE along with an aliquot of the soluble extract (starting material, SM), transferred to nitrocellulose and subjected to a synaptojanin Western blot (top panel) or a synaptojanin overlay (middle and bottom panels). The arrows on the right indicate the molecular masses of the proteins detected on the blots.

FIG. 7 (legend, in part). B, the partially purified 40-kDa synaptojanin-binding protein was overlaid with synaptojanin or with the synaptojanin construct lacking the proline-rich C terminus (synaptojanin R-2 overlay). C, the best guess sequence predicted from the major mixture sequencing data is shown in bold. The aligned sequences from SH3P4, SH3P8, and SH3P13 (27) are indicated and matches to the best guess sequence are in bold. The sequence of a second region of SH3P4, which was also identified in the sequencing mixture, is indicated, and amino acids that align with three of the five mismatches from the best guess sequence are in bold.

DISCUSSION

Synaptojanin is an inositol 5-phosphatase implicated in synaptic vesicle endocytosis (4-6, 14). Synaptojanin was initially isolated based on its ability to bind to the SH3 domains of Grb2 (4), and a role for Grb2 in endocytosis was recently demonstrated (26). Synaptojanin also binds to the SH3 domain of amphiphysin (6,14), and several pieces of evidence implicate amphiphysin in endocytosis, including its SH3 domain-dependent interaction with dynamin (14). Thus, it appears that SH3 domain-mediated interactions play a general role in endocytosis. In an effort to better characterize the nature of SH3 domain-mediated protein-protein interactions with synaptojanin, we generated synaptojanin and a synaptojanin deletion construct in Sf9 cells using the baculovirus system. The proteins were then purified on nickel-agarose using a His 6 tag engineered into the N terminus of the constructs. To characterize the baculovirus expressed synaptojanin, we compared the affinity of amphiphysin binding to synaptojanin versus dynamin. When amphiphysin or Grb2 are used as substrates for the purification of SH3 domain-binding proteins from brain extracts, greater amounts of dynamin than synaptojanin are isolated (5,14), likely owing to higher levels of dynamin expression in brain. However, as shown here, when equal amounts of purified dynamin and synaptojanin are analyzed, amphiphysin shows stronger binding to synaptojanin than to dynamin. It has been proposed (14) that amphiphysin may serve to target dynamin to sites of synaptic vesicle endocytosis via its dual interactions with AP2 (14,19) and dynamin. Amphiphysin may also play a role in targeting synaptojanin to endocytic sites. The higher affinity of synaptojanin, relative to dynamin, for amphiphysin binding may be important to allow for synaptojanin targeting in the presence of high dynamin concentrations in the nerve terminal. We used synaptojanin purified from Sf9 cells in a gel overlay assay to identify two major synaptojanin-binding proteins with molecular masses of approximately 125 and 40 kDa. The 125-kDa synaptojanin-binding protein was identified as amphiphysin based on its co-migration with amphiphysin on SDS-PAGE and its precipitation with amphiphysin antibodies. The identification of amphiphysin as a major synaptojanin-binding protein strongly suggests that the assay is effective in identifying relevant synaptojanin-binding partners in vitro and further suggests that amphiphysin and the 40-kDa protein are the major synaptojanin-binding proteins in vivo. Further characterization of the 40-kDa synaptojanin-binding protein demonstrates that it is highly concentrated in brain and is predominantly a soluble protein that is enriched in cytosol isolated from lysed synaptosomes.
Proteins that function in clathrin-mediated endocytosis are often expressed at levels 10 -50-fold higher in neuronal versus non-neuronal cells (33). For example, both dynamin and synaptojanin are highly expressed in neurons, whereas these proteins or related isoforms are expressed at lower levels in non-neuronal cells (12, 34 -36). The 40-kDa synaptojanin-binding protein is concentrated in brain but is also detected in testis, a tissue with little or no expression of the 145-kDa isoform of synaptojanin (12). However, the testis does express the 170-kDa synaptojanin isoform (12), and this protein also binds strongly to the 40-kDa synaptojanin-binding protein (data not shown). An important role for the 40-kDa synaptojanin-binding protein is also supported by the observation that it co-immunoprecipitates with synaptojanin from rat brain cytosol. This is in contrast to amphiphysin, which does not co-immunoprecipitate with syn-aptojanin (Fig. 6). The reason for the lack of amphiphysin/ synaptojanin co-immunoprecipitation is unclear, but it may be due to a technical reason such as steric interference of the synaptojanin antibody with the site of amphiphysin binding. A more interesting explanation may be that the binding of synaptojanin to the 40-kDa synaptojanin-binding protein excludes amphiphysin binding. Thus, it is possible that the 40-kDa synaptojanin-binding protein could regulate the ability of synaptojanin to bind to amphiphysin, and this could play a key role in regulating the targeting of synaptojanin to sites of endocytosis. To identify the 40-kDa protein, we purified it from rat brain cytosol and subjected it to peptide sequence analysis. The sequence analysis identifies the 40-kDa synaptojanin-binding protein as SH3P4, a novel SH3 domain-containing protein with a predicted molecular mass of 39,880 Da (27). The identification of the 40-kDa synaptojanin-binding protein as an SH3 domain-containing protein is consistent with our observation that the 40-kDa protein does not bind to a synaptojanin deletion construct lacking the proline-rich C terminus. Further, the predicted isoelectric point of 5.3 for SH3P4 (27) is consistent with its elution from the DEAE ion exchange column in high salt. SH3P4, which was identified from a mouse library screened with a Src SH3 ligand peptide, is 75 and 63% identical to SH3P8 and SH3P13, respectively, two other proteins identified in the same screen (27). These three proteins define a novel protein family of unknown function. Our data strongly implicate SH3P4, and perhaps other family members, in synaptic vesicle endocytosis. It will be of interest to determine if the interaction of SH3P4 can regulate the ability of synaptojanin to bind to amphiphysin and thus regulate the targeting of synaptojanin to sites of endocytosis.
6,307
1997-03-28T00:00:00.000
[ "Biology" ]
Increase of group delay and nonlinear effects with hole shape in subwavelength hole arrays We investigate the influence of hole shape on the group delay of femtosecond laser pulses propagating through arrays of rectangular subwavelength holes in metal films. We find a pronounced dependence of the group delay on the aspect ratio of the holes in the arrays. The maximum group delay occurs near the cut-off frequency of the holes. These experimental results are found to be in good agreement with calculations. The slow propagation of light through the array gives rise to enhancement of the second harmonic generated in the structures. The observed behavior is consistent with the presence of a resonance at the cut-off frequency of the rectangular holes. Introduction Ever since its discovery by Ebbesen et al [1], extraordinary transmission (EOT) of light through arrays of subwavelength holes has been an intensively studied subject [2]- [4]. Previous research on the physics behind the effect of high transmission through such arrays has contributed to many important developments in nano-optics [5]- [7]. Also, EOT has led to the use of surface plasmon polaritons in ingenious optical designs, for instance to realize light beaming from a subwavelength aperture [8], and in novel spectroscopic devices [9]. An intriguing aspect of EOT is the large influence of hole shape [10,11] on the transmission of light through rectangular holes [12,13]. Changing the aspect ratio of a rectangular hole considerably redshifts the transmission peak of light polarized parallel to the short axis of the hole. Also the total transmission, normalized to the open fraction, increases for holes with larger aspect ratio. Recent theoretical work suggests that close to the cut-off frequency of a single rectangular hole, a Fabry-Perot-like resonance exists [14]- [16]. The importance of this resonance to the transmission of hole arrays was pointed out by Garcia-Vidal and co-workers [17]. The resonance occurs because at cut-off the propagation constant in the hole becomes very small. Thus, reflections within the hole add up constructively, leading to a Fabry-Perot-like resonance. Most studies of hole arrays have been performed by analysing the spectrum of the transmitted or reflected light. Time-domain measurements can be used to obtain information complementary to these frequency-domain methods. The propagation of light pulses through subwavelength holes has been studied to investigate the group velocity of light propagation through these structures [18]- [21], showing for instance large negative delays. The role of hole shape in the propagation of light pulses through hole arrays has been investigated in the THz regime [22]. To the best of our knowledge, the role of hole shape at optical frequencies is yet to be explored. The high transmission occurring in EOT leads to high electromagnetic fields in the holes. This has led to research interest in hole arrays for enhancing nonlinear effects [23] and sensing applications [24]. An investigation of the role of the aspect ratio of a rectangular hole in the second harmonic generation (SHG) efficiency of hole arrays [25] showed that a specific aspect ratio exists for which the SHG efficiency is an order of magnitude higher than for other rectangular hole shapes. It was found that this enhancement could not be explained by the linear transmission properties at the fundamental wavelength. 
Recent investigations of nonlinear effects occurring in photonic crystal structures have shown the important role played by the group velocity (v g ) in the efficiency of nonlinear processes [26,27]. In the case of a second order nonlinear process such as SHG, we expect that the efficiency will scale as v g −2 . In this paper, we investigate the connection between the nonlinear effects and the pulse propagation through hole arrays. The group delay of a femtosecond pulse through an array of subwavelength holes is measured for different aspect ratios of the holes. Additionally, the amount of second harmonic generated by the same femtosecond pulses in the array is determined. We observe an increased group delay and concomitantly larger nonlinear effects near the cut-off frequency of a rectangular hole.

Experimental

The samples under investigation are arrays of 34 × 34 holes milled with a focused ion beam (FIB) in a 200 nm thick gold layer on a glass substrate (figure 1). Each structure is milled under the same FIB conditions, which should lead to a comparable surface roughness in each structure. The period of the square lattice array is fixed at 410 nm. The dimensions of the holes vary from 205 × 141 to 328 × 88 nm. The aspect ratios of the holes range from 1.46 to 3.73. This range is chosen so that for wavelengths around 800 nm the holes with the smaller aspect ratio are in cut-off, while the holes with larger aspect ratio are not [28]. The total open surface of the arrays is kept constant at 33 µm 2 . For calibration purposes every hole array is accompanied by a large reference hole with the same outer dimensions as the whole array, i.e. 14 × 14 µm. The structures and reference holes are placed 70 µm apart. To determine the group delay we use an interferometric technique. Light from an 80 MHz Ti:sapphire oscillator (Spectra Physics Tsunami) is sent into a Mach-Zehnder interferometer (see figure 2). The center wavelength of the oscillator is tunable from 760 to 830 nm and the pulse duration is ∼100 fs. The sample branch of the interferometer contains the sample, and the reference branch contains two acousto-optic modulators and a delay line. The light of both branches is combined with a fiber coupler and the interferometric signal is measured with an Si photodiode. In the sample branch the light is focused on the sample and collected using two lenses with a numerical aperture of 0.4. The focus is determined to be smaller than 2 µm by imaging the sample and the focus on a camera. To compensate for the dispersion of the two acousto-optic modulators that are mounted in the reference branch, a dispersion balancing crystal is mounted in the sample branch. The two acousto-optic modulators shift the frequency of the light in the reference branch by 9 MHz. This leads to a temporal 9 MHz beat signal in the detected interferometric signal. With a lock-in technique we obtain the amplitude and phase of the interference signal simultaneously.

Figure 2. Femtosecond pulses from a Ti:sapphire laser are split into the two branches of a Mach-Zehnder interferometer. In the reference branch two acousto-optic modulators are placed to shift the frequency of the light by 9 MHz. A delay line is used to measure the interferogram. In the sample branch a sample is placed, plus a dispersion compensating crystal to balance the dispersion introduced by the two acousto-optic modulators in the reference branch. Both signals are coupled into a 2-to-1 fiber coupler.
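As a quick consistency check on the sample geometry quoted above, the two extreme hole sizes indeed give the stated aspect-ratio range and a nearly constant total open area of about 33 µm 2 for a 34 × 34 array; the short script below just reproduces that arithmetic.

```python
# Consistency check of the array geometry quoted in the text (34 x 34 holes).
holes = [(205, 141), (328, 88)]            # hole dimensions in nm (long x short axis)
n_holes = 34 * 34
for long_ax, short_ax in holes:
    aspect = long_ax / short_ax
    open_area_um2 = n_holes * long_ax * short_ax * 1e-6   # nm^2 -> um^2
    print(f"{long_ax} x {short_ax} nm: aspect ratio {aspect:.2f}, "
          f"total open area {open_area_um2:.1f} um^2")
# -> aspect ratios ~1.45 and ~3.73, total open area ~33 um^2 in both cases
```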
By moving the delay line an interferogram is measured as a function of the time delay. We compare interferograms through a structure to interferograms through a reference hole to obtain what we will refer to as a relative group delay. The real group delay can be derived from this relative group delay by simply adding the well-known delay of a pulse traveling through a layer of air with the same thickness as the gold film. The delay line used is a pair of mirrors mounted on a linear motor (Newport XMS50) with a linear encoder (Heidenhain LIF 481). The stage is moved at a constant velocity and a single interferogram is acquired in half a second. The polarization of the light in the experiment is rotated with a λ/2 wave plate to orient it parallel to the short axis of the holes. The sample is mounted on an X-Y piezo arm that moves the sample perpendicularly to the impinging laser beam. We collect 100 pairs of interferograms by quickly alternating between measurements on the hole array and on the reference hole. This procedure renders the method insensitive to long-term drift in the optical path lengths of the set-up. All the measured interferograms are numerically filtered in the frequency domain to remove low-frequency noise sources of electronic origin. To determine the group delay the light experiences when traveling through the structure, a Gaussian is fitted to the amplitude of each filtered interferogram. From the positions of the fitted Gaussians, an average group delay is determined, as well as a spread in measured delays. The second harmonic generated in a sample is measured in a separate experiment. The light is focused on the sample with an NA of 0.17 and collected with a higher NA. The sample is tilted under a small angle of 2.5° to prevent back reflections from reaching the mode-locked pulsed laser. We verified in a separate experiment that the results depend only weakly on this angle. The transmitted fundamental wavelength is attenuated by a combination of colored glass and interferometric filters. The generated light is detected with a spectrometer (Acton SpectraPro 2300i) equipped with a cooled CCD camera (Princeton Instruments Spec10-B/XTE). Spectra are typically collected in 200 s. We verify that the signal is the second harmonic by checking that the signal at the doubled frequency depends quadratically on the input power.

Experimental results for the measured pulse delay

In figure 3 the group delay is shown as a function of aspect ratio for four different wavelengths. For all wavelengths and aspect ratios a positive delay is found with respect to transmission through a reference hole. The observed group delays, caused by the propagation of the pulse through the hole arrays, range from 1.5 to 5.2 fs. This is large compared to the 0.66 fs delay of a pulse propagating through the same thickness (200 nm) of air. Figures 3(a) and (b) for 760 and 780 nm illumination both show a decreasing delay as the aspect ratio increases. The delay for pulses of 810 nm (figure 3(c)) shows a weak maximum near aspect ratio 2. Figure 3(d) shows the results for pulses with a center wavelength of 830 nm. Here a clear peak can be observed at aspect ratio 2. The maximum delay τ max we find in our measurements is 5.2 fs. Thus the observed delay corresponds to a slow propagation velocity of the pulse through the 200 nm thick array of about c/8. By simply altering the shape of the rectangular holes, the group delay through the hole array can therefore be tuned by a factor of three.
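The envelope-fitting step described in the experimental section above (fitting a Gaussian to the amplitude of each filtered interferogram and taking the difference of the fitted centres) can be sketched as follows. The data here are synthetic and the widths, amplitudes and noise level are made up; this is not the analysis code used for the measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, t0, sigma):
    return a * np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))

def envelope_delay(t, amplitude):
    """Fit a Gaussian to the interferogram amplitude and return its centre position."""
    p0 = [amplitude.max(), t[np.argmax(amplitude)], 50.0]
    popt, _ = curve_fit(gaussian, t, amplitude, p0=p0)
    return popt[1]

# Synthetic example: reference and sample interferogram envelopes (time axis in fs),
# with the sample envelope shifted by a 3 fs group delay and carrying some noise.
rng = np.random.default_rng(0)
t = np.linspace(-300, 300, 2001)
ref = gaussian(t, 1.0, 0.0, 60.0) + 0.01 * rng.standard_normal(t.size)
sam = gaussian(t, 0.4, 3.0, 60.0) + 0.01 * rng.standard_normal(t.size)

print(f"relative group delay ~ {envelope_delay(t, sam) - envelope_delay(t, ref):.2f} fs")
```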
Modeling calculations of the group delay

We used finite integration technique (FIT) calculations [29] to complement our experimental findings regarding the group delay. The transmission of an ultrashort femtosecond pulse through an array of subwavelength holes was calculated for various aspect ratios of the holes. To calculate the group delay efficiently, periodic boundary conditions were used in the two in-plane directions. Thus the results of the calculation correspond to the transmission through an infinitely large array. Note that in the experiment a focused beam is used while the calculation is based on a plane wave. The spread in wavevectors caused by the focusing in the experiment thus leads to a slight averaging effect in the observed delay that is not included in the calculation. The dielectric constant of gold in this calculation was described with a Drude model, the parameters of which were obtained by fitting to measured values of the dielectric constant. The calculated fields before and after the structure, respectively E in (t) and E out (t), are Fourier transformed and used to determine the complex transfer function of the structures via T (ω) = F[E out ]/F[E in ]. From this the group delay is determined using τ g = ∂arg[T (ω)]/∂ω. When determining the group delay from either measured or calculated data, one has to be careful when spectral filtering takes place. The reason for this caution is that a combination of group velocity dispersion and spectral filtering can cause a delay of the pulse envelope in addition to the delay caused by a reduced group velocity. We can estimate the magnitude of this effect on the observed delay using the results obtained in the FIT calculation. We determine the delay of a 100 fs Fourier-limited pulse E in (ω) using the transfer function T (ω) and the normalized transfer function N (ω) = T (ω)/|T (ω)| that excludes the effect of spectral filtering. We calculate the delay of the output pulse shapes E out (t) and find that the difference in delay is always smaller than 150 as (attoseconds) for our experimental conditions, which is well within the experimental error. We therefore consider the effects of spectral filtering to be minimal in this system. The result of these calculations is shown in figure 4, where the calculated group delays for arrays of holes are plotted for several wavelengths as a function of aspect ratio. The negative delays in the calculation are caused by the anomalous dispersion in the transfer function of the hole array and have been experimentally observed in previous work [19]. In the calculation, we see that the group delay exhibits a maximum, which shifts to higher aspect ratios as the wavelength is increased. This is very well reproduced by the measurements. For the 760 nm data (figure 3(a)), we see a monotonic decrease of the group delay as a function of aspect ratio. In figure 4, we see that this is because the peak actually lies at an aspect ratio smaller than that used in our measurements. It can also be seen from figure 4 that for wavelengths larger than 780 nm, the maximum in group delay lies in our window of aspect ratios. That is why in figures 3(b)-(d) we indeed observe a maximum in group delay. There are, however, some discrepancies between the calculation and the experiment. For instance, the maximum group delay that is found in the experiment is roughly 20 per cent smaller than in the calculation.
Furthermore, the maxima in group delay for 810 and 830 nm appear at a larger aspect ratio in the experiment (approximately 2) than in the calculation (approximately 1.7). We attribute these differences to a combination of uncertainty in the experimental determination of the geometry of the holes and the difference between the optical properties of gold used in the calculation-which for technical reasons is described by a Drude model-and the actual optical properties of the gold film. Experimental results on the second harmonic generation (SHG) Figures 5(a)-(c) show the SHG signal normalized to the incoming fundamental power squared as a function of the group delay that was observed. Due to the low detection efficiency of the Silicon CCD at 380 nm, no results were obtained for the measurements of the SHG with 760 nm fundamental wavelength. In each graph, the order of magnitude of the SHG per Watt squared is comparable. Most strikingly, for the other three wavelengths, a strong increase of the SHG signal with group delay is observed. From the observed trend in the SHG as a function of observed group delay, we infer that the resonance leading to an increase in group delay also leads to an increase in nonlinear effects. The data are consistent with quadratic dependence on group delay. Unfortunately, the data quality does not allow us to reliably fit a power law. A few data points deviate from the trend observed between SHG and group velocity, most notably the point with the highest SHG count per squared Watt in figure 5(c). This suggests that besides group velocity, additional factors play a role in the generation of the second harmonic. Discussion and conclusion We investigated the influence of hole shape on the group delay of femtosecond pulses propagating through hole arrays. We observed a maximum delay that shifts to larger wavelengths for holes with larger aspect ratio. This is in agreement with finite integration calculations and theoretical work that predicts the presence of a resonance at the cut-off frequency of subwavelength holes [17]. Measurements of the light generated by SHG in hole arrays show a clear correlation with the enhanced group delay present at the cut-off resonance. This suggests that the enhanced group delay gives rise to larger nonlinear effects. We have to consider, however, that the observed correlation between SHG and group delay does not prove a unique causal relation between both. In fact, we anticipate that at least two additional factors play a role in this phenomenon. As the hole shape is varied, the linear transmission and the mode profile in the holes change for both the fundamental and the second harmonic frequency. Previous work [25] did, however, show that the linear transmission of the fundamental frequency cannot fully account for the observed enhancement of the SHG. The measurements in this paper indicate that the group delay is an important factor. Both experimentally and theoretically, it is very difficult to identify the role that the group velocity, linear transmission and mode profile play. The results presented here are an important step forward as they show for the first time the striking correlation between the SHG and the group delay in these structures. To know the exact mechanism of the SHG in these structures, however, will require further study. Gaining deeper insights into the role of surface roughness is of particular importance.
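The claim that the SHG signal is consistent with a quadratic dependence on group delay, while the data quality precludes a reliable power-law fit, can be probed with a simple log-log regression. The arrays below are placeholders rather than the measured values; the sketch only shows how such a fit and its uncertainty could be estimated.

```python
import numpy as np

# Placeholder data: group delays (fs) and normalized SHG counts per W^2.
# Replace with the measured values underlying figures 3 and 5.
tau = np.array([1.5, 2.0, 2.8, 3.6, 4.4, 5.2])
shg = np.array([0.9, 1.7, 3.2, 5.5, 8.1, 11.8])

# Fit SHG = A * tau^p by linear regression in log-log space.
p, logA = np.polyfit(np.log(tau), np.log(shg), 1)
print(f"fitted exponent p = {p:.2f} (p = 2 would mean SHG ~ tau^2)")

# Rough uncertainty on p from the ordinary-least-squares residuals.
resid = np.log(shg) - (p * np.log(tau) + logA)
dof = len(tau) - 2
se_p = np.sqrt(resid @ resid / dof / np.sum((np.log(tau) - np.log(tau).mean()) ** 2))
print(f"standard error on p ~ {se_p:.2f}")
```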
4,016.6
2010-01-01T00:00:00.000
[ "Physics" ]
Determinants and outcomes of access-related blood-stream infections among Irish haemodialysis patients; a cohort study Background Infections are the second leading cause of death and hospitalisation among haemodialysis (HD) patients. Rates of access-related bloodstream infections (AR-BSI) are influenced by patient characteristics and local protocols. We explored factors associated with AR-BSI in a contemporary cohort of HD patients at a tertiary nephrology centre. Methods A retrospective cohort of 235 chronic HD patients was identified from a regional dialysis programme between Jan 2015 and Dec 2016. Data on demographics, primary renal disease, comorbid conditions and dialysis access type were obtained from the Kidney Disease Clinical Patient Management System (KDCPMS). Data on blood cultures were captured from the microbiology laboratory. Poisson regression with robust variance estimates was used to compare infection rates and relative risk of AR-BSI according to the site and type of vascular access. Results The mean age was 65 (± 15) years, 77% were men, and the median follow up was 19 months (IQR: 10–24 months), accumulating 2030 catheter-months and 1831 fistula-months. Overall rates of AR-BSI were significantly higher for central venous catheter (CVC) compared to arteriovenous fistula (AVF), (2.22, 95% (CI): 1.62–2.97) versus 0.11 (0.01–0.39) per 100 patient-months respectively), with a rate ratio of 20.29 (4.92–83.66), p < 0.0001. This pattern persisted across age, gender and diabetes subgroups. Within the CVC subgroup, presence of a femoral CVC access was associated with significantly higher rates of AR-BSI (adjusted RR 4.93, 95% CI: 2.69–9.01). Older age (75+ versus < 75 years) was not associated with significant differences in rates of AR-BSI in the unadjusted or the adjusted analysis. Coagulase negative Staphylococcus (61%) and Staphylococcus aureus (23%) were the predominant culprits. AR-BSIs resulted in access loss and hospitalisation in 57 and 72% of events respectively, and two patients died with concurrent AR-BSI. Conclusions Rates of AR-BSI are substantially higher in CVC than AVF in contemporary HD despite advances in catheter design and anti-infective protocols. This pattern was consistent in all subgroups. The policy of AVF preference over CVC should continue to minimise patient morbidity while at the same time improving anti-infective strategies through better care protocols and infection surveillance. Electronic supplementary material The online version of this article (10.1186/s12882-019-1253-x) contains supplementary material, which is available to authorized users. Background Patients on haemodialysis (HD) endure infection rates that are more than 26 times higher than that of the general population [1], and more than 100 to 200-fold higher for specific organisms [2]. They are the second leading cause of hospitalisation and mortality in the dialysis population [3][4][5]. National and international guidelines along with national policy initiatives [6][7][8][9] recommend the use of arteriovenous fistula (AVF) whenever possible, as the risk of infections and other complications is highest among patients using central venous catheters (CVCs) [3,10,11]. Despite the dangers associated with CVC use, these devices remain the principal type of access in the majority of HD patients in Ireland [12,13] and internationally [14]. 
The alarmingly high rates of access-related bloodstream infections (AR-BSI) in patients undergoing dialysis with a CVC has forced changes in clinical practices that include better anti-infective protocols, increasing adoption of catheter lock solutions, and better antimicrobial surveillance protocols in order to reduce CVC-related infection rates [15][16][17][18]. It is unclear, however, to what extent these changes have curbed the high rates of AR-BSI in the context of an increasing elderly HD phenotype with a high burden of complex health problems. It is also uncertain whether any benefit derived from these measures extends to very high-risk groups especially the elderly, patients with diabetes and those dialysed with a femoral CVC. While the formation of a functioning AVF is the preferred vascular access, this is not easily attainable in all individuals, especially elderly patients on HD [19]. Furthermore it remains controversial whether CVCs are superior to AVFs among elderly patients undergoing dialysis with a recent study finding lower rates of catheter-related bacteraemia in elderly patients compared to younger patients [18,[20][21][22]. Within the Irish health system, data is lacking on the on the frequency and impact of AR-BSI in HD. The availability of such data along with clinical outcomes will help inform healthcare providers and policy-decision makers on access type, and will drive quality improvement initiatives to improve patient outcomes. We determined the rates of AR-BSI in a contemporary cohort of HD patients dialysed with a CVC or AVF and explored the relative contribution of demographic and clinical factors to overall rates of AR-BSI. Study design and setting We conducted a retrospective observational study to explore AR-BSI in a contemporary cohort of HD patients. We identified all adult patients receiving chronic HD during 2015 and 2016 under the care of a tertiary nephrology centre. Patients were observed from the first to the last dialysis they received during the period between 1/1/2015 and 31/12/2016. Primary access type and changes from CVC to AVF or vice versa during the observation period were recorded. All AR-BSI events were captured during the observation period and outcomes of these events were recorded. The rates of BSIs were calculated using standard definitions described below. As this study aimed to examine rates of bacteraemia associated with access types used over prolonged periods of time in outpatient settings, temporary dialysis catheters were not included in the analysis. Description of local practice Patients received dialysis at a unit attached to the main hospital or at an affiliated outpatient dialysis unit. All tunnelled CVCs were inserted by interventional radiologists. CVC type used is ProGuide™, produced by Merit Medical Systems®. An access care bundle was in place to reduce risk of infection. This included protocols for hand hygiene and use of protective equipment during connection and disconnection of dialysis lines. BioPatch® (Ethicon©) dressings were applied to the exit site, and were changed on a weekly basis. Catheters were locked with 46.7% citrate. Disposable catheter hubs were used. Before connection, catheter hubs and fistula needle insertion sites were decontaminated with 10% iodinated povidone. When an access infection was suspected, two sets of blood cultures were taken from each port of a catheter or from a peripheral vessel in case of a fistula. 
Empiric antimicrobial therapy was commenced when there was strong clinical suspicion after collection of culture samples. Dialysis catheters were removed if the causative organism was Staphylococcus aureus, fungal, or if the infection is difficult to clear. The HD unit protocol mandates regular screening for methicillinresistant S. aureus (MRSA), vancomycin-resistant Enterococcus species (VRE), extended spectrum beta lactamase (ESBL)-producing organisms and carbapenemresistant Enterobacteriaceae (CRE). Colonization with MRSA is treated with mupirocin nasal disinfection and chlorhexidine wash for skin disinfection followed by rescreening. All patients colonized with MRSA or CRE receive dialysis in isolation rooms. All inpatient and outpatient microbiology samples from the healthcare region are sent to a single central microbiology laboratory located within the main hospital. Participants and data sources Patients were identified using data from the Kidney Disease Clinical Patient Management System (KDCPMS), a national multi-domain electronic health record system that tracks clinical care of HD patients in the Irish health system. Patients who received acute dialysis or holiday dialysis treatments were excluded. Study entry age, access type and access site were defined for each patient at the date of first dialysis during the study period. Baseline data were captured on age, sex, primary cause of End Stage Kidney Disease (ESKD), comorbid conditions, the type and site of the dialysis access. Blood culture results from patients during the observation period were retrieved from the microbiology laboratory database. All changes in dialysis access type and site were recorded during the study period. Access site was recorded as upper extremity or femoral location. The internal jugular vein (IJ) was the most common access at the centre, with subclavian access only reserved for situations where IJ access was not attainable. Due to the small number of non-IJ sites, comparisons between different non-femoral CVC's was not reliable or informative. The determination of infection rates and rate ratios was based on the current access in use at time of infection. Definition of AR-BSI and calculation of rates Access-related bloodstream infection (AR-BSI) was defined as growth of a typical organism with either a documented exit site or tunnel infection, or with no other identified source of infection. Patients with atypical organisms who received antimicrobial treatment for 2 weeks or more were also considered to have AR-BSI. Blood cultures that were positive with the same organism within 21 days of a previous positive culture were considered part of the initial event and not counted as a separate event. The definition of AR-BSI in our study did not include sampling from a peripheral vein (as recommended by CDC), however, previous reports suggest that peripheral blood culture results add little to the sensitivity and specificity of cultures blood obtained from the HD circuit and the venous catheter hub [23]. We followed guidelines issued by the Centers for Disease Control and Prevention (CDC) -National Healthcare Safety Network (NHSN) for calculation of event rates [24,25]. The number of chronic HD patients under the care of the tertiary centre on the first day of each month was used as the denominator for that month. Vascular access type at the start of the month was used to identify subgroups for catheter patient-months and fistula patient-months. 
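The deduplication rule above (a positive culture with the same organism within 21 days of a previous positive culture is folded into the initial event) is easy to get subtly wrong when counting events. A minimal sketch of one reading of that rule follows; the column names are illustrative and are not the actual KDCPMS or laboratory field names.

```python
import pandas as pd

def count_ar_bsi_events(cultures: pd.DataFrame, window_days: int = 21) -> pd.DataFrame:
    """Collapse positive cultures into events: a positive with the same organism
    within `window_days` of the previous positive in the same patient is treated
    as part of the initial event and not counted separately."""
    cultures = cultures.sort_values(["patient_id", "organism", "culture_date"])
    events = []
    for (pid, org), grp in cultures.groupby(["patient_id", "organism"]):
        last_positive = None
        for date in grp["culture_date"]:
            if last_positive is None or (date - last_positive).days > window_days:
                events.append({"patient_id": pid, "organism": org, "event_date": date})
            last_positive = date
    return pd.DataFrame(events)

cultures = pd.DataFrame({
    "patient_id": [1, 1, 1, 2],
    "organism": ["CoNS", "CoNS", "CoNS", "S. aureus"],
    "culture_date": pd.to_datetime(["2015-03-01", "2015-03-10", "2015-05-01", "2016-07-12"]),
})
print(count_ar_bsi_events(cultures))   # patient 1 contributes two events, patient 2 one
```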
The numerator for AR-BSI event rates for each month was the number of identified AR-BSI events during that month. All recorded CVCs were tunnelled catheters (none were temporary dialysis catheters). Only two arteriovenous grafts were in use during the observation period. For the purposes of this analysis, these were grouped with AVF. Ethical approval Ethical approval was not sought this study as the surveillance of infections in dialysis patients is part of regular clinical audit and the hospital's quality improvement programme [26]. Statistical analysis Baseline characteristics were presented for the whole group and for subgroups of study entry access type. Continuous variables were presented as mean ± standard deviations and categorical variables were presented as percentages. Comparisons between groups according to vascular access type were performed by analysis of variance for continuous variables and Chi-square test for categorical variables. Poisson regression employing the Huber-White sandwich variance estimator was used to compare the infection rates and determine the risk of infection according to vascular access type. Rates of AR-BSI were presented as events per 100 patient-months with robust 95% confidence intervals (CIs). To determine factors associated with bacteraemia among patients using CVC, univariable and multivariable models were constructed to examine the association of demographic and clinical factors, and access insertion site with the risk of AR-BSI. Model development progressed using a manual strategy taking into consideration known associations from published literature, and statistical significant univariable associations. A final model was constructed to determine the association of age, sex, diabetes, and access type with the outcome of AR-BSI in patients receiving dialysis by CVC. Goodness of fit was assessed using the Pearson and Deviance statistics. All analyses were conducted using R version 3.4. Baseline characteristics of study population A total of 281 adult patients received dialysis between 1/ 1/2015 and 31/12/2016. Of those, 46 were excluded (Acute HD: 26, Visitor/Holiday HD: 20) leaving 235 patients eligible for inclusion in the final study sample. During the observation period, the monthly census increased from 134 patients to 181 from Jan 2015 to Dec 2016, and the percentage of patients receiving dialysis by CVC ranged from 48 to 57% (Additional file 1: Table S1). Median patient follow up was 19 months (IQR: 10-24 months) resulting in 3861 patient months of observation, 2030 catheter-months and 1831 fistulamonths (Additional file 1: Table S2). There were 74 bloodstream infections detected during the observation period, of which 47 were related to HD access (Fig. 1). Table 1 illustrates the baseline characteristics of participants at study entry. The average age was 65 (± 15) years, 28.5% were 75 years of age or older, and the majority were men (77%). Diabetes was the most common cause of ESKD, while hypertension and diabetes were the most prevalent recorded comorbid conditions. The distribution of baseline characteristics was similar for patients dialysed by either AVF or CVC with the exception of sex and primary cause of ESKD. The CVC group had significantly higher proportion of women and fewer patients who had diabetes, hypertension and renal cystic disease as their primary cause of renal disease. The baseline characteristics of patients dialysed through a CVC is shown in Table 2. 
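The rate comparison described in the statistical analysis (Poisson regression with a Huber-White sandwich variance estimator, exposure measured in patient-months) can be reproduced with standard tooling. The sketch below uses Python's statsmodels rather than R, which the authors used, and the toy patient-level data are invented purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy patient-level data: AR-BSI event counts and time at risk (patient-months).
df = pd.DataFrame({
    "events": [0, 2, 1, 1, 0, 3, 0, 1],
    "months": [12, 20, 18, 24, 15, 22, 19, 10],
    "cvc":    [0, 1, 1, 0, 0, 1, 0, 1],   # 1 = central venous catheter, 0 = AVF
})

X = sm.add_constant(df[["cvc"]])
model = sm.GLM(df["events"], X, family=sm.families.Poisson(), exposure=df["months"])
fit = model.fit(cov_type="HC0")            # Huber-White robust (sandwich) variance

irr = np.exp(fit.params["cvc"])            # incidence rate ratio, CVC vs AVF
ci = np.exp(fit.conf_int().loc["cvc"])
print(f"IRR (CVC vs AVF): {irr:.2f}  95% CI: {ci[0]:.2f}-{ci[1]:.2f}")
```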
Patients age ≥ 75 years had significantly more hypertension and congestive heart failure than younger counterparts while the distribution of other characteristics was similar. Distribution of baseline characteristics in the whole group (any access) by age group is shown in Additional file 1: Table S3. Rates of access-related bloodstream infection (AR-BSI) in CVC versus AVF The rate of AR-BSI in patients dialysed with a CVC was significantly higher than in patients with an AVF [2.22 (95% 1.62-2.97) versus 0.11 (95% CI: 0.01-0.39) per 100 patient months respectively, p < 0.001], with an unadjusted incidence rate ratio of 20.29 (95% CI 4.92-83.66) as shown in Fig. 2. Among all specified subgroups of age, sex, diabetes and access site, the incidence rates of AR-BSIs were significantly and substantially higher among those using a CVC compared to AVF. By far the highest rate of AR-BSI was observed with a femoral CVC access, 8.50 (95% CI 4.52-14.53) events per 100 patient months. Among patients receiving dialysis by tunnelled catheter, the rate ratio of AR-BSI for femoral versus non-femoral CVC was 4.98 (95% CI 2.71-9.15), p < 0.001. In multivariable analysis adjusting for age, sex, and diabetes, use of a femoral CVC was associated with 4.93 times (95% CI 2.69-9.01) the risk of infection compared to a non-femoral CVC site - Fig. 3. There were no significant differences in the rates of AR-BSI by age (75+ versus < 75 years) in the unadjusted or the adjusted analyses. Type of organism The distribution of organisms isolated from blood cultures is shown in Table 3. The most common isolates were Staphylococci identified in 85% of positive blood cultures. The majority of these were Coagulase-negative Staphylococcal (CoNS) infections with S. aureus contributing 23.4%. No fungal infections were recorded during the study period. AR-BSI outcomes AR-BSIs resulted in hospitalisation and access loss in 34 (72%) and 27 (57%) of the events respectively. Two patients died with AR-BSI events; one with poly-microbial infection. Sensitivity analysis To address whether a number of individuals prone to risk in the CVC group may be leading to an exaggeration of risk, we conducted a sensitivity analysis by excluding patients with more than 1 and more than 2 recorded CRBSIs (Catheter-Related Bloodstream Infections). No variables were found to be statistically significant when excluding patients with more than 1 case of CBRSI (n = 10). However, excluding patients with more than two cases (n = 2) yielded similar results to that of the primary analysis. The median duration between CRBSI's among these patients was 117 days with a minimum of 50 days and a maximum of 465 days between events. There were a few patients with recurrent infections (more than 2 cases of CBRSIs) in this study and omission of these patients did not alter the primary findings of this study. Discussion In this large centre-based study, we emphasise the significant risk of bloodstream infections associated with use of tunnelled dialysis catheters. Compared to patients who were using an AVF, patients with a CVC experienced a 20-fold higher risk of access-related bacteraemia. The risk associated with CVC use was independent of age and comorbid disease measured at baseline. Subgroup analysis confirmed that the pattern of risk from CVC was present in younger and in older patients, men and women, and in patients with and without diabetes. 
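Headline figures such as 2.22 versus 0.11 events per 100 patient-months can be checked from crude counts and exposure with exact Poisson confidence intervals. The event counts below are back-calculated from the reported rates (roughly 45 CVC and 2 AVF events over 2030 and 1831 patient-months), so small differences from the published, regression-based intervals are expected.

```python
from scipy.stats import chi2

def rate_per_100(events: int, months: float, alpha: float = 0.05):
    """Crude rate per 100 patient-months with an exact (chi-square form) Poisson CI."""
    lo = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return tuple(100 * x / months for x in (events, lo, hi))

# Counts back-calculated from the reported rates; illustrative only.
for label, events, months in [("CVC", 45, 2030), ("AVF", 2, 1831)]:
    r, lo, hi = rate_per_100(events, months)
    print(f"{label}: {r:.2f} per 100 patient-months (95% CI {lo:.2f}-{hi:.2f})")
```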
These results would suggest that despite advances in anti-infective protocols, innovative catheter designs, and the implementation of national guidelines, CVCs remain a major source of serious morbidity in HD patients. The adverse impact of CVC over AVF on catheter-related bacteraemia rates was overwhelmingly apparent from this analysis. Our observed rates of AR-BSI events were 2.22 and 0.11 events per 100 patient-months for CVC and AVF respectively, a 20-fold difference. Our findings are concordant with reports from other parts of the world. AR-BSI rates in patients with CVC and AVF were 3.1 and 0.6, and 3.5 and 1.7 per 100 patient-months from Greece and Brazil respectively [4,18]. A study from the National Healthcare Safety Network (NHSN) in the US reported pooled rates of 2.55 and 0.23 events per 100 patient-months for CVC and AVF respectively from 2007 to 2011 [27], while a more recent report suggested improvements with estimates of 1.83 and 0.16 for CVCs and AVF respectively [24]. The patient-months distribution in this last report was 19% for CVC and 63% for AVF, reflecting a much lower dependence on tunnelled catheters than in our Irish cohort. The rates of AR-BSI in our cohort compare favourably with those from the CDC report in the US [24] in that rates were at the 50th percentile for CVC-related BSIs, and below the 25th for AVF-related infections. Despite these reassuring statistics, there is emerging evidence that further improvements are possible. Hymes et al. reported significant reductions in AR-BSI to 0.67 per 100 patient-months with the introduction of antimicrobial barrier caps [28]. A further study by the CDC Dialysis BSI Prevention Collaborative demonstrated a sustained reduction in CVC-related BSIs from 2.26 to 1.08 events per 100 patient-months using a bundle of BSI-preventative interventions [29]. These encouraging findings suggest that there is further scope to reduce infection rates associated with CVC use and emphasise the need for sustained quality improvement initiatives. Controversy currently exists as to whether tunnelled dialysis catheters should be considered a satisfactory access type for dialysis in older patients [30]. A lower rate of complications in older patients would support this approach. In support of this hypothesis, Murea et al. found lower rates of catheter-related bacteraemia in patients above 75 years versus younger patients (1.67 versus 5.99 events per 100 patient-months respectively, HR 0.33, 95% CI 0.20-0.55) [17], citing lower rates of nasal colonisation, less sweating, and less mechanical stress on the catheter as potential reasons. Wang et al. showed similar results [16]. However, several studies have found no association between age and AR-BSIs [18,[20][21][22]. Furthermore, mortality risks (infection-related, cardiovascular-related, and all-cause) are higher in patients on dialysis by CVC even in elderly patients [31,32]. The findings from our study are in direct contrast with those of Murea et al. in that elderly patients experienced risks that were similar to those of younger patients. The most significant factor associated with increased catheter-related BSIs in our population was the site of the tunnelled dialysis catheter. In univariate and multivariable analysis, femoral access was associated with a fivefold increase in the rate of AR-BSI when compared to non-femoral access. 
This observation may relate to greater levels of skin contamination at the femoral area, relatively more difficult access for cleaning and observation, or may relate to some patient characteristics such as vintage or poor health. Femoral access is known to have higher rates of complications overall, including infection and malfunction [16,17]. Quality improvement programs need to focus on this high-risk group of patients. We did not observe a difference in AR-BSI rates between patients with and without diabetes in univariate or multivariable analysis. Diabetes was found to be a risk factor for bacteraemia in some [16,18] but not all studies [17]. Similarly, gender did not have an effect on infection rates, and this is consistent with multiple prior studies [16][17][18]. Gram positive organisms were the predominant microbes from positive blood cultures in our cohort, with only a smaller proportion of AR-BSIs attributable to Gram negative (GN) organisms. Whereas studies from the US, Brazil, Greece and Singapore reported GN bacterial growth in 15 to 26% of positive cultures [17,18,24,27,33,34], GN bacteria were identified in less than 5% of specimens in our study. CoNS were identified in more than 60% of cases, which is a higher rate than compared to published literature. CoNS, despite being common constituents of the normal flora of the skin, can be major nosocomial pathogens and cause significant morbidity in patients with CVCs [35]. Their spread is facilitated by poor hand hygiene and inadequate disinfection or sterilisation of instruments or surfaces [35]. It is difficult however to compare frequencies between different studies because of different study inclusion and exclusion criteria, and different definitions used. Our study also highlights the high burden of these events on both the patient and the health system. Two bacteraemia-related fatalities were identified. The majority of patients with AR-BSIs required hospitalisation, and catheter replacement was required in more than 50% of patients. A few limitations are worth mentioning. The study was retrospective in design and thus not all known risk factors were measured at baseline. Patients receiving dialysis by CVC may be inherently different to those with AVF. Our data did not enable us to characterise patients beyond the variables used in our models. We did not capture exit-site and tunnel infections in our study. However, it should be noted that there is subjectivity in the definition of these events and these may or may not be associated with bacteraemia. The study reflects a single centre experience, which may limit generalisability. In addition, we did not differentiate between incident or prevalent HD patients in our study. Therefore, we must acknowledge we were unable to assess whether dialysis/ catheter vintage modifies the relationship with infection risk in those with CVC's. Finally, our unit's policy on management of access-related bacteraemia did not have a standardised protocol to check for clearance of bacteraemia prior to or shortly after discontinuation of the antimicrobial agent. This precluded conducting a reliable comparison of clearance duration. These limitations, however, were counterbalanced by several strengths. First, our study included all chronic HD patients who received dialysis at a large centre, and none of the patients had missing data. Second, all microbiology test results were available from a single central laboratory ensuring consistency and reliability of reporting. 
This was of concern in data reported from dialysis facilities in the US, as blood cultures were analysed at several different laboratories, particularly for samples obtained after hospitalisation [24,36]. We reported on BSI, a measure that is based on an objective test, using standard definitions. Finally, the period of follow-up was long relative to other published studies in the literature. Conclusions AR-BSIs remain a significant complication, particularly among contemporary cohorts of patients undergoing haemodialysis by tunnelled catheters. The risk is present for all subgroups including the elderly. Access-related bloodstream infections impose a huge burden on patients and on health systems. Active surveillance of BSI linked to quality improvement initiatives should remain an integral part of all dialysis programmes to reduce catheter-associated infections and improve patient outcomes. Additional file 1: Table S1. Number of patients and type of access in each month in the observation period.
5,335.4
2019-02-26T00:00:00.000
[ "Medicine", "Biology" ]
Automating Quality Metrics in the Era of Electronic Medical Records: Digital Signatures for Ventilator Bundle Compliance Ventilator-associated events (VAEs) are associated with increased risk of poor outcomes, including death. Bundle practices including thromboembolism prophylaxis, stress ulcer prophylaxis, oral care, and daily sedation breaks and spontaneous breathing trials aim to reduce rates of VAEs and are endorsed as quality metrics in intensive care units. We sought to create electronic search algorithms (digital signatures) to evaluate compliance with ventilator bundle components as the first step in a larger project evaluating the ventilator bundle effect on VAE. We developed digital signatures of bundle compliance using a retrospective cohort of 542 ICU patients from 2010 for derivation, with validation and testing of signature accuracy in a random cohort of 100 patients from 2012. Accuracy was evaluated against manual chart review. Overall, digital signatures performed well, with median sensitivity of 100% (range, 94.4%–100%) and median specificity of 100% (range, 99.8%–100%). Automated ascertainment from electronic medical records accurately assesses ventilator bundle compliance and can be used for quality reporting and research in VAE. Introduction Patients who receive mechanical ventilation are at high risk of complications and poor outcomes including death [1]. To effectively manage these high risk patients, providers are encouraged to put in place best practice "bundles" addressing the use of deep vein thrombosis (DVT) prophylaxis, peptic ulcer prophylaxis, oral hygiene, elevation of the head of the bed, daily sedation holiday, and daily spontaneous breathing trial [2]. The ventilator bundle has formed the backbone of many quality improvement efforts and metrics for intensive care units, though its impact on patient outcomes remains uncertain [3]. In 2011, CDC/NHSN proposed a new approach to surveillance including a broader range of ventilator complications termed ventilator-associated events (VAEs) [4]. We sought to investigate if compliance with ventilator bundle practices effectively reduces the risk of the broader set of VAEs and evaluate the relative contribution of each bundle element to patient outcomes. In order to accomplish this, we needed to develop a reliable strategy for assessing bundle compliance for a large number of patients in an efficient manner. Manual chart review is the "gold standard" of retrospective studies. However, it is time-consuming, inaccurate, resource-intensive, and not feasible for large sample sizes. The recent development of information technology and the widespread use of electronic medical record (EMR) systems [5] make automated, rule-based chart queries (digital signatures) an attractive alternative. A digital signature also can be translated into real-time automated algorithms, or "sniffers", where the same rules that were used to retrospectively search charts electronically can give real-time or near real-time reports and alerts to improve patient care [6]. This study aimed to develop and validate digital signatures for each part of the ventilator bundle, including DVT prophylaxis, peptic ulcer prophylaxis, oral care, head of bed elevation, and sedation breaks. Materials and Methods We designed this study as a retrospective study with both derivation and validation cohorts ascertained from intensive care unit patients. The Mayo Clinic Institutional Review Board approved the study as minimal risk research with waived informed consent. Study Population. 
We used a retrospective cohort of 1000 randomly selected patients who were admitted to the intensive care unit (ICU) for at least two consecutive days during 2010 to form our derivation cohort. Of these, 542 met our study inclusion criteria including two consecutive days of mechanical ventilation and research authorization. Our derivation cohort included both ventilated and nonventilated patients to ensure we would have an adequate number of both "true positive" and "true negative" compliance for each element of the bundle while adjusting our search strategy. We then validated the electronic data extraction strategy in an independent cohort of 100 randomly selected patients who were mechanically ventilated for at least two consecutive days in 2012. The purpose of the selection of mechanically ventilated patients from two different years was to better assess the performance of the strategy. Patients aged < 18 year or without research authorization were excluded. Electronic Data Extraction. To develop the electronic data extraction strategy, we utilized data from a custom integrative relational research database that contains a near real-time copy of clinical and administrative data from the electronic medical record (EMR). The Multidisciplinary Epidemiology and Translational Research in Intensive Care (METRIC) datamart accumulates pertinent vital signs, fluid input/output, and medication administration record data within an average of 15 minutes from its entry into the EMR and serves as the main data repository for data rule development. More detailed structures and contents have been previously published [7]. For each bundle element, we iteratively improved the accuracy of our electronic query using the derivation cohort ( Figure 1: flow chart). In all iterations, we calculated and analyzed sensitivity and specificity compared to the reference standard and examined discordant pairs for data which could be used to improve the electronic search accuracy. Once we achieved acceptable sensitivity and specificity, we validated our queries in another independent cohort and calculated final sensitivity and specificity of our digital signatures. The final electronic queries for each ventilator compliance bundle were presented in Table 1. Reference Standard. The reference standard was defined as the agreement between manual and electronic data extraction. A trained investigator (LH), who was blinded to electronic data extraction result, performed comprehensive medical record review to identify the presence or absence of each component of ventilator compliance bundle according to predefined definition (Table 1) between 00:00 to 23:59 on ICU day 2 in the derivation cohort and mechanical ventilator day 2 in the validation cohort. In case there was a disagreement between manual and electronic data extraction, a third independent investigator (JCO), who was blinded Table 1: Bundle components and definitions. The "medical definition" refers to the objective of the bundle element. The "EMR definition" is how we operationalized this for our digital signature. The EMR section used refers to portions of the patient chart searched with the digital signature for the bundle element. 
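To make the notion of a digital signature concrete, the sketch below expresses a bundle-element rule such as DVT prophylaxis (any qualifying anticoagulant administered within the 24-hour window, regardless of dose) as a query over a medication administration record extract. The table layout, column names, and drug list are assumptions for illustration and do not reflect the METRIC datamart schema or the study's actual agent list in Table 1.

```python
import pandas as pd

# Illustrative list of qualifying agents; the study's actual list lives in Table 1.
DVT_PROPHYLAXIS_AGENTS = {"heparin", "enoxaparin", "dalteparin", "warfarin", "fondaparinux"}

def dvt_prophylaxis_signature(mar: pd.DataFrame, patient_id, day_start) -> bool:
    """True if any qualifying anticoagulant was administered to the patient in the
    24-hour window starting at `day_start` (e.g. ICU or ventilator day 2)."""
    window = (mar["patient_id"] == patient_id) \
        & (mar["admin_time"] >= day_start) \
        & (mar["admin_time"] < day_start + pd.Timedelta(hours=24))
    given = mar.loc[window, "drug_name"].str.lower()
    return bool(given.isin(DVT_PROPHYLAXIS_AGENTS).any())

mar = pd.DataFrame({
    "patient_id": [17, 17, 42],
    "drug_name": ["Enoxaparin", "Pantoprazole", "Propofol"],
    "admin_time": pd.to_datetime(["2012-05-03 08:10", "2012-05-03 09:00", "2012-05-03 10:30"]),
})
print(dvt_prophylaxis_signature(mar, 17, pd.Timestamp("2012-05-03")))   # True
print(dvt_prophylaxis_signature(mar, 42, pd.Timestamp("2012-05-03")))   # False
```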
Ventilator compliance bundle element Medical definition EMR definition EMR section used DVT prophylaxis The presence of an appropriate anticoagulant within 24-hour period The systemic administration of one of the following medications within 24 hours regardless of dosage use: Medication administration record, fluid data Peptic ulcer prophylaxis The presence of an appropriate acid-inhibitory drug or sucralfate within 24-hour period The systemic administration of one of the following medications within 24 hours regardless of dosage use: to both results, would make the final adjudication; this definition has been previously used [8]. Statistical Analyses. We summarized clinical characteristics of derivation and validation cohorts using mean ± SD for continuous variables and using counts with percentages for categorical variables. We calculated sensitivity and specificity of each electronic data extraction based on the comparison of the test result and reference standard in the two cohorts. The 95% confidence intervals were calculated using an exact test for proportions. JMP statistical software (version 9.0, SAS Institute Inc.©) was used for all data analysis and randomization. Results and Discussion The derivation subset included a total of 542 ICU patients randomly selected from January 2010 to December 2010. The validation subset included a total of 100 randomly selected patients from January 2012 to December 2012. There were no differences in age, gender, and race between the two groups. The demographic characteristics and baseline of the derivation and validation subset are summarized in Table 2. The sensitivities of five ventilator bundle components were from 92% to 100% in the derivation subset in our final iteration. The specificities ranged from 50% to 99.8% after modification. Elevation of the head of the bed was the bundle element that could not be improved to an adequate sensitivity or specificity because of variable and inconsistent charting. We thus decided not to validate this query and did not test in our validation cohort. When examining the validation cohort, the sensitivities of our digital signatures ranged from 94.4% to 100%, and specificity was 100% for each (Table 3). Manual chart review was slow, requiring our reviewers to access two or more programs to abstract the relevant data from the EMR, taking an average of 10 minutes/patient. We achieved comparable results with electronic data abstraction, BioMed Research International 5 which will allow us to scan compliance of thousands of patients in a reasonable time frame for the second part of our study, an assessment of ventilator bundle compliance on the risk of developing a VAE. With the widespread adoption of EMRs, the digital signature is an increasingly attractive alternative to manual chart review. Digital signatures have several advantages. First, they are more efficient, making larger-scale cohort studies practical without significant personnel or time expenditure. Second, in developing them, we can look for markers of specific activities that correlate with actual patient outcomes and thus mitigate some types of reporting bias. For example, our DVT prophylaxis signature looks for times where one of the commonly used agents is actually administered, as opposed to asking staff to fill out a checkbox saying that "DVT prophylaxis has been addressed. 
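Validation as described here reduces to 2x2 agreement counts between the digital signature and the manual-review reference, with exact binomial confidence intervals for sensitivity and specificity. The following hedged sketch (not the authors' JMP analysis) uses placeholder counts for a single bundle element.

```python
from statsmodels.stats.proportion import proportion_confint

def sens_spec(tp: int, fn: int, tn: int, fp: int, alpha: float = 0.05):
    """Sensitivity and specificity with exact (Clopper-Pearson) CIs, comparing
    the digital signature against the manual-review reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    sens_ci = proportion_confint(tp, tp + fn, alpha=alpha, method="beta")
    spec_ci = proportion_confint(tn, tn + fp, alpha=alpha, method="beta")
    return (sens, sens_ci), (spec, spec_ci)

# Placeholder counts for one bundle element in a 100-patient validation cohort.
(sens, sens_ci), (spec, spec_ci) = sens_spec(tp=34, fn=2, tn=64, fp=0)
print(f"sensitivity {sens:.1%} (95% CI {sens_ci[0]:.1%}-{sens_ci[1]:.1%})")
print(f"specificity {spec:.1%} (95% CI {spec_ci[0]:.1%}-{spec_ci[1]:.1%})")
```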
" More broadly, this kind of search allows automated searching beyond simple billing codes and administrative data, which are notoriously variable in accuracy [9][10][11][12]. Finally, digital signatures have the potential to be translated into real-time electronic search algorithms, or "sniffers, " to provide near real time data. For example, the same rules that we used to develop our peptic ulcer prophylaxis signature could provide real time data on compliance, use and misuse. Sniffers are increasingly prevalent, though a recent systematic review highlighted issues with variable performance and accuracy owing in part to inadequate validation [13]. As we noted in our effort to derive and validate a signature for head of bed elevation, variability in documentation practice may limit the ability to derive a clinically useful digital signature; however, an emerging automatic documentation technology could help overcome this limitation. An interesting feature we noted in our validation cohort was higher diagnostic performance than in our derivation cohort. As our derivation cohort was what we used to derive the search, we expected to be "overfitted" to that set and lose both sensitivity and specificity as we moved to another cohort. However, we instead noted improvement. This probably owes to improvements made in the ICU datamart's accuracy over time, as our derivation cohort was from archived data in 2010, and validation used the same rules in 2012. We noted better agreement between datamart and EMR data in the more recent set and thus better improvement with our rules-based signatures. With reasonable search algorithms, this allows us to move forward and evaluate the efficacy of specific ventilator bundle elements in preventing VAE. A previous study at our institution using pre-and postbundle implementation measures found no effect, but that study was an ecological design and was not able to evaluate individual patient bundle compliance [3]. With these signatures, we will be able to give a higher resolution evaluation of the effect of the ventilator bundle. We can also work towards developing real time compliance monitoring of the ventilator bundle for both quality improvement purposes, aiming to indirectly improve care and reduce costs with passive monitoring of valueadding practices. Our study also has several limitations. First, as noted above, we are limited by what is electronically documented and the accuracy of initial inputs. Second, preferred medications and formularies differ between hospitals, and while our digital signature may be a starting point for other hospitals attempting something similar, calibration and validation would be necessary to generalize this elsewhere. Finally, the single-center, academic nature of our institution could raise the concern of referral bias and further limit generalizability of our approach. Conclusion The digital signatures used to extract and screen the usage of ventilator-associated pneumonia bundle elements were both sensitive and specific for DVT prophylaxis, peptic ulcer prophylaxis, daily sedation break, and oral care. We were not able to derive a similarly useful signature for head of bed elevation. These signatures have acceptable sensitivity and specificity for use in our larger study of the impact of the ventilator bundle on risk of VAE.
2,790.2
2015-06-08T00:00:00.000
[ "Computer Science", "Medicine" ]
Can we make sense out of"Tensor Field Theory"? We continue the constructive program about tensor field theory through the next natural model, namely the rank five tensor theory with quartic melonic interactions and propagator inverse of the Laplacian on $U(1)^5$. We make a first step towards its construction by establishing its power counting, identifiying the divergent graphs and performing a careful study of (a slight modification of) its RG flow. Thus we give strong evidence that this just renormalizable tensor field theory is non perturbatively asymptotically free. Introduction Recently Hairer [Hai14] solved a series of stochastic differential equations such as the KPZ equation or the φ 4 3 equation. An advantage of such equations is that they are better suited to Monte Carlo computations than functional integrals. Since then, in a systematic series of impressive articles, Hairer and his collaborators [BHZ18; CH16; Bru+17] extended their initial programme to cover the BPHZ renormalization [BP57;Hep66;Zim69]. In contrast to dimensional renormalization, BPHZ renormalization is adapted to the program of constructive field theory. It incorporates the multi-scale expansion, a main constructive tool [Fel+85], and a more up-to-date mathematical formulation of renormalization based on Hopf algebras [CK98]. To the attentive observer, constructive field theory, namely the point of view which Hairer called static, is rapidly merging into the regularity structures and corresponding models of Hairer, which he called the dynamic point of view. In the language of quantum field theory, it happens that the equations which Hairer solved were all Bosonic super-renormalizable. Now is time for advancing the next step: the bosonic just-renomalizable quantum field theories. The BPHZ renormalization was initially designed to cover theories such as QED in dimension four, the main theory at the time. But a profound objection were raised, initially by Landau. Now we have a name for that obstacle: QED is not asymptotically free. Fortunately for the future of quantum field theory, the discovery that electroweak and strong interactions are asymptotically free were instrumental in its "rehabilitation" as a fundamental theory. A famous theorem due to Coleman states that any local Bosonic asymptotically free field theory must include non-Abelian gauge theories. Non-Abelian gauge theories lead to an additional severe problem: the presence of Gribov ambiguities [Gri78] due to gauge fixing. The way out of these difficulties is a main reason for considering the stochastic quantization [PW81], since in this method there are no need to fix the gauge, so no need to solve Gribov ambiguities. But it remains still a tough programme. On the road to this lofty goal, we propose an intermediate step which might be worth the effort in itself. It escapes Coleman's theorem by being a non-local theory. We have in mind the tensor field theory. Born in the quantum gravity craddle [ADJ91; Sas91; Gro92], random tensor models extend random matrix models and therefore were introduced as promising candidates for an ab initio quantization of gravity in rank/dimension higher than 2. However their study is less advanced since they lacked for a long time an analog of the famous 't Hooft 1/N expansion for random matrix models. Their modern reformulation [Gur17a; Gur16;GR11b] considers unsymmetrized random tensors 1 , a crucial improvement which let the large N limit appear [Gur10b; GR11a;Gur11]. 
The limit of large matrix models is made of planar graphs. Surprisingly perhaps, the key to the 1/N tensors is made of a new and simpler class of Feynman graphs that we called melonic. They form the dominant graphs in this limit [Bon+11;BGR12b] Random tensor models can be further divided into fully invariant models, in which both propagator and interaction are left invariant by the symmetry (such as U (N ) ⊗d ), and non-local field theories where the propagator is for example the ordinary Laplacien on the torus U (1) ⊗d (which breaks the symmetry) but in which the interaction is left invariant by the symmetry. To our own surprise, such just-renormalizable models turn out to be asymptotically free [BG16;Riv15]. In particular the simplest such model in this category, nicknamed T 4 5 theory is asymptotically free! It made them an ideal playground for advancing the mathematics both in the static sense of constructive theory and in the sense of Hairer's stochastic quantization. This fact now many years old was perhaps overlooked by the theoretical and mathematical physics community. Also the tensor methods and models in quantum gravity that one of us baptized the tensor track [Riv12a; Riv12b; Riv13; Riv16b; DR18; DR20] was given a big boost from an unexpected corner. Since the advent of the SYK model [Kit15;PR16;MS16] it appears that 1-dimensional quantum random tensor is even richer than the 0-dimensional ordinary random tensor theory [Wit16;Gur17b;CT16;KT17]. It is approximately reparametrization invariant (i.e. conformal), includes holography and it saturates the MSS bound [MSS16]. In fact the real applications, as it often happens, might be elsewhere. Today we probe reality by multiples sensors. That is, we represent that reality by multidimensional big arrays which are, in the mathematical sense, nothing but big tensors. Hence we need to develop better and more versatile algorithms to probe tensors in this limit. Such algorithms could benefit of the modern formulation of random tensors. This is especially true for those separating signal to noise. One example is tensor PCA [RM14; BA+17; BAGJ18; Zar+18], which extends classical matrix PCA to tensors. Such algorithms could be applied in a variety of domains, high energy physics (detection of particule trajectories), spectral imaging or videos, neuroimaging, chemometrics, pharmaceutics, biometrics, social networks and many more. In fact the analysis of big tensors form a bottleneck in such a dazzling kaleidoscope that it is no exaggeration to say that any main progress in this field may create a revolution in artificial intelligence. Now let us come down to earth. The tensor theory new constructive program [Riv16a] is well advanced in the super-renormalizable case [Gur13;DGR14]. In [DR16] the U (1) rank-three model with inverse Laplacian propagator and quartic melonic interactions, which we nickname T 4 3 , was solved. In [RVT18] the U (1) rank-four model T 4 4 was solved. This model looks comparable in renormalization difficulty to the ordinary φ 4 3 theory, but non-locality and the graphs are more complex hence requires several additional non-trivial arguments. The next goal is to treat just-renormalizable asymptotically free Bosonic T 4 5 . In 1979, G. 't Hooft gave a series of lectures entitled Can we make sense out of "Quantum Chromodynamics"? [Hoo79] 3 . He presented there arguments and strategies to control QCD via the study of its singularities in the Borel plane. 
To this aim, he had to control the flow of the coupling constant in the complex plane. The tensor field theory T 4 5 is a perfect playground for constructive purposes as its flows can be controled precisely thanks to its simple and exponentially bounded divergent sector. In the present paper we make a further step by connecting it, modulo certain hypotheses, to an autonomous non-linear flow of the theory of dynamical systems. The T 4 5 tensor field theory is precisely defined in Section 1. In particular, we present the cut-offs we use and an alternative representation of the model in terms of an intermediate matrix field. Section 2 is devoted to the three different representations of Feynman graphs we need (tensor graphs, coloured graphs and intermediate field maps) as well as related concepts thereof. In Section 3 we derive the power-counting, identify the families of divergent graphs and give the recursive definitions of the melonic correlation functions. For constructive purposes, we will employ none of the bare, renormalized or even fully effective perturbative expansions. In fact, it will be preferable to fully mass renormalize the correlation functions but use effective wavefunctions and coupling constants. We define all these objects in Section 4. We also prove there that effective wave-functions and coupling constants are analytic functions of the bare coupling. The main result of Section 4 is Theorem 4.3 which consists in a non perturbative definition of the RG flow for the coupling constant. A careful study of an approximation of this flow is carried out in Section 5 using tools and concepts from discrete and continuous holomorphic local dynamical systems. We identify in particular "cardioid-like" domains of the complex plane invariant under this modified RG flow, see Theorems 5.12 and 5.15 and Corollary 5.13. Solving this T 4 5 model means defining its correlation functions non perturbatively in the coupling constant g. More precisely it requires to prove the existence of holomorphic functions of g in a (probably cardioid-like, with a cut on the negative real axis) domain of the complex gplane such that their Taylor expansions coincide with the perturbative expansions of the (formal) correlation functions of the model. Moreover these functions should very probably be proven Borel summable. To achieve that goal, one expresses the regularized and renormalized correlation functions as series of analytic functions, normally convergent in a domain the size of which is uniformly bounded in the ultraviolet cutoff. The infinite cutoff limit is then well-defined and analytic. These expansions consist in partial resummations of the perturbative series, either expressed in an intermediate field representation ( theory. An update of all currently known approaches to constructive tensor field theory seems necessary [FRVT21]. Throughout this paper, we always use bold characters to denote tuples of at least two variables. We introduce the normalized Gaussian measure where the covariance C is C n,n = δ n,n C(n), C(n) = 1 n 2 + m 2 , n 2 := n 2 1 + n 2 2 + n 2 3 + n 2 4 + n 2 5 . (1.1) This defines the free tensor fields as random distributions on Z 5 , namely on the dual of rapidly decreasing sequences on Z 5 . But as we are interested in interacting tensor fields, we need to regularise the free measure. Ultraviolet cutoff. - In practice we want to restrict the index n to lie in a finite set rather than Z 5 in order to have a well-defined proper (finite dimensional) tensor model. 
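Because several defining formulas above were garbled in extraction, it may help to restate the free covariance concretely: C_{n,n'} = δ_{n,n'} C(n) with C(n) = 1/(n² + m²) and n² = n₁² + ... + n₅², each index being restricted to a finite box by the cutoff. The snippet below builds this diagonal covariance for a toy cutoff; it is a numerical illustration of the definition, not part of the constructive analysis.

```python
import numpy as np

N, m2 = 3, 1.0                      # toy cutoff and bare mass^2, purely illustrative
idx = np.arange(-N, N + 1)          # each colour index n_c runs over [-N, N]

# n^2 = n_1^2 + ... + n_5^2 on the rank-5 index box, then C(n) = 1 / (n^2 + m^2).
grids = np.meshgrid(*([idx] * 5), indexing="ij")
n_squared = sum(g ** 2 for g in grids)
C = 1.0 / (n_squared + m2)          # diagonal covariance: C_{n,n} = C(n)

print(C.shape)                      # (7, 7, 7, 7, 7): one entry per multi-index n
print(C[N, N, N, N, N])             # C(0) = 1/m^2 at the zero mode
```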
This restriction is an ultraviolet cutoff in quantum field theory language. A colour-factorized ultraviolet cutoff would restrict all previous sums over tensor indices to lie in [−N, N ]. However it is not well adapted to the rotation invariant propagator of (1.1) below, nor very convenient for multi-slice analysis as in [GR14]. Therefore we introduce a rotation invariant cutoff but in contrast with [RVT18] it will be smooth. Let a, be two positive numbers such that < a. Let χ be a smooth positive function with support [− , ]. We denote by 1 [−a,a] the indicator function of [−a, a]. In order to prepare for multiscale analysis (see Section 4.3), we fix an integer M > 1 (as ratio of a geometric progression M j ) and choose a large integer j max . Our ultraviolet cutoff is defined as It is smooth, positive, compactly supported, and satisfies It is convenient to choose a = 5/2 and = 3/2 so that the UV cutoff κ effectively restricts each colour index to lie in [−N, N ] with The normalized bare Gaussian measure with cutoff j max is where the bare covariance C b is, up to a bare field strength parameter Z b , the inverse of the Laplacian on T 5 with momentum cutoff j max plus a bare mass term n 2 := n 2 1 + n 2 2 + n 2 3 + n 2 4 + n 2 5 . A random tensor T distributed according to the measure µ C b is almost surely a smooth function on U (1) 5 . The bare model. - The generating function for the moments of the model is where the scalar product of two tensors A·B means n A n B n , g b is the bare coupling constant (which depends on the cutoff N ), the source tensors J and J are dual respectively to T and T and N is a normalization factor. To compute correlation functions it is common to choose which is the sum of all vacuum bare amplitudes. However following the constructive tradition, we shall limit N to be the exponential of the (infinite) sum of the divergent connected vacuum amplitudes. Remark the Z 2 b scaling factor multiplying g b in (1.2). To make the interaction c V c (T, T ) in eq. (1.2) explicit, we recall first some notation. Tr, I and , mean respectively the trace, the identity and the scalar product on H ⊗ . I c is the identity on H c , Tr c is the trace on H c and , c the scalar product restricted to H c . The notationĉ means "every color except c". For instance, Hĉ means c =c H c , Iĉ is the identity on the tensor product Hĉ, Trĉ is the partial trace over Hĉ and , ĉ the scalar product restricted to Hĉ. T and T can be considered both as vectors in H ⊗ or as diagonal (in the momentum basis) operators acting on H ⊗ , with eigenvalues T n and T n . An important quantity in melonic tensor models is the partial trace Trĉ [T T ], which we can also identify with the partial product T, T ĉ . It is a (in general non-diagonal) operator in H c with matrix elements in the momentum basis The main new feature of tensor models compared to ordinary field theories is the non-local form of their interaction, which is chosen invariant under independent unitary transformations on each color index. In this paper we consider only the quartic melonic interaction [DGR14], which is a sum over colors 5 This model is globally symmetric under colour permutations. It is just renormalizable like ordinary φ 4 4 but unlike ordinary φ 4 4 it is asymptotically free and using this crucial difference, we aim, in a future work, at making rigorous sense of it. Mainly in order to prepare for the constructive study of the T 4 5 model, we present here its intermediate field representation [Gur13]. 
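The partial trace Trĉ[T T̄] and the quartic melonic invariant built from it can be made explicit with ordinary tensor contractions: contract T against its conjugate on every colour except c to obtain a matrix on the colour-c index space, then trace the square of that matrix. The sketch below evaluates this for a random rank-5 tensor, under the assumption that this is the standard quartic melonic invariant referred to in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
dims = (4, 4, 4, 4, 4)                       # toy index range for each of the 5 colours
T = rng.normal(size=dims) + 1j * rng.normal(size=dims)

def partial_trace(T, c):
    """M_c = Tr_{c-hat}[T Tbar]: contract T with its conjugate on every colour
    except c, leaving a Hermitian matrix acting on the colour-c index space."""
    axes = [a for a in range(5) if a != c]
    return np.tensordot(T, T.conj(), axes=(axes, axes))

def melonic_quartic(T, c):
    """V_c(T, Tbar) = Tr_c[ M_c M_c ], the quartic melonic invariant of colour c."""
    M = partial_trace(T, c)
    return np.trace(M @ M).real

total = sum(melonic_quartic(T, c) for c in range(5))   # sum over colours, as in the action
print(total)
```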
We put g b =: λ 2 b and decompose the five interactions V c in eq. (1.2) by introducing five intermediate Hermitian N × N matrix 4 fields σ t c acting on H c (here the superscript t refers to transposition) and dual to Trĉ [T T ], in the following way where dν is the usual GUE measure, that is dν(σ t c ) = dν(σ c ) is the normalized Gaussian independently identically distributed measure of covariance 1 on each coefficient of the Hermitian matrix σ c . It is convenient to consider C b as a (diagonal) operator acting on H ⊗ , and to define in this space the operator Performing the now Gaussian integration over T and T yields where dν(σ) := c dν(σ c ), and R is the resolvent operator on H ⊗ R(σ) := 1 4 The indices of σ cannot be bigger than the maximal value N of each tensor index. Feynman graphs Perturbative expansions in quantum field theory are indexed by graphs called Feynman graphs. Their properties reflect analytical aspects of the action functional. Here we will deal with three different graphical notions. 2.1. Tensor graphs. -The first one corresponds to the Feynman graphs of action (1.2) in which the fields are tensors of rank five. As for random matrix models, Feynman graphs are stranded graphs (so-called ribbon graphs in the matrix case) where each strand represents the conservation of one tensor index. The corresponding Feynman rules are recalled in fig. 1 where an example of such a Feynman graph is also given. Such graphs will be called tensor graphs in the sequel and denoted by emphasized letters such as G. The power-counting, i.e. the behaviour at large N of the amplitude, of a tensor graph G depends on the number F (G) of its cycles, also called faces. Open tensor graphs also have non cyclic strands which we call external paths, see fig. 1b. It will be convenient to express the number of faces in terms of the (reduced) Gurau degree [GS16] of the coloured extension of G. We now explain these notions. Coloured graphs. - Strands of a tensor graph correspond to indices of the original tensor fields T and T . Each such index is labelled by an integer from 1 to 5 recalling that T is an element of H 1 ⊗ H 2 ⊗ · · · ⊗ H 5 . We can then associate bijectively to any tensor graph G a bipartite 6-regular properly edge-coloured graph G called its coloured extension. See fig. 2 for a pictorial explanation of the bijection as well as an example. Such edge-coloured graphs, with or without the constraint of being 6-regular, will be called coloured graphs for simplicity and their symbols will be written in normal font. A (D + 1)-regular coloured graph will simply be called (D + 1)-coloured graph. We will need several different notions associated to coloured graphs. The coloured extension of a closed (resp. open) tensor graph will also be considered closed (resp. open). In this work, edges of coloured graphs will bear a "colour" in [5] := {0, 1, . . . , 5}. We will also write [5] * for the set {1, 2, . . . , 5}. Let G be a coloured graph. We let col(G) be the set of colours labelling at least one edge of G. Let c be an element of [5]. We will often writeĉ for [5] \ {c}. We denote by E c (G) the set of edges of G of colour c. The elements of E c (G) are the c-edges of G. If C is a subset of [5], we denote c∈C E c (G) by E C (G). Let E be any subset of the edges of G, we let 5 What we call external edges are actually half-edges and open graphs are in fact pre-graphs. But we do not insist on being so precise with our terminology. An n-bubble B is a bubble such that col(B) has cardinality n. 
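The intermediate field decomposition sketched above rests on the elementary Gaussian identity exp(-λ² x²) = E_σ[exp(i√2 λ x σ)] with σ a standard Gaussian, applied per colour with Hermitian matrix fields σ_c. Below is a quick Monte Carlo check of the scalar version of that identity, offered only as a reminder of the mechanism behind eq. (1.3); it is not the matrix-valued construction itself.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, x = 0.7, 1.3                        # toy coupling (g = lam^2) and field value
sigma = rng.normal(size=1_000_000)       # intermediate field samples, sigma ~ N(0, 1)

lhs = np.exp(-(lam ** 2) * x ** 2)                             # exp(-g x^2)
rhs = np.mean(np.exp(1j * np.sqrt(2) * lam * x * sigma)).real  # E[exp(i sqrt(2) lam x sigma)]
print(lhs, rhs)   # the two agree up to Monte Carlo error
```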
A 2bubble of a closed coloured graph is therefore a cycle whose edges bear two alternating colours. Cyclic 2-bubbles of G whose colour set belong to {0, i} , i ∈ [5] * correspond to the faces of the corresponding tensor graph G. By extension, cyclic 2-bubbles are often also called faces and their number denoted F (G). The number of {0, c}-bubbles will be written F 0c and we define . Similarly, we denote by F ∅ (G) the total number of faces of G, both colours of which are different from 0. Non cyclic 2-bubbles of G represent the external paths of G. The interaction vertices of a tensor graph G are in bijection with thê 0-bubbles of its coloured extension G. The (reduced) Gurau degree δ(G) of a closed (D + 1)-coloured graph 6 G is defined as follows [GS16]: is the number of vertices of G and C(G) its number of connected components. It is a non negative integer. One can indeed show that it is the sum of the genera of some maps associated to G [Gur11]. Let S (c) [D] be the set of cyclic permutations of [D] and τ be such a permutation. Let J τ (G) (J τ if the context is clear) be the map whose underlying graph is G and whose cyclic ordering of the edges around vertices is given by τ . Such maps are called jackets in the tensor field literature [BG+10], see fig. 3 for an example. Then, we have g Jτ . In order to classify the divergent graphs of tensor models, one needs an extension of the Gurau degree to open coloured graphs and the notion of boundary graph of a coloured graph. To start with, we need a slightly generalised version of jacket 7 . where E(G) is the number of external edges of G. ♦ Proof. -Let τ be a cyclic permutation of [D] and J τ (G) be the corresponding jacket of G. By Euler relation, where e(G) denotes the number of internal edges of G. Faces of J τ can be divided into two parts: the ones which are faces of G, the other ones which are not. The latter will be called external and their number will be denoted by F ext (J τ ). What are these external faces exactly? Let us look at the upper left graph of fig. 4, considered as a map. Recall that in the definition of a jacket, we removed external edges. If one does so for this map, it will contain an external face which goes all around it. This face is not a face of G because it is bordered by edges of three different colours, 0, i and j with i, j = 0. It will be convenient to define external faces of G. Non-cyclic 2-bubbles b of G are bordered by two external vertices, namely its two vertices incident with external edges. If the edges of b bear colours 0 and i, we call b an external path of G of colour i. An external face of G of colour ij is then defined as a cyclic and alternating sequence of adjacent external paths of colour i and j respectively. The important point to notice is that the external faces of G are in bijection with the faces of ∂G. External faces of G of colour ij are faces of J τ if and only if τ contains the sequence i0j or j0i. Then, a given external face of G belongs to exactly 2(D − 2)! jackets. Each face of G belongs to exactly 2(D − 1)! jackets so that Similarly, using that the total number of jackets of ∂G is (D − 1)!, that 2e(∂G) = DV (∂G) and . This concludes the proof. Coloured graphs of vanishing degree are said to be melonic. They form the dominant family of the 1/N -expansion of coloured tensor models [Gur11]. Intermediate field maps. 
- The third graphical notion we will deal with corresponds to the Feynman graphs of action (1.3) viz., Feynman graphs of the intermediate field representation of our model. As the intermediate field representation is a multi-matrix model, its Feynman graphs are ribbon graphs or maps. As each field σ c bears a colour index c (and the covariance is diagonal in this colour space), the edges of these maps bear a colour too. The Tr log interaction term implies that there is no constraint on the degrees of the vertices of these maps nor on the properness of their edge-colouring. Such maps will be called coloured maps. As for coloured graphs, we let col(G) be the set of colours labelling at least one edge of G. There is a bijection between the Feynman maps of the intermediate field representation and the Feynman graphs of the original tensorial action (1.2). A precise description of this bijection can be found in [BLR17]. Let us remind the reader of its most salient features. Firstly, note that we will in fact explain a bijection between coloured graphs and coloured maps. Let G be a 6-coloured graph of the T 4 5 model and let G be the corresponding coloured map. In eacĥ 0-bubble of G, there are two sets of four parallel edges. Each set will be called a partner link. Edges of G are in bijection with the0-bubbles (or interaction vertices) of G. Each such bubble has a distinguished colour, namely the colour common to the two edges which do not belong to a partner link. We label the corresponding edge of G with it. Partner links of G are in bijection with half-edges of G. Let us now describe the vertices of G. They form cycles of half-edges. But there is a subtlety due to external edges of G. Each maximal alternating sequence of adjacent 0-edges and partner links in G form either a cycle or a (non cyclic) path in case of external (0-)edges. In any case, we represent such a sequence as a vertex in G. If a sequence is not cyclic, we add a cilium, i.e. a mark, to the corresponding vertex of G. See fig. 5 for an illustration of this bijection. In the sequel, we will use (at least) two features of this bijection between coloured graphs of our model and coloured maps: we also used V (G) = 4V (G) and set D = 5. The original proof of eq. (3.1) can be found in [OSVT13]. After substituting the combinatorial relation 2L + E = 4V , the divergence degree of G can be written as • divergent four-point graphs are trees such that the unique path between their two cilia is monochrome, • Proof. -Gurău and Schaeffer [GS16] defined two very convenient coloured graphs we will need. The first one is a chain. Chains can be broken or unbroken. In our case, chains of an intermediate field map G are paths of the form (e 1 , v 1 , e 2 , v 2 , . . . , e n−1 , v n−1 , e n ) where the e i 's are edges of G, the v i 's are vertices of G such that for all i between 1 and n − 1, the degree of v i in G is two. Such a chain is unbroken if all its edges bear the same colour. It is broken otherwise. The second simple but very useful object is that of trivial coloured graphs or ring graphs. They consist in a single loop and no vertex. This loop bears a colour. In our case, this will always be the colour 0. Ring graphs are melonic by convention and are represented by an isolated vertex in the intermediate field representation. According to eq. (3.2), the divergence degree of a 4-point graph G is bounded above by zero. It vanishes if and only if C(∂G) = 1 and δ(G) = 0. 
Divergent four-point graphs are thus trees with two cilia in the intermediate field representation. Now, recursively remove all degree one vertices of this tree which do not bear a cilium. One gets a non trivial path P with a cilium at each end. This path has the same power counting as the initial tree G. It is melonic and its boundary graph is connected if P is monochrome, disconnected otherwise. Thus, according to eq. (3.2), G is superficially divergent if and only if the unique path between its two cilia is monochrome. Let us now consider a Feynman graph G such that E(G) = 2. The divergence degree of such a graph is bounded above by two. The coloured extension of its boundary graph is the unique 6-coloured graph with two vertices. It is thus connected i.e. C(∂G) = 1. Then ω(G) = 2 if and only if δ(G) = 0. Moreover, as proven in [BGR13], if δ(G) > 0 then δ(G) D−2 where D + 1 is the number of colours of G. In our case, D equals five and the smallest possible positive degree is three. Consequently the only superficially divergent 2-point graphs have vanishing degree. Let us finally treat the case of a closed (E = 0) superficially divergent Feynman graph and work in the intermediate field representation. Note that the divergence degree of such a graph is bounded above by five. As a consequence, it has excess at most one. Indeed, adding an edge to a connected graph G increases the number of its corners by two (hence the number of edges of G increases by two) while the number of faces of G can at most increase by one. Thus the divergence degree decreases by at least three. A connected closed graph G with maximal divergence degree (five) is melonic and corresponds, in the intermediate field representation, to a tree. According to the argument above, a superficially divergent closed graph has an excess smaller or equal to one. Let us focus on divergent graphs G of excess one. They are maps with exactly two faces i.e. maps with a unique cycle and trees attached to the vertices of this cycle. In order to further classify such divergent graphs, as in [GS16], we first remove recursively all vertices of degree one. This does not change the degree of the graph. The result is a cycle C i.e. a ring graph into which a maximal proper chain Ch has been inserted. According to [GS16,p. 288] the Gurau degree of the coloured graph C is 3 if C is monochrome (the chain Ch is then non-separating and unbroken with a single resulting face). It is 5 otherwise (Ch is then a non-separating broken chain). The T 4 5 model (1.2) has the power counting of a just renormalizable theory (and can be proven indeed perturbatively renormalizable by standard methods). However the structure of divergent subgraphs is simpler both than in ordinary φ 4 4 or in the Grosse-Wulkenhaar model [GW04] and its translation-invariant renormalizable version. Melonic graphs with zero, two and four external legs are divergent, respectively as N 5 , N 2 and log N . In the sequel we will only consider 1PI (i.e. one particle-irreducible or 2-edge connected) graphs as they represent the only necessary renormalizations. Melonic graphs are trees in the intermediate field representation. The condition that they are 1PI exactly corresponds to the ciliated vertices being of degree one in the tree (cilia do not count). Melonic vacuum graphs are always 1PI. The divergent melonic graphs of the theory are obtained respectively from the fundamental melonic graphs of fig. 
6, by recursively inserting the fundamental 2-point melon on any bold line, or, in the case of the four-point function, also replacing any interaction vertex by the fundamental 4-point melon so as to create a "melonic chain" of arbitrary length (see fig. 7 for a chain of length two), in which all vertices must be of the same colour (otherwise the graph won't be divergent). Melonic correlation functions. - Let us call G mel E,b and Γ mel E,b respectively the connected and one-particle irreducible melonic functions (i.e. sum over the melonic Feynman amplitudes) of the theory with E external fields. With a slight abuse of notation, the bare melonic twopoint function G mel 2,b (n, n) = δ n,n G mel 2,b (n) is related to the bare melonic self-energy Σ mel b (n, n) = δ n,n Σ mel b (n) by the usual equation of the single integer n c : is uniquely defined by the last two equations and the following one (see fig. 9) . . At fixed cutoff N = 2M jmax , these equations define Σ mel b , G mel 2,b and Γ mel 4,b (hence also G mel 4,b ) at least as analytic functions for g b Z 2 b sufficiently small, because the species of melonic graphs is exponentially bounded as the number of vertices increases, see Section 4.4. However this does not allow to take the limit N → ∞ since the radius of convergence shrinks to zero in this limit. In short we need to now renormalize. Renormalized 1PI functions. - The renormalization consists in a melonic BPHZ scheme which is given by BPHZ-like normalization conditions at zero external momenta, but restricted to the divergent sector, namely melonic graphs 8 . The full melonic two-point function is therefore Mass renormalization. -Let us start by performing the mass renormalization, and postpone the wave-function and four-point coupling constant renormalization to the next section. Indeed mass renormalization is simpler as it does not involve renormalons [Riv91]. So throughout this section we keep the bare coupling constant g b , and the bare wave-function normalization Z b . The mass renormalization subtracts recursively the value of all subinsertions at 0 external momentum. Hence, recalling eq. (3.3), the monochrome melonic mass-renormalized self-energy Σ mel mr obeys the closed equation . The sum over p in the equation above diverges only logarithmically as j max → ∞. The total mass counterterm is where we used that Σ mel mr (0) = 0. Remark that δ c m is independent of c, so that in fact Effective renormalization. - We want to perform only the effective (or "useful") part of the coupling constant and wave-function renormalizations, that is when the inner loop slice is higher than the external one. Provided M 2 > 2, η j (j > 0) is positive, smooth, and satisfies As a consequence, we define The decomposition of each propagator in the amplitude A mr (G) of any Feynman graph G allows to write A mr (G) = Definition 4.1 (Effective wave-function). -The effective wave-function constant Z j is where Σ mel mr; j (n) = c Σ mel mr; j (n c ) is the sum of mass-renormalized amplitudes of all 1PI melonic 2-point graphs, all internal scales of which are greater than or equal to j, namely Σ mel mr; j (n c ) : . ♠ Note that with these notations, Z jmax = Z b and Z −1 = Z r = 1. Analyticity. - This section is devoted to proving that the effective wave-functions and coupling constants are analytic functions of the bare coupling g b (in a disk of radius going to 0 as j max → ∞). According to fig. 
9, the generating function for the number of 1PI divergent 2-point graphs is This can be proven either by solving the associated equation for Σ mel , of divergent 4-point graphs is given by which, from eq. (4.1), gives are asymptotically equal to 20 n 64 √ πn 3 . Remembering the definitions of Z j and g j Z 2 j (see Section 4.3.2), we have .3) A n is the sum of the derivatives of the mass-renormalized amplitudes of the 1PI divergent melonic 2-point graphs of order n. B n is the sum of the mass-renormalized amplitudes of the 1PI divergent melonic 4-point graphs of order n. According to their generating functions, the number of such graphs is bounded by a constant to the power n. Moreover there certainly exist p, q ∈ N such that the amplitudes of these graphs are bounded by (j max ) pn M 2qnjmax . Recall that Then, by the implicit function theorem, Z b is an analytic function of g b in a neighbourhood of 0 (which shrinks to {0} as j max → ∞). Let us now define F and G on Ω 1 × Ω 2 where Ω 1 (resp. Ω 2 ) is a complex neigbourhood of 0 (resp. of 1) such that The amplitude of any divergent graph is a finite sum (because our UV cutoff is compactly supported) of analytic functions of Z b in Ω 2 . A n and B n are thus analytic functions of Z b . Series in eqs. (4.2) and (4.3) converge normally so that F and G are analytic on Ω 1 × Ω 2 . Finally Z j and g j Z 2 j are holomorphic functions of g b around 0, by composition of F and G respectively with Z b (g b ). This proves that, at fixed UV cut-off j max , both g j Z 2 j and Z j are analytic functions of g b in a neighbourhood of 0. Note also that g j is an invertible function of g b in a neighbourhood of 0. Asymptotic freedom. -Our aim is to prove where β j = β 2 + O(M −j ), β 2 is a negative real number and O(g 3 j ) = g 3 j f (g j ) where f is analytic around the origin (in a domain which shrinks to {0} as j max → ∞). ♦ Proof. -Let us define α (j) 1 , α (j) 2 and γ (j) 1 as coefficients of the Taylor expansions of g j Z 2 j and Z j : We thus have Inserting the previous equation into the Taylor expansion of g j+1 at order 2, we get Let us now compute the coefficients α (j) 1 , α (j) 2 and γ (j) 1 : where C j+1 (p) := η j+1 (p 2 )/(Z b p 2 + m 2 r ) and A (j) 4,2 (n c , n c ) is the mass-renormalized amplitude, "down to scale j", of the rightmost graph of fig. 6. To get α (j) 1 and α (j) 2 , we need the Taylor expansion of Z b at first order: 4,2 equals A (j) 4,2 evaluated at Z b = 1. We have thus 4,2 (0, 0). Before computing the flow equation for g j , we need the first order Taylor coefficient γ (j) 1 of Z j : where, once again,à (j) 4,2 (resp. K j ) equalsà (j) 4,2 (resp. K j ) evaluated at Z b = 1. Finally, we get . We now prove that K j (like K j+1 ) is of order M −2j and that the sum of the other terms in β j equals a positive constant plus O(M −j ). First, we note that η j (p 2 ) = h(M −2j p 2 ) where h(p 2 ) = κ(p 2 ) − κ(M 2 p 2 ). Remark also that the support of h is [M −2 , 2]. The above sum is a Riemann sum of the compactly supported C 1 function h(p 2 )/p 2 . Its difference with the corresponding integral (which vanishes) is of order of the mesh, that is M −j . Thus where we used η j+1 = η j+1 + η j+2 and η i η j = 0 if |i − j| > 1. We get The analyticity of g j+1 − g j − β j g 2 j as a function of g j follows from the analyticity of g j and Z j as functions of g b (see Section 4.4). We have proven that for all j, g j is a holomorphic function of g b in a neighbourhood of 0 which goes to {0} as j max → ∞. 
This defines g j+1 as a holomorphic function of g j , in a neighbourhood of 0 which goes to {0} as j max → ∞. Moreover the first two coefficients of the expansion g j+1 in powers of g j have a finite limite as j max → ∞. Holomorphic RG flow In Section 4.5 we proved that where f is holomorphic on a neighbourhood Ω jmax of the origin and β j = β 2 + O(M −j ), β 2 < 0. Note that a priori Ω jmax → {0} as j max → ∞. But the first two Taylor coefficients of h jmax,j have in fact finite limits as the ultraviolet cutoff is removed. In order to know if such a result holds true at all orders, which would prove that h jmax,j is holomorphic in a domain uniform in j max , we need a better understanding of the series g j+1 (g j ). In the sequel, we assume it. Assumption 1. -The series g j+1 (g j ) is holomorphic in a domain uniform in j max . The dynamics defined by h jmax,j is not autonomous, its Taylor coefficients depend on j. Nevertheless, far from the infrared cutoff (here m 2 r ), the behaviour of β j suggests that the dynamics becomes autonomous. In the sequel, we assume it. Assumption 2. -The discrete RG flow g j+1 = h(g j ) is defined by the iteration of a (unique) holomorphic map h, tangent to the identity, and such that h(z) = z + β 2 z 2 + O(z 3 ), β 2 < 0. (5.1) Throughout this section, we will be interested in Cauchy problems with complex initial data. In particular, we will prove appropriate uniform boundedness of their solutions with respect to their initial data. In other words, we would like to approach results such as "for all > 0, there exists a complex neighbourhood Ω of 0 such that g r = g −1 ∈ Ω implies for all j 0, |g j | < ". Parabolic holomorphic local dynamics. - Our first objective is to understand the qualitative behaviour of the approximate RG flow (5.1) by invoking the theory of holomorphic dynamical systems. To this aim, we need to recall some classical definitions and theorems, see [Aba10] for example. In other words, the stable set of f is the set of all points z ∈ U such that the orbit {f •k (z) : k ∈ N} is well-defined. If z ∈ U \ {K f } , we shall say that z (or its orbit) escapes from U . Clearly, p ∈ K f and so the stable set is never empty (but it can happen that K f = {p}). The RG flow we consider here is thus a parabolic dynamical system (λ = 1). Definition 5.5 (Multiplicity) . -Let f ∈ End(C, 0) be a holomorphic local dynamical system with a parabolic fixed point at the origin. Then we can write: with a r+1 = 0. r + 1 is called the multiplicity of f . ♠ Definition 5.6 (Directions). -Let f ∈ End(C, 0) be tangent to the identity of multiplicity r + 1 2. Then a unit vector v ∈ S 1 is an attracting (resp. repelling) direction for f at the origin if a r+1 v r is real negative (resp. real positive). ♠ Clearly, there are r equally spaced attracting directions, separated by r equally spaced repelling directions: if a r+1 = |a r+1 |e iα , then v = e iθ is attracting (resp. repelling) if and only if It turns out that to every attracting direction is associated a connected component of K f \ {0}. Leau-Fatou flower). -Let f ∈ End(C, 0) be a holomorphic local dynamical system tangent to the identity with multiplicity r + 1 2 at the fixed point. Let v ± 1 , . . . , v ± r ∈ S 1 be the attracting (resp. repelling) directions of f at the origin. Then, 1. for each attracting (resp. repelling) direction v ± j there exists an attracting (resp. repelling petal) P ± j , so that the union of these 2r petals together with the origin forms a neighbourhood of the origin. 
Furthemore, the 2r petals are arranged cyclically so that two petals intersects if and only if the angle between their central directions is π/r. 2. K f \ {0} is the (disjoint) union of the basins centered at the r attracting directions. 3. If B is a basin centered at one of the attracting directions, then there is a function ϕ : Furthermore if P is the corresponding petal, then ϕ| P is a biholomorphism with an open subset of the complex plane containing a right-half plane -and so f | P is holomorphically conjugated to the translation z → z + 1. ♦ As a consequence of Theorem 5.9, if z belongs to an attracting petal P of a holomorphic local dynamical system tangent to the identity, then its entire orbit is contained in P and moreover f •n (z) goes to 0 (as n → ∞), tangentially to the corresponding attracting direction. A typical trajectory can be seen on fig. 11. Note that Theorem 5.9 asserts the existence of attracting Figure 11. Attracting (green) and repelling (red) petals of a dynamics of multiplicity 4, and a typical trajectory. and repelling petals whose union with the origin forms a neighbourhood of the origin. The intersection properties of these petals implies that their asymptotic opening angle (i.e. their opening angle close to 0) is strictly bigger than π/r for a system of multiplicity r. But in fact, with a bit more work, one can construct petals whose asymptotic opening angle is 2π/r, see [CG93]. Such attracting petals are tangent at 0 to their two neighbouring repelling directions. In case of the system (5.1), we have a parabolic dynamical system of multiplicity 2 so that there is only one attracting (resp. repelling) petal corresponding to the attractive (resp. repelling) direction (1, 0) (resp. (−1, 0)). The asymptotic opening angle of the attracting petal is 2π which makes it very similar to cardioid-like domains obtained by Loop Vertex Expansion In the next sections, we get more quantitative results on the RG trajectories in case g r is real, on the shapes of attracting petals, and on the size of the Nevanlinna-Sokal disk they contain. To this aim, we will study continuous dynamical systems, more precisely linear ODEs, rather than iterations of holomorphic maps. This is justified by the following argument. As we saw in Section 4.5, there is evidence that the discrete RG flow of the T 4 5 tensor field is well approximated by the dynamical system (5.1), at least in the deep ultraviolet. From Theorem 5.9 a trajectory starting in the unique attracting petal P + remains forever in this petal. Moreover, in P + , the dynamics is conjugated to the translation z → z + 1. But this translation is the time 1 flow of the constant vector field Y = ∂ z . Thus there exists a holomorphic vector field X such that h in eq. (5.1) is the time 1 flow of X. The Taylor coefficients of X can be computed recursively via the equation e X (z) = h(z). One finds X = β 2 z 2 + (β 3 . As a consequence we will consider, in the next sections, ODEs of the form g = β 2 g 2 + β 3 g 3 + O(g 4 ), β 2 ∈ R − keeping in mind that the above β 3 corresponds in fact to β 3 − 2β 2 2 in the notation of eq. (5.1). We now prove a uniform bound on |g| for g r in the following compact domain Ω of the complex plane. Definition 5.10 (Domain of uniform boundedness of a quadratic flow). -Let be a positive real number. In polar coordinates (z = ρe iθ , ρ ∈ R + , θ ∈ [−π, π]), The set Ω is made of three parts. On {Re z 0}, Ω is a closed half-disk of radius , centered at 0. 
On {Re z 0} ∩ {Im z 0}, Ω is a closed half-disk of radius 2 centered at i 2 . On {Re z 0} ∩ {Im z 0}, Ω is a closed half-disk of radius 2 centered at −i 2 . See fig. 13 for a picture of Ω . -Ω contains the cardioid domain C := ρ cos 2 ( θ 2 ) which is the typical domain of analyticity of correlation functions predicted by Loop Vertex Expansion. Remark. -In fact, by the holomorphic (on C * ) change of coordinate z → 1/z, one can even prove that g r ∈ Ω implies g(t) ∈ Ω for all t > 0. -The partial derivatives of the right-hand side of eq. (5.5a) with respect to x and y are continuous so that the Cauchy-Lipschitz theorem applies. As a consequence, for any complex initial data g r , there exists a unique maximal continuously differentiable solution g defined on [0, T ) for some T > 0. Moreover if Im g r is positive (resp. negative), then for all t ∈ [0, T ), Im g(t) is positive (resp. negative). In particular g(t) = 0. It is easy to check that the partial derivatives of h are continuous on D so that h is continuously differentiable on D. Then, as (0, 0) ∈ D, by the Cauchy-Lipschitz theorem, there exists a unique (continuously differentiable) solution φ to φ = h(t, φ) such that φ(0) = 0. In particular, φ is defined on [0, T ) for some T > 0. Differential flow of higher degree. - Let us now consider more general complex differential equations and prove that for sufficiently small initial conditions, their solutions are uniformly bounded as well. Let U be a complex neighbourhood of 0. Let f be the following function: f : R + × U → C (t, z) → β 2 z 2 + β 3 z 3 + z 4 h(z) (5.14) where h is holomorphic on U . We are interested in the following Cauchy problem: g = f (t, g) (5.15a) g(0) = g r ∈ C. (5.15b) Definition 5.14 (Disks). -Let r be real and positive. We will denote by D r the open disk of radius r centered at 0. An open disk S r of radius r centered at r will be called a Nevanlinna-Sokal disk. S r is the set of complex numbers z such that Re 1 z > 1 2r or equivalently |z| < 2r cos(arg z). ♠ Proof. -By the Cauchy-Lipschitz theorem, for all g r ∈ U ∩ R, there exists a unique solution of the Cauchy problem (5.15) defined on [0, T ) for some T > 0. Let a : U ∩ R → R be defined as f (t, x) := β 2 x 2 a(x), so that a(x) = 1 + β 3,2 x + 1 β 2 x 2 h(x). Let g c be the smallest positive zero of a in U ∩ R if it exists and sup U ∩ R otherwise. As f (t, x) is negative if x ∈ (0, g c ), by unicity of the solutions of the Cauchy problem (5.15), for all t ∈ [0, T ), f (t, g(t)) is negative. As a consequence, 0 < g(t) < g r and g is in fact defined on R + (and decreasing).
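As a purely illustrative companion to this boundedness statement (not part of the paper), the short Python sketch below integrates the truncated flow g' = β₂g² + β₃g³ with β₂ < 0 from a small real initial coupling and prints the trajectory next to the exact solution of the purely quadratic flow, g(t) = g_r/(1 − β₂ g_r t); the slow decay like 1/(|β₂| t) is the concrete content of asymptotic freedom here. All numerical values (β₂, β₃, g_r, step size) are arbitrary choices.

# Illustrative only: Euler integration of g' = beta2*g**2 + beta3*g**3 (beta2 < 0),
# compared with the exact solution of the quadratic flow g(t) = g_r/(1 - beta2*g_r*t).
def integrate_flow(g_r, beta2=-1.0, beta3=0.3, t_max=200.0, dt=1e-3, sample_every=20.0):
    """Return samples (t, g(t)) of the truncated RG flow, starting from g(0) = g_r."""
    g, t, samples = g_r, 0.0, []
    n_steps = int(t_max / dt)
    stride = int(sample_every / dt)
    for i in range(n_steps):
        if i % stride == 0:
            samples.append((t, g))
        g += dt * (beta2 * g * g + beta3 * g ** 3)   # explicit Euler step
        t += dt
    samples.append((t, g))
    return samples

if __name__ == "__main__":
    g_r = 0.1
    for t, g in integrate_flow(g_r):
        reference = g_r / (1.0 + g_r * t)            # exact quadratic-flow solution for beta2 = -1
        print(f"t = {t:7.1f}   g(t) = {g:.6f}   quadratic reference = {reference:.6f}")
    # The coupling stays positive, decreases monotonically, and remains below g_r,
    # matching the bound 0 < g(t) < g_r established above.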
11,716.6
2021-01-13T00:00:00.000
[ "Mathematics" ]
Suppression of Structural Phase Transition in VO2 by Epitaxial Strain in Vicinity of Metal-insulator Transition Mechanism of metal-insulator transition (MIT) in strained VO2 thin films is very complicated and incompletely understood despite three scenarios with potential explanations including electronic correlation (Mott mechanism), structural transformation (Peierls theory) and collaborative Mott-Peierls transition. Herein, we have decoupled coactions of structural and electronic phase transitions across the MIT by implementing epitaxial strain on 13-nm-thick (001)-VO2 films in comparison to thicker films. The structural evolution during MIT characterized by temperature-dependent synchrotron radiation high-resolution X-ray diffraction reciprocal space mapping and Raman spectroscopy suggested that the structural phase transition in the temperature range of vicinity of the MIT is suppressed by epitaxial strain. Furthermore, temperature-dependent Ultraviolet Photoelectron Spectroscopy (UPS) revealed the changes in electron occupancy near the Fermi energy EF of V 3d orbital, implying that the electronic transition triggers the MIT in the strained films. Thus the MIT in the bi-axially strained VO2 thin films should be only driven by electronic transition without assistance of structural phase transition. Density functional theoretical calculations further confirmed that the tetragonal phase across the MIT can be both in insulating and metallic states in the strained (001)-VO2/TiO2 thin films. This work offers a better understanding of the mechanism of MIT in the strained VO2 films. S2 1. The temperature-dependent XRD θ-2θ scans of 60 nm (001)-VO 2 /TiO 2 Figure S1 shows the temperature-dependent XRD θ-2θ scans of 60 nm (001)-VO 2 /TiO 2 and the obvious peak shifts at low (~30 o C) and high temperature (~90 o C) are observed. Based on this result, the lattice constant c was calculated and shown in Figure 3 in the main text. The lattice constant c has a sharp jump across the MIT, indicating an existing structural phase transition in the thicker films. Figure S1 The temperature-dependent XRD θ-2θ scans of 60 nm (001)-VO 2 /TiO 2 . Figure S2 shows the R-T curve and its differential curve of 60 nm (001)-VO 2 /TiO 2 . It can be seen from Figure S2 that the resistance jumps sharply about four orders at about 65 °C across the MIT, which indicates the high quality of the VO 2 film grown on the TiO 2 substrate by magnetron sputtering techniques. From the inset of differential curve, the MIT ranges from 50 °C to 65 °C, which was also labeled in Figure 3 in the manuscript. Figure S2 The R-T curve and its differential curve of 60 nm (001)-VO 2 /TiO 2 . S4 3. (001)-VO 2 /TiO 2 epitaxial films both at heating and cooling process. Figure S3 shows the complete temperature-dependent Raman spectrum of 13-nm-thick (001)-VO 2 /TiO 2 epitaxial film both at heating and cooling process. There are no strong sharp peaks belong to the monoclinic VO 2 phase and the 358 cm -1 and 413 cm -1 Raman peaks belong to tetragonal VO 2 phase are existed both at low and high temperature. Furthermore, the positions and intensities of them are not changed anymore, which suggests that the structural phase transition should be absent across the MIT both at heating and cooling process. Figure S3 The complete temperature-dependent Raman spectrum of (001)-VO 2 /TiO 2 epitaxial films in 13-nm thick both at (a) heating and (b) cooling process. The temperature-dependent Raman spectrum of bare TiO 2 substrate. 
For comparison, Figure S4 shows the temperature-dependent Raman spectrum of bare (001)-TiO 2 substrate and the characteristic peaks (140 cm -1 , 242 cm -1 , 446 cm -1 and 609 cm -1 ) are labeled. It can be seen that the Raman peaks are not changed no matter for the positions or intensities. This result is useful for eliminating the substrate effect on the Raman peaks for the (001)-VO 2 /TiO 2 epitaxial thin films. Figure S4 The temperature-dependent Raman spectrum of bare TiO 2 substrate. S6 5. The normalized intensities of V 3d electron state at Fermi energy E F as a function of temperature. Figure S5 shows the temperature-dependent intensities of V 3d electron state at Fermi energy E F . The intensities were normalized to the integrated intensity from Binding energy from -0.2 to 0.5 eV as shown in Figure 5b in our manuscript. The intensities shows a change across MIT, which implies that the VO 2 should undergo an electronic phase transition across the MIT, which was well consistent with the previous studies (see references in the main text). This may further verifies that the electronic phase transition should lead to the occurrence of the MIT of the strained 13-nm (001)-VO 2 /TiO 2 epitaxial thin films. We adopted DFT+U method to study the DOS spectra in the 13-nm VO 2 film in the present work. The crystal structure was determined by our experiments and did not change for the cases of U=0.0 eV and U=4.5 eV. This assumption was intended to show the dominant role of electron-electron correlation. Here, we adopted the generalized gradient approximation (GGA) for the exchange correlation along with double-ζ-double polarized basis set for the electron wave function. The computed projected density of states (PDOS) is shown in Figure S6. It is seen that the DOS spectra with U=0.0 eV that the VO 2 was metallic due to the electron occupancy at Fermi energy E F in Figure S6 (a). On the other hand, the DOS spectra with U=4.5 eV in Figure S6 (b) demonstrated the VO 2 was insulating state because there was a band gap of 0.606 eV near Fermi energy E F . The band gap (~0.606 eV) with U=4.5 eV, agrees well with the orbital distributions theoretically calculated by Quackenbush et al. [ Nano Lett. 2013, 13, 4857. ] and Gabriel Kotliar et al. [Phys. Rev. B 2010, 81, 115117.] Moreover, the open band gap is in excellent agreement with the experimental results of 0.6 V. [Rev. B 2013, 87, 115121. Phys. Rev. B 2013.] Therefore, with the addition of electron-electron correlation interaction U term (~4.5 eV), the V d-orbitals split into a lower band and an upper band and eventually produce a band gap without changing the crystal structure. Consequently, the electron-electron correlation may induce the MIT, where the structural phase transition is not the mandatory requirement for the MIT. Figure S6 The calculated DOS spectra of with (a) U=0.0 eV and (b) U=4.5 eV. S8 S9 7. The temperature-dependent XRD of the 24-nm-thick VO 2 /TiO 2 film. Figure S7(a) and (b) shows the temperature-dependent XRD θ-2θ scans of 24-nm (001)-VO 2 /TiO 2 and there is no obvious peak shifts during both heating and cooling process. Moreover, in Figure S7(c), the lattice constant c has no jumping behavior and has a linear relationship with respect to the temperature. These results indicate that there is also no structural phase transition in the 24-nm-thick (001)-VO 2 /TiO 2 film. 
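As a brief reminder of how the lattice constant c plotted in Figure S7(c) (and in Figure 3 of the main text) is obtained from a θ-2θ scan of the (002) reflection: for the rutile-type cell d_{002} = c/2, so Bragg's law yields c directly from the peak position. The wavelength and angle in the numerical line are generic illustrative values (Cu Kα and a typical (002) position), not numbers quoted from this supplement.

% Bragg analysis of the (002) reflection (illustrative numbers only):
2\, d_{002} \sin\theta = \lambda, \qquad d_{002} = \frac{c}{2}
\;\Longrightarrow\;
c = \frac{\lambda}{\sin\theta},
\qquad\text{e.g.}\quad
c \approx \frac{1.5406\ \text{\AA}}{\sin 32.5^{\circ}} \approx 2.87\ \text{\AA}.

A shift of the (002) peak to higher angle therefore corresponds to a smaller c, which is why the jump (or its absence) in c across the MIT can be read off the θ-2θ scans directly.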
Comparing the XRD results of the 13-nm and 60-nm VO2 films, we can conclude that VO2 films thinner than the critical thickness (~26.5 nm) may have no structural phase transition across the MIT. Figure S7 The temperature-dependent XRD θ-2θ scans of the (002) peaks of the 24-nm-thick VO2/TiO2 film. The heating and cooling processes are shown in (a) and (b), respectively. (c) The lattice constant c as a function of temperature. The dotted lines mark the region of the MIT. 8. The temperature-dependent Raman spectrum of the 24-nm-thick VO2/TiO2 film. Figure S8 shows the complete temperature-dependent Raman spectrum of the 24-nm-thick (001)-VO2/TiO2 epitaxial film across the MIT (Figure S8: The temperature-dependent Raman spectrum of the 24-nm-thick VO2/TiO2 film). There are no strong sharp peaks belonging to the monoclinic VO2 phase, and the 358 cm−1 and 413 cm−1 Raman peaks belonging to the tetragonal VO2 phase are present at both low and high temperature. This suggests that the structural phase transition should be absent across the MIT. Comparing the Raman spectra of the 13-nm and 60-nm VO2 films, we can conclude that VO2 films thinner than the critical thickness (~26.5 nm) maintain the tetragonal phase even at room temperature under epitaxial strain.
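To put the phrase "epitaxial strain" in rough quantitative terms, the estimate below uses approximate bulk literature lattice constants for rutile TiO2 and rutile (R-phase) VO2; these numbers are not taken from this supplement and serve only to indicate the order of magnitude of the misfit for (001)-oriented growth.

% Rough in-plane misfit estimate with approximate bulk rutile lattice constants
% (a_{TiO2} ≈ 4.594 Å, a_{VO2(R)} ≈ 4.555 Å):
\varepsilon_{\parallel} \;\approx\;
  \frac{a_{\mathrm{TiO_2}} - a_{\mathrm{VO_2}}}{a_{\mathrm{VO_2}}}
  \;\approx\; \frac{4.594 - 4.555}{4.555} \;\approx\; +0.9\%.

A coherently strained film thus experiences roughly 1% in-plane tension, accompanied (through the elastic response) by a contraction of the out-of-plane constant c probed in the θ-2θ scans above; whether and up to what thickness the films remain coherent is exactly what the critical-thickness comparison in the text addresses.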
1,906.8
2016-03-15T00:00:00.000
[ "Physics", "Materials Science" ]
ANALYTIC INTEGRABILITY OF HAMILTONIAN SYSTEMS WITH EXCEPTIONAL POTENTIALS We study the existence of analytic first integrals of the complex Hamiltonian systems of the form Introduction and statement of the main results Ordinary differential equations in general and Hamiltonian systems in particular play a very important role in many branches of the applied sciences. The question whether a differential system admits a first integral is of fundamental importance as first integrals give conservation laws for the model and that enables to lower the dimension of the system. Moreover knowing a sufficient number of first integrals allows to solve the system explicitly. Until the end of the 19th century the majority of scientists thought that the equations of classical mechanics were integrable and finding the first integrals was mainly a problem of computation. In fact, now we know that the integrability is a nongeneric phenomenon inside the class of Hamiltonian systems (see [3]), and in general it is very hard to determine whether a given Hamiltonian system is integrable or not. In this work we are concerned with the integrability of the natural Hamiltonian systems defined by a Hamiltonian function of the form where V (q 1 , q 2 ) ∈ C[q 1 , q 2 ] is a homogeneous polynomial potential of degree k. As usual C[q 1 , q 2 ] is the ring of polynomial functions over C in the variables q 1 and q 2 . To be more precise we consider the following system of four differential equations Let A = A(q, p) and B = B(q, p) be two functions, where p = (p 1 , p 2 ) and q = (q 1 , q 2 ). We define the Poisson bracket of A and B as (2) is completely or Liouville integrable if it has 2 functionally independent first integrals H and F . As usual H and F are functionally independent if their gradients are linearly independent at all points of C 4 except perhaps in a zero Lebesgue measure set. Let PO 2 (C) denote the group of 2 × 2 complex matrices A such that AA T = α Id, where Id is the 2 × 2 identity matrix and α ∈ C \ {0}. The potentials V 1 (q) and V 2 (q) are equivalent if there exists a matrix A ∈ PO 2 (C) such that V 1 (q) = V 2 (Aq). Therefore we divide all potentials into equivalent classes. In what follows a potential means a class of equivalent potentials in the above sense. This definition of equivalent potentials is motivated by the following simple observation (for a proof see [1]). Let V 1 and V 2 be two equivalent potentials. If the Hamiltonian system (2) with the potential V 1 is integrable, then it is also integrable with the potential V 2 . It was shown in [2] that among all equivalent potentials one can always choose a representative V such that the polynomial V has one root in an arbitrary point of CP 1 \ {[1 : +i], [1 : −i]}. This is always possible except for cases when all linear factors of V have the form q 2 ± iq 1 , that is, if the potential V is of the form These potentials are called exceptional. It was proved in [1] that the exceptional potentials V 0 , V 1 , V k−1 , V k and V k/2 when k is even are integrable . It is easy to find that for these exceptional potentials the additional polynomial first integral is: and when k is even I k/2 = q 2 p 1 − q 1 p 2 . It is also claimed in [2] and [1] that nothing is known about the integrability of the remaining exceptional potentials. In this paper we focus on these remaining exceptional potentials. We restrict to the potentials V l with l = 2, . . . , k/2 − 1, k/2 + 1, . . . , k − 2 and k even. 
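Since the displayed formulas for the first integrals I_0, I_1, I_{k−1}, I_k were lost in extraction, here is at least a quick LaTeX check of the one that did survive, I_{k/2} = q_2 p_1 − q_1 p_2 for k even. It assumes the standard natural Hamiltonian H = (p_1^2 + p_2^2)/2 + V(q_1, q_2) and takes, up to normalization, V_{k/2} = (q_2 + iq_1)^{k/2}(q_2 − iq_1)^{k/2} = (q_1^2 + q_2^2)^{k/2}; the overall sign of the Poisson bracket depends on the convention, which does not matter here.

% The kinetic contributions p_1 p_2 - p_2 p_1 cancel, leaving only the potential terms:
\{H,\, I_{k/2}\}
  = \pm\big( q_2\,\partial_{q_1} V_{k/2} - q_1\,\partial_{q_2} V_{k/2} \big)
  = \pm\, k\,(q_2 q_1 - q_1 q_2)\,(q_1^2 + q_2^2)^{k/2 - 1}
  = 0,

so the angular momentum I_{k/2} is conserved, reflecting the rotational invariance of V_{k/2}.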
Note that if k ≤ 4 all the exceptional potentials are integrable with polynomial first integrals. So we will focus on the case k ≥ 5. System (2) becomeṡ Our main results are the following. The proof of Theorem 1 is given in section 3. We state the following conjecture. In the case in which l = 2 or l = k − 2 with k being either even or odd, we can also prove with different techniques the non-existence of rational first integrals. The proof of Theorem 2 is given in section 2. 2. Weight-homogeneous polynomial differential system and proof of Theorem 2 We consider polynomial differential system of the form x 4 ] for i = 1, 2, 3, 4. As usual N, R and C denote the sets of positive integers, real and complex numbers, respectively; and C[x 1 , x 2 , x 3 , x 4 ] denotes the polynomial ring over C in the variables x 1 , x 2 , x 3 , x 4 . Here t can be real or complex. We say that system (4) is weight-homogeneous if there exist s = (s 1 , s 2 , s 3 , s 4 ) ∈ N 4 and d ∈ N such that for arbitrary a ∈ R + = {a ∈ R, a > 0} we have for i = 1, 2, 3, 4. We call s = (s 1 , s 2 , s 3 , s 4 ) as the weight exponent of system (4) and d as weight degree with respect to the weight exponent s. We say that a polynomial F ( The following well-known proposition (easy to prove) reduces the study of the existence of analytic first integrals of a weight-homogeneous polynomial differential system (4) to the study of the existence of a weight-homogeneous polynomial first integrals. Proposition 3. Let H be an analytic function and let H = ∑ i H i be its decomposition into weight-homogeneous polynomials of weight degree i with respect to the weight exponent s. Then H is an analytic first integral of the weight-homogeneous polynomial differential system (4) with weight exponent s if and only if each weight-homogeneous part H i is a first integral of system (4) for all i. We introduce the change of variables In these new variables system (3) becomeṡ (5) From Proposition 3 and the observation above it follows that for proving the existence of non-existence of analytic first integrals of system (5) it is sufficient to show the existence or non-existence of weighthomogeneous polynomial first integrals with weight exponents given in (6). We recall that in the case in which k is even, we can be more precise and it is clear that system (5) is a weight-homogeneous polynomial differential system with weight exponent (1, 1, k/2, k/2) and weight degree d = k/2. Proof of Theorem 2. Instead of Proving Theorem 2 we will prove the following theorem which is equivalent to Theorem 2. Theorem 4. System (5) with l = 2 or with l = k − 2 does not admit an additional rational first integral. We will only prove the case l = k − 2 because the proof of the case l = 2 is exactly the same interchanging the roles of y 1 with y 2 and of x 1 with x 2 . The proof follows directly from the following theorem which is Theorem 2.4 in [2]. Proof of Theorem 1 In this section we will prove the following equivalent result to Theorem 1. We first observe that we only need to prove Theorem 6 for the cases l = 2, . . . , k 2 − 1, because the proof of the cases l = k 2 + 1, . . . , k − 2 is exactly the same interchanging the roles of x 1 with x 2 , and y 1 with y 2 . Before going into the technicalities of the proof of Theorem 6, we would like to highlight the main idea behind the proof. First we shall restrict system (5) to the zero level of the first integral H, which is a polynomial function. 
The restriction to this level set gives rise to a nontrivial rational first integralF of the restricted system. To be more precise,F (y 1 , y 2 , x 1 ) is a polynomial in the variables y 1 , y 2 , x 1 and x −1 1 . So, it can be written in the following form: We recall again that system (5) is a weight-homogeneous polynomial differential system with weight exponent (1, 1, k/2, k/2) and weight degree d = k/2. From section 3 it follows that for proving Theorem 6 it is sufficient to show that this system has no weight-homogeneous polynomial first integrals with weight exponent (1, 1, k/2, k/2). Let F = F (y 1 , y 2 , x 1 , x 2 ) ∈ C[y 1 , y 2 , x 1 , x 2 ] be a weight-homogeneous polynomial first integral of system (5) with weight exponent (1, 1, k/2, k/2) and weight degree d = k 2 n with n ≥ 1. We can express it as The function F cannot depend only on y 1 and y 2 . Indeed, if F = F (y 1 , y 2 ) then from (5) we get and consequently F is a constant. So F depends on x 1 or x 2 , and thus n ≥ 2. We study the first integral F on the level set H = 0 by eliminating, for example x 2 as follows: Thus, we end up with the following system: Note that the restriction of the polynomial first integral F to the level set H = 0 can be written as where eachF j (y 1 , y 2 ) is a homogeneous polynomial of weight degree M := k 2 (n − j). Indeed, the degree ofF j (y 1 , y 2 ) is l 1 + l 2 + ll 4 + (k − l)l 4 = l 1 + l 2 + kl 4 . using that l 1 + l 2 = n − k 2 (l 3 + l 4 ) and l 3 − l 4 = j we can rewrite the above expression as Note that system (8) is completely integrable with the first integrals Using thatF must satisfy (9), we must have that for any m = 0, . . . , j 2 , which yields ) . Note thatF must be a polynomial in the variables y 1 and y 2 . Thus This implies that Moreover using again (9) we have that j 1 ≤ n. Therefore, with β j 1 ,n,k,l ∈ C. To conclude the proof of Theorem 1 it is sufficient to show that F = 0. Indeed ifF = 0 then any weight homogenous polynomial first integral with weight exponent (1, 1, k/2, k/2) and weight degree d = kn/2 restricted to H = 0 is zero and thus system (5) cannot have a weight homogenous polynomial first integral F with weight exponent (1, 1, k/2, k/2) and weight degree d = kn/2 independent with H since otherwise when restricted to H = 0 this first integral would not be zero.
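For completeness, since the displayed weight-homogeneity condition in Section 2 did not survive extraction, the following is the standard form of that condition from the quasi-homogeneous systems literature; it is a reconstruction consistent with the exponents quoted above, not a quotation of the paper.

% Standard weight-homogeneity condition for \dot x_i = P_i(x), i = 1,...,4:
P_i\big(a^{s_1} x_1,\, a^{s_2} x_2,\, a^{s_3} x_3,\, a^{s_4} x_4\big)
  \;=\; a^{\,s_i + d - 1}\, P_i(x_1, x_2, x_3, x_4) \qquad \text{for all } a > 0,
% and a polynomial F is weight-homogeneous of weight degree m when
F\big(a^{s_1} x_1, \ldots, a^{s_4} x_4\big) \;=\; a^{m}\, F(x_1, \ldots, x_4).

One checks directly that a natural Hamiltonian system \dot q_i = p_i, \dot p_i = −∂V/∂q_i with V homogeneous of degree k satisfies this condition with weight 1 for the positions, k/2 for the momenta, and weight degree d = k/2, which matches the exponents (1, 1, k/2, k/2) used throughout the proof.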
2,683.8
2015-10-09T00:00:00.000
[ "Mathematics" ]
Roles of the Wnt Signaling Pathway in Head and Neck Squamous Cell Carcinoma Head and neck squamous cell carcinoma (HNSCC) is the most common type of head and neck tumor. It is a high incidence malignant tumor associated with a low survival rate and limited treatment options. Accumulating conclusions indicate that the Wnt signaling pathway plays a vital role in the pathobiological process of HNSCC. The canonical Wnt/β-catenin signaling pathway affects a variety of cellular progression, enabling tumor cells to maintain and further promote the immature stem-like phenotype, proliferate, prolong survival, and gain invasiveness. Genomic studies of head and neck tumors have shown that although β-catenin is not frequently mutated in HNSCC, its activity is not inhibited by mutations in upstream gene encoding β-catenin, NOTCH1, FAT1, and AJUBA. Genetic defects affect the components of the Wnt pathway in oral squamous cell carcinoma (OSCC) and the epigenetic mechanisms that regulate inhibitors of the Wnt pathway. This paper aims to summarize the groundbreaking discoveries and recent advances involving the Wnt signaling pathway and highlight the relevance of this pathway in head and neck squamous cell cancer, which will help provide new insights into improving the treatment of human HNSCC by interfering with the transcriptional signaling of Wnt. INTRODUCTION Head and neck squamous cell carcinoma (HNSCC) is the sixth most common malignant tumor in the world (Alamoud and Kukuruzinska, 2018). HNSCC causes over 330,000 deaths worldwide, and more than 650,000 HNSCC cases are reported each year (Xi and Grandis, 2003). In the United States, the overall incidence of HNSCC is 11 per 100,000 people, and HNSCC is more common among black populations than white populations. It originates from the mucosa of various organs that have a squamous epithelial lining. These organs include the mouth, nasopharynx, and throat. Oral squamous cell carcinoma (OSCC) is the main type of HNSCC, which is characterized by poor prognosis and low survival rate. Local recurrence of the primary site and cervical lymph node metastasis are the main reasons for the failure of treatment in patients with OSCC. Therefore, elucidating the molecular mechanisms that regulate the occurrence and development of OSCC will help to understand the etiology of these diseases, allow the design of more effective strategies for the treatment of OSCC, and possibly improve treatment. In 1982, Nusse found an oncogenic gene in mouse models of mammary cancer, named int1, and which has homology to the wingless gene of drosophila reported later by Sharma, and the two were collectively called Wnt (Nusse et al., 1991). The Wnt signaling pathways play important roles in embryonic development, tissue regeneration, cell proliferation, and cell differentiation and is abnormally activated in many types of cancers, such as colon cancer (Zheng and Yu, 2018;Flores-Hernández et al., 2020), liver cancer , lung cancer (Ji et al., 2019), breast cancer (Ma et al., 2016), and childhood T-cell acute lymphoblastic leukemia (Ng et al., 2014). Previous studies have shown that dysfunction of the Wnt signaling pathway can promote the development of oral cancer (González-Moles et al., 2014) and that abnormalities in this pathway affect the prognosis of patients with HNSCC. 
More and more research highlights the importance of the Wnt signaling pathway for the prognosis of HNSCC patients and suggests the possibility of actively developing new gene therapy methods that target this pathway in HNSCC. Thus, this review summarizes recent research findings regarding the Wnt signaling pathway in HNSCC to improve our understanding of the mechanisms underlying the roles of this important signaling pathway in cancer cell activity. CANONICAL WNT SIGNALING PATHWAY The hallmark of the canonical Wnt signaling pathway is the accumulation and transport of β-catenin proteins associated with adhesion junctions into the nucleus (Dawson et al., 2013). In an experimental analysis of the axial development of Xenopus laevis and the segmental polarity and wing development of Drosophila, researchers first clarified the role of this canonical pathway in embryonic development (Ng et al., 2019). glycogen synthase kinase 3 (GSK3)β is a central participant in the canonical Wnt pathway. The activity of the Wnt/β-catenin signaling pathway depends on the amount and cellular location of β-catenin (Lustig and Behrens, 2003). Wnt ligands interact with the Fzd receptors. When the Fzd receptors are unoccupied, cytoplasmic β-catenin is degraded by its destruction complex, which includes Axin, APC protein, GSK3, casein kinase 1α (CK1α), and β-catenin (Tejeda-Muñoz and Robles-Flores, 2015). Once the complex is formed, β-catenin begins to phosphorylate sequentially. The first phosphorylation is at Ser45 by CK1α, and subsequently at Thr41, Ser37, and Ser33 by GSK3β. Phosphorylated β-catenin is released from the complex allowing for its ubiquitination at the N-terminal end of the protein and subsequent degradation by E3. Axin and APC can also be phosphorylated by GSK3β and CK1α, resulting in the enhancement of β-catenin phosphorylation (Hagen and Vidal-Puig, 2002). This continuous degradation prevents the accumulation and translocation of β-catenin to the nucleus (MacDonald et al., 2009). When the Wnt/β-catenin signaling is activated, Wnt ligand binds to Fzd receptors and its co-receptor, low-density lipoprotein receptor-related protein 5/6 (Lrp5/6) (Gordon and Nusse, 2006). This complex leads to the recruitment of the scaffold protein (Disheveled, Dvl) to the receptors which are then phosphorylated. Subsequently, Axin, GSK3β, and CK1 migrate from the cytoplasm to the plasma membrane, which contributes to the inactivation of the destruction complex, resulting in β-catenin stabilization through dephosphorylation. Stable β-catenin translocates into the nucleus and interacts with T-cell factor (TCF) transcription factors to induce the expression of Wnt target genes such as c-Myc, cyclin D1, Axin-2, Lgr5, ITF-2, PPAR-δ, and matrix metalloproteinase 1 and 7 (MMP-1, MMP-7) (Wu and Pan, 2010;Velázquez et al., 2017). A variety of Wnt/β-catenin target genes have been identified, including cell proliferation regulation genes, development control genes, and genes related to tumor progression. Wnt1 class ligands (Wnt2, Wnt3, Wnt3a, and Wnt8a) play main roles through the canonical Wnt/β-catenin signaling pathway. NON-CANONICAL WNT SIGNALING PATHWAY Non-canonical Wnt signaling is mediated through Fzds but Lrp5/6 is not involved and consists of two main branches (Valenta et al., 2012): the PCP pathway and the Wnt/Ca 2+ pathway. Non-canonical Wnt signaling is initiated by Wnt5a type ligands (Wnt4, Wnt5a, Wnt5b, Wnt6, Wnt7a, and Wnt11). These Wnt ligands bind to Fzd receptors. 
In addition, receptor tyrosine kinase-like orphan receptor 2 (Ror2), and receptor tyrosine kinase (Ryk) have been suggested as non-canonical signaling co-receptors, which are required for downstream activation. These signal transductions jointly activate the calcium-dependent signaling cascade by activating Dvl (Rao and Kühl, 2010). In the Wnt/Ca 2+ pathway, Wnt ligands bind to receptor complex, leading to the activation of phospholipase C (PLC). This results in inositol 1,4,5-triphosphate-3 (IP3) production and subsequent Ca 2+ release (Anastas and Moon, 2013). Calcium release and intracellular accumulation activate several calcium-sensitive proteins, including protein kinase C (PKC) and calcium/calmodulin-dependent kinase II (CaMKII) (González-Moles et al., 2014). Calcineurin activates nuclear factor of activated T cells (NFAT) and subsequent NFATmediated gene expression (Saneyoshi et al., 2002). Some evidence had been found that parts of the non-canonical Wnt signaling proteins influence the canonical Wnt/β-catenin pathway (van Tienen et al., 2009;Fan et al., 2017). However, the specific mechanism is not yet clear, and more research is needed. PCP was first demonstrated in insects because their cuticular surface has a rich morphology (Adler, 2012). The Wnt/PCP pathway mediates the event of collective migration, but abnormal activation leads to tumor migration ability. In the Wnt/PCP pathway, the binding of Wnt to Fzd and a co-receptor causes recruitment of Dvl to Fzd and its association with disheveledassociated activator of morphogenesis 1 (DAMM1). DAMM1 activates small G protein Rho, through guanine exchange factor and then activates Rho-associated protein kinase to reorganize the cytoskeleton and change cell polarity and migration (Peng et al., 2011). It is characteristic of the plane polarity signal that Rho-associated kinases can mediate cytoskeleton rearrangement. Alternatively, the PCP pathway can also be mediated by the triggering of RAC to initiate the c-Jun amino terminal kinase (JNK) signaling cascade (Javed et al., 2019). The activation of Dvl-mediated Wnt signal induces the activation of heterotrimeric G protein and promotes the transport of intracellular Ca 2+ to the extracellular environment (De, 2011). This transport activates JNK and Nemo-like kinase (NLK) which can phosphorylate TCF transcription factors and antagonize the canonical Wnt signaling pathway (Humphries and Mlodzik, 2018). Taken together, these observations indicate that the Wnt/Ca 2+ pathway is a key regulator of canonical signaling pathways and planar cell polarity pathways. On the other hand, non-canonical signaling pathways phosphorylate TCF through NLK, thereby mediating the activation of canonical Wnt signaling (Figure 1). ABERRANT WNT SIGNALING PATHWAY IN HNSCC With the discovery that a number of Wnt genes are associated with the development of various human cancers, aberrant activation of Wnt signaling pathway became evident. To date, different roles of Wnt in HNSCC have been confirmed. Leethanakul et al. used microarray technology to reveal the role of Wnt in HNSCC for the first time. They found that homologs of both Fzd and Dvl were increased compared with normal tissue samples. This suggests that Wnt mediates invasiveness in the development of HNSCC (Leethanakul et al., 2000). Currently, several other studies have shown that abnormal activation of the Wnt signaling pathway facilitates tumor transformation in head and neck tissues (Iwai et al., 2005). 
For example, Wnt1-induced signaling pathway protein 1 (WISP-1) is involved in the progression of OSCC, and high expression of WISP-1 is significantly associated with treatment failure . Wnt7b, an agonist of the canonical Wnt pathway, shows significantly increased expression in samples from patients with OSCC compared with matched samples of adjacent non-tumorous tissues (Shiah et al., 2016), and the Wnt/β-catenin signaling pathway prevents shedding-mediated apoptosis (anoikis) in SCC1 cells and promotes the growth of HNSCC-xenograft tumors in vivo (Farooqi et al., 2017). The Wnt/β-catenin signaling pathway may regulate the epithelialmesenchymal transition in laryngeal squamous cell carcinoma, thereby regulating tumor development (Psyrri et al., 2014). In OSCC, the non-canonical Wnt/Ca 2+ /PKC pathway is activated by Wnt5a, which promotes migration and invasion (Prgomet et al., 2015). Wnt5b has been found to be significantly increased in the highly metastatic cell line of OSCC cells. Wnt5b gene silencing can significantly inhibit the formation of filopodia-like protrusive structures and migration, whereas stimulation with Wnt5b can significantly increase the formation of filopodia-like protrusions in SAS-LM8 cells (Takeshita et al., 2014). The roles of more Wnt ligands in HNSCC are listed in Table 1. Thus, both canonical Wnt pathways and non-canonical Wnt pathways play great roles in HNSCC. Although Wnt1 type or Wnt5a type ligands activate canonical or non-canonical Wnt pathways, respectively, there is more research that suggests that the results of different Wnt ligands depend on specific combinations of Wnt receptors and coreceptors (Wang et al., 2013;Sakisaka et al., 2015). Besides the canonical Fzd and Lrp receptor, Ror and Ryk are also important alternative receptors for Wnt transduction. Head and neck squamous cell carcinoma can be divided into human papillomavirus (HPV)-positive and HPV-negative tumors, each of which has its unique clinical, pathological, and epidemiological significance (Cancer Genome Atlas Network, 2015). Increasing evidence shows that Wnt/β-catenin signaling has an impact on the pathobiology of HPV-and HPV + HNSCC. HPV viral oncoprotein E6/E7 has been used to alter the prognosis of HPV-HNSCC patients (Liu et al., 2017). In oropharyngeal squamous cell carcinoma, β-catenin is driven to nuclear translation through E6 oncoprotein by activating epidermal growth factor receptors (EGFR). Some researchers have used small interfering RNAs to suppress E6 expression and erlotinib to downregulate EGFR activity and thereby eliminate the nuclear localization of β-catenin and the phosphorylation of EGFR while reducing the invasion characteristics of HPV + HNSCC cell lines in vitro (Nwanze et al., 2015). According to reports, E6/E7 may also suppress E3 ubiquitin ligase protein to induce nuclear translocation of β-catenin. The regulatory effect of E6/E7 on HPV + HNSCC requires further study. Recently, it was found that some microRNAs have potential roles in the attenuation of HPV+/HPV-HNSCC, although the effects are weak (Nwanze et al., 2015). More research is needed to deepen the understanding of the Wnt/β-catenin signaling pathway in HPV + HNSCC (Kobayashi et al., 2018). Due to limited tumor specimens and relevant clinical data, research on HPV + HNSCC lags behind than on HPV-HNSCC (Cancer Genome Atlas Network, 2015; Beck and Golemis, 2016). 
GENETIC AND EPIGENETIC CHANGES OF WNT SIGNALING IN HNSCC
Components of the Wnt signaling pathway, such as Wnt ligand proteins, Wnt antagonists, membrane receptors, and intracellular conduction mediators, are often disrupted by genetic or epigenetic alterations in human tumors (Polakis, 2012).
FIGURE 1 | Overview of the Wnt pathway. (A) Canonical pathway. Binding of Wnt to frizzled receptors activates disheveled (DVL), which disrupts the stability of the destruction complex, composed of Axin, APC, GSK3-β, CK1, and β-catenin. Subsequently, phosphorylation and degradation of β-catenin are inhibited, which allows the association of β-catenin with TCF transcription factors. In the absence of Wnt ligands, the complexes promote phosphorylation of β-catenin. Phosphorylated β-catenin becomes multiubiquitinated (Ub) and subsequently degraded in proteasomes (Foulquier et al., 2018). (B) Non-canonical pathway. In the Wnt/Ca2+ pathway, Wnt ligands bind to a complex consisting of Fzd, DVL, and G-proteins, leading to the activation of PLC, which cleaves phosphatidylinositol 4,5-bisphosphate (PIP2) into diacylglycerol (DAG) and IP3. DAG activates PKC, whereas IP3 promotes the release of intracellular Ca2+, which in turn activates CaMKII and calcineurin (Russell and Monga, 2018). Calcineurin activates NFAT to regulate cell migration and cell proliferation. In the PCP pathway, Wnt ligands bind to a complex consisting of Fzd, Ror2, and DVL, which mediates the activation of RhoA and ROCK, or the activation of Rac and JNK signaling, to regulate cell polarity and migration.
It is reported that the activation of the Wnt1 and Wnt pathways occurs due to epigenetic changes in secreted frizzled-related protein (SFRP), Wnt inhibitory factor (WIF), and the Wnt signaling pathway inhibitor Dickkopf 3 (DKK3). Previous data demonstrated that DKK-3 protein is mainly expressed in HNSCC (Katase et al., 2013), and its expression is associated with a high metastasis rate and poor prognosis of OSCC (Katase et al., 2012). Therefore, epigenetic changes of DKK3 may be closely related to the occurrence and development of HNSCC (Katase et al., 2020). Epigenetic alterations of the SFRP, WIF-1, and DKK-3 genes can activate Wnt pathways, resulting in delocalization of β-catenin in HNSCC (Pannone et al., 2010). It was recently reported that overexpression of β-catenin is significantly associated with increased transcriptional activity in HNSCC (Kartha et al., 2018). The destruction complex strictly controls the level of β-catenin in the cytoplasm. Previous studies have suggested that mutations in APC, Axin, and β-catenin are widespread in colon cancer (Hernández-Maqueda et al., 2013; Yu et al., 2018), esophageal cancer, and gastric cancer. The Axin1 mutation was first identified in hepatocellular carcinoma (Satoh et al., 2000). In a small, diverse group of colon cancer cases, activating point mutations in β-catenin removed the regulatory N-terminal Ser/Thr residues. Similar β-catenin mutations have also been reported in melanoma and other tumors (Morin et al., 1997; Rubinfeld et al., 1997). Mutations in these genes stabilize β-catenin, allowing it to accumulate in the nucleus and subsequently activate the Wnt signaling pathway. However, mutants of APC, Axin, or β-catenin still ultimately depend on exogenous Wnts (Lammi et al., 2004). 
According to HNSCC studies, there are few gene mutations relevant to Wnt pathways in HNSCC, which indicates that abnormal β-catenin accumulation in oral cancer is not associated with mutations in these genes. Although Wnt/β-catenin mutations are not common in HNSCC, other signal pathways, such as FAT1 and AJUBA, can crosstalk with Wnt/β-catenin, resulting in changes in the activity of Wnt signaling pathway (Cancer Genome Atlas Network, 2015; Beck and Golemis, 2016). Mutations in these signaling cascades are almost entirely related to HPVnegative tumors and to the absence of epithelial differentiation programs. Another possible mechanism for the degradation and inactivation of β-catenin involves EGFR signaling (Lee et al., 2010). In OSCC, EGFR stabilizes β-catenin and enhances nuclear accumulation of β-catenin through phosphorylation, possibly via two molecular mechanisms: (1) binding directly and then β-catenin is phosphorylated and (2) phosphorylation through GSK-3β to regulate the activity of the destruction complex (Billin et al., 2000;Hu and Li, 2010). DNA methylation and histone modification also play important parts in the occurrence of HNSCC. Epigenetic regulation may contribute to the silencing of Wnt related genes. Because there is no changes of methylation levels in the CpG island of APC, Axin, and β-catenin genes in OSCC (Shiah et al., 2016), downregulation of Wnt signaling in OSCC and HNSCC is usually due to methylation of different Wnt pathway inhibitors, such as SFRP-2, WIF-1, DKK-1 (Katase et al., 2010), Dachshund family transcription factor 1 (DACH1), and RUNT-related transcription factor 3 (RUNX3). Microarraybased genome-wide epigenetic analyses of human cancer have shown that inhibitors of Wnt signaling pathway are common sites for promoter methylation silencing. However, these Wnt pathway inhibitors may have different levels of methylation in OSCC and HNSCC cells, and may be significantly related to tumor recurrence or disease-free survival. For example, in OSCC cell, the WIF-1 and SFRP2 genes are frequently methylated, whereas the DACH1 and Dkk1 genes are less frequently methylated (Farooqi et al., 2017). In the same way, the WIF-1 gene is often methylated in primary oropharyngeal cancer tissue and associated with poorer survival (Paluszczak et al., 2015). In addition, methylation of the E-cadherin promoter is the main reason for the loss of membrane β-catenin expression, which leads to the release of β-catenin from the E-cadherin/β-catenin complex into the cytoplasm (Wong et al., 2018). By performing chromatin immunoprecipitation promoter array and gene expression analyses in hepatocellular carcinoma, Cheng et al. (2011) found that enhancer of zeste homolog 2 (EZH2) occupancy of the promoter decreased the expression of several Wnt antagonists including Axin2, NKD1, PPP2R2B, DKK1, and SFRP5. EZH2 is the core components of polycomb repressor complex 2 (PRC2) and has methyltransferase activity. It can catalyze histone 3 lysine 27 trimethylation (H3K27me) and eliminate PRC2-mediated gene suppression. Thus, overexpression of EZH2 promotes the neoplastic transformation of epithelial cells. These findings show that inhibiting the activity of Wnt antagonists through DNA methylation and histone modification enables to the constitutive activation of Wnt/β-catenin signaling. Moreover, testing body fluids to detect DNA methylation is feasible and minimally invasive. 
Therefore, the Wnt antagonist gene such as SFRP-2, WIF-1, and DKK-1 secreted in plasma can be used as a biomarker for diagnosis and prognosis (Shiah et al., 2016). WNT SIGNALING PATHWAY IN CANCER STEM CELLS OF HNSCC Stem cells (SCs) have the ability of self-renewal and differentiation. The maintenance and repair of tissue homeostasis depends on the activity of tissue-specific SCs. Cancer SCs (CSCs) are a subset of cells that are resistant to chemotherapy and radiotherapy and often promote relapse by stopping or evading clinical treatment (Mannelli and Gallo, 2012). Like other cancer tissues, HNSCC tissue contains small cell subsets with stem-like characteristics (CSCs), which can bring about tumors with hierarchical structure. According to reports, aberrant Wnt signaling has a promoting effect on different forms of cancer (such as colon cancer, liver cancer, and lung cancer), and plays a key role in guarding CSCs (Vermeulen et al., 2010). Le et al. co-cultured HNSCC tumor spheres and cancer-related fibroblast (CAF) cell line in 3D environment to simulate the interaction in vivo and found that Wnt3a activated Wnt signals in cancer cells and CAF. The activation of Wnt increases the characteristics of CSC, such as sphere formation and invasiveness (Lamb et al., 2013;Le et al., 2019). Non-canonical Wnt signals in CSCs are activated by Wnt5a, Wnt11, or other non-canonical Wnt ligands. It is known that non-canonical Wnt signals promote the survival and drug resistance of CSCs through activation of PI3K-AKT signal and YAP/TAZ-mediated transcription. But there are few studies on the role of non-canonical Wnt signaling pathway in the CSC of HNSCC, most of the findings focus on the Wnt/β-catenin signaling pathway. Recent advances suggest that Wnt/β-catenin signaling is involved in the differentiation and development of CSCs in HNSCC. One proposed mechanism is that Wnt/β-catenin may play a specific role in asymmetric cell division, which allows Dvl, Fzd, Axin, and APC to divide asymmetrically in the cytoplasm, producing a progenitor cell and a cell destined to differentiate (Lien and Fuchs, 2014). The analysis of CSC proliferation stimulated by canonical Wnt signal pathway inhibitors has become the latest experimental method to study the role of this signal pathway in CSC self-renewal. In nasopharyngeal carcinoma, CSC isolated from HNE1 cell line treated with Wnt-C59, an inhibitor of Wnt, can reduce the proliferation of CSC (Cheng et al., 2015). In addition, several other studies have shown that numerous canonical Wnt signal pathway inhibitors, including SFRP4, alltrans retinoic acid (Atra), and active natural compounds and honokiol, can reduce the expression of β-catenin and ultimately inhibit the proliferation of CSC in HNSCC (Lim et al., 2012;Yao et al., 2017). The Wnt/β-catenin signaling pathway also plays an important role in regulating differentiation of SC during early embryonic development (Vlad et al., 2008) and cancer including HNSCC. It is reported that CSC isolated from M3a2 and M4e (HNSCC cell lines) are highly activated. The CSCs injected into nude mice differentiate into tumor cells, resulting in five times larger tumor growth than non-CSC after 8 weeks (Lee et al., 2014). A study showed that the expression of CD44 + was essential for maintaining tumor heterogeneity in HNSCC (Prince et al., 2007). The CSCs with high CD44+ were shown to be characterized by high aldehyde dehydrogenase activity (ALDH) and by expression of c-Met and SOX2. 
According to reports, CD44+/ALDH (high) cells have stronger oncogenicity and selfrenewal ability than CD44 + ALDH (low) cells. ALDH is thought to cause treatment resistance and tumor prevalence by regulating the expression of phosphoinositide 3-kinase (PI3K) and SOX2 signaling pathway (Bertrand et al., 2014). The mesenchymalepithelial transition factor c-Met has been reported to interact with the Wnt/β-catenin pathway in HNSCC (Arnold et al., 2017). The roles of c-Met and Wnt/β-catenin have been widely studied in colon cancer cells, in which their activities determine the fate of cells in CSC. However, the activation of c-Met inhibitor in the presence of β-catenin has been found to result in the elimination of CSCs in HNSCCs (Arnold et al., 2017). It has been reported that FZD8, a modulator of the Wnt/β-catenin pathway, increases the expression of CSCs in HNSCCs by activating the (extracellular regulated MAP kinase) ERK/c-fos signaling axis (Bordonaro et al., 2016;Chen and Wang, 2019). Due to the presence of drug-resistance CSCs, disease recurrence is the main marker of HNSCC. A large body of evidence suggests that Wnt confers chemotherapeutic resistance by upregulating CSC activity in HNSCC. The use of the Fzd/Wnt antagonist SFRP4 was found to increase the drug sensitivity of HNSCC by 25%. SFRP4 was shown to compete directly with Wnt, significantly enhancing cisplatin-induced apoptosis and reducing the activity of tumor cells (Warrier et al., 2014). Furthermore, the use of antagonists had no effect on nontumorigenic mouse embryonic fibroblasts, suggesting that Wnt signaling plays an important role in the development and differentiation of CSCs related to HNSCC. However, the potential mechanism underlying the upregulation of chemical resistance in CSCs remains unclear, as does the mechanism by which Wnt mediates the activation of CSCs. Studies have identified five types of ABC transporters, ABCC1 to ABCC5, as main mediators in the canonical hyperactivation of the Wnt pathway in spheroid cells of HNSCC. The ability of spheroid cells to exhibit CSC-induced chemotherapy resistance was eliminated after knocking out the genes for β-catenin synthesis. However, this knock out resulted in the loss of SC tags necessary for selfrenewal (Song et al., 2010;Yao et al., 2013). Although research on Wnt signal modulators has made great progress, few drugs have been imported for clinical use. Since CSCs have the same characteristics (self-renewal, differentiation) as normal SCs, they present an obstacle to the development of suitable pharmaceutical formulations for HNSCC. WNT SIGNALING AS A THERAPEUTIC TARGET FOR HNSCC Wnt signaling plays an important role in tumorigenesis and acts as a regulator of CSCs renewal in the process of cell homeostasis; thus, it is an attractive therapeutic target. To date, several approaches have been developed, and a few have moved on to clinical trials. One of them is to block the activity of Wnt with specific inhibitor. PORCN, also known as porcupine, is an enzyme which can limit the activation of Wnt signals in serine residues and promote the palmitoylation of Wnt. Using small inhibitors of PORCN, such as IWP, C59, and LGK974 caused rapid decreases in the expression of Wnt signaling (Proffitt et al., 2013). In vitro, C59 inhibited the activity of PORCN, and then inhibited the Wnt palmitoylation, Wnt interaction with carrier protein Wntless/WLS, Wnt secretion, and Wnt activation of β-catenin reporter protein. 
The chick chorioallantoic membrane (CAM) experiment proved that LGK974 can inhibit the growth and metastasis of HNSCC (Rudy et al., 2016). Studies have also shown that PORCN directly prevents the excessive production of Wnt, thus inhibiting the interaction between Wnt and Fzd protein. At present, the inhibition of PORCN on Wnt is being verified in vivo and in vitro. Additionally, inhibitors of tankyrase stabilize axin and antagonize Wnt signaling including XAV939, IWR, G007-LK, and G244-LM, though they have not yet entered clinical trials (Huang et al., 2009;Lau et al., 2013;Kulak et al., 2015). Moreover, ICG-001, a small molecule that inhibits the transcription of CREB binding proteins, downregulates β-catenin/T cell factor signaling by specifically binding to cyclic AMP response elementbinding protein (Emami et al., 2004;Bordonaro and Lazarova, 2015). ICG-001 is currently in phase I clinical trials in patients with HNSCC. Furthermore, OMP-18R5 is a human monoclonal antibody against the Fzd receptor and is currently in phase I clinical trials. Wnt ligands and their compound receptors are also being evaluated in clinical trials (Kawakita et al., 2014). Examples include Omp-54F28, a chimera of human IgG1 and Fzd8, which is related to the growth of pancreatic cancer cells. Currently, most clinical trials use small RNAs as biomarkers for cancer detection, diagnosis, and prognostic evolution (Hayes et al., 2014). To date, no clinical trial has used miRNAs to predict prognosis and the clinical effect in HNSCC patients. A more comprehensive understanding of the involvement of the Wnt pathway in HNSCC is necessary to develop effective therapeutics for oral cancer. CONCLUSION As outlined above, aberrant activation of the Wnt signaling pathway may impact on HNSCC. In addition to gene mutations in the Wnt component, abnormal changes downstream of EGFR are involved in regulating the Wnt/β-catenin pathway, which can reshape the histone/chromatin structure of the target gene. Because the epigenetic alterations of Wnt antagonists are the cause of Wnt signal activation, it may become a potential biomarker for predicting OSCC recurrence in plasma. Appropriate methods are required to deal with CSC generated by aberrant Wnt signaling. Wnt signaling is one of the regulators of CSC generation involving HNSCC. Because of the complexity of non-canonical signal pathway, most of the research on Wnt in HNSCC is focused on canonical WNT signal pathway, but there are few related studies on non-canonical signal pathway. More attention needs to be paid to non-canonical signaling pathways in the future. The evaluation of various aspects of signal transduction can expand our understanding of both this key pathway and the crosstalk between signaling pathways in cells. Such advancement will enable the development of a broad range of therapeutic interventions to eradicate and respond to HNSCC recurrence. AUTHOR CONTRIBUTIONS JX and LH contributed equally in conceiving the review focus, conducting the literature review, summarizing the manuscript, writing the first draft, and finalizing the manuscript. D-LZ and Y-GL designed and directed the review. JX, LH, D-LZ, Y-GL revised and made corrections to the manuscript. All authors have read and agreed to the final version of the manuscript. ACKNOWLEDGMENTS We thank Dr. April Darling (University of Pennsylvania School of Medicine) for the language editing for this manuscript.
6,477.8
2021-01-05T00:00:00.000
[ "Medicine", "Biology" ]
On the application of the reachability distance in the suppression of mixed Gaussian and impulsive noise in color images In this paper, we address the problem of mixed Gaussian and impulsive noise reduction in color images. A robust filtering technique is proposed, which is utilizing a novel concept of pixels dissimilarity based on the reachability distance. The structure of the denoising method requires the estimation of the impulsiveness of each pixel in the processing block using the introduced local reachability concept. Furthermore, we determine the similarity of each pixel in the block to the central patch consisting of the processed pixel and its neighbors. Both measures are calculated as an average of modified reachability distances to the most similar pixels of the central patch and the final filtering output is a weighted average of all pixels belonging to the processing block. The proposed technique was compared with widely used filtering methods and the performed experiments proved its satisfying denoising properties. The introduced filtering design is insensitive to outliers and their clusters introduced by the impulsive noise process, preserves details and is able to efficiently suppress the Gaussian noise while enhancing the image edges. Additionally, we proposed a method which estimates the noise contamination intensity, so that the proposed filter is able to adaptively tune its parameters. Introduction In the recent years the topic of image denoising has been extensively studied in computer vision and digital image processing fields. The enhancement of image quality is a crucial step for almost every computer vision system. Color digital images are often affected by various types of noise which can be caused by analog to digital converter errors during the acquisition process, transmission disturbances in noisy channels, malfunctioning pixels in the camera sensors, natural and man-made electromagnetic noise sources, aging of the storage material and flawed memory locations, among many others [3,19,41,64]. As the denoising is the first step in the image processing pipeline, the effective restoration allows to successfully accomplish its further stages. Thus, denoising is one of the most significant low level processing operations. Generally, the noise filtering methods used for color image enhancement can be divided into component-wise and vector-based techniques. The component-wise filters process the color image channels independently, neglecting the usually strong inter-channel correlation. The advantage of this approach is that many methods used for the greyscale image denoising can be directly applied to the color image channels and the processing results are merged to obtain the final restored output. However, the separate treatment leads to color artifacts, which are especially apparent at image edges. Therefore, generally the vectorial processing is preferred. The noise distortions are usually modelled using a Gaussian or a heavy-tailed distribution or a mixture of both [12,43]. The enhancement of images degraded by the combination of Gaussian and impulsive noise is a challenging task, since the methods which are designed to reduce the Gaussian noise are not able to remove the outlying samples and those capable of suppressing the impulses, usually fail to smooth out the Gaussian noise [15,24,33,39]. 
Effective methods for the reduction of Gaussian noise like Non-local Means (NLM) [7] or Block-Matching and 3D Filtering (BM3D) [14] are not able to suppress the impulsive noise. The concept of the NLM filter is to estimate the new values of a pixel by looking for similar samples in the processing block. To determine the similarity between the central pixel and other samples of the block, the pixels in the local neighbourhoods are analyzed (in so-called patches). The accumulated distances between the corresponding pixels serve as a dissimilarity measure. Unfortunately, the impulsive noise is often handled as tiny details and is preserved. The BM3D method performs the denoising using the sparse representation of an image in the transform domain. First, similar fragments of an image are stacked together into 3D data arrays and then a collaborative filtering using a 3D transform on those arrays is applied to estimate the denoising output. Like in the NLM method, when the processed pixel is corrupted, patches with similarly damaged central pixels are privileged, which leads to the preservation of impulses. The idea of these filters relies on the assumption that in a non-noisy image, similar patches can be found in different image regions. In this manner, the image can be denoised by finding all of its corresponding patches, and then by estimating the most similar patch. In [16] the process is built upon the Maximum Likelihood Estimator (MLE) with weighted distance between patches and achieves good restoration results on textured regions. Similar problems with impulsive noise preservation limit also the capability of popular methods like Mean Shift (MS) [13] and Bilateral Filtering (BF) [55], in which pixels from the local neighborhood which are similar to the corrupted, processed pixel are assigned high weighting values and as a result the impulses are again preserved. The inefficiency in impulsive noise removal of the mentioned above filters can be alleviated by dividing the denoising process into two steps. First, by removing the impulses with a filter designed to cope with them and then by applying on the resulting image a filter designed to suppress the Gaussian noise. For the first step, various techniques can be used, like the standard channel-wise median filter [5], the widely used Vector Median Filter (VMF) [2] and its modifications [38], fuzzy filters [48] and highly effective switching methods [36,40]. The median based filters process uniformly the noisy and uncorrupted pixels, which leads to the removal of tiny details and texture. To avoid this effect, only pixels which are detected as impulses are being replaced by a suitable filtering method. Another family of efficient denoising techniques is based on the concept of the Peer Group Filter (PGF) [9,40]. In this approach each image pixel is analysed and depending on the distance to its closest neighbors in the processing window, it is classified as noisy or not corrupted. The switching techniques can be also applied for the removal of mixed noise [32,39]. First, impulses are detected and replaced with an output of a robust technique and the remaining pixels are restored using a smoothing method. Such a design was applied in [61], where impulses were removed by a median based filtering technique and then the image was smoothed with a BM3D filter. Blur is also a problem of image distortion and has been taken into consideration together with impulsive noise in [10,11]. 
The approach was similar as described before -first impulses were identified and suppressed, and further, remaining pixels were smoothed out by variational methods. Combining NLM [6] with different filters and techniques is also an effective solution, which has been proposed in [31] using the Trilateral Filter [21]. Also in [16,22] a similar patch-based approach for the reduction of mixed noise was introduced. The idea of image inpainting has been also applied for noise removal, like in the approach called robust ALOHA (Annihilating filter-based LOw-rank HAnkel matrix) [25], which uses the sparse and low-rank decomposition of a Hankel structured matrix. Due to the high computational complexity, the usage of parallel CUDA computing is required. The filters intended to smooth out the noise contaminating in color images are mostly exploiting the Minkowski norms in the RGB color space [20]. Some approaches operate on the perceptual spaces like HSV [56] or Lab [26], which yield improved efficiency in terms of objective and subjective quality measures. Techniques utilizing the concepts of fuzzy sets theory also offer a satisfying denoising performance in the mixed noise suppression [23,29,37,44,45]. The filters based on the quaternion representation of color image pixels can also be used for the removal of outliers before subsequent image smoothing [28,57]. In [65] the quaternion based approach was combined with the local reachability density introduced in [4], where the Local Outlier Factor (LOF) has been defined. The concept of reachability distance and LOF has been also successfully utilized in switching fiters intended for the removal of impulsive noise in color images [27,51,58]. In this paper we propose an efficient filtering design, which is an extension and refinement of the technique introduced in [52] in which we described the Pixel to Patch Similarity (PPS) measure. Instead of using the Euclidean distance, we employ the concept of reachability to determine the dissimilarity between pixels and also estimate their measure of impulsiveness. For each sample in the processing block, a weighted average is calculated using the PPS concept and additionally we determine the impulsiveness of each pixel in the processing block. In this way, we are able to eliminate not only impulsive pixels, but also their clusters. This procedure significantly improves the restoration results, especially in the case of strongly contaminated images. The proposed filtering scheme, which will be denoted as Robust Reachability based Local Similarity Filter (RRLSF) enables to efficiently suppress the mixed Gaussian and impulsive noise in color images. The combination of the PPS concept with the reachability distance, enables to determine the membership of a pixel from the filtering block to the local neighborhood of the processed sample being restored and also allows to diminish the influence of outliers on the final restoration result. Additionally we propose a novel method of mixed noise intensity estimation and propose a self-adaptive procedure, which automatically tunes the parameters of the novel RRLSF. Thus, we propose an adaptive denoising design, which requires no tuning of its parameters. 
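As a small illustration of the peer-group idea mentioned above — classifying a pixel as an impulse when too few of its neighbours lie within a distance threshold — a possible Python sketch is given below; the threshold values and the function name are illustrative assumptions, not the settings of any specific Peer Group Filter variant.

```python
import numpy as np

def is_impulse(window, dist_thresh=45.0, min_peers=3):
    """Classify the central pixel of a 3x3 RGB window as impulsive when fewer
    than min_peers neighbours lie within dist_thresh in the RGB color space."""
    pixels = window.reshape(-1, 3).astype(float)
    center = pixels[4]                        # central pixel of the 3x3 window
    neighbours = np.delete(pixels, 4, axis=0)
    dists = np.linalg.norm(neighbours - center, axis=1)
    peers = int((dists <= dist_thresh).sum())
    return peers < min_peers
```

In a switching scheme, only pixels flagged by such a detector would be replaced by a robust estimate, while the remaining pixels would be passed to a Gaussian-noise smoother.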
The proposed pixel to patch similarity concept and the introduced method of estimating the degree of image degradation can be used in various denoising designs and segmentation procedures like mean or medoid shift [49], anisotropic diffusion [42], fuzzy median based methods [50] or various extensions of fuzzy clustering algorithms [59]. The main contributions of the paper can be summarized as follows:
- Application of the pixel reachability concept within the framework of Pixel to Patch Similarity, which was introduced in [52], for the suppression of mixed Gaussian and impulsive noise in color images.
- Analysis of the influence of the filter's parameters on the objective efficiency of the restoration process.
- Introduction of a simple and fast method of mixed noise intensity estimation.
- Construction of an adaptive design, able to tune the filtering parameters to the contamination level.
The paper is organized as follows. In Section 2 we introduce the proposed filtering design. In the next Section we analyze the influence of the filter's parameters on its denoising efficiency. Additionally, we propose a method of mixed noise intensity estimation and develop a fully adaptive filter. Then we compare the proposed techniques with state-of-the-art denoising methods. In Section 4 we discuss the properties of the proposed filter, analyze its computational complexity and provide an example of its application to the processing of one-dimensional signals. Finally, we draw some concluding remarks in Section 5.
Methods
The noise filtering technique elaborated in this work is based on the concept of the Rank-Ordered Absolute Differences (ROAD) statistic [21], already applied to color image denoising [34,35], and the extensively used Bilateral Filter (BF) [55]. The BF utilizes a similarity measure between a given pixel and the samples from the processing block, combining the spatial distance and the color dissimilarity. The pixels in the processing block B of size η = (2r + 1) × (2r + 1) will be denoted as x_1, ..., x_η, where for convenience x_1 is located in the center of B. The output y_1 of the BF, which replaces the central pixel x_1 of B, is
y_1 = Σ_{j=1}^{η} w_r(x_1, x_j) w_s(x_1, x_j) x_j / Σ_{j=1}^{η} w_r(x_1, x_j) w_s(x_1, x_j),
where the weights w_r and w_s are usually defined as
w_r(x_1, x_j) = exp(−‖x_1 − x_j‖² / (2σ_r²)),   w_s(x_1, x_j) = exp(−τ(x_1, x_j)² / (2σ_s²)),
where ‖·‖ denotes the Euclidean distance in the RGB color space and τ stands for the distance between the pixels on the image domain. In this way, the BF combines the radiometric closeness of the pixels in the color space and their topographic nearness. The influence of the two weights on the final filter output is controlled by the parameters σ_r and σ_s, which have to be tuned to the image characteristics and the noise contamination intensity. Despite the fact that many filters based on the BF have been developed, their common drawback is the inability to efficiently suppress impulsive pixels. Because of their design, the outliers are treated as image details and are retained. Therefore, the BF is a very effective solution in the case of Gaussian noise, but for mixed noise an additional mechanism, reducing the influence of impulses on the denoising result, has to be incorporated into its structure. The Trilateral Filter (TF) [21] employs one such mechanism. In this design, the ROAD statistic has been applied, and it serves as an indicator of the pixel corruption level [18,62]. 
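To make the bilateral weighting described above concrete, a minimal Python/NumPy sketch of the BF output for a single (2r+1) × (2r+1) RGB block is given below; the function name, the default parameter values, and the Gaussian kernels follow the standard bilateral filter formulation and are not taken from any particular implementation of the reviewed papers.

```python
import numpy as np

def bilateral_output(block, sigma_r=30.0, sigma_s=2.0):
    """Return the BF estimate replacing the central pixel of an RGB block.

    block : (2r+1, 2r+1, 3) array of RGB values.
    The radiometric weight uses the Euclidean distance in RGB space,
    the spatial weight uses the distance on the image lattice.
    """
    h, w, _ = block.shape
    r = h // 2
    center = block[r, w // 2].astype(float)

    # Radiometric (color) distances to the central pixel
    color_dist = np.linalg.norm(block.astype(float) - center, axis=2)
    w_r = np.exp(-color_dist**2 / (2.0 * sigma_r**2))

    # Topographic (spatial) distances on the image domain
    yy, xx = np.mgrid[-r:r + 1, -(w // 2):w // 2 + 1]
    tau = np.sqrt(yy**2 + xx**2)
    w_s = np.exp(-tau**2 / (2.0 * sigma_s**2))

    weights = w_r * w_s
    return (weights[..., None] * block).sum(axis=(0, 1)) / weights.sum()
```

In practice, σ_r and σ_s would be tuned to the image characteristics and the noise level, as discussed above.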
The ROAD statistic can be defined for color images as [8,34]
ROAD_α(x_i) = Σ_{k=1}^{α} d_{i(k)},   (3)
where d_{i(k)} is the k-th smallest Euclidean distance between the central pixel x_i of a small filtering window W_i (a patch consisting of n pixels) and its neighbors, and α denotes the number of closest samples. If a pixel is corrupted by impulsive noise, then the corresponding ROAD value is high, even when a similarly damaged pixel can be found in its neighborhood, as shown in Fig. 1. In [30,52] an efficient approach to mixed noise reduction in color images called the Robust Local Similarity Filter (RLSF) was proposed. It is based on the ROAD and BF concepts and utilizes the impulsivity measure used in the TF. However, to diminish its computational complexity and to decrease the number of required parameters, the influence of the topographic distance between pixels on the image domain has been neglected. In the RLSF, for the pixels x_j, (j = 1, ..., η), from a processing block B centered at pixel x_1, we assigned the ROAD defined as the sum of the α smallest distances between a pixel x_j belonging to B and the pixels from the filtering window W_1 in the center of B. The output of the filter presented in [30,52], replacing pixel x_1, is defined as a weighted average
y_1 = Σ_{j=1}^{η} w_j x_j / Σ_{j=1}^{η} w_j,   with   w_j = exp(−(1/σ) Σ_{k=1}^{α} d_{j1(k)}),
where d_{j1(k)} is the k-th smallest dissimilarity measure between x_j and the pixels of the window W_1 of size 3 × 3 (a patch containing n = 9 elements), which is centered at x_1, and σ is a smoothing parameter. The RLSF efficiently reduces the mixed noise, but highly corrupted images may contain too many impulsive pixels which are not recognized as outliers. Thus, in our new Robust Reachability based Local Similarity Filter we analyze the neighborhood of the central pixel of the processing block and of its remaining samples, and try to further reduce the influence of impulsive pixels on the final restoration result. First, we introduce the concept of the reachability distance, which will be used in the definition of a novel, robust dissimilarity measure discriminating the image pixels. Let d_ij denote the Euclidean distance in a chosen color space between pixels x_i and x_j, and let d_{i(k)} stand for the distance of a pixel x_i to its k-th nearest neighbor, as depicted in Fig. 2. The reachability distance of x_i from x_j is then
R_α(x_i, x_j) = max{ d_{i(α)}, d_{ij} },   (6)
so R_α is at least d_{i(α)} and takes the value d_{ij} when d_{ij} is greater than d_{i(α)} (see Fig. 2a). Figure 2b shows an example in which the distance between x_j and x_i is relatively small; however, as the distance d_{i(4)} is greater than d_{ij}, the reachability R_4(x_i, x_j) is equal to d_{i(4)}. Therefore, even if a point is very close to the reference one, the reachability distance can be high, as its computation takes the local structure of the data points into account. In this paper we will employ a modified definition of reachability, denoted as R [1,47], which enables a more stable description of the structure of outlying data points:
R(x_i, x_j) = max{ υ_α, ρ_{ij} },   with   υ_α = (1/α) Σ_{k=1}^{α} ρ_{i(k)},
where ρ_{ij} is a chosen dissimilarity measure between x_i and x_j, which makes it possible to adopt other types of pixel discrepancy measures, and υ_α is the mean of the α smallest difference measures calculated for the closest samples of x_i. Thus, instead of the Euclidean distance, more robust distance types or dissimilarity measures can be used, and the averaging operation guarantees more stable behavior in the case of a low number of analyzed points. In this way, instead of taking in (6) the Euclidean distance with rank α, the average of the smallest α difference measures is computed. 
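The ROAD statistic and the modified reachability measure defined above can be sketched in a few lines of Python; the helper names and the generic dissimilarity argument are our own conventions, and the code illustrates the definitions rather than the authors' implementation.

```python
import numpy as np

def road(center, patch_pixels, alpha=4):
    """ROAD_alpha: sum of the alpha smallest Euclidean distances from the
    central pixel of a small window to its neighbours."""
    d = np.linalg.norm(patch_pixels.astype(float) - center.astype(float), axis=1)
    return np.sort(d)[:alpha].sum()

def modified_reachability(x_i, x_j, neighbours_of_i, alpha=4,
                          dissimilarity=lambda a, b: np.linalg.norm(a - b)):
    """Modified reachability R of x_i from x_j: the maximum of the chosen
    dissimilarity rho_ij and the mean of the alpha smallest dissimilarities
    between x_i and its closest neighbours (upsilon_alpha)."""
    rho_ij = dissimilarity(x_i.astype(float), x_j.astype(float))
    rho_ik = np.array([dissimilarity(x_i.astype(float), n.astype(float))
                       for n in neighbours_of_i])
    upsilon = np.sort(rho_ik)[:alpha].mean()
    return max(rho_ij, upsilon)
```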
The calculation of the reachability distance R is illustrated in Fig. 3. First, the average of the α nearest dissimilarities of the pixel x_j ∈ B to the pixels in the window centered at x_1 is calculated and then compared with the absolute difference of the intensities of pixels x_1 and x_j. Finally, the maximum is taken as the reachability distance. Now we are able to introduce a new measure of discrepancy between a pixel x_j and a patch W_i centered at x_i, which can be viewed as a generalization of the previously defined ROAD. It will be denoted as Ψ_α and is equal to the average of the reachabilities from the pixel x_j of the processing block to the α most similar pixels, denoted as x_{i(k)}, belonging to the patch W_i:
Ψ_α(W_i, x_j) = (1/α) Σ_{k=1}^{α} R(x_{i(k)}, x_j).
The proposed measure of dissimilarity between a pixel and a patch is insensitive to outliers, as instead of direct measures of closeness the modified reachability concept is employed. Using the measure Ψ_α(W_i, x_j) of the difference between the pixel x_j and those contained in W_i, we can define a weight describing their closeness
w_1(W_i, x_j) = exp(−Ψ_α(W_i, x_j) / σ_1),   (10)
where σ_1 is a smoothing parameter. However, we are only interested in the patch W_1 in the center of the processing block. Therefore, in order to replace the pixel x_1 in its center, we assign to every pixel of B the weight w_1(W_1, x_j) according to (10). The pixels in W_1, which is located in the center of B, are likely to be noisy; however, the reachability values are calculated only for the α pixels which show similarity to the individual pixels from the processing block. Additionally, the application of the reachability distance prevents corrupted pixels in B from receiving high weighting values when similar pixels are contained in the central patch W_1.
Fig. 3: Illustration of the computation of the reachability of the noisy pixel x_1 = 244 in the center of a 3 × 3 window from a pixel x_j = 198 of the processing block. For the sake of simplicity, a greyscale image is used and absolute differences of intensities between pixels are taken as dissimilarity measures. Note that the distance between x_j and x_1 is 46 and is significantly smaller than the reachability distance, whose value is 116.
In order to further diminish the influence of an outlying pixel x_j ∈ B on the restoration result, we can evaluate its impulsiveness by analyzing its closeness to its local neighborhood, or in other words to the patch W_j. To this end, we assign the pixels a second weight
w_2(W_j, x_j) = exp(−Ψ_α(W_j, x_j) / σ_2),   (11)
where σ_2 is the second parameter. It is worth noticing that when calculating the second weight, we are using the reachability distances to the pixels in W_j which are most similar to x_j. This requires finding the closest neighbors, and in fact the information from a 5 × 5 window is exploited. Additionally, the reachability distance makes the procedure robust to the influence of small clusters of impulsive pixels. For the estimation of the new filter output y_1, replacing the central pixel x_1 of B, we compute a weighted average of all pixels x_j in this block. As a result, the output pixel y_1 of the proposed filter will be
y_1 = Σ_{j=1}^{η} w_1(W_1, x_j) w_2(W_j, x_j) x_j / Σ_{j=1}^{η} w_1(W_1, x_j) w_2(W_j, x_j).
The described approach is illustrated in Fig. 4, using for simplicity a greyscale image. For a pixel x_1 with intensity 244, a processing block B and a small 3 × 3 window W_1 are taken, both marked red. Then, for an exemplary pixel x_j with intensity 198 in the center of W_j, we proceed as follows:
- to compute w_1, we first calculate the distances between the pixel of intensity 198 and each pixel in the patch W_1; 
then we compute the weight by averaging the computed reachabilities for the α closest pixels in W_1;
- to compute w_2, we proceed similarly, but instead of taking the patch W_1 we analyze the patch W_j centered at x_j.
Utilizing the reachability approach, we minimize the impact of clusters of impulsive pixels in the averaging process. Furthermore, the second weight w_2 eliminates single impulses, which could otherwise be wrongly taken into the averaging process with a high weight.
Results
The aim of our work is to design a filter which is capable of suppressing both the Gaussian and impulsive noise in one denoising framework. The need for developing such an approach is illustrated in Fig. 5, which depicts the performance of the NLM, BM3D and MS techniques when denoising a test image (a) contaminated by Gaussian noise of standard deviation σ = 30 (b) and with subsequently introduced impulsive noise (c), in which 30% of the pixels were corrupted. In a pixel affected by impulsive noise, each RGB channel was randomly replaced by a value drawn from the uniform distribution in the range [0, 255]. As can be observed, using the classical methods, the Gaussian noise is efficiently removed, but the impulses are retained. This example demonstrates the need for developing robust techniques which are able to cope with noise mixtures.
Analysis of the influence of parameters on denoising efficiency
The efficiency of the RRLSF was evaluated on a set of color test images depicted in Figs. 6 and 7. The images were first distorted by Gaussian noise with standard deviation in the range 10-50 (with step 10) and then 10-50% of the pixels were replaced by random-valued impulsive noise, so that every RGB channel of a corrupted pixel was assigned a value drawn from the uniform distribution in the range [0, 255]. To simplify the notation, p denotes the intensity of the Gaussian noise with standard deviation p, combined with impulsive noise contaminating p% of the image pixels. The restoration quality was assessed using the PSNR and MAE measures,
PSNR = 10 log_10(255² / MSE),   MSE = (1/(3N)) Σ_{j=1}^{N} Σ_{q=1}^{3} (x_{j,q} − y_{j,q})²,   MAE = (1/(3N)) Σ_{j=1}^{N} Σ_{q=1}^{3} |x_{j,q} − y_{j,q}|,
where x_{j,q}, q = 1, 2, 3, are the channels of the original image pixels, N denotes the number of pixels in an image, and y_{j,q} are the restored components. Additionally, the spectral residual based similarity (SRSIM) measure was used to better express the image restoration quality in consistency with subjective ratings [63]. First, we investigated the influence of the radius r of the processing block B and of the parameter α on the denoising efficiency of the proposed filter. Figure 8 shows the dependence of the PSNR on the radius r and α using the test color image PEPPERS. As can be seen, the higher the contamination level, the bigger the processing blocks that are needed. The parameter α, however, does not depend significantly on the noise level, and taking α equal to 3 or 4 guarantees good denoising performance. To draw more general conclusions, in Fig. 9 the distribution of the r and α values yielding the highest PSNR values, obtained using 100 images from the database depicted in Fig. 7, is presented.
Fig. 9: Distribution of the radius r and α parameters providing the best possible filtering efficiency in terms of the PSNR measure. The box plots have been created using 100 images (Fig. 7) with the filter settings yielding the highest PSNR values.
Fig. 10: Dependence of the highest PSNR values obtained for various σ_1 and σ_2 for the test image PEPPERS.
For low contamination levels, a processing block of size 5 × 5 (r = 2) gives satisfactory results. For higher noise intensity (p = 30), a block radius r = 5 is recommended, and finally for very high contamination r = 7 is required. 
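For completeness, the pixel-to-patch dissimilarity Ψ_α and the two-weight average that produce the RRLSF output, described in the preceding paragraphs, can be illustrated with the sketch below; it reuses the modified_reachability helper from the earlier sketch, the exponential kernels mirror the reconstruction of Eqs. (10)-(11) above and are an assumption, and all function and variable names are ours.

```python
import numpy as np

def psi(patch_pixels, x_j, alpha=4):
    """Pixel-to-patch dissimilarity: average modified reachability from x_j
    to the alpha most similar pixels of the patch."""
    reach = []
    for k, x_ik in enumerate(patch_pixels):
        others = np.delete(patch_pixels, k, axis=0)   # neighbours of x_ik in the patch
        reach.append(modified_reachability(x_ik, x_j, others, alpha))
    return np.sort(reach)[:alpha].mean()

def rrlsf_pixel(block_pixels, patches, central_patch, sigma1, sigma2, alpha=4):
    """Weighted average of all block pixels using w1 (closeness to the central
    patch W1) and w2 (impulsiveness of each pixel within its own patch Wj)."""
    w = np.array([
        np.exp(-psi(central_patch, x_j, alpha) / sigma1) *
        np.exp(-psi(patches[j], x_j, alpha) / sigma2)
        for j, x_j in enumerate(block_pixels)
    ])
    return (w[:, None] * block_pixels).sum(axis=0) / w.sum()
```

Here block_pixels is the (η, 3) array of block samples, patches is a list holding the 3 × 3 patch around each of them, and central_patch is the patch W_1 around the processed pixel.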
The second important parameter, α, is again not strongly dependent on the noise level. For low noise levels, taking the α = 3 nearest pixels for the similarity measure is adequate. For higher noise intensity, the 4 closest pixels are sufficient. As can be noticed, the box plots presented in Fig. 9 confirm the dependence of r and α on the noise intensity depicted in the heatmaps of Fig. 8. Using a too low or too high α value leads to deterioration of the denoising results. For low contamination, too large a value of α will smooth the image, resulting in the loss of important details, while for higher contamination, taking too small an α will result in insufficient suppression of outliers. Additionally, the two tuning parameters σ_1 and σ_2 are used for better adaptation of the smoothing process, as they reduce the impact of Gaussian noise on the outlier detection process. As can be observed in the heat maps presented in Fig. 10 and also in the box plots exhibited in Fig. 11, the optimal σ_1 parameter is proportional to the noise level p. This behavior is understandable, as with the first weight w_1 defined in (10) we analyze the pixels in terms of their impulsiveness. In this way, using a larger smoothing parameter for high noise contamination levels, the outliers are better smoothed out. The σ_2 parameter needed in the second weight w_2 defined in (11) is inversely proportional to the contamination level. For strong noise, the outliers tend to group into clusters and a lower σ_2 helps decrease their influence on the final denoising result.
Fig. 11: Distribution of the σ_1 and σ_2 parameters providing the best possible filtering efficiency in terms of the PSNR measure, using the database depicted in Fig. 7.
Fig. 12: Box plots depicting the correlation between the average ROAD measure R̄ and the noise level p, obtained using the database shown in Fig. 7.
Noise intensity estimation
To achieve satisfying filtering results, an efficient noise level estimation method is required to adaptively tune the filtering parameters. Using the already introduced ROAD measure, a simple but effective noise intensity estimator can be constructed. Let R̄ denote the average ROAD_α value
R̄ = (1/N) Σ_{i=1}^{N} ROAD_α(x_i),
where ROAD_α is defined by (3) and N is the number of image pixels. Figure 12 reveals a strong correlation between R̄ and the noise intensity p for α = 3, 4 and 5 using the database presented in Fig. 7. The dependence between R̄ and p is nearly linear, which makes it possible to estimate the image contamination by determining its R̄ value. Figure 13 depicts the dependence of the block radius r and the smoothing parameters σ_1, σ_2 on R̄ using α = 3. As can be observed, the optimal setting of the parameters, yielding the optimal performance in terms of the PSNR quality measure, is dependent on the image structure, as the scatter plots are rather widely spread, which is in accordance with the box plots depicted in Figs. 9 and 11. Nevertheless, the block size r and the σ_1 and σ_2 parameters can be approximated by exploiting the linear dependence of R̄ on the noise intensity level p (see Fig. 12), estimated for the image which is to be denoised.
Fig. 13: Dependence of the block radius r (a) and of the parameters σ_1 (b) and σ_2 (c) on the R̄ image contamination measure, evaluated using the database shown in Fig. 7. 
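A rough Python sketch of the contamination estimator R̄ and of a self-tuning step built on it is given below; it reuses the road() helper from the earlier sketch, the reflection padding at the borders is our own choice, the block-radius rule anticipates Eq. (16) quoted just below, and the σ_1 and σ_2 mappings are left to the caller because their fitted coefficients are not reproduced in this excerpt.

```python
import numpy as np

def average_road(image, alpha=4):
    """Contamination measure R-bar: mean ROAD_alpha over all pixels,
    computed in 3x3 windows with reflection padding at the borders."""
    padded = np.pad(image.astype(float), ((1, 1), (1, 1), (0, 0)), mode="reflect")
    h, w, _ = image.shape
    total = 0.0
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 3, x:x + 3].reshape(-1, 3)
            center = padded[y + 1, x + 1]
            neighbours = np.delete(window, 4, axis=0)   # drop the central pixel
            total += road(center, neighbours, alpha)
    return total / (h * w)

def adaptive_parameters(r_bar, sigma1_of_rbar, sigma2_of_rbar):
    """Derive (r, sigma1, sigma2) from R-bar. The radius rule follows Eq. (16)
    below; the sigma mappings stand in for the fitted linear relations, whose
    coefficients are not given in this excerpt."""
    r = max(1, round(0.065 * r_bar - 0.5))
    return r, sigma1_of_rbar(r_bar), sigma2_of_rbar(r_bar)
```

A caller would supply an increasing fit for σ_1 and a decreasing one for σ_2, mirroring the trends reported above.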
In this way, assuming a linear relation between R̄ and p, we can estimate the values of the block size r and of the parameters σ_1 and σ_2; for the block size we obtain
r = max{1, round(0.065 · R̄ − 0.5)}.   (16)
Needless to say, the parameter settings obtained in this way do not guarantee optimal denoising performance, as the deviations from the estimated parameters can be significant, especially in the case of σ_2 (Fig. 13c). However, the difference between the best possible filtering result and those achieved using the adaptive RRLSF, with automatic parameter settings, is not large and visually hardly noticeable. In this way, the introduced adaptive filter is able to enhance noisy images in a self-adaptive manner, without experimentally adjusting the filter parameters in dependence on the noise intensity.
Comparison with competitive methods
The described filtering design has been compared with a set of commonly used denoising methods:
• Robust Local Similarity Filter (RLSF) [52],
• Trilateral Filter (TF) [21],
• Fuzzy Ordered Vector Median Filter (FOVMF) [43],
• Alpha-Trimmed Vector Median Filter (ATVMF) [43],
• Patch-based Approach to Remove Impulse-Gaussian Noise (PARIGI) [17],
• Restricted Marginal Median Filter (RMMF) [46],
• Combined Reduced Ordering Marginal Ordering (CROMO) [38],
• Annihilating filter-based LOw-rank HAnkel matrix (ALOHA) [25],
• Bilateral Filter (BF) [55],
• Vector Median Filter (VMF) [2],
and with two-step filtering techniques: VMF or Peer Group Filter followed by the BF, MS, NLM or BM3D methods. Different distance measures between pixels were tested and finally the squared Euclidean distance was chosen, because of its lower computational load and better denoising results. In this way, we do not need to compute the square root and the distance value between two pixels is more precise. Table 1 presents the PSNR, MAE and SRSIM values obtained with the RRLSF and its adaptive version. For all contamination levels, the loss in PSNR when using the self-tuning procedure is mostly lower than 0.5 dB. Additionally, the Table presents the quality measures obtained when employing a set of competitive filters. Table 2 shows the comparison with two-stage denoising methods, in which first the impulses are removed and then a filter well suited for the removal of Gaussian noise is applied. Again, the comparison shows that the new design is able to efficiently enhance color images highly degraded with mixed noise. Importantly, the adaptive design yields results which mostly excel over the computationally expensive two-stage approaches. The analysis of the results shows that the proposed filter produces especially good results for high noise contamination levels. This behavior is caused by the applied concept of the pixel-patch similarity measure, which is effective for highly corrupted images. It is worth noticing that all the results obtained for the competitive filters are optimal in terms of the used quality measures, which means that their parameters were set to yield the best possible performance. The RRLSF excels over the filters taken for comparison for highly contaminated images; however, even the fully adaptive version, which requires no tuning of any parameters, offers very satisfying outcomes.
Discussion
The satisfactory denoising results offered by the proposed RRLSF can be evaluated using objective quality measures, but they can also be visually assessed by analyzing the filtering outcomes depicted in Fig. 14. 
The impulsive and Gaussian noise is well attenuated, edges are sharp, smooth areas do not contain color blotches and the denoised images are visually pleasing and of overall much better quality than the restoration results obtained using the competitive filters. Figure 14 also shows that the new filter much better preserves image details and smooths out the Gaussian noise in homogeneous image areas. Analyzing Table 1 it can be concluded, that the proposed algorithm outperforms other state-of-art filters for highly contaminated images. Additionally, it performs generally better that the solutions which first remove impulses and then smooth out the Gaussian noise component. On the other hand, for low contaminated images, the best results achieves the Trilateral Filter or a combination of two filters, e.g. PGF+NLM. In terms of the SRSIM measure, the two-pass filters are sometimes slightly better than the proposed RRLSF. The cause can be the fact, that the new filter was being optimized using the PSNR quality measure. A drawback of two-pass filters is their high computational load. In our approach, we need to analyze for every image pixel all reachabilities of elements in a block B to the pixels in the window W centered at the central pixel of B and the window centered at the analyzed pixel. In this manner we get a complexity of O(N · η · n), where N , η and n denote the number of image pixels, number of pixels in the block and the filtering patch, respectively. Figure 15 shows the comparison of the execution time of the RRLSF when compared with the NLM [6] and BF [55]. The new filtering design is slower than BF, but significantly faster than NLM, which allows to apply the elaborated filter for real-time processing tasks even for images in full HD resolution. The experiments have been executed on a CUDA compatible NVIDA RTX2080Ti graphics card. Figure 16 illustrates the application of the proposed filtering design in the denoising of one-dimensional signal (part of a row of the grayscale test image PEPPERS). As can be seen the mixed noise is well attenuated, impulses are completely removed and details are well preserved. In this particular example, we used a processing window consisting of 5 samples and a processing block with the same length. This example shows that the proposed method can be used also in 1D case, however a thorough analysis of the filter's behavior is beyond the scope of the present Fig. 15 Comparison of computational efficiency of RRLSF with the NLM [6] and BF [55] for varying block radius r on test images with increasing size. The presented times are an average of 1000 processing cycles submission. Nevertheless, the proposed method can be useful for artifacts suppression in electroencephalography, electrocardiography or seismic signal processing, among many others. The proposed filtering design compared to our previous work (RLSF) and other stateof-the-art filters reveals a high potential of its use for strongly contaminated images. For low-level noise, two-pass solutions like PGF+BM3D give better results in terms of PSNR and MAE measures, but the computational load is higher and the settings of the optimal parameters is much more complex than in our filtering method. Furthermore, our solution can be easily implemented in a parallel computing environment like CUDA or OpenMP. The proposed novel noise estimator allows applying the proposed RRLSF filter directly on noisy images without a priori knowledge of the noise level. 
Thus, the filter can be used as self-tuning denoising tool. The proposed denoising structure can be of interest to the image processing community, as it is relatively fast and simple in implementation and can be applied in a straightforward way in many existing image enhancement frameworks. Conclusions In this paper a new method of mixed Gaussian and impulsive noise suppression in color digital images has been presented. The proposed filtering mechanism utilizes the novel reachability concept to determine the dissimilarity of pixels, which is used to estimate the impulsiveness of picture elements. In the proposed design, the pixel to patch similarity measure is used and combined with reachability concept to build a weighted average of pixels in a processing block. The experiments revealed that the new filter is robust to outliers, thus effectively removes the impulsive disturbances, while efficiently suppressing the Gaussian noise. Additionally, the proposed approach preserves details and enhances image edges. The reported results confirm the high efficiency of the elaborated filtering technique when enhancing color images degraded by high intensity mixed noise. Additionally, we proposed a method of the estimation of mixed noise contamination intensity and designed a self-tuning filter, which is able to suppress the noise distortions without any adjustment of its parameters. Therefore, the described in this paper filtering approach can be of interest when the noise intensity is changing and no tuning of parameters is possible. We also implemented our algorithm so that it can work on time series and performed many experiments, denoising one-dimensional signals. The obtained results are very promising and the investigation of the properties of the introduced framework might be addressed in future studies.
8,356.6
2020-08-29T00:00:00.000
[ "Computer Science" ]
ON INFORMATICS Visualization and Analysis of Safe Routes to School based on Risk Index using Student Survey Data for Safe Mobility — Risk analysis is important in heterogeneous industrial domains to enable sustainable development. Data is the basis for emphasizing the potential risk elements for improving efficiency, quality, and safety. For supplying safe routes to schools based on risk analysis, the risk assessment of routes is one of the widely used and very effective methodologies to filter the most dangerous roads, intersections, or specific points on roads. This paper presents a visualization and analysis of the risk assessment approach based on the risk index model using geographical information, including routes, danger points, and student survey data. The proposed risk index model is used for deriving a risk index based on geographical information, including danger points and a route's path. The model includes an equation to calculate the distance of danger points to the path using the coordinates of each location. The survey data is mainly comprised of route and survey information that is analyzed and preprocessed for the input data of the risk index model. The survey mainly consists of basic information on the route, survey participants, school route information, and school route coordinates. The data is classified into the school route data set and the school route danger points data set, and these values are applied to the analysis and the risk index model. Also, the risk index model is designed and developed through the analysis of routes. I. INTRODUCTION The attention to road safety-related problems has grown fast in recent decades. Road accident is a significant threat that accounts for the considerable loss of lives and cost to society [1]- [3]. Statistics show that a child every three minutes and 3000 lives every day [4] face different road problems worldwide. To enhance road safety, the United Nations has declared the decade of 2011-2020 as the road safety decade, thus increasing the importance of Road Safety Analysis (RSA) [5]. RSA is a preventative procedure for analyzing prospective security issues for pedestrians and drivers and finding optimal solutions to eliminate or decrease road dangers or crashes [6]. Various factors can be consequences of road accidents, such as vehicles, population, road network length, and the infrastructure of the road sections. These factors are categorized as public-related, traffic-related, and collective and dynamic-related risk factors [7]- [9]. Many theoretical and practical attempts have recently reduced road dangers and accident rates in recent decades. However, these solutions seem to be in their initial process because of the lack of knowledge regarding road infrastructure, human behavior, and vehicle mechanisms [10]- [12]. For these reasons, societies, institutions, international organizations, researchers, and individuals are paying attention to improving the transportation structure and control functionalities to decrease crash levels, as well as the road infrastructure has been analyzed deeply to detect the black spots on roads [13], [14]. Over the last decades, many resources and attention have been invested in the improvement of road user's protection. A safe road traffic system is defined as one that accommodates drivers' and pedestrians' safety [15], [16]. 
Several new types of technologies have been deployed to enhance driver safety, such as intelligent airbags [17], automatic vehicle brakes [18], self-control systems, and to name a few. However, these efforts seem insufficient to reduce the risks and crashes on the roads because road infrastructure also plays an essential role [19]. The road safety level can be improved by analyzing, planning, reconstructing, and monitoring [20], [21]. Various tools and applications have been suggested and promoted to increase road safety. In this paper, we propose a visualization and analysis of the risk index model through the routes and risk points on these routes based on survey data on children's going to and coming from schools. The data used for this study was survey-based information collected from 1707 participants from 11 elementary schools. The survey data is analyzed and processed to find the paths for children to go and return to school. The survey mainly consists of basic information on the route, survey participants, school route information, and school route coordinates. The data is classified into the school route data set and the school route danger points data set, and these values are applied to the analysis and the risk index model. The rest of the paper is structured as follows. Section 2 presents the proposed school route risk index model and the survey dataset with its parameters. Section 3 presents the proposed model's implementation details and experimental results. The conclusion and future directions are discussed in Section 4. II. MATERIALS AND METHOD In this section, the proposed risk assessment approach architecture is presented. Fig. 1 presents the risk index model calculation configuration diagram based on the input and output parameters. Input parameters for the risk index model contain routes to the schools and danger points in these routes, and the coordinates of these input parameters are described in latitude and longitude units. The route coordinates present the children's back-and-forth routes to school and home. These routes include multiple risk points collected from school children through the survey. Both routes and risk points on routes are used as inputs to the risk index model. The risk index model derives the risk index value through the risk point's latitude and longitude coordinate values and each path point. The original dataset of the school children's survey is presented in Figure 2. In the dataset, each row contains ID, survey type, family member, living with grandparents, grade, gender, and school information. The number of rows in the dataset is 1707, the ID is described in four digits, and the values of the questionnaire type are all 10, that is, students. Students who participated in the survey were recruited from 11 elementary schools. Fig. 4 shows the percentage of danger levels based on survey results. The largest proportion of danger levels in the dataset was comprised of 39.13% (danger level 4) of overall danger levels, whereas the smallest shares of danger levels were 0.88% and 4.34% for danger level 1 and danger level 2, respectively. The second highest contribution for danger level was accounted for danger level 3 with 23.37%, according to the survey. There was little difference between the Figure of danger level 5 and danger level 6, as the former contributes the third-highest percentage with 17.57%, whereas the share of the latter was marginally lower (14.7%). Fig 5 presents the pseudocode for implementing the read dataset value function. 
First, the dataset is obtained by reading the CSV file, and its contents are assigned to the variable dataCsv of type List<String[]>. The value of each cell in the following rows is assigned to the variable routeVo of type RouteVo. The value of the route is of type MultilineString, which is parsed into the variable coordVo of type CoorVo and put into routeVo. In the same way, the location coordinates of each danger point are parsed into the variable dangerVo of type DangerVo and put back into routeVo. Finally, the final routeVo is added to the static variable route list. The map-based application is implemented using the Google Maps APIs based on JavaScript. We implement the client application using web client frameworks that support JavaScript sources. The map presents the information of points by geographical coordinates. IV. CONCLUSION To provide a predictable risk assessment to improve the quality of school life, in this paper, we proposed a school route analysis approach based on a risk index model that provides a risk index. We use survey data collected from 1707 students of 11 elementary schools to derive the risk index using the risk index model. The risk index model is implemented based on the proposed equation, which yields the risk index by calculating the distance between coordinates. The risk index derived from the school route dataset ranges between 1032.854 and 41183.41. In the risk index of the school route, 87.46% of values are less than 10000. The risk index derived from the dismissal route dataset is at least 669.6247 and at most 66040.45, and 93.49% of the dismissal route risk index values are less than 10000. In future work, we will apply the proposed risk index model to recommendation applications based on learning approaches for providing smart services to students. The application can provide the user with one or more optimal routes through analysis and prediction based on data using the proposed approach and prediction algorithm.
1,982.4
2022-09-30T00:00:00.000
[ "Economics" ]
Reduced Replication of Highly Pathogenic Avian Influenza Virus in Duck Endothelial Cells Compared to Chicken Endothelial Cells Is Associated with Stronger Antiviral Responses Highly pathogenic avian influenza viruses (HPAIVs) cause fatal systemic infections in chickens, which are associated with endotheliotropism. HPAIV infections in wild birds are generally milder and not endotheliotropic. Here, we aimed to elucidate the species-specific endotheliotropism of HPAIVs using primary chicken and duck aortic endothelial cells (chAEC and dAEC respectively). Viral replication kinetics and host responses were assessed in chAEC and dAEC upon inoculation with HPAIV H5N1 and compared to embryonic fibroblasts. Although dAEC were susceptible to HPAIV upon inoculation at high multiplicity of infection, HPAIV replicated to lower levels in dAEC than chAEC during multi-cycle replication. The susceptibility of duck embryonic endothelial cells to HPAIV was confirmed in embryos. Innate immune responses upon HPAIV inoculation differed between chAEC, dAEC, and embryonic fibroblasts. Expression of the pro-inflammatory cytokine IL8 increased in chicken cells but decreased in dAEC. Contrastingly, the induction of antiviral responses was stronger in dAEC than in chAEC, and chicken and duck fibroblasts. Taken together, these data demonstrate that although duck endothelial cells are permissive to HPAIV infection, they display markedly different innate immune responses than chAEC and embryonic fibroblasts. These differences may contribute to the species-dependent differences in endotheliotropism and consequently HPAIV pathogenesis. Introduction Avian influenza A viruses (AIVs) are maintained through enzootic circulation in wild waterfowl, predominantly in the orders of the Anseriformes (e.g., ducks, geese, and swans) and Charadriiformes (e.g., gulls) [1]. Influenza A viruses are classified based on the antigenic properties of their surface glycoproteins hemagglutinin (HA) and neuraminidase (NA). To date, sixteen HA subtypes (H1-H16) and nine NA subtypes (N1-N9) have been identified in wild waterfowl [2]. Infections with influenza viruses in wild waterfowl are mostly asymptomatic and do not cause histological lesions [3,4]. In these species, AIV tropism is limited to the digestive tract [5]. As a result, the route of transmission among wild waterfowl is thought to be primarily fecal-oral. Occasionally, AIVs spillover from wild birds to terrestrial poultry, e.g., chickens and turkeys, via (in)direct contact [6]. In terrestrial poultry, AIVs can be of low pathogenicity causing mild or subclinical disease in the respiratory and gastrointestinal tracts [7]. These viruses were coined low-pathogenic avian influenza viruses (LPAIVs). However, viruses of the H5 and H7 subtypes can mutate into highly pathogenic avian influenza viruses (HPAIVs) in terrestrial poultry, leading to severe systemic infections with mortality rates reaching 100% [7]. Primary endothelial cells were cultured from aortic arches of chicken, Gallus gallus domesticus, and Pekin duck, Anas platyrhynchos domesticus, embryos as described previously [54,56]. Eighteen-day-old chicken embryos were obtained from Drost Loosdrecht B.V., The Netherlands, and 21-day-old duck embryos were obtained from Duck-To-Farm B.V., The Netherlands. In the European Union, embryos from avian species are not subjected to ethical regulations, regardless of the embryonic stage. Briefly, ascending aortic arches were separated from the hearts and minced into small pieces. 
The pieces were transferred to culture dishes coated with 0.2% gelatin (Sigma-Aldrich, St. Louis, MO, USA) and maintained in Microvascular Endothelial Cell Growth Medium-2 (EGM TM -2MV; LONZA, Basel, Switzerland). Endothelial cells were passaged every 3-4 days for a minimum of 15 passages before use in infection experiments. Chicken and duck embryonic fibroblasts (CEF; DEF) were isolated from 11-day-old chicken and 13-day-old duck embryos (protocol adapted from [57]) and cultured in Medium 199 (LONZA, Basel, Switzerland) supplemented with 10% fetal calf serum (FCS; Greiner Bio-One, Kremsmünster, Austria), 10% Tryptose Phosphate Broth (TPB; MP Biomedicals, Santa Ana, CA, USA), 100 U/mL penicillin (LONZA, Basel, Switzerland), and 100 U/mL streptomycin (LONZA, Basel, Switzerland). Viral Replication Kinetics Confluent monolayers of CEF, DEF, chAEC, or dAEC were inoculated with H5N1 A/Vietnam/1203/04 at an MOI of 1 or 0.001. After 1 h of incubation, the inoculum was removed, and the cells were washed thrice with PBS. Fresh serum-free medium (M199 for EF and EGM TM -2MV for AEC) was overlaid, and cultures were incubated at 40 • C in 5% CO 2 . Viral replication curves were generated in the absence of exogenous trypsin. Supernatant was harvested at the specified time points and stored at −80 • C until further analysis. Infectious virus titers in the supernatant were determined by end-point titration in MDCK cells, and viral matrix segment copy number was determined by reverse transcriptase quantitative PCR (RT-qPCR) (see Section 2.8). RT-qPCR To determine innate immune gene expression patterns, monolayers of CEF, DEF, chAEC, and dAEC were inoculated with H5N1 A/Vietnam/1203/04 at an MOI of 1 in serum-free medium (M199 for EF and EGM TM -2MV for AEC). After 1 h of incubation, the inoculum was removed, and the cells were washed thrice with PBS. Fresh serum-free medium was overlaid, and cultures were incubated at 40 • C in 5% CO 2 . Mock controls were treated with medium only, since the percentage of allantoic fluid in the virus-containing inoculum was limited (<1%). At 6 hpi and 12 hpi, total RNA from virus-and mock-inoculated cells was extracted using the High Pure RNA Isolation Kit (Roche, Basel, Switzerland) according to manufacturer's instructions. RNA was extracted similarly from 200 µL replication curve supernatant (see Section 2.7). Concentration and quality of the RNA was determined using a NanoDrop™ spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). For cDNA synthesis, 100 ng of RNA was reverse-transcribed using oligo (dT) primers (Thermo Fisher Scientific, Waltham, MA, USA) and SuperScript ® IV Reverse Transcriptase (Thermo Fisher Scientific, Waltham, MA, USA) according to manufacturer's instructions. Gene expression levels were assessed by dye-based qPCR using the primer pairs listed in Table 1, targeting GAPDH (glyceraldehyde-3fosphate dehydrogenase), IL6 (interleukin-6), IL8 (interleukin-8), or RSAD2 (viperin), and SYBR ® Green PCR Master Mix (Applied Biosystems, Waltham, MA, USA). Amplification and detection were performed on an ABI7700 (Thermo Fisher Scientific, Waltham, MA, USA) according to manufacturer's instructions. Alternatively, 5 µL of RNA was directly added to a mix for RT-qPCR, containing the primer and probes listed in Table 1, targeting the influenza matrix (M) segment, pan-species GAPDH, or chicken or duck IFNB (interferon-beta) and 4X TaqMan™ Fast Virus 1-Step Master Mix (Thermo Fisher Scientific, Waltham, MA, USA). 
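Returning briefly to the end-point titration mentioned under Viral Replication Kinetics, the sketch below converts well counts from a dilution series into a TCID50 titer using the Spearman-Karber estimator. The paper does not state which estimator was used, so the method and the plate layout here are assumptions for illustration only.

```python
def spearman_karber_tcid50_per_ml(log10_dilutions, infected, total, inoc_volume_ml):
    """Spearman-Karber 50% end-point estimate.

    log10_dilutions: log10 of each dilution (e.g. [-1, -2, ..., -8]), most concentrated first.
    infected, total: infected wells and wells tested at each dilution.
    Assumes the dilution series brackets 100% and 0% infection.
    """
    d = abs(log10_dilutions[0] - log10_dilutions[1])      # log10 dilution step
    s = sum(i / t for i, t in zip(infected, total))       # sum of infected fractions
    log10_endpoint = log10_dilutions[0] - d * (s - 0.5)   # 50% end-point dilution
    return 10 ** (-log10_endpoint) / inoc_volume_ml       # TCID50 per mL of supernatant

# Hypothetical plate: 8 ten-fold dilutions, 4 wells each, 100 uL inoculum per well
titer = spearman_karber_tcid50_per_ml(
    log10_dilutions=[-1, -2, -3, -4, -5, -6, -7, -8],
    infected=[4, 4, 4, 4, 2, 0, 0, 0],
    total=[4] * 8,
    inoc_volume_ml=0.1,
)
print(f"{titer:.1e} TCID50/mL")   # ~1.0e+06 for this example
```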
M segment copy number was quantified using a standard curve performed with the VetMAX TM AIV control kit (Thermo Fisher Scientific, Waltham, MA, USA). The following cycling program was used on an ABI7700: 5 min 50 • C, 20 s 95 • C, (3 s 95 • C, (duck IFNB: +30 s 54 • C), 31 s 60 • C) × 45 cycles. PCR efficiency, linear range, and sensitivity were determined for all primer sets using cDNA or PCR products. Fold changes were calculated using the 2 −∆∆CT method with GAPDH serving as a housekeeping gene for normalization and mean mock values as baseline reference. Detection of Von Willebrand Factor mRNA by RT-PCR Total RNA was isolated from chAEC and dAEC as described in Section 2.8. For cDNA synthesis, 100 ng of RNA was reverse-transcribed using random primers (Promega, Madison, WI, USA) and SuperScript ® IV RT. Expression of von Willebrand Factor (vWF) mRNA in dAEC was determined by PCR as described before [56]. VWF mRNA expression in chAEC was determined using primers listed in Table 1 and the AmpliTaq Gold TM DNA Polymerase kit (Thermo Fisher Scientific, Waltham, MA, USA) according to manufacturer's instructions. The following cycling program was used: 6 min 95 • C, (30 s 95 • C, 1 min 55 • C, 20 s 72 • C) × 30 cycles. PCR product sizes were analyzed by gel electrophoresis, and the amplicons were Sanger sequenced to confirm PCR specificity. In Ovo Inoculation and Immunohistochemical Analysis ECE (14-and 17-day-old) or embryonated duck eggs (EDE; 18-and 21-day-old) were inoculated with 10 3 TCID 50 of HPAIV RG-A/turkey/Turkey/1/05 via the allantoic route. Eggs were incubated in a humidified chamber at 37 • C for 24 h (14d ECE; 18d EDE) or 48 h (17d ECE; 21d EDE), after which they were candled and chilled for a minimum of 1 h. The chorio-allantoic membrane (CAM) was harvested, followed by decapitation of the embryo and immersion of all tissues in 10% neutral buffered formalin. After 2 weeks of fixation, tissues were decalcified for 4 days in 10% EDTA and embedded in paraffin. Thin (3 µm) sections were prepared for a hematoxylin and eosin (HE) staining and immunohistochemical (IHC) analysis. Formalin-fixed, paraffin-embedded (FFPE) sections were rehydrated, and antigens were retrieved by treatment with 0.1% protease XIV from Streptomyces griseus (Sigma-Aldrich, St. Louis, MO, USA) in PBS for 10 min at 37 • C. Endogenous peroxidase activity was blocked by treatment with 3% H 2 O 2 in PBS for 10 min at room temperature. Sections were incubated with 5 µg/mL anti-NP antibody or isotype control (mouse IgG2α; MAB003; R&D Systems, Minneapolis, MN, USA) for 1 h at room temperature, followed by 1 h incubation with 10 µg/mL detection antibody (HRP-coupled goat anti-mouse IgG2α (Bio-Rad, Hercules, CA, USA; Star133P)). Subsequently, sections were developed with 3amino-9-ethyl-carbazole (Sigma-Aldrich, St. Louis, MO, USA) in N,N-dimethylformamide (Honeywell Fluka, Charlotte, NC, USA) diluted in a final concentration of 0.0475 M of sodium acetate (NaAc, pH = 5) with 0.05% of H 2 O 2 for 10 min at room temperature and counterstained with hematoxylin. Glass coverslips were mounted using Kaiser's glycerol gelatin (VWR, Radnor, PA, USA). Pictures were taken on the Microscope Axio Imager.A2 (Zeiss, Oberkochen, Germany). White balance was adjusted with Adobe Photoshop 2021. Statistical Analysis Statistical analyses were performed as described in the figure legends of Figures 3 and 4. 
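For clarity, the 2^-ΔΔCt fold-change calculation and the baseline comparison described above can be written out as a short sketch: target Ct values are normalized to GAPDH within each sample (ΔCt), referenced to the mean mock ΔCt (ΔΔCt), converted to fold changes, and compared to the mock baseline with an unpaired t-test. The Ct values are made up, and scipy is used here purely as a stand-in for the statistics performed in GraphPad Prism.

```python
import statistics
from scipy import stats

def delta_ct(samples):
    """Normalize the target-gene Ct to the GAPDH Ct within each sample."""
    return [ct_target - ct_gapdh for ct_target, ct_gapdh in samples]

def fold_changes(virus_samples, mock_samples):
    """2^-ddCt fold changes relative to the mean mock baseline."""
    baseline = statistics.mean(delta_ct(mock_samples))        # mean mock dCt as reference
    return [2 ** -(dct - baseline) for dct in delta_ct(virus_samples)]

# Hypothetical (target Ct, GAPDH Ct) per replicate for one gene at one time point
mock = [(28.1, 18.0), (28.4, 18.2), (27.9, 17.9)]
virus = [(24.6, 18.1), (24.9, 18.3), (25.1, 18.0)]

fc = fold_changes(virus, mock)                                 # values > 1 mean upregulation
t_res = stats.ttest_ind(delta_ct(virus), delta_ct(mock))       # unpaired t-test vs. baseline
print(fc, t_res.pvalue)
```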
Additionally, upregulation of genes, as described in the main text regarding Figure 4, was defined by a statistically significant change from baseline mRNA expression as determined by unpaired t-tests. Analyses were performed using Graph Pad Prism 9 (GraphPad Software Inc., San Diego, CA, USA). Data points in graphs are depicted as mean ± standard deviation (SD) and consist of three independent experiments or as otherwise stated in the figure legends. A p-value < 0.05 was considered to be significant. Primary Avian Aortic Endothelial Cells Have Endothelial Cell Characteristics Embryonic endothelial cells were isolated from the ascending aortas of 18-day-old ECE and 21-day-old EDE as described previously [54,56]. Pekin duck embryos were used for all experiments because they are the domestic equivalent of mallards and show comparable pathogenesis, viral replication levels, and resistance to HPAIV-induced morbidity and mortality as wild ducks [43]. The aortic cells were passaged in endothelial-cell specific medium containing growth factors until they showed a bona fide endothelial cell phenotype and cuboidal morphology (Supplementary Figure S1). This required a minimum of 14 passages, from which point onward the cells were used for infection experiments until they reached replicative senescence after passage 25. Each preparation of AEC was tested for endothelial cell characteristics, using human endothelial cells, HUVEC or EA-hy, as positive controls.
The AEC formed vascular-like structures in a tube formation assay (Figure 1A) and took up acetylated low-density lipoprotein (Ac-LDL) via their scavenger receptors (Figure 1B,C), a characteristic of endothelial cells [66]. Furthermore, the AEC expressed mRNA coding for the endothelial cell-specific von Willebrand Factor protein (Figure 1D). (Fragment of the Figure 1 caption:) ...conjugated ac-LDL by chAEC (p15) and dAEC (p16) after 4 h of treatment. Human endothelial cells (EA-hy, p17) and human epithelial cells (H441, p19) were used as positive and negative controls, respectively. The scale bar represents 100 µm. (D) RT-PCR for vWF expression on chAEC (p16) and dAEC (p17). The left lane is the DNA size marker; bp = base pairs. '-RT' = samples where RNA was used as template as a control for genomic DNA detection. Duck AEC Mainly Express α2,3-Linked Sialic Acid Moieties The HA of AIVs has a preference for binding to α2,3-linked sialic acids (SA) [67,68] as opposed to human influenza A viruses, which bind preferentially to α2,6-linked SAs [69,70]. Lectin stainings were performed to determine which SA moieties are expressed on the surface of primary chAEC and dAEC. As shown in Figure 2, MAL-II lectins, which are specific for α2,3-linked SAs, strongly bound to chAEC and dAEC. The specificity of the MAL-II staining was confirmed by a reduction in signal upon sialidase treatment. SNA lectins did not bind chAEC, indicating the absence of α2,6-linked SAs, whereas dAEC showed a slight positivity for α2,6-linked SAs. HEK-293T cells were used as positive control for MAL-II and SNA lectin binding. The presence of α2,3-linked SAs and absence of α2,6-linked SAs corresponds to what has already been described for chAEC [54]. The presence of α2,3-linked SAs suggests that dAEC are eligible host cells for the initial step of the influenza A virus replication cycle, which is in accordance with our previous study in which dAEC were stained positively for NP upon inoculation with HPAIV at high MOI [56]. Productive Infection of HPAIV H5N1 in dAEC Duck AEC were previously shown to be susceptible to H5N1 HPAIVs following high MOI inoculation [56]. Here, we extended the previous analysis by comparing initial infection percentages upon inoculation with HPAIV H5N1 at an MOI of 1 between chAEC, dAEC, and primary embryonic fibroblast cultures (CEF and DEF) at 6 hpi (Figure 3A).
Embryonic fibroblasts were included to discern species-related differences from cell type-related differences. The GsGd-lineage H5N1 HPAIV isolate A/Vietnam/1203/04 was used as it replicates to high titers in Pekin ducks without infecting the endothelium [21]. CEF, chAEC, and dAEC showed significantly lower infection percentages than MDCK cells, a cell line highly susceptible to influenza A viruses. Duck AEC showed a trend towards lower infection percentages compared to chAEC, CEF, and DEF, albeit the differences were not statistically significant. No statistically significant differences were observed in the amount of virus produced upon the first cycle of H5N1 replication in all investigated cell types, as shown by viral RNA copy number and infectious titer in the supernatant (Figure 3B). Subsequently, to assess whether dAEC can sustain multi-cycle replication of HPAIV, replication of H5N1 HPAIV at low MOI was compared between chAEC and dAEC. H5N1 replicated to lower viral titers in dAEC than in chAEC, showing significant differences at 24 and 48 hpi (Figure 3C). This coincided with a delay in the onset of cytopathic effects in dAEC (Supplementary Figure S2). The multi-cycle replication of HPAIV in dAEC confirms the expression of proteases that can cleave and activate HPAIV HA. Together, these data indicate that dAEC are not inherently resistant to HPAIV infection, but duck endothelial cells might be more potent in limiting HPAIV infection than chicken endothelial cells. (B) CEF, DEF, chAEC, and dAEC were inoculated as described for panel A, and supernatants were harvested at the indicated time points.
Viral copy numbers were quantified by RT-qPCR on the matrix gene segment (left), and infectious virus titers were determined by endpoint titration in MDCK cells and expressed as TCID 50 /mL (right). Bars indicate the mean of three biological replicates, and the error bars represent the SD. Dotted line represents the limit of detection of the endpoint titration assay. (C) ChAEC and dAEC were inoculated with A/Vietnam/1203/04 H5N1 virus isolate at an MOI of 0.001. Supernatants were harvested at the indicated time points, and infectious virus titers were determined by endpoint titration in MDCK and expressed as TCID 50 /mL. Data are presented as mean ± SD from three independent experiments. Dotted line represents the limit of detection of the endpoint titration assay. Statistically significant differences were determined by one-way ANOVA followed up with individual unpaired t-tests. * p < 0.05; ** p < 0.01. Differential Host Responses in dAEC upon HPAIV Inoculation Compared to chAEC and DEF In chickens, H5N1 HPAIV infections are associated with excessive cytokine responses, which are hypothesized to contribute to the high morbidity and mortality [29,30]. Exuberant cytokine responses are infrequent in HPAIV-infected ducks, which primarily mount an antiviral response in the affected tissues (reviewed in [71,72]). The innate immune responses in AEC upon HPAIV A/Vietnam/1203/04 inoculation were determined to assess whether chicken and duck primary cultures respond differentially and whether these differences correlate with the reduced HPAIV replication in dAEC. Relative gene expression levels of a subset of immune genes were analyzed in chAEC, dAEC, CEF, and DEF at 6 and 12 hpi upon inoculation with HPAIV H5N1 at an MOI of 1. Only minor mRNA level fold changes were detected for the pro-inflammatory cytokines IL6 and IL8 ( Figure 4A), which have previously been shown to be upregulated in HPAIV-infected chickens but rarely in ducks [73]. Gene expression levels of IL6 were not markedly altered upon infection with H5N1 HPAIV in any cell type, but IL8 fold change values were higher in chicken than in duck cells. Although IL8 expression levels were unaltered in DEF, IL8 expression significantly decreased in dAEC at 6 and 12 hpi. Contrastingly, upregulation of IFNB, a type-I-interferon-encoding gene that orchestrates antiviral defenses, was the highest in dAEC. IFNB expression was increased significantly more at 12 hpi in dAEC than in chAEC ( Figure 4B). The IFNB upregulation in dAEC correlated with a significantly higher fold change value of the interferon stimulated gene (ISG) RSAD2, coding for the antiviral protein viperin, than in chAEC and in DEF ( Figure 4C). The data suggest that duck endothelial cells react to HPAIV infection in a weaker pro-inflammatory and stronger antiviral fashion than chicken endothelial cells and duck fibroblasts. Monolayers of CEF, DEF, chAEC, and dAEC were inoculated with A/Vietnam/1203/04 H5N1 virus isolate at an MOI of 1. The cells were harvested at 6 and 12 hpi and analyzed for gene expression differences as compared to mock-inoculated cells. Messenger RNA levels were determined by a nucleic acid dye-based (A,C) or primer/probe (B) RT-qPCR. Fold changes were calculated using the 2 −ΔΔCT method with GAPDH serving as a housekeeping gene for normalization. (C) contains data from the 12 hpi time point. Bars indicate three biological replicates, and the error bars represent the SD. 
Statistically significant differences were determined on log-transformed fold changes by one-way ANOVA, followed up with individual unpaired t-tests. Only intra-species or intra-cell type significances are depicted. * p < 0.05; ** p < 0.01. Endothelial Cells Are a Target of HPAIV Infection in Duck Embryonated Eggs To further study the susceptibility of duck endothelial cells to HPAIV infection, HPAIV tropism was assessed in duck embryos. We aimed to mimic the apical infection of endothelial cells following systemic dissemination of HPAIVs in vivo. This was achieved by inoculation of embryonated eggs in the allantoic cavity [74]. The allantoic route of inoculation results in replication of HPAIVs in the epithelial cell layers of the CAM. Subsequently, HPAIVs spread to the CAM vasculature and into the embryonic blood stream. ECE and EDE at intermediate (14-day-old ECE; 18-day-old EDE) and late (17-day-old ECE; 21-day-old EDE) gestational stages were inoculated with 10³ TCID50 of the GsGd-H5N1 HPAIV RG-A/turkey/Turkey/1/05. The embryo and CAM were harvested for further processing at 24 hpi (intermediate stage) or 48 hpi (late stage), upon which both species displayed ubiquitous subcutaneous hemorrhaging. Productive HPAIV infection was determined by immunohistochemical detection of NP. In both species, viral antigen was predominantly present in the epithelial cells of the CAM and in the endothelial cells of the CAM and embryonic tissues, including but not limited to the lungs, heart, liver, intestine, and kidney (Table 2). As an illustration, the endothelial cells of the lungs were positive for NP and showed evident histopathological changes in response to HPAIV infection (Figure 5). The developing avian lung contains parallel structures, the parabronchi, which consist of premature epithelial cells in tubular structures surrounded by mesenchyme (Figure 5A).
The vasculature is embedded within the mesenchyme and can be recognized by the thin layer of endothelium which surrounds a lumen with nucleated erythrocytes. The lungs of infected ECE and EDE showed hyperemia and increased optical empty space in the mesenchyme, indicative of edema (Figure 5B). In both species, viral antigen was mostly confined to the endothelial cell layers. Taken together, the overt infection of endothelial cells in many embryonic duck tissues, including the respiratory tract, shows that embryonic duck endothelial cells are susceptible to HPAIV. Discussion Terrestrial poultry show a pronounced endotheliotropism upon infection with HPAIVs, which is associated with a plethora of systemic fatal clinical manifestations such as edema, hemorrhage, and coagulopathy. Contrastingly, HPAIVs do not, in general, infect endothelial cells of wild and domestic ducks, and viral antigen is predominantly detected in parenchymal and epithelial tissues instead [46]. Concomitantly, clinical manifestations upon HPAIV inoculation are generally milder. The reason for the species-dependent endotheliotropism has remained elusive, partially due to the absence of robust endothelial cell culture methods for avian species other than chicken. We employed our previously established primary duck aortic endothelial cell cultures in combination with an in ovo setting to investigate the susceptibility of duck endothelial cells to HPAIVs. Here, we show that primary duck endothelial cells are susceptible to A/Vietnam/1203/04 and RG-A/turkey/Turkey/1/05, two H5N1 HPAIVs that cause severe disease in young Pekin ducks yet without endotheliotropism [75,76], upon direct inoculation in vitro and in ovo. However, primary duck endothelial cells showed a markedly different innate immune response than primary chicken endothelial cells, which was associated with reduced viral replication. While the cellular distribution of sialic acid moieties has been extensively studied in mammals, information on SA moiety expression in avian tissues is limited, especially regarding endothelial cells. Kuchipudi et al. reported that adult chicken kidney endothelium expresses solely α2,6-linked SA moieties, showing a striking absence of α2,3-linked SA moieties [77]. A similar expression pattern was observed in embryonic chicken lung endothelium [74]. Contrastingly, primary chAEC in this study and in previously published work [54] were detected by MAL-II lectin, which is specific for α2,3-linked SA moieties. However, in light of the limited information available on chicken endothelial cell-specific SA expression and contradicting results from FFPE-tissue lectin binding studies in general [77,78], a definitive conclusion regarding SA expression in chicken endothelial cells in vivo cannot be drawn. The results from the current study are in accordance with the high susceptibility of chicken endothelial cells in vivo to HPAIVs that have a preference for α2,3-linked SA moieties. Primary dAEC expressed both α2,6- and α2,3-linked SA moieties, as was reported for endothelial cells in adult mallard and Pekin ducks [77,79].
The absence of viral antigen in endothelial cells of mallard and domestic ducks upon HPAIV inoculation via the natural route has led to the suggestion that duck endothelial cells are refractory to HPAIV infection [46]. Although we detected a trend towards lower initial HPAIV infection percentages in dAEC, both in the present study using the strain A/Vietnam/1203/04 and previously using RG-A/turkey/Turkey/1/05 [56], the susceptibility of primary dAEC to HPAIVs suggests that duck endothelial cells are not inherently resistant to direct infection by HPAIVs. Additionally, dAEC sustained HPAIV multi-cycle replication, alluding to the presence of furin-like proteases. Endothelial cells were widely infected in duck embryos when those were inoculated with HPAIV via the allantoic route. Consequently, the absence of endotheliotropism of HPAIVs in wild and domestic ducks in vivo might be attributed to other factors, cellular or soluble, that are not accounted for in the current experimental system. For example, Short et al. have shown that human endothelial cells were not infected by H5N1 HPAIV when co-cultured with human epithelial cells, despite being susceptible to HPAIV infection in monoculture [80]. Epithelial cells are the first target of HPAIVs and might act as barrier that counteracts viral dissemination into the cardiovascular system or that indirectly alters the antiviral state of endothelial cells through paracrine signaling. Only one study, to our knowledge, has performed an in vivo investigation into the cellular tropism of HPAIVs (GsGd-H5N1) upon intravenous inoculation in domestic ducks, eliminating the prior targeting of epithelial cells [81]. No or minimal numbers of NP-positive endothelial cells were detected, indicating the resistance of duck endothelial cells to HPAIVs, even when apically targeted in vivo. However, the ducks were dissected at different days after inoculation than the inoculated chickens, which prevents direct comparison of endotheliotropism between the species. Therefore, further research on endotheliotropism upon intravenous HPAIV inoculation in ducks is necessary to establish whether duck endothelial cells are intrinsically resistant to infection in vivo. Here, embryonic aortic endothelial cells and embryonated eggs were used as model system to study HPAIV infections. Although embryonated eggs present an affordable and easily accessible source of material for primary cell isolation, their use is accompanied with some caveats. The immaturity of the embryonic endothelial cells, innate immune system, and structural features within the developing embryo might favor infection by HPAIVs. The innate immune system develops early during avian gestation, and 14-day-old ECE can mount a proper innate immune response [82]. Nevertheless, antiviral responses increase and mature during further development, which continues in the weeks post-hatch [83]. The infection of endothelial cells in duck embryos upon HPAIV inoculation via the allantoic route is in stark contrast with the previously mentioned data from intravenously inoculated adult ducks [81], which might allude to the immaturity of the embryonic tissues. The AEC in this study were harvested from late-stage embryos and cultured in growth factor rich medium that promotes endothelial cell differentiation. However, we cannot exclude that the differentiated AEC still possess an embryo-like phenotype. 
Additionally, endothelial cells are a heterogeneous population and are not equally targeted during HPAIV infection in chickens [16,19]. Thus, care should be taken before results obtained with one endothelial cell type are extrapolated to others. We previously compared the susceptibility to HPAIVs of duck aortic endothelial cells and endothelial progenitor cells from the bone marrow, which was comparable [56]. This suggests that our results are in part generalizable to endothelial cells from other sources. Once currently available protocols will be adapted to the isolation of endothelial cells from different tissues, the validation of our results in a wider range of avian endothelial cell types is desirable. The innate immune responses during HPAIV infections in chickens and ducks have been studied in vitro and in vivo as a means to explain the stark differences in HPAIV susceptibility and pathogenesis between these species (reviewed in [71,72]). Care needs to be taken when results from these studies are compared and summarized as they contain a plethora of different HPAIV strains, experimental methods, sample times, and often describe only one of the two species. However, the pro-inflammatory cytokine induction is generally stronger in chickens than in ducks following HPAIV infection [40,65,73]. Furthermore, excessive cytokine responses have been observed following infection with some but not all HPAIV strains in terrestrial poultry [29,30,84], whereas they are less apparent in ducks [29,65]. Ducks are thought to mount a quick and robust antiviral response upon HPAIV infection, consisting of the overexpression of interferons, ISGs, and patternrecognition-receptors (PRR) [65,[85][86][87], but this is not always reported [40]. A similar induction of antiviral responses has been observed in chickens inoculated with HPAIV, which is remarkably often similar or stronger than in duck counterparts [61,73,88] and could be due to higher levels of viral replication. However, the chicken antiviral responses are unable to ameliorate or clear HPAIV infection. Differences in innate immune responses between chickens and ducks are often attributed to the absence of RIG-I in chickens [89], which is the main cytoplasmic RNA-sensing PRR, but whose absence is partially compensated by signaling through the MDA5 receptor [90][91][92]. Here, pro-inflammatory responses were indeed slightly stronger in chicken cells, as shown by a modest upregulation of pro-inflammatory chemoattractant cytokine IL8 upon HPAIV inoculation in chAEC, as described before [54], whereas a decrease was observed in dAEC. Similarly, Tong et al. recently set out to understand the role of endothelial cells in cytokine induction in avian species [55]. In that study, they reported the upregulation of pro-inflammatory genes in chAEC upon HPAIV infection, whereas those genes were either not differentially expressed or downregulated in dAEC. The individual relative expression patterns for IL6 and IL8 do not align perfectly between our study and that of Tong et al. Downregulation of IL8 was not detected in dAEC by Tong et al. and, in contrast to our results, the induction of IL6 expression differed between chAEC and dAEC. Interestingly, Tong et al. showed that direct stimulation of the innate immune system by poly(I:C) treatment resulted in an even bigger difference in pro-inflammatory cytokine expression between chAEC and dAEC. 
In the present study, antiviral responses, indicated by IFNB and RSAD2 expression, were stronger in dAEC than chAEC, which correlated with reduced HPAIV replication. Tong et al. did report the overall trend of a stronger antiviral response in dAEC compared to chAEC, as seen by the upregulation of ISGs such as MX1 and RSAD2, but did not detect differences in relative expression levels of IFNB itself nor in HPAIV infection levels. The differences be-tween our two studies might be explained by different experimental methods, viral strains, and cell preparations. Additionally, we observed a cell type specific effect of interferon and ISG induction in duck cells, as endothelial cells showed stronger responses than embryonic fibroblasts. Based on the current results, we postulate that duck endothelial cells are more potent than other duck and chicken cell types in mounting antiviral responses, which might explain reduced and absent virus replication in vitro and in vivo, respectively. To allow for generalization of the current results in duck endothelial cells to HPAIVs in general, the panel of tested virus strains requires extension as only two viral strains were used to generate the current data. Moreover, further studies are warranted to clarify what other factors might limit the replication of HPAIV in duck endothelial cells and how endotheliotropism influences the immunopathology that accompanies HPAIV infections. Conclusions The present study provides insight into the susceptibility and innate immune responses of duck endothelial cells to HPAIV infection. Although dAEC were permissive to HPAIV infection, multi-cycle virus replication in dAEC was limited when compared to chAEC. Moreover, dAEC displayed a markedly different innate immune response than chAEC. These differences may contribute to the species-dependent endotheliotropism in vivo and consequently HPAIV pathogenesis in chickens and ducks. Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/v14010165/s1, Figure S1: Primary AEC have an endothelial cell morphology, Figure S2: Delayed onset of cytopathic effects in dAEC compared to chAEC upon inoculation with HPAIV. Institutional Review Board Statement: Ethical review and approval were waived for this study due to the fact that, in the European Union, embryos (regardless of the gestational stage) from avian species are not subjected to ethical regulations. Please see the "European directive on the protection of animals used for scientific purposes" (Article 1, point 3). https://eur-lex.europa.eu/legal-content/ EN/TXT/HTML/?uri=CELEX:32010L0063&amp;from=EN (accessed on 5 November 2021). Informed Consent Statement: Not Applicable. Data Availability Statement: Data are contained within the article or Supplementary Materials.
8,597.4
2022-01-01T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
The Project Title: The Virtual Laboratory and Quality of Education The up-to-date requirements to education determine the need in improving all possible forms and methods of teaching and learning process. Currently, the students show interest in the technical means of learning as compared to the traditional ones (books and laboratories) as the first allows eliminating the disadvantages of the traditional teaching and learning methods. The study of subjects through virtual laboratories allows speeding up the process of grasping of the learning materials. At present, the information and computing systems have the determining value in the research and education, production, and other spheres of human activities. Development of informatics and the use of computers in the scientific researches raises the question of revising of the basic concepts on presentation of scientific knowledge even in the already deeply developed and highly formalized areas; this places a premium on the problem of structuring this knowledge. Development of the multimedia teaching and research laboratories and their use in the engineering education is a promising trend in training for the high modern technologies, in preparing of highly qualified scientists and subject specialists, as well as upgrading qualifications of the engineering and technical personnel and staff of the enterprises of the industrial sector (Paluch et al. 2012, pp. 34–37). The aim of this work is to create virtual laboratories for technical disciplines and to incorporate them in the teaching and learning process that determines the relevance of the chosen theme. Introduction The need in using innovative methods in learning professional technical subjects in the educational institutions defines the up-to-dateness of this theme. In spite of the sufficient provision of the educational institutions with the technical media of teaching, often teachers cannot give students the full complex of the knowledge, which is required in the modern era. The electronic educational resources based on the modern computer threedimensional simulation of physical processes and phenomena are being realized in the form of multimedia teaching and research laboratories or virtual simulators. The argument for using new technologies of virtual simulators is an active introduction of the modern means of computer simulation and information technologies in the sphere of education, as a new transdisciplinary area. The main reasons for using the technologies of virtual simulators are as follows: • The existing laboratory stands and workshops are not sufficiently equipped with modern appliances, devices, and tools. • The majority of the laboratory stands and training workshops are put into effect after their withdrawal from the production, do not meet the modern requirements and outdated, and thus can distort the results of the experiments and serve as a potential source of danger to the students. • The laboratory work and stands require annual upgrading, which leads to additional financial cost. • The areas like construction material engineering or physical chemistry, in addition to the equipment, need consumable materials, raw materials, reagents, etc., the cost of which is sufficiently high. • The modern computer technologies allow to see the processes, which are difficult to see in the real world without the use of additional equipment, for example, due to the small size of the observed particles. 
• The virtual simulators allow to simulate the processes that are fundamentally impossible in the laboratory conditions. • Safety is an important advantage for using virtual laboratories in the cases of working with high voltages or chemicals. • Due to the inertia of the work or processes, it is difficult to carry out repeated analysis or verifications in some laboratory equipment during the allotted time. Considering the abovementioned facts, it is necessary to introduce a new, effective, and affordable teaching methodology, which would facilitate the solution of the following tasks: • Initiate sufficient interest among the audience, thus will increase activity and independence of their educational work. • Attract the attention of listeners, considering their psychological peculiarities, to improve the perception of educational material due to its multimedia character. • Provide full control on learning of the material by each student. • Facilitate the process of repetition and training in preparing for exams and other forms of knowledge evaluation. • Release teachers from the routine monitoring and counseling activities. • Use extracurricular time to study the design in the form of home assignments. Exactly from this point of view, the introduction of information technologies promotes optimal solution of the above problems and elimination of the number of shortcomings of the traditional method of teaching. The multimedia teaching and research laboratories, which are created on the computers, can help in solving these issues fully. Practical Application The amount of hours allocated for teaching of chemistry in the higher educational institutions of the Republic of Tajikistan is not sufficient to capture all the themes of the chemistry lessons. The multimedia means are regularly used in the lessons as innovation methods for accelerated education of students. To this end, the Branch Office of the Tajik Technological University in the city of Isfara uses multimedia technology as a modern method of teaching in all subjects and especially in chemistry. While teaching chemistry, it is efficient to use computer technologies in the classroom for teaching new materials (presentations for lectures), developing skills (educational testing), as well as carrying out practical works in chemistry. The purpose of the computer application in a chemistry class is to create didactic active environment conducive to productive cognitive activity in the process of learning new material and developing students' thinking. By implementing new technologies in the educational process, we give students the opportunity not only to learn the subject but also to operate a computer. Many tasks in the computer variant of the subject allow developing the creative abilities of students, looking at the subject from the other angle and expressing themselves in new activities. A definite positive side of introducing new technologies allows us to make a new step toward the future, where the computer is a means for realizing our capabilities and talents (Paluch 2015). When working with the multimedia technology, the students are actively involved in the cognitive activity from the beginning. During such training, they learn not only to acquire and to apply the knowledge but also to find the necessary learning tools and sources of information and to be able to work with this information. 
Use of Virtual Laboratories in the Chemistry Lessons The objectives for use of virtual laboratories in the lessons: • Create a bank of training modules that can be used in the lessons. • Implement the idea of individualization of learning in accordance with the speed, which is the most appropriate to each student. • Optimize the monitoring process to check students' knowledge. • Minimize the likelihood of the formation of students "inferiority complex". • Improve the quality of education. The extensive use of the animation and chemical modeling by using computer makes the teaching and learning process visual, understandable, and remembering. Not only the teacher can check the students' knowledge by using the test system, but also the student himself/herself can control the degree of grasping the material. The use of virtual tours greatly expands the students' horizons and facilitates the understanding of the essence of chemical production. We believe that the main advantage of the computer design in the chemistry class is its use when considering explosion and fire processes, reactions involving toxic substances and radioactive substances, in short, everything that makes a direct danger for the health of students. An Interactive Whiteboard as a Mean for Productive Learning of the Educational Material in the Chemistry Classes An interactive whiteboard is a touch-sensitive screen connected to a computer, which transmits the image from the projector to the board. It is enough to touch the surface of the board and to operate it. No doubt that an integral part of the chemistry is an experiment. In the traditional lessons, students do practical work, the purpose of which is to examine the properties of the substances with the help of observations. When conducting experiments, the students observe only the external effect of interaction and express the changes occurred with the substances in the form of equations, by using chemical formulas and mathematical symbols. Why do some chemical reactions occur and others do not? What happens to the atoms and molecules in the process of chemical reactions? To see this, one needs to look into a completely different world -a microworld, which is really closed. Not everyone has the ability to abstraction. So, in parallel with the demonstrations of experiments, we have decided to show with the help of graphics, animation, and sound effects, using electronic presentations and work on the interactive whiteboard how this world of atoms is arranged and what happens to the atoms and molecules in chemical reactions (Balanova 2013). An interactive whiteboard use is the matter of topical interest. The lessons conducted with the help of an interactive whiteboard are more productive compared to the traditional lessons. It is known that 87% of the information comes into the brain through the visual channel of perception (Norenkov and Zimin 2004). The use of a computer and interactive board opens up great opportunities: the brilliance and different effect of entry, movement, and exit of objects. With the help of graphics and animation, one can show how gradually a structural formula of the substance appears and how consistently transfer of complex reactions happens; it can display the mechanism of chemical reactions -in which chemical bonds are broken and are formed again -and at the same time how reactive molecules are aligned with each other; it is possible to show how the speed of the reaction takes place. 
Thus, an interactive whiteboard gives much greater opportunities for joint activities of the teacher and students compared to the traditional board: • A lesson with an interactive whiteboard is better provided with the use of visual methods and information, contributes to the increase of interest and attention in the class, and gives the opportunity to save study time, to check homework quickly and effectively, and to deepen the knowledge in the study of the chemical reactions by designing molecules on the interactive whiteboard and improving the knowledge on the types of chemical bonds. • At such type of lesson, you can return back to the previously received information. • The modern interactive whiteboard creates a special spirit of cooperation in the classroom. An interactive whiteboard is nice. There are "teacher-computer", "student-computer", and "teacher-student" interactions. Conducting lessons with the interactive whiteboard broadens the horizons of interaction: it provides the broad possibility for combining "teacher-computer-student" interaction. The lessons using ICT arouse a desire of the students to make presentations themselves and to show them to the classmates (Ivanov 2009, pp. 207-211). No reports and messages can withstand the presentations on the interactive whiteboard. Conclusion Thus, the effective application of virtual laboratories in education contributes both to the improvement of the quality of education and saving of the financial resources, as well as creating the safe and clean environment. Considering the above facts, it arises a need in introducing a new, effective, and affordable method of training -interactive learning by using virtual laboratories and interactive board. Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
2,765
2018-01-01T00:00:00.000
[ "Education", "Computer Science" ]
The X-ray Correlation Spectroscopy instrument at the Linac Coherent Light Source A description of the X-ray Correlation Spectroscopy instrument at the Linac Coherent Light Source is presented. Recent highlights illustrate the coherence properties of the source as well as some recent dynamics measurements and future directions. Introduction The Linac Coherent Light Source (LCLS), a US Department of Energy Office of Science user facility operated by Stanford University, achieved first light in 2009 (Emma et al., 2010). The X-ray Correlation Spectroscopy (XCS) instrument began operations in 2011; located in the Far Experimental Hall (FEH), XCS was the fifth LCLS instrument to become operational. In contrast with synchrotron storage ring sources, which provide high-brilliance beam with partial coherence (Grübel et al., 2008), LCLS provides pulsed transversely coherent hard X-rays with unprecedented flux and short pulse duration. These characteristics enable the investigation of dynamics in condensed matter by measuring time-resolved coherent scattering patterns (i.e. time-resolved speckles). This can be achieved by means of X-ray Photon Correlation Spectroscopy (XPCS) (Grübel et al., 2008), where the temporal evolution of speckle patterns can be quantified by calculating intensity autocorrelation functions and extracting typical relaxation times for specific length scales (Robert, 2007). The LCLS repetition rate, 120 Hz, limits the fastest measurable time scales to several tens of milliseconds (Stephenson et al., 2009). More elaborate schemes, requiring the measurement of the sum of two speckle patterns originating from a sequence of two X-ray pulses separated in time (Gutt et al., 2009), allow the measurement of much faster dynamics. In that case the accessible time scales are limited by the ability to generate two pulses separated in time by Δt and can potentially reach the femtosecond regime. The XCS instrument was designed to provide an optimum platform for taking advantage of the unique coherence properties of the LCLS (e.g. the possibility to access very large sample-detector distances) while still accommodating experiments using different X-ray techniques (small-angle X-ray scattering, diffraction, X-ray spectroscopy and imaging). The addition of an ultrafast laser system will not only allow 'standard' optical pump/X-ray probe experiments but will open new opportunities to use coherent scattering techniques in combination with ultrafast optical excitation. Instrument overview The XCS instrument operates in the hard X-ray regime (above 4 keV). A set of silicon mirrors (coated with silicon carbide), with an incidence angle of 1.32 mrad, is located in the front-end enclosure of the Near Experimental Hall (NEH) and feeds the hard X-ray beam to all LCLS hard X-ray hutches. These mirrors limit the maximum photon energy that can be delivered to the LCLS hard X-ray hutches to 25 keV. The first components of the XCS instrument are located in the X-ray Transport Tunnel, a 200 m-long tunnel connecting the Near and Far Experimental Halls. These include slits and diagnostics located 196 m upstream from the sample, as indicated in Fig. 1. In the following, we describe the specifications of key components of the XCS instrument. Monochromators. The XCS instrument has two distinct configurations. It can operate in the LCLS main line and take full advantage of the first-harmonic properties of LCLS and its various operation modes (SASE, two-color, etc.).
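Before detailing the beamline components further, the XPCS analysis mentioned in the Introduction can be made concrete with a short sketch: for a stack of speckle frames, the normalized intensity autocorrelation g2(τ) = ⟨I(t)I(t+τ)⟩/⟨I⟩² is computed per pixel within one q-bin and averaged, and the relaxation time is read off from its decay toward 1. This is a generic estimator run on synthetic data, not the analysis software used at the beamline.

```python
import numpy as np

def g2(frames, pixel_mask):
    """Normalized intensity autocorrelation g2(tau), averaged over masked pixels.

    frames     : array of shape (n_frames, ny, nx), one speckle pattern per pulse
    pixel_mask : boolean array (ny, nx) selecting the pixels of one q-bin
    """
    series = frames[:, pixel_mask].astype(float)        # (n_frames, n_pixels)
    mean_i = series.mean(axis=0)
    n = series.shape[0]
    out = np.empty(n - 1)
    for tau in range(1, n):
        corr = (series[:-tau] * series[tau:]).mean(axis=0)   # <I(t) I(t+tau)> per pixel
        out[tau - 1] = (corr / mean_i**2).mean()             # average over the q-bin
    return out

# Synthetic demonstration: pixel intensities fluctuating with a finite correlation time
rng = np.random.default_rng(0)
n_frames, ny, nx = 400, 16, 16
x = np.zeros((n_frames, ny, nx))
for t in range(1, n_frames):
    x[t] = 0.95 * x[t - 1] + rng.normal(scale=0.3, size=(ny, nx))   # AR(1) memory of ~20 frames
frames = 100.0 * np.exp(x)                      # positive, speckle-like intensities
mask = np.ones((ny, nx), dtype=bool)
print(g2(frames, mask)[:5])                     # starts above 1 and decays toward 1
```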
To operate in this 'pink beam' configuration, all components located downstream of the dashed line in Fig. 1 are translated to the LCLS main line. For experiments that require a monochromatic beam [i.e. a larger longitudinal coherence length ξ_l = λ²/Δλ = λ(E/ΔE)] but do not intend to scan broadly the incident energy, a custom-built (JJ X-ray, Denmark) large-offset Double-Crystal Monochromator (DCM) (Zhu et al., 2014) is located 44 m upstream of the sample. In this configuration all components located downstream of the dashed line are translated 600 mm horizontally as displayed in Fig. 1. The DCM operates with Si(111) crystals and provides a monochromaticity of ΔE/E = 1.4 × 10⁻⁴ at 8 keV. If more monochromaticity is required, a Si(511) Channel-Cut Monochromator (CCM) (Narayanan et al., 2008) operating in the vertical scattering geometry provides an energy resolution of ΔE/E = 8.9 × 10⁻⁶. Future plans include an upgrade of the DCM crystals to C*(111), which not only provides better energy resolution (ΔE/E = 5.3 × 10⁻⁵) but most importantly allows multiplexing with another instrument located downstream in the FEH (Feng et al., 2013, 2015; Zhu et al., 2014). A detailed discussion concerning the effect of the monochromaticity on the intensity and longitudinal coherence properties of a FEL SASE beam is provided by Lee et al. (2012). X-ray focusing lenses. The unfocused beam at XCS is typically 0.75 mm × 0.75 mm FWHM in size. Beryllium compound refractive lenses (RWTH, Aachen, Germany) allow focusing of the beam, and provide beam size control in one or two dimensions (Snigirev et al., 1996). The lenses are located 3.3 m upstream of the sample and can be adjusted ±150 mm along the incident beam axis. The minimum spot size was measured to be ∼2 μm × 2 μm (limited by the finite SASE bandwidth and imperfections in the optics). A one-dimensional focusing option allows delivery of a line focus for elongated samples or for applications such as grazing-incidence experiments. Pulse picker. A fast shutter is available for selecting single X-ray pulses from the LCLS pulse train on demand, as well as for reducing the repetition rate. It consists of a channel which is rotated back and forth in order to create a brief opening time. It can be used to create arbitrary pulse train time patterns provided that the pulse train structure has an average rate equal to or less than 10 Hz. Mirrors. Two silicon mirrors located 1.5 and 2 m upstream of the sample allow delivery of the beam with a vertical grazing angle (as, for example, is required for liquid surfaces). These can also reduce the third-harmonic content of the LCLS beam by operating above the critical angle for these energies. Diffractometer. A custom-built horizontal geometry four-circle diffractometer (Huber, Germany) is available and enables precise orientation of samples or sample environments such as vacuum chambers, gas/liquid injectors, etc. Its sphere of confusion is better than 20 μm. It can be used in conjunction with the auxiliary 2θ arm (FMB Oxford, UK) providing large sample-detector distances (4 and 7.5 m) and covering scattering angles up to 55° in the horizontal plane. Overview of the XCS instrument layout. Distances are indicated in meters from the center of the diffractometer.
S&D: slits and non-destructive intensity diagnostics; DCM: large-offset double-crystal monochromator; CCM: channel-cut monochromator; TT: time-tool measuring the arrival time of the optical laser with reference to the X-rays; M1/M2: silicon mirrors that can be used to deflect the beam in the vertical direction and can also provide harmonic rejection; L-IN: laser incoupling for the optical laser. Components located downstream of the dashed line can be translated into the main LCLS line and allow the XCS instrument to take advantage of the full power and properties of the fundamental. The sample at the XCS instrument is located approximately 420 m from the source. Additional diagnostics. The SASE process induces pulse-to-pulse fluctuations of the beam properties, such as pulse energy, duration, spatial profile, wavefront, temporal profile and spectral content. In situ pulse property monitoring is thus crucial for data interpretation. Multiple intensity monitors are installed at various locations along the instrument for pulse-to-pulse intensity normalization (Feng et al., 2011). The spatial profile of the beam can also be measured at various locations along the instrument using scintillating screens with high-resolution camera-lens combinations. Detectors. Several X-ray detectors are available and integrated with the XCS data acquisition system. These have various characteristics (pixel size, number of pixels, noise, frame rate and dynamic range) which are evaluated in order to identify the most suitable detector for specific experimental needs. Coherent X-ray experiments, for example, typically require small pixel size, very low noise, moderate dynamic range and a large number of pixels. This can be achieved with the 20 μm pixel size direct-illumination Princeton CCD, but at a very low frame rate. A new detector is currently being developed at LCLS with 55 μm pixels and low noise, but running at the 120 Hz full repetition rate (Dragone et al., 2014). For more information about the LCLS detectors, see Blaj et al. (2015). Split and delay. The XCS instrument has space allocated for instrumentation to generate double-pulse X-ray patterns with a controlled Δt typically below 1 ns. A split and delay prototype built by DESY (Hamburg, Germany) is currently installed and its performance is described elsewhere (Roseker et al., 2009). Other prototypes offering different beam properties are being evaluated or tested. Ultrafast laser capabilities will be added to the XCS instrument in 2015. These include the construction of a dedicated laser hutch in close proximity to XCS, a standard ultrafast laser system, and timing diagnostics; the specifications of each are listed below: Optical laser system. Core laser systems at the LCLS consist of an ultrashort-pulse Ti:sapphire oscillator, synchronized to the FEL, seeding a commercially available chirped-pulse amplifier producing 4 mJ at 40 fs. An additional four-pass amplifier, developed in-house, can boost the pulse energy to over 30 mJ. Wavelength conversion can cover a broad spectral range from 200 nm to 150 μm (∼1500 to 2 THz). A more thorough description of the optical laser capabilities at LCLS can be found in Minitti et al. (2015). Timing diagnostics. Typical phase locking between the accelerator and the laser system only holds the temporal jitter between the two sources to about 200 fs FWHM.
In order to take full advantage of the short pulses and reach pulse-length-limited time resolution, diagnostics to measure the relative arrival time between laser and X-ray pulses have been developed. These are based on the X-ray induced change in refractive index of a thin target probed by a chirped broadband white-light continuum pulse derived from the optical laser. The optical light transmission change is resolved by an optical spectrometer for each pulse. Typical target materials are silicon nitride (Si₃N₄) or Ce:YAG crystals of different thicknesses to accommodate different beam conditions. A summary of the XCS instrument parameters is given in Table 1. Highlights The XCS instrument focuses on measuring time-resolved coherent scattering patterns from condensed matter systems from which typical relaxation rates are deduced. These measurements take full advantage of the transverse coherence properties of the LCLS beam. Fig. 2 displays a typical single-shot speckle pattern from a static sample consisting of dried silica colloidal 150 nm diameter spheres (Kisker GmbH) with 8.3 keV X-rays. The detector is a CSPAD (Blaj et al., 2015). Figure 2 Single-shot speckle pattern measured at 8.3 keV from 150 nm colloidal spheres. The dark blue areas are gaps between the CSPAD tiled sensors. The central aperture allows the transmitted beam to pass through, and therefore does not require a beamstop. The rings around 0.01 Å⁻¹ are typical small-angle scattering form-factor features related to the size and shape of the colloidal particles. As observed, the speckles (coherent scattering pattern appearing as the grainy features decorating the structure rings) are well developed. In order to characterize the transverse coherence of the beam, a series of speckle patterns are measured with and without the XCS monochromator, in the SASE operation mode, and analyzed in terms of photon statistics to determine their contrast, i.e. a measure of the transverse coherence in the small-angle regime (Hruszkewycz et al., 2012; Lee et al., 2013; Lehmkühler et al., 2014). The intensities from a narrow iso-Q area consisting of an annulus centered at Q = 0 with a radius Q are histogrammed as displayed in Fig. 3 for Q = 0.0067 Å⁻¹. These are then modeled using the negative-binomial distribution function (Mandel, 1959; Goodman, 2007), where I is the number of photons, Ī is the mean number of photons in that area and M is the number of modes. M is related to the contrast of the speckle pattern by C = 1/√M. An example of the result of such analysis is displayed in Fig. 3 (inset). For that specific shot the fit to the experimental data yields a mean number Ī ≈ 5.1 photons and a mode number M = 2.75, corresponding to a contrast C = 0.6. The fit reproduces well the experimentally measured single-shot intensity distribution. Because the mean number of photons is large, one can also use a simpler formulation of the speckle contrast, C = σ/Ī, where σ is the standard deviation of the measured intensities. This analysis was performed on successive shots at the same Q = 0.0067 Å⁻¹. The results are displayed in Fig. 3. The observed contrast fluctuates between 0.6 and 0.8 with a mean contrast of 0.69. The shot-to-shot fluctuations of the contrast originate from the fine structures in the energy spectrum of the SASE beam, as simulated and described by Lee et al. (2012). The large degree of coherence of the LCLS beam makes it a suitable source to perform XPCS, measuring time-resolved speckle patterns from a sample.
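To make the contrast analysis above concrete, here is a minimal sketch (not the instrument's analysis code) of the two estimators quoted in the text: the moment-based contrast C = σ/Ī, and a maximum-likelihood fit of the mode number M using the standard negative-binomial photon-count model with C = 1/√M. The parameterization written in the comment is the textbook form of that distribution rather than a formula taken from this paper, and the input array of per-pixel photon counts is assumed.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def contrast_moment(counts):
    """Simple estimator C = sigma / I_bar, valid at high mean count."""
    counts = np.asarray(counts, dtype=float)
    return counts.std() / counts.mean()

def neg_log_likelihood(M, counts, Ibar):
    """Negative log-likelihood of the standard negative-binomial speckle model,
    P(I) = Gamma(I+M) / (Gamma(M) Gamma(I+1)) * (1 + M/Ibar)^(-I) * (1 + Ibar/M)^(-M)."""
    I = np.asarray(counts, dtype=float)
    logp = (gammaln(I + M) - gammaln(M) - gammaln(I + 1)
            - I * np.log1p(M / Ibar) - M * np.log1p(Ibar / M))
    return -logp.sum()

def contrast_mle(counts):
    """Fit the mode number M by maximum likelihood; the contrast is 1/sqrt(M)."""
    Ibar = float(np.mean(counts))
    res = minimize_scalar(neg_log_likelihood, bounds=(1e-3, 1e3),
                          args=(counts, Ibar), method="bounded")
    M = res.x
    return 1.0 / np.sqrt(M), M

# Example with numbers of the same order as quoted in the text
# (Ibar ~ 5.1 photons, M ~ 2.75, hence C ~ 0.6):
# counts = per-pixel photon counts in the iso-Q annulus at Q = 0.0067 1/Angstrom
# C_simple = contrast_moment(counts); C_fit, M_fit = contrast_mle(counts)
```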
As the sample presents some dynamics, speckle patterns fluctuate in time. By means of time-autocorrelation functions calculated from the speckle patterns, information on the characteristic relaxation times can be obtained (Robert, 2007), a measure of the normalized intermediate scattering function of the system. In some cases the sample can also undergo non-equilibrium dynamical phenomena, referred to as aging (Robert et al., 2006), from which clear signatures can be observed in a time-dependent analysis of the degree of correlation of speckle patterns. This was recently investigated at XCS by Carnis et al. (2014), where the relaxation and aging dynamics of thin polymer films were investigated. One should also note that with a FEL the spatial position of the beam jitters shot-to-shot. Because of this effect the contrast, usually referred to in second-order correlation functions (Robert, 2007), will be smaller than the single-shot speckle contrast, as discussed by Carnis et al. (2014). Time-resolved speckle spectroscopy at an FEL is intrinsically limited by the repetition rate of the source, as it relies on the time-correlation of two recorded coherently scattered patterns originating from two pulses separated in time. For the LCLS, the fastest dynamics that can be reached is of the order of 8.3 ms (i.e. the 120 Hz repetition rate). A natural way of accessing faster timescales is to increase the repetition rate of the source. This would require a completely different accelerator technology such as the one planned to be used at the European XFEL (Grübel et al., 2007), where series of X-ray bunches separated by a minimum of 220 ns will be generated. To reach even faster timescales down to the picosecond regime and below, other possibilities involve generating two sub-pulses with a time separation Δt between tens of femtoseconds up to a nanosecond as illustrated in Fig. 4. Each sub-pulse generates a speckle pattern. Current X-ray detectors are, however, only capable of measuring the sum of the scattered signal of both sub-pulses. If dynamics occurs on timescales of Δt, the summed speckle patterns will present a decay of contrast, as compared with the contrast of a single speckle pattern. By adjusting the time delay Δt between the two sub-pulses, one can extract the time evolution of the summed speckle pattern contrast, from which the normalized intermediate scattering function is obtained (Gutt et al., 2009), and therefore gain information about the underlying dynamics of the system. Figure 3 Single-shot speckle contrast measured at Q = 0.0067 Å⁻¹ for various consecutive shots. Inset: probability density function of intensity within part of the speckle pattern corresponding to a wavevector Q = 0.0067 Å⁻¹. The solid line represents the gamma distribution with number of modes M = 2.75 and average count rate Ī ≈ 5.1 photons. Figure 4 Schematic of the double-pulse scheme. A single pulse is split into two sub-pulses; one is delayed relative to the other by an increase in its pathlength. The sub-pulses, separated in time by Δt, are then redirected on a common trajectory to the sample. A typical pathlength difference of 1 mm corresponds to a time delay of 3 ps. Recent results from the commissioning of a split and delay prototype based on eight perfect Bragg crystals [including two thin crystals that can act as a beam splitter/recombiner (Roseker et al., 2009)] have shown the possibility to reach time delays of the order of a nanosecond.
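The double-pulse analysis described above reduces to two small calculations: converting the mechanical pathlength difference of the split-and-delay to a time delay, and recovering the intermediate scattering function from the contrast of the summed speckle pattern. The sketch below is illustrative only; it assumes the commonly used relation for two sub-pulses of equal intensity, β_sum(Δt) = (β_single/2)[1 + |f(Q, Δt)|²], where β_single is the single-pulse speckle contrast. That relation is not stated explicitly in this text, and the function and variable names are assumptions.

```python
import numpy as np

C_LIGHT = 299_792_458.0  # speed of light, m/s

def delay_from_pathlength(delta_L_mm):
    """Time delay between the two sub-pulses for a pathlength difference in mm."""
    return (delta_L_mm * 1e-3) / C_LIGHT  # seconds; 1 mm -> about 3.3 ps

def isf_from_summed_contrast(beta_sum, beta_single):
    """Recover |f(Q, dt)|^2 from the contrast of the summed two-pulse speckle
    pattern, assuming equal-intensity sub-pulses:
        beta_sum = (beta_single / 2) * (1 + |f|^2)
    so |f|^2 = 2 * beta_sum / beta_single - 1 (clipped to [0, 1])."""
    f2 = 2.0 * beta_sum / beta_single - 1.0
    return float(np.clip(f2, 0.0, 1.0))

# Example: a 1 mm pathlength difference gives ~3.3 ps of delay; if the summed-pattern
# contrast drops from beta_single = 0.7 to beta_sum = 0.5 at that delay, the
# normalized intermediate scattering function is |f|^2 ~ 0.43.
```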
Other technical developments involving split and delayed beams have also been investigated. This is particularly the case with a single-shot split and multiple-delay system, offering the possibility to probe ultrafast dynamics by means of X-ray pump/X-ray probe experiments, as recently demonstrated by David et al. (2015). Conclusion The XCS instrument takes full advantage of the large number of transversely coherent photons per pulse at the LCLS. XCS is a versatile tool for performing time-resolved coherent scattering experiments in the hard X-ray regime from which fundamental dynamics in condensed matter ordered and disordered systems can be explored. This can be achieved by means of XPCS for slow dynamics and will further be extended to ultrafast timescales by double-pulse experiments. More details about the XCS instrument can be found on the following website: http://lcls.slac.stanford.edu/xcs. Facility access LCLS instruments are open to academia, industry, government agencies and research institutes worldwide for scientific investigations. There are two calls for proposals per year and an external peer-review committee evaluates proposals based on scientific merit and instrument suitability. Access is without charge for users who intend to publish their results. Prospective users are encouraged to contact instrument staff members to learn more about the science and capabilities of the facility, and opportunities for collaboration.
4,134.2
2013-03-22T00:00:00.000
[ "Physics" ]
Evaluation of algorithms for Multi-Modality Whole Heart Segmentation: An open-access grand challenge Highlights • This work presents the methodologies and evaluation results for the WHS algorithms selected from the submissions to the Multi-Modality Whole Heart Segmentation (MM-WHS) challenge, in conjunction with MICCAI 2017.• This work introduces the related information to the challenge, discusses the results from the conventional methods and deep learning-based algorithms, and provides insights to the future research.• The challenge provides a fair and intuitive comparison framework for methods developed and being developed for WHS.• The challenge provides the training datasets with manually delineated ground truths and evaluation for an ongoing development of MM-WHS algorithms. Introduction According to the World Health Organization, cardiovascular diseases (CVDs) are the leading cause of death globally (Mendis et al., 2011).Medical imaging has revolutionized the modern medicine and healthcare, and the imaging and computing technologies become increasingly important for the diagnosis and treatments of CVDs.Computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), single photon emission computed tomography (SPECT), and ultrasound (US) have been used extensively for physiologic understanding and diagnostic purposes in cardiology (Kang et al., 2012).Among these, CT and MRI are particularly used to provide clear anatomical information of the heart.Cardiac MRI has the advantages of being free from ionizing radiation, acquiring images with great contrast between soft tissues and relatively high spatial resolutions (Nikolaou et al., 2011).On the other hand, cardiac CT is fast, low cost, and generally of high quality (Roberts et al., 2008). To quantify the morphological and pathological changes, it is commonly a prerequisite to segment the important structures from the cardiac medical images.Whole heart segmentation (WHS) aims to extract each of the individual whole heart substructures, including the left ventricle (LV), right ventricle (RV), left atrium (LA), right atrium (RA), myocardium of LV (Myo), ascending aorta (AO) or the whole aorta, and the pulmonary artery (PA) (Zhuang, 2013), as Fig. 1 shows.The applications of WHS are ample.The results can be used to directly compute the functional indices such as ejection fraction.Additionally, the geometrical information is useful in surgical guidance such as in radio-frequency ablation of the LA.However, the manual delineation of whole heart is labor-intensive and tedious, needing almost 8 hours for a single subject (Zhuang and Shen, 2016).Thus, automating the segmentation from multi-modality images, referred to as MM-WHS, is highly desired but still challenging, mainly due to the following reasons (Zhuang, 2013).First, the shape of the heart varies largely in different subjects or even for the same subject at different cardiac phases, especially for those with pathological and physiological changes.Second, the appearance and image quality can be variable.For example, the enhancement patterns of the CT images can vary significantly for different scanners or acquisition sessions.Also, motion artifacts, poor contrast-to-noise ratio and signal-to-noise ratio, commonly presented in the clinical data, can significantly deteriorate the image quality and consequently challenge the task. 
State-of-the-art for Whole Heart Segmentation In the last ten years, a variety of WHS techniques have been proposed for cardiac CT and MRI data. The detailed reviews of previously published algorithms can be found in Kang et al. (2012), Zhuang (2013) and Peng et al. (2016). Kang et al. (2012) reviewed several modalities and corresponding segmentation algorithms for the diagnosis and treatments of CVDs. They summarized the roles and characteristics of different modalities of cardiac imaging and the parameter correlation between them. In addition, they categorized the WHS techniques into four kinds, i.e., (1) boundary-driven techniques, (2) region-based techniques, (3) graph-cuts techniques, and (4) model fitting techniques. The advantages and disadvantages of each category were analyzed and summarized. Zhuang (2013) discussed the challenges and methodologies of the fully automatic WHS. Particularly, the work summarized two key techniques, i.e., the construction of prior models and the fitting procedure for segmentation propagation, for achieving this goal. Based on the types of prior models, the segmentation methods can be divided into two groups, namely the deformable model based methods and the atlas-based approaches; and the fitting procedure can be decomposed into three stages, including localizing the whole heart, initializing the substructures, and refining the boundary delineation. Thus, this review paper mainly analyzes the algorithms based on the classification of prior models and fitting algorithms for the WHS from different modality images. Peng et al. (2016) reviewed both the methodologies of WHS and the structural and functional indices of the heart for clinical assessments. In their work, the WHS approaches were classified into three categories, i.e., image-driven techniques, model-driven techniques, and direct estimation. The three review papers mentioned above mainly cover the publications before 2015. A collection of recent works not included by them are summarized in Table 1. Among these works, Zhuang et al. (2015) proposed an atlas ranking and selection scheme based on conditional entropy for the multi-atlas based WHS of CT. Zhou et al. (2017) developed a set of CT atlases labeled with 15 cardiac substructures. These atlases were then used for automatic WHS of CT via the multi-atlas segmentation (MAS) framework. Cai et al. (2017) developed a method with window width-level adjustment to pre-process CT data, which generates images with clear anatomical structures for WHS. They applied a Gaussian filter-based multi-resolution scheme to eliminate the discontinuity in the down-sampling decomposition for whole heart image registration. Zuluaga et al. (2013) developed a MAS scheme for both CT and MRI WHS. The proposed method ranked and selected optimal atlases based on locally normalised cross correlation. Pace et al.
(2015) proposed a patch-based interactive algorithm to extract the heart based on a manual initialization from experts.The method employs active learning to identify the areas that require user interaction.Zhuang and Shen (2016) developed a multi-modality MAS framework for WHS of cardiac MRI, which used a set of atlases built from both CT and MRI.The authors proposed modality invariant metrics for computing the global image similarity and the local similarity.The global image similarity was used to rank and select atlases, from the multi-modality atlas pool, for segmenting a target image; and the local similarity metrics were proposed for the patch-based label fusion, where a multi-scale patch strategy was developed to obtain a promising performance. In conclusion, WHS based on the MAS framework, referred to as MA-WHS, has been well researched in recent years.MAS segments an unknown target image by propagating and fusing the labels from multiple annotated atlases using registration.The performance relies on the registration algorithms for label propagation and the fusion strategy to combine the segmentation results from the multiple atlases.Both these two key steps are generally computationally expensive. Recently, a number of deep learning (DL)-based methods have shown great promise in medical image analysis.They have obtained superior performance in various imaging modalities and different clinical applications (Roth et al., 2014;Shen et al., 2017).For cardiac segmentation, Avendi et al. (2016) proposed a DL algorithm for LV segmentation.Ngo et al. (2017) trained multiple layers of deep belief network to localize the LV, and to define the endocardial and epicardial borders, followed by the distance regularised level set.Recently, Tan et al. (2018) designed a fully automated convolutional neural network (CNN) architecture for pixel-wise labeling of both the LV and RV with impressive performance.DL methods have potential of providing faster and more accurate segmentation, compared to the conventional approaches, such as the deformable model based segmentation and MAS method.However, little work has been reported to date using DL for WHS, probability due to the limitation of training data and complexity of the segmentation task. Table 2 summarizes the recent open access datasets for Motivation and Contribution Due to the above mentioned challenges, we organized the competition of MM-WHS, providing 120 multi-modality whole heart images for developing new WHS algorithms, as well as validating existing ones.We also presented a fair evaluation and comparison framework for participants.In total, twelve groups who submitted their results and methods were selected, and they all agreed to contribute to this work, a benchmark for WHS of two modalities, i.e., CT and MRI.In this work, we introduce the related information, elaborate on the methodologies of these selective submissions, discuss the results and provide insights to the future research. The rest of this paper is organised as follows.Section 2 provides details of the materials and evaluation framework.Section 3 introduces the evaluated methods for benchmarking.Section 4 presents the results, followed by discussions in Section 5. We conclude this work in Section 6. 
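As a concrete illustration of the label-fusion step of the MAS framework summarized earlier in this section (not code from any of the cited methods), the sketch below performs weighted voting over atlas label maps that have already been warped into the target space. The similarity weights would in practice come from a registration metric such as (local) normalised cross correlation or mutual information; here they are assumed as a precomputed input, and all names are illustrative.

```python
import numpy as np

def weighted_label_fusion(warped_labels, weights, num_classes):
    """Fuse labels propagated from multiple atlases into a target segmentation.

    warped_labels : (A, ...) integer array, one warped atlas label map per atlas
    weights       : (A,) or (A, ...) array of per-atlas (optionally per-voxel)
                    weights, e.g. derived from image similarity after registration
    num_classes   : number of labels (background plus heart substructures)
    Returns the fused integer label map of shape (...).
    """
    votes = np.zeros((num_classes,) + warped_labels.shape[1:], dtype=float)
    for a in range(warped_labels.shape[0]):
        for c in range(num_classes):
            votes[c] += weights[a] * (warped_labels[a] == c)
    return np.argmax(votes, axis=0)

# Majority voting is the special case of uniform weights, e.g. for 7 substructures
# plus background:
# fused = weighted_label_fusion(warped_labels, np.ones(len(warped_labels)), 8)
```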
Data Acquisition The cardiac CT/CTA data were acquired from two state-of-the-art 64-slice CT scanners (Philips Medical Systems, Netherlands) using a standard coronary CT angiography protocol at two sites in Shanghai, China. All the data cover the whole heart from the upper abdomen to the aortic arch. The in-plane resolution of the axial slices is 0.78 × 0.78 mm, and the average slice thickness is 1.60 mm. The cardiac MRI data were obtained from two hospitals in London, UK. One set of data were acquired from St. Thomas Hospital on a 1.5T Philips scanner (Philips Healthcare, Best, The Netherlands), and the other were from Royal Brompton Hospital on a Siemens Magnetom Avanto 1.5T scanner (Siemens Medical Systems, Erlangen, Germany). At both sites we used the 3D balanced steady-state free precession (b-SSFP) sequence for whole heart imaging, and realized free-breathing scans by enabling a navigator beam before data acquisition for each cardiac phase. The data were acquired at a resolution of around (1.6∼2) × (1.6∼2) × (2∼3.2) mm, and reconstructed to half of the acquisition resolution, i.e., about (0.8∼1) × (0.8∼1) × (1∼1.6) mm. Both the cardiac CT and cardiac MRI data were acquired in a real clinical environment. The pathologies of the patients cover a wide range of cardiac diseases, including myocardial infarction, atrial fibrillation, tricuspid regurgitation, aortic valve stenosis, Alagille syndrome, Williams syndrome, dilated cardiomyopathy, aortic coarctation, and tetralogy of Fallot. The subjects for MRI scans also include a small number of normal controls. All the CT and MRI data were anonymized in agreement with the local regional ethics committee before being released to the MM-WHS challenge. In total, we provided 120 multi-modality whole heart images from multiple sites, including 60 cardiac CT and 60 cardiac MRI. Note that the data were collected from clinical environments, so the image quality was variable. This enables assessment of the validity and robustness of the developed algorithms with representative clinical data, rather than with selected best-quality images. Definition and Gold Standard The WHS studied in this work aims to delineate and extract the seven substructures of the heart as separate individual structures (Zhuang, 2013). These seven structures include the following: (1) the LV blood cavity, also referred to as LV; (2) the RV blood cavity, also referred to as RV; (3) the LA blood cavity, also referred to as LA; (4) the RA blood cavity, also referred to as RA; (5) the myocardium of the LV (Myo) and the epicardium (Epi), defined as the epicardial surface of the LV; (6) the AO trunk from the aortic valve to the superior level of the atria, also referred to as AO; (7) the PA trunk from the pulmonary valve to the bifurcation point, also referred to as PA. The four blood pool cavities, i.e., LV, RV, LA and RA, are also referred to as the four chambers.
Manual labeling was adopted for generating the gold standard segmentation. The labelings were done by clinicians, or by students majoring in biomedical engineering or medical physics who were familiar with the whole heart anatomy, slice-by-slice using the ITK-SNAP software (Yushkevich et al., 2006). Each manual segmentation result was examined by a senior researcher specialized in cardiac imaging with more than five years of experience, and modifications were made if revision was necessary. Also, the sagittal and coronal views were visualised simultaneously to check the consistency and smoothness of the segmentation, although the manual delineation was mainly performed in the axial views. For each image, it takes approximately 6 to 10 hours for the observer to complete the manual segmentation of the whole heart. Evaluation Metrics We employed four widely used metrics to evaluate the accuracy of a segmentation result, including the Dice score (Kittler et al., 1998), Jaccard index (Jaccard, 1901), surface-to-surface distance (SD), and Hausdorff distance (HD). For WHS evaluation, we adopted the generalized versions of these metrics, normalized with respect to the size of the substructures. They are expected to provide more objective measurements (Crum et al., 2006; Zhuang, 2013). For each modality, the data were split into two sets, i.e., the training set (20 CT and 20 MRI) and the test set (40 CT and 40 MRI). For the training data, both the images and the corresponding gold standard were released to the participants for building, training and cross-validating their models. For the test data, only the CT and MRI images were released. Once the participants developed their algorithms, they could submit their segmentation results on the test data to the challenge moderators for a final independent evaluation. To make a fair comparison, the challenge organizers only allowed a maximum of two evaluations per algorithm. Participants Twelve algorithms (teams) were selected for this benchmark work. Nine of them provided results for both CT and MRI data, one experimented only on the CT data and two worked solely on the MRI data. All of the 12 teams agreed to include their results in this paper. To simplify the description below, we use the team abbreviations to refer both to the teams and to their corresponding methods and results. The evaluated methods are elaborated on in Section 3, and the key contributions of the teams are summarized in Table 3. Note that the three methods indicated with an asterisk (*) were submitted after the challenge deadline for performance ranking. Evaluated Methods In this section, we elaborate on the twelve benchmarked algorithms. Table 3 provides the summary for reference. Graz University of Technology (GUT) Payer et al.
( 2017) proposed a fully automatic whole heart segmentation, based on multi-label CNN and using volumetric kernels, which consists of two separate CNNs: one to localize the heart, referred to as localization CNN, and the other to segment the fine detail of the whole heart structure within a small region of interest (ROI), referred to as segmentation CNN.The localization CNN is designed to predict the approximate centre of the bounding box around all heart substructures, based on the U-Net (Ronneberger et al., 2015) and heatmap regression (Payer et al., 2016).A fixed physical size ROI is then cropped around the predicted center, ensuring that it can enclose all interested substructures of the heart.Within the cropped ROI, the multi-label segmentation CNN predicts the label of each pixel.In this method, the segmentation CNN works on high-resolution ROI, while the localization CNN works on the low resolution images.This two-step CNN pipeline helps to mitigate the intensive memory and runtime generally required by the volumetric kernels equipped 3D CNNs. University of Lubeck (UOL) Heinrich and Oster (2017) proposed a multi-atlas registration approach for WHS of MRI, as Fig. 2 shows.This method adopts a discrete registration, which can capture large shape variations across different scans (Heinrich et al., 2013a).Moreover, it can ensure the alignment of anatomical structures by using dense displacement sampling and graphical model-based optimization (Heinrich et al., 2013b).Due to the use of contrast-invariant features (Xu et al., 2016), the multi-atlas registration can implicitly deal with the challenging varying intensity distributions due to different acquisition protocols.Within this method, one can register all the training atlases to an unseen test image. The warped atlas label images are then combined by means of weighted label fusion.Finally, an edge-preserving smoothing of the generated probability maps is performed using the multi-label random walk algorithm, as implemented and parameterized in Heinrich and Blendowski (2016). KTH Royal Institute of Technology (KTH) Wang and Smedby ( 2017) propose an automatic WHS framework combined CNN with statistical shape priors.The additional shape information, also called shape context (Mahbod et al., 2018), is used to provide explicit 3D shape knowledge to the CNN.The method uses a random forest based landmark detection to detect the ROI.The statistical shape models are created using the segmentation masks of the 20 training CT images.The probability map is generated from three 2D U-Nets learned from the multi-view slices of the 3D training images.To estimate the shape of each subregion of heart, a hierarchical shape prior guided segmentation algorithm (Wang and Smedby, 2014) is then performed on the probability map.This shape information is represented using volumetric shape models, i.e., signed distance maps of the corresponding shapes.Finally, the estimated shape information is used as an extra channel, to train a new set of multi-view U-Nets for the final segmentation of whole heart. The Chinese University of Hong Kong, Method No. 1 (CUHK1) Yang et al. 
(2017b) apply a general and fully automatic framework based on a 3D fully convolutional network (FCN). The framework is reinforced in the following aspects. First, an initialization is achieved by inheriting the knowledge from a 3D convolutional network trained on the large-scale Sports-1M video dataset (Tran et al., 2015). Then, gradient flow is improved by shortening the back-propagation path and employing several auxiliary loss functions on the shallow layers of the network. This is to tackle the low efficiency and over-fitting issues that arise when directly training deep 3D FCNs, due to the gradient vanishing problem in shallow layers. Finally, the Dice similarity coefficient based loss function (Milletari et al., 2016) is included in a multi-class variant to balance the training for all classes. where S indicates the segmentation result. The differences between the reliable and unreliable regions are used to guide the reliability of the segmentation process, namely the higher the difference, the more reliable the segmentation. 3.6. The Chinese University of Hong Kong, Method No. 2 (CUHK2) Yang et al. (2017c) employ a 3D FCN for an end-to-end dense labeling, as Fig. 3 shows. The proposed network is coupled with several auxiliary loss functions in a deep supervision mechanism, to tackle the potential gradient vanishing problem and class imbalance in training. The network learns spatio-temporal knowledge from a large-scale video dataset, which is then transferred to initialize the shallow convolutional layers in the down-sampling path (Tran et al., 2015). For the class imbalance issue, a hybrid loss is proposed (Milletari et al., 2016), combining two complementary components: (1) a volume-size weighted cross-entropy loss (wCross) to preserve branchy details such as the PA trunk, and (2) a multi-class Dice similarity coefficient loss (mDSC) to encourage compact anatomy segmentation. Then, the proposed network can be well trained to simultaneously segment the different classes of heart substructures, and generate a segmentation in a dense but detail-preserving format. Southeast University (SEU) Yang et al. (2017a) develop a MAS-based method for WHS of CT images. The proposed method consists of the following major steps. Firstly, a ROI detection is performed on the atlas images and label images, which are downsampled and resized to crop and generate a heart mask. Then, an affine registration is used to globally align the target image with the atlas images, followed by a non-rigid registration to refine the alignment of local details. In addition, an atlas ranking step is applied by using mutual information as the similarity criterion, and those atlases with low similarity are discarded. A non-rigid registration is further performed by minimizing the dissimilarity within the heart substructures using the adaptive stochastic gradient descent method. Finally, the propagated labels are fused with different weights according to the similarities between the deformed atlases and the target image. 3.8. University of Tours (UT) Galisot et al.
(2017) propose an incremental and interactive WHS method, combining several local probabilistic atlases based on a topological graph. The training images are used to construct the probabilistic atlases, one for each of the substructures of the heart. The graph is used to encode the prior knowledge needed to incrementally extract different ROIs. The prior knowledge about the shape and intensity distributions of substructures is stored as features of the nodes of the graph. The spatial relationships between these anatomical structures are also learned and stored as the profile of the edges of the graph. In the case of multi-modality data, multiple graphs are constructed; for example, two graphs are built for the CT and MRI images, respectively. A pixel-wise classification method combining hidden Markov random fields is developed to integrate the probability map information. To correct the misclassifications, a post-correction is performed based on the Adaboost scheme. Shenzhen Institutes of Advanced Technology (SIAT) Tong et al. (2017) develop a deeply-supervised end-to-end 3D U-Net for fully automatic WHS. The training dataset is artificially augmented by considering each ROI of the heart substructures independently. To reduce false positives from the surrounding tissues, a 3D U-Net is first trained to coarsely detect and segment the whole heart structure. To take full advantage of multi-modality information so that features of different substructures could be better extracted, the cardiac CT and MRI data are fused for training. University of Bern, Method No. 1 (UB1*) The UB1* team (2018) design pixel-wise dilated residual networks, referred to as Bayesian VoxDRN, to segment the whole heart structures from 3D MRI images. It can be used to generate a semantic segmentation of an arbitrary-sized volume of data after training. Conventional FCN methods integrate multi-scale contextual information by reducing the spatial resolution via successive pooling and sub-sampling layers, for semantic segmentation. By contrast, the proposed method achieves the same goal using dilated convolution kernels, without decreasing the spatial resolution of the network output. Additionally, residual learning is incorporated as pixel-wise dilated residual modules to alleviate the degradation problem, and the WHS accuracy can be further improved by avoiding the gridding artifacts introduced by the dilation (Yu et al., 2017). University of Bern, Method No. 2 (UB2*) This method uses multi-scale pixel-wise fully convolutional DenseNets (MSVoxFCDN) for 3D WHS of MRI images, which can directly map a whole volume of data to its volume-wise labels after training. Multi-scale context and multi-scale deep supervision strategies are adopted to enhance feature learning. The deep neural network is an encoder (contracting path)-decoder (expansive path) architecture. The encoder is focused on feature learning, while the decoder is used to generate the segmentation results. Skip connections are employed to recover the spatial context lost in the down-sampling path. To further boost feature learning in the contracting path, multi-scale contextual information is incorporated. Two down-scaled branch classifiers are inserted into the network to alleviate the potential gradient vanishing problem. Thus, more efficient gradients can be back-propagated from the loss function to the shallow layers.
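Several of the networks above cite the Dice-based loss of Milletari et al. (2016) in a multi-class form, sometimes combined with a volume-weighted cross-entropy term. The following is a minimal PyTorch-style sketch of such a multi-class soft Dice loss, written for illustration only; it is not the implementation used by any of the benchmarked teams, and the tensor layout, smoothing constant, and the simple hybrid objective in the closing comment are assumptions.

```python
import torch
import torch.nn.functional as F

def multiclass_soft_dice_loss(logits, target, eps=1e-6):
    """Multi-class soft Dice loss.

    logits : (B, C, D, H, W) raw network outputs
    target : (B, D, H, W) integer labels in [0, C)
    Returns 1 - mean Dice over classes (lower is better).
    """
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)                        # sum over batch and spatial dims
    intersection = (probs * onehot).sum(dims)
    cardinality = probs.sum(dims) + onehot.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice_per_class.mean()

# A hybrid objective in the spirit of the wCross + mDSC combination described above
# could simply add a (possibly class-weighted) cross-entropy term:
# loss = multiclass_soft_dice_loss(logits, target) + F.cross_entropy(logits, target)
```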
University of Edinburgh (UOE*) Wang and Smedby (2017) develop a two-stage concatenated U-Net framework that simultaneously learns to detect a ROI of the heart and classifies pixels into different substructures without losing the original resolution.The first U-Net uses a down-sampled 3D volume to produce a coarse prediction of the pixel labels, which is then resampled to the original resolution.The architecture of the second U-Net is inspired by the SRCNN (Dong et al., 2016) with skipping connections and recursive units (Kim et al., 2016).It inputs a two-channel 4D volume, consisting of the output of the first U-Net and the original data.In the test phase, a dynamic-tile layer is introduced between the two U-Nets to crop a ROI from both the input and output volume of the first U-Net.This layer is removed when performing end-to-end training to simplify the implementation.Unlike the other U-Net based architecture, the proposed method can directly perform prediction on the images with their original resolution, thanks to the SRCNN-like network architecture. Results Table 4 and Table 5 present the quantitative results of the evaluated algorithms on CT and MRI dataset, respectively. For the CT data, the results are generally promising, and the best Dice score (0.91±0.09) was achieved by GUT, which is a DL-based algorithm with anatomical label configurations.The DL-based methods generally obtained better accuracies than the MAS-based approaches in terms of Jaccard, Dice, and SD metrics, though this conclusion was not applied when the HD metric is used.Particularly, one can find that the mean of HD from the two MAS methods was not worse than that of the other eight DL-based approaches. For MRI data, the best Dice score of the WHS (0.87 ± 0.04) was obtained by UB2 * , which is a DL-based method and a delayed submission; and the best HD (28.535 ± 13.220 mm) was achieved by UOL, a MAS-based algorithm.Here, the average accuracy of MAS (two teams) was better than that of the DL-based segmentation (nine teams) in all evaluation metrics.However, the performance across different DL methods could vary a lot, similar to the results from the CT experiment.For example, the top four DL methods, i.e., GUT, KTH, UB1 * and UB2 * , obtained comparable accuracy to that of UOL, but the other DL approaches could generate much poorer results. Fig. 4 shows the boxplots of the evaluated algorithms on CT data.One can see that they achieved relatively accurate segmentation for all substructures of the heart, except for the PA whose variability in terms of shape and appearance is notably greater.For GUT, KTH, CUHK1, UB1 * , and CUHK2, the delineation of PA is reasonably good with the mean Dice score larger than 0.8.Fig. 5 presents the boxplots on the MRI data.The five methods, i.e., UB2 * , UOL, UB1 * , GUT, and KTH, all demonstrate good Dice scores on the segmentation of four chambers and LV myocardium.Similar to the conclusion drawn from Table 4 and Table 5, the segmentation on the CT images is generally better than that on the MRI data as indicated by the quantitative evaluation metrics. Fig. 
6 shows the 3D visualization of the cases with the median and worst WHS Dice scores by the evaluated methods on the CT data.Most of the median cases look reasonablely good, though some contain patchy noise; and the worst cases require significant improvements.Specifically, UOE * median case contains significant amount of misclassification in AO, and parts of the LV are labeled as LA in the UOE * and SIAT median cases.In the worst cases, the CUHK1 and CUHK2 results do not have a complete shape of the RV; KTH and SIAT contain a large amount of misclassification, particularly in myocardium; UCF mistakes the RA as LV; UOE * only segments the LA, and UT generates a result with wrong orientation.Fig. 7 visualizes the median and worst results on MRI WHS.Compared with the CT results, even the median cases of MRI cases are poor.For example, the SIAT method could perform well on most of the CT cases, but failed to generate acceptable results for most of the MRI images, including the median case presented in the figure.The worst cases of UOE * , CUHK2 and UB1 miss at least one substructure, and UCF and SIAT results do not contain any complete substructure of the whole heart.In conclusion, the CT segmentation results look better than the MRI results, which is consistent with the quantitative results.Also, one can conclude from Fig. 6 and Fig. 7 that the resulting shape from the MAS-based methods looks more realistic, compared to the DL-based algorithms, even though the segmentation could sometimes be very poor or even a failure, such as the worst MRI case by UOL and the worst CT case by UT. Overall performance of the evaluated algorithms The mean Dice scores of the evaluated methods for MM-WHS are respectively 0.872 ± 0.087 (CT) and 0.824 ± 0.102 (MRI), and the best average Dices from one team are respectively 0.908 ± 0.086 (CT by GUT) and 0.874 ± 0.039 (MRI by UB2 * ).Table 4 and Table 5 provide the average numbers of the other evaluation metrics, for the different methodological categories and different imaging modalities.In general, the benchmarked algorithms obtain better WHS accuracies for CT than for MRI, using the four metrics.In addition, the mean Dice scores of MAS-based methods are 0.859 ± 0.108 (CT) and 0.844 ± 0.047 (MRI), and those of DL-based methods are 0.875 ± 0.083 (CT) and 0.820 ± 0.107 (MRI).DL-based WHS methods obtain better mean accuracies, but the MAS-based approaches tend to generate results with more realistic heart shapes. Furthermore, the segmentation accuracies reported for the four chambers are generally good, but the segmentation of the other substructures demonstrates more challenges.For example, one can see from Fig. 4 and Fig. 5 that in CT WHS the PA segmentation is much poorer compared to other substructures; in MRI WHS, the segmentation of myocardium, AO and PA appears to be more difficult.One reason could be that these regions have much larger variation in terms of shapes and image appearance across different scans.Particularly, the diverse pathologies can result in heterogeneous intensity of the myocardium and blood fluctuations to the great vessels.The other reason could be the large variation of manual delineation of boundaries for these regions, which results in more ambiguity for the training of learning-based algorithms and the generation of the gold standard. 
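For reference, the overlap metrics quoted throughout this results section reduce to simple set operations on the label masks. The sketch below is illustrative only; it is not the challenge's official evaluation code, which additionally uses generalized/normalized variants and surface-distance metrics, and the label codes in the usage comment are hypothetical.

```python
import numpy as np

def overlap_metrics(pred, gold, label):
    """Dice and Jaccard for one substructure label in two integer label maps."""
    p = (pred == label)
    g = (gold == label)
    inter = np.logical_and(p, g).sum()
    psum, gsum = p.sum(), g.sum()
    dice = 2.0 * inter / (psum + gsum) if (psum + gsum) > 0 else np.nan
    union = psum + gsum - inter
    jaccard = inter / union if union > 0 else np.nan
    return dice, jaccard

# Example with hypothetical label codes (the challenge data define their own codes):
# names = {1: "LV", 2: "RV", 3: "LA", 4: "RA", 5: "Myo", 6: "AO", 7: "PA"}
# scores = {n: overlap_metrics(pred, gold, code) for code, n in names.items()}
```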
MAS versus DL-based segmentation As Table 4 and Table 5 summarize, 9 out of the 11 benchmarked CT WHS methods and 8 out of the 10 MRI Method Strengths Limitations GUT -Combining localization and segmentation layers of the CNNs to reduce the requirements of memory and computation time.-Good segmentation performance for both CT and MRI. -The cropping of the fixed physical size ROI is required. UOL -The discrete registration can capture large shape variations across scans. -The regularization is used to obtain smooth surfaces that are important for mesh generation and motion or electrophysiological modelling. -Only tested on the MRI data. -The automatic cropping of ROI sometimes do not cover the whole heart. KTH -Combining shape context information with orthogonal U-Nets for more consistent segmentation in 3-D views.-Good segmentation performance, particularly for CT. -Potential of overfitting because the U-Nets rely much on the shape context channels.-Weighting factors of the shape context generation are determined empirically. CUHK1 -Pre-trained 3-D Network provides good initialization and reduces overfitting. -Auxiliary loss functions are used to promote gradient flow and ease the training procedure. -Tackling the class-imbalance problem using a multi-class Dice based metric. -The introduced hyperparameters need determining empirically.-Relatively poor performance in MRI WHS. UCF -Multi-planar information reinforce the segmentation along the three orthogonal planes. -Multiple 3-D CNNs require less memory compared to a 3-D CNN. -The softmax function in the last layer could cause information loss due to class normalization. CUHK2 -Coupling the 3-D FCN with transfer learning and deep supervision mechanism to tackle potential training difficulties caused by overfitting and vanishing gradient.-Enhance local contrast and reduce the image inhomogeneity. -Relatively poor performance in MRI WHS. SEU -Three-step multi-atlas image registration method is lightweight for computing resources. -The method can be easily deployed. -Only tested on the CT data. UT -The proposed incremental segmentation method is based on local atlases and allows users to perform partial and incremental segmentation. -The registration of MRI atlas can be inaccurate, and the evaluated segmentation accuracy is low. SIAT -Combining a 3-D U-Net with a ROI detection to alleviate the impact of surrounding tissues and reduce the computational complexity.-Fusing MRI and CT images to increase the training samples and take full advantage of multi-modality information so that features of different substructures can be better extracted. -Poor segmentation performance, particularly for MRI data. UB1* -The focal loss and Dice loss are well encapsulated into a complementary learning objective to segment both hard and easy classes.-An iterative switch training strategy is introduced to alternatively optimize a binary segmentation task and a multi-class segmentation task for a further accuracy improvement. -Late submission of the WHS results. -The clinical usage and usefulness of the uncertainty measurements are not clear. UB2* -Multi-scale context and multi-scale deep supervision are employed to enhance feature learning and to alleviate the potential gradient vanishing problem during training.-Reliable performance on the tested MR data. -Late submission of the WHS results. -Only tested on the MRI data. UOE* -The proposed two-stage U-Net framework can directly segment the images with their original resolution. 
-Late submission of the WHS results. -Poor performance, particularly for CT data. WHS algorithms are based on deep neural networks.In 600 general, the DL-based approaches can obtain good scores when the models have been successfully trained.However, tuning the parameters for a network to obtain the optimal performance can be difficult, as several DL-based methods reported poor results.This is also evident from Fig. 4 and Fig. 5 where some of the DL methods have very large interquartile ranges and outliers, and from the 3D visualization results presented in Fig. 6 and Fig. 7.In several cases, the shape of the heart from the segmentation results can be totally unrealistic, such as the worst CT case of UOE * , median and worst MRI cases of SIAT, worst MRI cases of CUHK1 and UCF. In general, the conventional methods, mainly based on MAS framework, can generate results with more realistic shapes, though their mean accuracies can be less compared to the well trained DL models.Particularly, in MRI WHS the MAS-based methods obtained better mean accuracies than the DL-based approaches, though only two MAS methods were submitted for comparisons.Notice that the WHS of MRI is generally considered more challenging compared to that of CT.Since the DL-based approaches performed much better in the CT WHS, one can expect the performance of MR WHS could be significantly improved by resorting to new DL technologies in the future. CT WHS versus MRI WHS The MRI WHS is generally more arduous than the CT WHS, which is confirmed by the results presented in this work.The mean generalized Dice score of CT WHS is evidently better than that of MRI WHS averaged from the benchmarked algorithms, namely 0.872 ± 0.087 (CT) versus 0.824 ± 0.102 (MRI).One can further confirm this by comparing the results for these two tasks in Table 4 and Table 5, as nine methods have been evaluated on both the CT and MRI test data, and the same algorithms generally obtain better accuracies for CT data.Similar conclusion can be also drawn for the individual substructures as well as for the whole heart, when one compares the boxplots of segmentation Dice scores between Fig. 4 and Fig. 5. Progress and challenges The MM-WHS challenge provides an open access dataset and ongoing evaluation framework for researchers, who can make full use of the open source data and evaluation platform to develop and compare their algorithms.Both the conventional methods and the new DL-based algorithms have made great progress shown in this paper.It is worth mentioning that the DL models with best performance have demonstrated potential of generating accurate and reliable WHS results, such as the methods from GUT, UB1 * and UB2 * , though they were trained using 40 training images (20 CT and 20 MRI).Nevertheless, there are limitations, particularly from the methodological point of view.Table 6 summarizes the advantages and potential limitations of the benchmarked works. 
WHS of MRI is more arduous.The average performance of the MRI WHS methods is not as good as that of the CT methods, concluded from the submissions.The challenges could mainly come from the low image quality and inconsistent appearance of the images, as well as the large shape variation of the heart which CT WHS also suffers from.Enlarging the size of training data is a commonly pursued means for improving the learning-based segmentation algorithms.However, availability of whole heart training images can be as challenging as the task itself.One potential solution is to use artificial training data, such as by means of data augmentation or image synthesis using generative adversarial networks (Goodfellow et al., 2014).Alternately, shape constraints can be incorporated into the training and prediction framework, which is particularly useful for the DL-based methods to avoid generating results of unrealistic shapes. Conclusion Knowledge of the detailed anatomy of the heart structure is clinically important as it is closely related to cardiac function and patient symptoms.Manual WHS is laborintensive and also suffers from poor reproducibility.A fully automated multi-modality WHS is therefore highly in demand.However, achieving this goal is still challenging, mainly due to the low quality of whole heart images, complex structure of the heart and large variation of the shape.This manuscript describes the MM-WHS challenge which provides 120 clinical MRI/ CT images, elaborates on the methodologies of twelve evaluated methods, and analyzes their evaluated results. The challenge provides the same training data and test dataset for all the submitted methods.Note that these data are also open to researchers in future.The evaluation has been performed by the organizers, blind to the participants for a fair comparison.The results show that WHS of CT has been more successful than that of MRI from the twelve submissions.For segmentation of the substructures, the four chambers generally are easy to segment from the submitted results.By contrast, the great vessels, including aorta and pulmonary artery, still need more efforts to achieve good results.For different methodologies, the DL-based methods could achieve high accuracy for the cases they succeed.They could also generate poor results with unrealistic shape, namely the performance can vary a lot.The conventional atlas-based approaches, either using segmentation propagation or probabilistic atlases, however generally perform stably, though they are not as widely used as the DL technology now.The hybrid methods, combining deep learning with prior information from either the multi-modality atlas or shape information of the heart substructures, should have potential and be worthy of future exploration. Figure 1 : Figure 1: Examples of cardiac images and WHS results: (a) displays the three orthogonal views of a cardiac CT image and its corresponding WHS result, (b) is from a cardiac MRI image and its WHS.LV: left ventricle; RV: right ventricle; LA: left atrium; RA: right atrium; Myo: myocardium of LV; AO: ascending aorta; PA: pulmonary artery. cardiac segmentation, which mainly focus on specific substructures of the heart.Radau et al. (2008); Suinesiaputra et al. (2011); Petitjean et al. (2015); Bernard et al. (2018) organized the challenges for segmenting the left, right or full ventricles.Moghari et al. 
(2016) organized a challenge for the segmentation of blood pool and myocardium from 3D MRI data.This work aims to offer pre-procedural planning of children with complex congenital heart disease.Tobon-Gomez et al. (2015);Karim et al. (2018) andZhao and Xiong (2018) provided data for benchmarking algorithms of LA or LA wall segmentation for patients suffering from atrial fibrillation. Figure 2 : Figure 2: Multi-atlas registration and label fusion with regularization proposed by Heinrich and Oster (2017). Figure 3 : Figure 3: A schematic illustration of the method developed by Yang et al. (2017c).Digits represent the number of feature volumes in each layer.Volume with dotted line is for concatenation. Figure 4 : Figure 4: Boxplot of Dice scores of the whole heart segmentation on CT dataset by the ten methods. Figure 5 : Figure 5: Boxplot of Dice scores of the whole heart segmentation on MRI dataset by the eleven methods. Figure 6 :Figure 7 : Figure 6: 3D visualization of the WHS results of the median and worse cases in the CT test dataset by the ten evaluated methods.The color bar indicates the correspondence of substructures.Note that the colors of Myo and LV in 3D visualization do not look exactly the same as the keys in the color bar, due to the 50% transparency setting for Myo rendering and the addition effect from two colors (LV and 50% Myo) for LV rendering, respectively. Table 1 : Summary of previous WHS methods for multi-modality images.Here, the abbreviations are as follows, PIS: patch-based interactive segmentation; FIMH: International Conference on Functional Imaging and Modeling of the Heart; MICCAI: International Conference on Medical Image Computing and Computer-assisted Intervention; MedPhys: Medical Physics; MedIA: Medical Image Analysis; RadiotherOncol: Radiotherapy and Oncology. Table 2 : Summary of the previous challenges related to cardiac segmentation from MICCAI society. Table 3 : Summary of submitted methods.Asterisk (*) indicates the results that were submitted after the challenge deadline. Table 4 : Results of the ten evaluated algorithms on CT dataset.SD: surface-to-surface distance; HD: Hausdorff Distance; DL: deep learningbased method; MAS: conventional method based on multi-atlas segmentation.Asterisk (*) indicates the results were submitted after the challenge deadline. Table 5 : Results of the eleven evaluated algorithms on MRI dataset.SD: surface-to-surface distance; HD: Hausdorff Distance; DL: deep learning-based method; MAS: conventional method based on multi-atlas segmentation.Asterisk (*) indicates the results were submitted after the challenge deadline. Table 6 : Summary of the advantages and limitations of the 12 benchmarked methods.
9,224.6
2019-02-21T00:00:00.000
[ "Computer Science" ]
Classification of rank 2 cluster varieties

We classify rank $2$ cluster varieties (those for which the span of the rows of the exchange matrix is $2$-dimensional) according to the deformation type of a generic fiber $U$ of their $\mathcal{X}$-spaces, as defined by Fock and Goncharov. Our approach is based on the work of Gross, Hacking, and Keel for cluster varieties and log Calabi-Yau surfaces. Call $U$ positive if $\dim[\Gamma(U,\mathcal{O}_U)] = \dim(U)$ (which equals $2$ in these rank $2$ cases). This is the condition for the GHK-construction to produce an additive basis of theta functions on $\Gamma(U,\mathcal{O}_U)$. We find that $U$ is positive and either finite-type or non-acyclic (in the usual cluster sense) if and only if the monodromy of the tropicalization $U^{trop}$ of $U$ is one of Kodaira's monodromies. In these cases we prove uniqueness results about the log Calabi-Yau surfaces whose tropicalization is $U^{trop}$. We also describe the action of the cluster modular group on $U^{trop}$ in the positive cases.

Introduction

[FG09] defines a class of schemes, called cluster varieties, whose rings of global regular functions are upper cluster algebras. [GHK13a] describes how to view cluster varieties as certain blowups of toric varieties. We review this description, as well as [GHK11]'s construction of the tropicalization of a log Calabi-Yau surface. We then use these ideas to give a classification of rank 2 cluster varieties (those for which the symplectic leaves of the X -space are 2-dimensional) and to describe their cluster modular groups. By a log Calabi-Yau surface or a Looijenga interior, we mean a surface U which can be realized as Y \ D, where Y is a smooth, projective, rational surface over an algebraically closed field k of characteristic 0, and the boundary D is a choice of nodal anti-canonical divisor in Y . D = D 1 + . . . + D n is either a cycle of smooth irreducible rational curves D i with normal crossings, or, if n = 1, D is an irreducible curve with one node. By a compactification of U , we mean such a pair (Y, D) ([GHK] calls these compactifications with "maximal boundary"). We call (Y, D) a Looijenga pair, as in [GHK11]. Every such U can be obtained by performing certain blowups on a toric surface, cf. Lemma 2.9.

1.1. Outline of the Paper. Cluster Varieties: §2 reviews [FG09]'s definition of cluster varieties and summarizes [GHK13a]'s description of cluster varieties as certain blowups of toric varieties (up to codimension 2). In particular, we review §5 of [GHK13a], which shows that log Calabi-Yau surfaces are roughly the same as fibers of rank 2 cluster X -varieties. Our classification of cluster varieties will be up to deformation classes of these associated log Calabi-Yau surfaces. In §2.6 and §2.7, we review [FG09]'s definitions of the cluster complex C and the cluster modular group Γ. The Tropicalization of U : In §3, we review [GHK11]'s construction of the tropicalization U trop of a log Calabi-Yau surface. U trop is homeomorphic to R 2 , but it has a natural integral linear structure that captures the intersection data of the boundary divisors. The integer points U trop (Z) ⊂ U trop generalize the cocharacter lattice N for toric varieties in that they correspond to multiples of boundary divisors for certain compactifications of U . U trop itself generalizes N R := N ⊗ R. The integral linear structure is singular at a point 0 ∈ U trop , and in §3.5 we examine the monodromy around this point. In §3.6, we discuss properties of lines in U trop .
For example, the monodromy in U trop may make it possible for lines to wrap around the origin and self-intersect. §3.7 introduces some automorphisms of U trop that we will see in §5 are induced by the action of Γ. In §3.8, we review some lemmas from [Man14] which will be useful for the classification in §4. §3.9 shows that, although U trop does not in general determine the deformation type of U , it does at least determine the charge of U , which is the number of "non-toric blowups" necessary to realize a compactification of U as a blowup of a toric variety. Classification: §4 offers several equivalent classifications of rank 2 cluster varieties, or rather, of the deformation types of the log Calabi-Yau surfaces U that arise as the fibers of cluster X -varieties. The characterizations are based on several different properties of these varieties, including (but not limited to): • The properties of the quivers associated to the cluster variety-e.g., Dynkin (finite-type), acyclic, or non-acyclic. • The space of global regular functions on U -e.g., all constant, or including some, all, or no cluster X -monomials. • The intersection data of the boundary D for a compactification of U -e.g., whether (D i ·D j ) is negative (semi)definite or not. We call the cases which are not negative semidefinite positive, as in [GHK11]. • The geometry of U trop , including the monodromy and properties of lines. • The intersection form Q on the lattice D ⊥ ⊂ A 1 (Y, Z) of curve classes which do not intersect any component of D. • The intersection of the cluster complex (a subset of X trop ) with U trop -e.g., some, all, or none of U trop . For example, we find that U corresponds to an "acyclic" cluster variety if and only if some straight lines in U trop do not wrap all the way around the origin. The cases where no lines wrap correspond to "finite-type" cluster varieties. We show that the inverse monodromies of U trop in these finitetype cases are Kodaira's monodromy matrices I n , II, III, and IV , from his classification of singular fibers in elliptic surfaces in [Kod63]. Similarly, the non-acyclic positive cases correspond to Kodaira's matrices I * n , II * , III * , and IV * -furthermore, the intersection form Q on D ⊥ here is of type D n+4 (n ≥ 0) or E n , n = 8, 7, or 6, respectively (cf. Table 1). The deformation types for these cases are uniquely determined by U trop , and we describe how to construct each of these cases explicitely. Cluster Modular Groups: [FG09] defines a certain group Γ of automorphisms of cluster varieties, called the cluster modular group. In §5 we explicitely describe the action of Γ on U trop in all the positive cases (cf. Table 3). This action is interesting because, in addition to capturing most of the relevant data about Γ, it preserves the scattering diagram which [GHK11] and [GHKK] use to construct canonical theta functions on the mirror. Symmetries of the scattering diagram induced by mutations were previously observed in Theorem 7 of [GP09], although they did not put this in the language of cluster varieties or describe the full groups of automorphisms induced in this way. 1.2. Acknowledgements. Most of this paper is based on part of the author's thesis, which was written while in graduate school at the University of Texas at Austin. I would like to thank my advisor, Sean Keel, for introducing me to these topics and for all his suggestions, insights, and support. 
Cluster Varieties as Blowups of Toric Varieties

In [FG09], Fock and Goncharov construct spaces called cluster varieties by gluing together algebraic tori via certain birational transformations called mutations. [GHK13a] interprets these mutations from the viewpoint of birational geometry, and thereby relates the log Calabi-Yau surfaces of [GHK11] to cluster varieties. This section will summarize some of the main ideas from [GHK13a].

2.1. Defining Cluster Varieties. The following construction is due to Fock and Goncharov [FG09]. Definition 2.1. A seed is a collection of data S = (N, I, E := {e i } i∈I , F, ·, · , {d i } i∈I ), where N is a finitely generated free Abelian group, I is a finite set, E is a basis for N indexed by I, F is a subset of I, ·, · is a skew-symmetric Q-valued bilinear form, and the d i 's are positive rational numbers called multipliers. We call e i a frozen vector if i ∈ F . The rank of a seed or of a cluster variety will mean the rank of ·, · . We define another bilinear form on N by (e i , e j ) := ǫ ij := d j ⟨e i , e j ⟩, and we require that ǫ ij ∈ Z for all i, j ∈ I. Let M = N * . Define maps p * 1 , p * 2 : N → M with kernels K i := ker(p * i ) and images N i := p * i (N ) (see [GHK13a] for the formulas), and let v i := p * 2 (e i ). For each i ∈ I, define a "modified multiplier" d ′ i by saying that v i is d ′ i times a primitive vector in M .

Remark 2.2. Given only the matrix (e i , e j ) and the set F , we can recover the rest of the data, up to a rescaling of ·, · and a corresponding rescaling of the d i 's. This rescaling does not affect the constructions below, and it is common to take the scaling out of the picture by assuming that the d i 's are relatively prime integers (although we do not make this assumption). Also, notice that ·, · and {d ′ i } together determine {d i }, so when describing a seed we may at times give {d ′ i } in place of {d i }.

Observations 2.3.
• K 1 is also equal to ker (v → v, · ), so ·, · induces a non-degenerate skew-symmetric form on N 1 . This also means that we could have equivalently defined the rank to be that of (·, ·). (Footnote: The construction of cluster varieties does not depend on the values of e i , e j or ǫ ij for i, j ∈ F , and so it is common to not include these coefficients in the data. When they are included in the data, as in [FG09] and [GHK13a], they are not typically required to be integers. However, as [GHK13a] points out, if these are not integers, then the image of p * i is not contained in M . [GHK13a] takes a slightly different fix to this (in which the ǫ ij with i, j ∈ F are again irrelevant), but it is essentially equivalent to our fix if we dropped the requirement that e i , e j = − e j , e i when i, j ∈ F .)
• Define another skew-symmetric bilinear form on N by [e i , e j ] := d i d j e i , e j . Then K 2 = ker (v → [·, v]), so [e i , e j ] induces a non-degenerate skew-symmetric form on N 2 . We can extend this to N 2 sat (the saturation in M of N 2 ), and after possibly rescaling [·, ·] (and adjusting the d i 's accordingly) we can identify this with the standard skew-symmetric form on N 2 sat with the induced orientation. We will denote this form and the induced symplectic form on N 2,R by (· ∧ ·). Here and in the future, R in the subscript means the lattice tensored with R.
• We note that the seed obtained from S by replacing ·, · with −[·, ·] and d i with d −1 i is the Langlands dual seed S ∨ described in [FG09]. Switching to S ∨ essentially has the effect of switching the roles of (and negating) p * 1 and p * 2 . We also note that p * 2 is the dual map to p * 1 .
• Since (·, e i ) = −d i e i , · , we see that im(p * 2 ) and im(v → v, · ) span the same subspace of M R . Thus, there is a canonical isomorphism N 2,R ∼ = N 1,R . We easily see that this is a symplectomorphism with respect to the symplectic forms induced by [·, ·] and ·, · . Given a seed S as above and a choice of non-frozen vector e j ∈ E, we can use a mutation to define a new seed µ j (S) := (N, (1) Mutation with respect to frozen vectors is not allowed. Note that although the bases change, the form ·, · does not, so K 1 and N sat 1 are invariant under mutation. The same is true for K 2 and N 2 sat , as can similarly be seen using the Langland's dual seed and [·, ·]-one can check that the procedure for obtaining S ∨ from S commutes with mutation. Given a lattice L and some v ∈ L * , we will denote by z v the corresponding monomial on T L := L ⊗ k * = Spec k[L * ] (more precisely, max-Spec of k[L * ]). Corresponding to a seed S, we can define a so-called seed X -torus X S := T M = Spec k[N ], and a seed A-torus A S := T N = Spec k[M ]. We define cluster monomials Remark 2.4. We are departing somewhat from a common convention. In place of M , other authors typically use the superlattice (M ) • ⊂ M ⊗ Q spanned over Z by vectors f i := d −1 i e * i . They then take A i := (z fi ) ∈ k[M • ]. It seems to this author that this significantly complicates the exposition and the formulas that follow, with little benefit, and so we do not follow this convention. For any j ∈ I, we have a birational morphism µ X j : X S → X µj (S) (called a cluster X -mutation) defined by Similarly, we can define a cluster A-mutation µ A j : Now, the cluster X -variety X is defined by using compositions of X -mutations to glue X S ′ to X S for every seed S ′ which is related to S by some sequence of mutations. Similarly for the cluster A-variety A, with A-tori and A-mutations. The cluster algebra is the subalgebra of k[M ] generated by the the cluster variables A i of every seed that we can get to by some sequence of mutations. In this context, the well-known Laurent phenomenon simply says that all the cluster variables are regular functions on A- [GHK13a] uses this observation to give a simple geometric proof of the Laurent phenomenon. The ring of all global regular functions on A is called the upper cluster algebra. On the other hand, the X i 's do not always extend to global functions on X . When a monomial on a seed torus (i.e., a monomial in the X i 's for a fixed seed) does extend to a global function on X , we call it a global monomial, as in [GHK13a]. 2.1.1. Quivers and Seeds. We now describe a standard way to represent the data of a seed with the data of a (decorated) quiver. Each seed vector e i corresponds to a vertex V i of the quiver. The number of arrows from V i to V j is equal to e i , e j , with a negative sign meaning that the arrows actually go from V j to V i . Each vertex V i is decorated with the number d i . Furthermore, the vertices corresponding to frozen vectors are boxed. Observe that all the data of the seed can be recovered from the quiver. Now, a seed is called acyclic if the corresponding quiver contains no directed paths that do not pass through any frozen (boxed) vertices. A cluster variety is called acyclic if any of the corresponding seeds are acyclic. It is easy to see that a seed S is acyclic if and only if there is some closed half-plane in N 2 which contains v i for every i ∈ I \ F . 2.2. The Geometric Interpretation. 
As in [GHK13a], for a lattice L with dual L * and with u ∈ L, ψ ∈ L * , define One can check that the mutations above satisfy The following Lemma, compiled from §3 of [GHK13a], is what leads to the nice geometric interpretations of mutations and cluster varieties. Lemma 2.5 ( [GHK13a]). Suppose that u is primitive in a lattice L. Let Σ be a fan in L with rays corresponding to u and −u. Recall that the toric variety T V (Σ) admits a P 1 fibration π with D u and D −u as sections, corresponding to the projection L → L/Z u . The mutation µ u,ψ,L is the birational map on T L ⊂ T V (Σ) coming from blowing up the "hypertorus" and then contracting the proper transforms of the fibers F of π which intersect this hypertorus. Furthermore, µ X j (and under certain conditions, µ A j ) preserve the centers of the blowups corresponding to µ X i (and, respectively, µ A i ) for each i = j. Thus, a cluster X -mutation (µ X j ) * corresponds to blowing up {X j = −1} ∩ D (·,ej) , followed by blowing down some fibers of a certain P 1 fibration, and repeating for a total of d ′ j times (since (·, e j ) is d ′ j times a primitive vector, and m (·,ej ),ej ,M = [m (·,ej )/d ′ j ,ej ,M ] d ′ j ). The new seed torus is only different from the old one in that it is missing the blown-down fibers of the initial P 1 fibration, but has gained the exceptional divisor from the final blowup (except for the lower-dimensional set of points where this exceptional divisor intersects a blown-down fiber, represented by p in Figure 2.1). Since the centers of the blowups corresponding to the other mutations have not changed, this shows that the cluster X -variety can be constructed (up to codimension 2) as follows: For any seed S, take a fan in M with rays generated by ±(·, e i ) for each i, and consider the corresponding toric variety. For each i ∈ I \ F , blow up the hypertorus {X i = −1} ∩ D (·,ei) d ′ i times, and then remove the first (d ′ i − 1) exceptional divisors. The cluster X variety is then the complement of the proper transform of the toric boundary. Remark 2.6. In this construction of X , the centers for the hypertori we blow up may intersect if (·, e i ) = (·, e j ) for some i = j, so some care must be taken regarding the ordering of the blowups. Fortunately, this issue only matters in codimension at least 2 (cf. [GHK13a] for more details). However, when we consider fibers of X below, it is possible that some special fibers will have discrepencies in codimension 1. We will use the notation X ft to denote that we are restricting to the variety constructed as above for some fixed ordering of the blowups, and keep in mind that while X \ X ft is codimension 2 in X , there may be special fibers of X whose intersection with X \ X ft is codimension 1 in the fiber. As we will see below, A is a torsor over what is perhaps the "most special" fiber of X . The failure of mutations to preserve the centers of blowups for A may be viewed as a consequence of such codimension 1 discrepancies in the special fiber. Remark 2.7. We have seen that codimension 2 issues arise as a result of missing points like p in Figure 2.1, and also as a result of reordering the blowups. There are also missing contractible complete subvarieties-the (d ′ j − 1) exceptional divisors we remove when applying (µ X j ) * . These issues are relatively unimportant, since they do not affect the sheaf of regular functions on X . When we are interested in X or its fibers up to these issues, we will say "up to irrelevant loci." 2.3. The Cluster Exact Sequence. 
Observe that for each seed S, there is a not necessarily exact sequence Here, M → K * 1 is the map dual to the inclusion K 1 ֒→ N . Tensoring with k * yields an exact sequence, and one can check (cf. Lemma 2.10 of [FG09]) that this sequence commutes with mutation. Thus, one obtains the exact sequence where H A := T K2 , and H X := T K * 1 . Let U := p 2 (A) ⊂ X . The sequence 1 → H A → A → U → 1 should be viewed as a generalization of the construction of toric varieties as quotients, with U being the generalization of the toric variety. 2 In fact, Section 4 of [GHK13a] shows that the ring of global sections of A is (under certain assumptions) the Cox ring of U. In this paper, we are more interested in the fibers of λ. Looijenga Interiors. §5 of [GHK13a] shows that Looijenga interiors (i.e., log Calabi-Yau surfaces), as defined in §1, are exactly the surfaces (up to irrelevant loci) which arise as fibers of λ| X ft for rank 2 cluster varieties. We explain this now. Definitions 2.8. For a Looijenga pair (Y, D) as in §1, we define a toric blowup to be a Looijenga pair ( Y , D) together with a birational map Y → Y which is a blowup at a nodal point of the boundary D, such that D is the preimage of D. Note that taking a toric blowup does not change the interior We also use the term toric blowup to refer to finite sequences of such blowups. By a non-toric blowup ( Y , D) → (Y, D), we will always mean a blowup Y → Y at a non-nodal point of the boundary D such that D is the proper transform of D. Let (Y , D) be a Looijenga pair where Y is a toric variety and D is the toric boundary. We say that a birational map Y → Y is a toric model of (Y, D) (or of U ) if it is a finite sequence of non-toric blowups. According to [GHK], all deformations of U come from sliding the non-toric blowup points along the divisors D i ⊂ D without ever moving them to the nodes of D. We call U positive if some deformation of U is affine. This is equivalent to saying that D supports an effective D-ample divisor, meaning a divisor whose intersection with each component of D is positive. We will always take the term D-ample to imply effective. See §4.3 for equivalent characterizations of U being positive. To see that Looijenga interiors are the same as fibers of λ| X ft for rank 2 cluster varieties, up to irrelevant loci, we will need the following lemma from [GHK13a]. Now, in light of Lemmas 2.9 and 2.10 and the description of X ft in §2.2, it is clear that for ·, · rank 2, every fiber of λ| X ft is a Looijenga interior, up to irrelevant loci. For the converse, we use the following: Construction 2.11. Following Construction 5.3 of [GHK13a], let U be a Looijenga interior. Choose a compactification (Y, D) admitting a toric model π : We can assume that the q i 's are distinct. We extend this to a set E := {u 1 , . . . , u s , u s+1 , . . . , u m } of not necessarily distinct primitive vectors generating N Y , and we choose positive integers d ′ s+1 , . . . , d ′ m . Now, let S be the seed with N freely generated by a set E = {e 1 , . . . , e m }, I = {1, . . . , m}, Using S to construct X , the interpretation of X -mutations from §2.2, together with Lemma 2.10, reveals that U is deformation equivalent to the generic fibers of λ, up to irrelevant loci. A bit more work shows that U is in fact isomorphic to some such a fiber, up to irrelevant loci. This construction shows that: GHK13a]). 
Up to irrelevant loci, every Looijenga interior can be identified with the generic fiber of some rank 2 cluster X -variety, and conversely, any generic fiber of a rank 2 cluster X -variety is a Looijenga pair. Example 2.13. Consider the case where Y is a cubic surface, obtained by blowing up 2 points on each boundary divisor of (Y ∼ = P 2 , D = D 1 + D 2 + D 3 ). We can take Then the fibers of the resulting X -variety X 1 correspond to the different possible choices of blowup points on the D i 's. The fiber U is very special, having four , then the fibers of the resulting X -variety X 2 include only the surfaces constructed by blowing up the same point twice on each D i and then removing the three resulting (−2)-curves. U is the fiber where the blowup points are colinear and so there is one remaining (−2)-curve. The deformation type of the fibers of X ft has only changed by the removal of certain (−2)-curves, i.e., by some irrelevant loci. Note that X ft 2 = X 2 , and that X 2 can be identified (after filling in the removed (−2)-curves) with a subfamily of X ft 1 whose fibers do not agree with those of X 1 in codimension 1. These examples are well-known: the former corresponds to the Teichmüller space of the fourpunctured sphere, while the latter corresponds to the Teichmüller space of the once-punctured torus (cf. §2.7 of [FG09]). Definition 2.14. Consider a seed S. Assume each e i , i ∈ I \ F , is primitive in N 1 . If i = j implies v i = v j , we call S minimal (this means that d ′ i is the total number of non-toric blowups taken on the divisor corresponding to v i ). On the other hand, if each d ′ i = 1, we will call S maximal. Two seeds S 1 and S 2 (along with the associated cluster varieties) will be called equivalent if the generic fibers of the corresponding X -varieties X 1 and X 2 are of the same deformation type, up to irrelevant loci (or equivalently, if the not necessarily generic fibers of X ft 1 and X ft 2 are of the same deformation type, up to irrelevant loci). Example 2.13 above demonstrates that we can often change the number of vectors in a seed without changing the equivalence class of the fibers. For example, consider a seed ′ := e i1 , e i2 . We say the pair is as in the partitions. The corresponding space X ′ is equivalent to the original X . By this method, we can show that: Proposition 2.15. Every rank 2 seed is equivalent to a minimal seed and to a maximal seed. Example 2.16. The first seed for the cubic surface in Example 2.13 is maximal, while the second seed is minimal. The Canonical Intersection Form. For S a maximal rank 2 seed and (Y, D) a corresponding Looijenga pair, [GHK13a] describes a natural way to identify thus inducing a canonical symmetric bilinear form Q on K 2 . This identification of K 2 with D ⊥ is as follows: an element v := a i e i of K 2 corresponds to a relation a i v i = 0 in N sat 2 , which we recall can be identified with N Y , where Y → Y is a toric model corresponding to S. Standard toric geometry says that this determines a unique curve class C v in π * [A 1 (Ȳ )] such that C v · D i = a j for each i, where the sum is over all j such that D vj = D i . So we can define an isomorphism ι : where E i is the exceptional divisor corresponding to mutating with respect to e i . Finally, for u 1 , u 2 ∈ K 2 , define Q(u 1 , u 2 ) = ι(u 1 ) · ι(u 2 ). We will see in §4 that D ⊥ together with this intersection pairing tells us quite a bit about the deformation type of U . 
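As a small numerical illustration of this pairing, the following sketch computes Q for the cubic-surface seed of Example 2.13, using the curve classes and intersection numbers spelled out in Example 2.17 below (classes of the form E 2 − E 1 , E 4 − E 3 , E 6 − E 5 and L − E 1 − E 3 − E 5 , with E i · E j = −δ ij , L · L = 1, L · E i = 0). The sketch only verifies negative definiteness and the discriminant; matching the lattice with D 4 , as claimed there, takes a little extra bookkeeping.

import numpy as np

# Basis of curve classes used here: (L, E1, ..., E6); intersection form diag(1, -1, ..., -1).
g = np.diag([1, -1, -1, -1, -1, -1, -1])

# Generators of D^perp for the cubic-surface seed (see Example 2.17):
# E2 - E1, E4 - E3, E6 - E5, L - E1 - E3 - E5.
classes = np.array([
    [0, -1, 1, 0, 0, 0, 0],
    [0, 0, 0, -1, 1, 0, 0],
    [0, 0, 0, 0, 0, -1, 1],
    [1, -1, 0, -1, 0, -1, 0],
])

gram = classes @ g @ classes.T                     # the form Q in this basis
print(gram)                                        # each generator has self-intersection -2
print(bool(np.all(np.linalg.eigvalsh(gram) < 0)))  # negative definite, as expected for positive U
print(round(abs(np.linalg.det(gram))))             # 4 = discriminant of the D4 root lattice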
In particular, [GHK13a] tells us that U is positive if and only if Q is negative definite. Recall that varying the fiber of X corresponds to changing the choices of non-toric blowup points on D. For some choices of blowup points, certain classes C in D ⊥ may be represented by effective curves. Let D ⊥ Eff ⊆ D ⊥ be the sublattice generated by the curve classes which are represented by an effective curve on some fiber.

Example 2.17. For the seed from Example 2.13, K 2 is generated by {e 2 − e 1 , e 4 − e 3 , e 6 − e 5 , e 1 + e 3 + e 5 }. The corresponding curves are (up to sign) E 2 − E 1 , E 4 − E 3 , E 6 − E 5 , and L − E 1 − E 3 − E 5 , where E i is the exceptional divisor of the blowup corresponding to e i , and L is a generic line in Y ∼ = P 2 . (Every rank 2 seed is equivalent to one with this primitivity condition because they all have Looijenga pairs as the fibers of their corresponding X ft .) Using E i · E j = −δ ij , L · L = 1, and L · E i = 0 for each i, one easily checks that this lattice has type D 4 . On the special fiber U, these four curve classes are effective, so D ⊥ Eff = D ⊥ .

2.5. Tropicalizations of Cluster Varieties. [FG09] describes tropicalizations A trop and X trop of the spaces A and X , respectively. Given a seed S, A trop can be canonically identified as an integral piecewise-linear manifold with N R,S , and the integral points A trop (Z) of the tropicalization are identified with N S . For a different seed µ j (S), the identification is related by the integral piecewise-linear function µ̄ j : N R → N R , where we use the overline to indicate that e j is mapped by the same piecewise-linear function as the other vectors, rather than getting a special treatment. Similarly for X trop and X trop (Z) using M R,S , M S , and the dual seed mutations. We will use the subscript S to indicate that we are equipping the tropical space with the vector space structure corresponding to the seed S. Our interest in this paper is primarily with the fibers U of λ. U trop can be canonically identified with N 2 ⊗ R = p * 2 (A trop ) ⊂ X trop . We will spend §3 analyzing U trop in the rank 2 cases. [GHK11] has shown that in these cases, U trop has a canonical integral linear structure which is closely related to the geometry of the compactifications (Y, D).

2.6. The Cluster Modular Group. A seed isomorphism h : S → S ′ is an isomorphism of the underlying lattices which respects all the seed data in the obvious way. This induces a cluster isomorphism. A seed transformation is a composition of seed mutations and seed isomorphisms, and a cluster transformation is a composition of cluster mutations and cluster isomorphisms (i.e., the corresponding maps on the varieties). [FG09] defines the cluster modular group Γ to be the group of cluster automorphisms of a base seed S, modulo trivial cluster automorphisms (those which are the identity on both A and X , hence on U). We also define an extended cluster modular group by allowing seed isomorphisms to reverse the sign of the skew-symmetric form on N . For example, for toric varieties, Γ can be thought of as the subgroup of SL(N ) which preserves the fan, whereas the extended group can be thought of as the subgroup of GL(N ) preserving the fan. As motivation, we note that Γ and its extended version induce automorphisms of U trop which preserve the canonical scattering diagram of [GHK11]. We will analyze this action on U trop in §5.

2.7. The Cluster Complex. A seed S with seed vectors e 1 , . . . , e n determines a cone C S ⊂ X trop S := M R,S given by e i ≥ 0 for all i.
The collection of all such cones in X for every seed mutation equivalent to S is called the cluster complex C. [GHKK] shows that C forms a fan in X . It is a particularly nice piece of the "scattering diagram" that they use for constructing canonical theta functions on the "mirror" to X . Note that a wall W i ⊂ e ⊥ i in some C S ⊂ C has a naturally associated vector e i := (e i , ·) ∈ e ⊥ i . The following is essentially a restatement of [FG09]'s Lemma 2.15, although our cluster complex is really the cone over their cluster complex. Recall that U trop S := N 2,R,S has a natural symplectic structure induced by [·, ·]. Proposition 2.18. Γ is the group of vector space isomorphisms g between X trop S and X trop S ′ , for some fixed S and varying S ′ mutation equivalent to S, which take C S to C S ′ , take the vectors e i,S to e i,S ′ , and restrict to a symplectomorphism from U trop S to U trop S ′ . Similarly for Γ, but with the symplectic form on U trop possibly being negated. Proof. The correspondence is as follows: if g(e i,S ) = e i,S ′ , then as an element of Γ we say g(e i,S ) = e i,S ′ , and vice versa. Note that g ∈ Γ is a trivial cluster automorphism if and only if the cluster isomorphism between S and S ′ is the identity map on the underlying lattice N , in which case the corresponding action on the cluster complex is also trivial. g| US being a symplectomorphism exactly means that it preserves the skew-symmetric pairing [·, ·], and therefore also the pairing ·, · which is part of the seed data. Note that the condition of e i,S mapping to e i,S ′ can be replaced with the condition that the indexing of the walls is preserved: knowing C S and the form (·, ·) on N is enough to determine the e i,S 's up to reordering. We could also use the v i,S 's in place of the e i,S 's. Remark 2.19. In [GHKK], the walls e ⊥ i together with the attached functions 1 + z (ei,·) form an "initial scattering diagram," which they use to produce the "consistent scattering diagram" that is central to their mirror construction. They show that the consistent scattering diagram does not depend on the choice of initial seed (up to certain transformations which they describe). Γ may therefore be viewed as the group of symmetries of the scattering diagram which preserve the cluster complex. In general, the consistent scattering diagram may contain multiple copies of the cluster complex, corresponding to different cluster structures on the same space. These are not related by elements of Γ, but Remark 1.14 in [GHK13b] predicts that these different cluster complexes are related by an action of a Weyl group W for the lattice K 2 and correspond to different cluster structures on the underlying varieties. As a corollary of this fact that the induced action of Γ on X trop (and in fact, on A trop too) preserves the scattering diagram, one concludes that the canonical theta functions constructed in [GHKK] are indeed Γ-equivariant, as predicted in [FG09]. In fact, [GHKK] predicts that the theta functions depend only on the underlying variety and not on cluster structure. This would mean that any automorphism of the variety must act equivariantly on the theta functions, even if it changes the cluster structure. In §5 we will describe the action of the cluster modular group on U trop . In many (conjecturally all) cases, every automorphism of U trop (preserving its canonical oriented integral linear structure described below) is induced by an element of the cluster modular group. 
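To make the combinatorial part of the seed data above concrete, the following is a minimal sketch of the standard Fomin-Zelevinsky mutation rule for the exchange matrix ǫ ij (frozen indices are simply never used as mutation directions). The example quiver is an assumption chosen only for illustration.

def mutate(eps, k):
    # Mutate the integer exchange matrix eps = (eps_ij) in direction k (0-based):
    # entries in row or column k are negated; otherwise
    # eps_ij -> eps_ij + sign(eps_ik) * max(eps_ik * eps_kj, 0).
    n = len(eps)
    new = [row[:] for row in eps]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                new[i][j] = -eps[i][j]
            else:
                s = (eps[i][k] > 0) - (eps[i][k] < 0)
                new[i][j] = eps[i][j] + s * max(eps[i][k] * eps[k][j], 0)
    return new

# A linear A_3-type quiver 1 -> 2 -> 3 (an acyclic example whose exchange matrix has rank 2).
eps = [[0, 1, 0],
       [-1, 0, 1],
       [0, -1, 0]]

once = mutate(eps, 1)       # mutating at the middle vertex creates an oriented 3-cycle
twice = mutate(once, 1)     # mutation in a fixed direction is an involution
assert twice == eps
print(once)                 # [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]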
U trop as an Integral Linear Manifold Recall that U denotes a log Calabi-Yau surface. This section examines U trop with its canonical integral linear structure defined in [GHK11]. 3.1. Some Generalities on Integral Linear Stuctures. A manifold B is said to be (oriented) integral linear if it admits charts to R n which have transition maps in SL n (Z). We allow B to have a set O of singular points of codimension at least 2, meaning that these integral linear charts only cover B ′ := B \ O. B ′ has a canonical set of integral points which come from using the charts to pull back Z n ⊂ R n . Our space of interest, B = U trop , will be homeomorphic to R 2 and will typically have a singular point at 0 (which we say is also an integral point). B ′ admits a flat affine connection, defined using the charts to pull back the standard flat connection on R n . Furthermore, pulling back along these charts give a local system Λ of integral tangent vectors on B ′ . We will be interested in the monodromy of Λ around O. 3.1.1. Integral Linear Functions. By a linear map ϕ : B 1 → B 2 of integral linear manifolds, we mean a continuous map such that for each pair of integral linear charts ψ i : 1 is linear in the usual sense. ϕ is integral linear if it also takes integral points to integral points. By an integral linear function, we will mean an integral linear map to R with its tautological integral linear structure. We note that to specify an integral linear structure on an integral piecewise linear manifold (i.e., a manifold where transition functions are integral piecewise linear), it suffices to identify which piecewise linear functions are actually linear. These functions can then be used to construct charts. It therefore also suffices (in dimension 2) to specify which piecewise-straight lines are straight, since (piecewise-)straight lines form the fibers of (piecewise-)linear functions. Notation 3.1. Given a toric model (Y, D) → (Y , D), let N be the cocharacter lattice corresponding to (Y , D) (contrary to §2's notation), and let Σ ⊂ N R be the corresponding fan. Σ has cyclically ordered rays ρ i , i = 1, . . . , n, with primitive generators v i , corresponding to boundary divisors D i ⊂ D and D i ⊂ D. Assume N R is oriented so that ρ i+1 is counterclockwise of ρ i . Let σ u,v denote the closed cone bounded by two vectors u, v, with u being the clockwise-most boundary ray. In particular, if u and v lie on the same ray, we define σ u,v to be just that ray. We may use variations of this notation, such as σ i,i+1 := σ vi,vi+1 and v ρ for the primitive generator of some arbitrary ray ρ with rational slope, but these variations should be clear from context. We now use (Y, D) to define an integral linear manifold U trop . As an integral piecewise-linear manifold, U trop is the same as N R , with 0 being a singular point and U trop (Z) := N being the integral points. Note that an integral Σ-piecewise linear (i.e., bending only on rays of Σ) function ϕ on U trop can be identified with a Weil divisor of Y via W ϕ := a 1 D 1 + . . . + a n D n , where a i = ϕ(v i ) ∈ Z. We define the integer linear structure of U trop by saying that a function ϕ on the interior of σ i−1,i ∪ σ i,i+1 6 is linear if it is Σ-piecewise linear and W ϕ · D i = 0. This last condition is (for n ≥ 2) equivalent to Remark 3.2. 
This construction of U trop naturally generalizes to higher dimensions, but the twodimensional case is special in that the linear structure on U trop is canonically determined by (Y, D) (it does not depend on the choice of toric model). This is evident from the following atlas for U trop (from [GHK11]): the chart on σ i−1,i ∪ σ i,i+1 takes v i −1 to (1, 0), v i to (0, 1), and v i+1 to (−1, −D 2 i ), and is linear in between. Furthermore, toric blowups and blowdowns do not affect the integral linear structure, so as the notation suggests, U trop and U trop (Z) depend only on the interior U . Example 3.3. If (Y, D) is toric, then U trop is just N R with its usual integral linear structure. This follows from the standard fact from toric geometry that i (C · D i )v i = 0 for any curve class C. Taking non-toric blowups changes the intersection numbers, resulting in a singularity at the origin. 6 We assume here that there are more than 3 rays in Σ, so that σ i−1,i ∪ σ i,i+1 is not all of N R . This assumption can always be achieved by taking toric blowups of (Y, D). Alternatively, it is easy to avoid this assumption, but the notation and exposition becomes more complicated. We will therefore continue to implicitely assume that there are enough rays for whatever we are trying to do. Remark 3.4. Recall from standard toric geometry that any primitive vector v ∈ N corresponds to a prime divisor D v supported on the boundary of some toric blowup of (Y , D), and a general vector kv with k ∈ Z ≥0 and v primitive corresponds to the divisor kD v . Two divisors on different toric blowups are identified if there is some common toric blowup on which their proper transforms are the same (equivalently, if they correspond to the same valuation on the function field). Since taking proper transforms under the toric model gives a bijection between boundary components of (Y, D) and boundary components of (Y , D) (and similarly for the boundary components of toric blowups), we see that points of U trop (Z) correspond to multiples of divisors on compactifications of U . 3.3. Another Construction of U trop . We now give another construction of the canonical integral linear structure, this time more closely related to the cluster picture. Given a seed S, consider the non-frozen seed vectors {e i } i∈I\F . Recall that v i := p * 2 (e i ) ∈ U trop := p * 2 (A trop ) ⊂ X trop (cf. §2.5). The integral linear structure on U trop agrees with that of the vector space U trop S (with the lattice N 2,S as the integral points) on the complement of the rays ρ i := R ≥0 v i i ∈ I \ F . By repeatedly mutating, this determines the integral linear structure everywhere. For yet another perspective, consider a line L in U trop S which crosses a ray ρ i as above. Viewed as a piecewise-straight line in U trop with its canonical integral linear structure, L will appear to be bending away from the origin when it crosses ρ i . Lines L which appear straight in U trop will appear to bend towards the origin in U trop S as follows: if u is a tangent vector to L on one side of ρ i which points towards ρ i , then on the other side, u − |u ∧ v i |v i will be a tangent vector pointing away from ρ i . Another way to state this perspective is that the "broken lines" (as in [GHK11] and [GHKK]) in U trop which are actually straight with respect to the canonical integral linear structure are exactly those which bend towards the origin as much as possible. 
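The chart just described can be read as a recursion: in the basis (v i−1 , v i ), the next ray is v i+1 = −v i−1 − (D 2 i ) v i , so the rays of U trop can be "developed" into R 2 directly from the cyclic list of self-intersection numbers. The following is a minimal sketch of this recursion (anticipating the monodromy discussion of §3.5); the two inputs are the toric pair (P 2 , D) and the cubic surface of Example 3.5.

import numpy as np

def develop(selfints, steps):
    # Develop rays of U^trop into R^2 from the self-intersections D_i^2,
    # using v_{i+1} = -v_{i-1} - (D_i^2) * v_i (the chart of Remark 3.2).
    # v[0] and v[1] correspond to D_1 and D_2; indices then repeat cyclically.
    v = [np.array([1, 0]), np.array([0, 1])]
    n = len(selfints)
    for i in range(1, steps - 1):
        v.append(-v[i - 1] - selfints[i % n] * v[i])
    return v

# Toric pair (P^2, D): D_i^2 = 1.  The rays close up after one loop (no singularity at 0).
print(develop([1, 1, 1], 6))     # v[3] == v[0] and v[4] == v[1]

# Cubic surface of Example 3.5: D_i^2 = -1.  After one loop the rays come back negated,
# reflecting monodromy -Id (the I_0^* case discussed in Section 4).
print(develop([-1, -1, -1], 6))  # v[3] == -v[0] and v[4] == -v[1]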
pulls back to a family of rays ρ j , j ∈ Z, projecting to ρ (we arbitrarily choose a ray in U trop 0 to be ρ 0 and then assign the other indices so that they increase as we go counterclockwise). Monodromy About the Origin. We now consider what happens when we parallel transport a tangent vector v in T p U trop counterclockwise around the origin. We use the embedding of a cone in the tangent spaces of its points (which are all identified via parallel transport in the cone), and we use the notation δ i := δ i ρD 1 ,ρD 2 . Example 3.7. Suppose Y → Y consists of a single non-toric blowup on, say, D 1 . Then δ 0 (v 1 ) = δ 1 (v 1 ) = (1, 0). However, δ 0 (v 2 ) = (0, 1) while δ 1 (v 2 ) = (1, 1). We can view parallel transporting counterclockwise around the origin as parallel transporting up one sheet on the developing map, and then the monodromy tells us how to write the transported vector in terms of δ 1 (v 1 ) and δ 1 (v 2 ). Thus, the monodromy is Similarly, the monodromy is in general given by µ = δ 1 (v 1 ) δ 1 (v 2 ) −1 with respect to the basis and developing map {δ 0 (v 1 ) = (1, 0), δ 0 (v 2 ) = (0, 1)}. We may view µ −k as a map U trop 0 → U trop 0 which lifts points up k sheets. Note that the monodromy determines U trop as an integral linear manifold: U trop is the quotient of U trop 0 by this Z-action. µ and µ −1 can always be factored into a product of unipotent matrices as follows: choose a toric model in which k i non-toric blowups are taken on the divisor D vi , for v 1 , . . . , v s ∈ N cyclically ordered counterclockwise. Then we have the factorization where µ −ki vi is given in an oriented unimodular basis (v i , v ′ i ) by the matrix 1 k i 0 1 . More generally, in a basis where v i = (a, b), the corresponding contribution to µ −1 is Now µ can of course be expressed as µ k1 v1 · · · µ ks vs . Alternatively (following from the fact that Aµ v A −1 = µ Av ), the monodromy matrix is given by the product µ = (µ ′ vs ) ks · · · (µ ′ v1 ) k1 of matrices of the form where (a 1 , b 1 ) := v 1 , and for i > 1, (a i , b i ) := (µ ′ vi−1 ) ki−1 · · · (µ ′ v1 ) k1 v i . This can be interpreted by saying that before we can apply the monodromy contribution corresponding to v i , we have to let the modifications we have made so far act on v i . We have that U trop is uniquely determined (as an integral linear manifold, up to isomorphism) by its monodromy, and that a factorization of the monodromy into unipotent elements with cyclically ordered eigenrays as above corresponds to a toric model for a Looijenga pair (up to deformation), and hence to a seed as in §2.4. By eigenray, we mean an eigenline with a chosen direction. 3.5.1. Mutations and Monodromy. We now describe the monodromy of U trop directly in terms of seed data. Use µ i,S to indicate that we are mutating a seed S with respect to a vector e i . We consider the induced map on N 2 , identified with N Y as in §2.4, which we denote by µ i,S . This is not hard to describe-it is given by Equation 1, with each e i replaced by v i := p * 2 (e i ), and (·, ·) replaced by the induced non-degenerate bilinear form (· ∧ ·) on N Y . Assume that the v i 's are positively ordered with respect to the orientation induced by this form. Now we observe that, in the notation of Equation where the product is taken over all i, with the v i 's being ordered counterclockwise as we move from right to left in the product. Note that the v i 's in this formula are not affected by the previous mutations! 
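The unipotent factorization described above is easy to experiment with numerically. For a primitive v = (a, b), the k-fold shear fixing the ray through v can be written in the standard basis as I + k [[−ab, a 2 ], [−b 2 , ab]], which conjugates to the matrix [[1, k], [0, 1]] in an oriented unimodular basis (v, v ′ ); the sign and ordering conventions used below are assumptions of the sketch. For the cubic surface of Example 3.5 (two non-toric blowups on each of the three toric boundary divisors of P 2 ), the product of the three shears is −Id, consistent with the I 0 * monodromy appearing in §4 and with the developing-map computation above.

import numpy as np

def shear(v, k):
    # Unipotent contribution of k non-toric blowups on the divisor in the primitive
    # direction v = (a, b): the k-fold shear fixing the ray through v.
    a, b = v
    return np.eye(2, dtype=int) + k * np.array([[-a * b, a * a], [-b * b, a * b]])

# Cubic surface: toric model (P^2, D) with two blowups on each boundary divisor;
# the blowup directions are the rays of the fan of P^2.
rays = [(1, 0), (0, 1), (-1, -1)]
m = np.eye(2, dtype=int)
for v in rays:          # counterclockwise order of the rays; the first factor ends up leftmost
    m = m @ shear(v, 2)
print(m)                # [[-1, 0], [0, -1]], i.e. -Id: trace -2, the I_0^* case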
Alternatively, by Equation 6, we have µ = µ −2 n,S n • µ −2 n−1,S n−1 • · · · • µ −2 1,S 1 , where S 1 := S, and S k := µ −2 k−1,S k−1 (S k−1 ). That is, we apply the inverse mutation twice with respect to one vector, then twice with respect to the next vector in the new seed, and so on. This straightforward way to compute the monodromy is potentially useful because in §4 we classify cluster varieties in terms of their monodromies (among other things).

3.6. Lines in U trop . For us, a line L in U trop will simply mean the image of a linear map L : R → U trop 0 (we abuse notation by letting L denote the map and its image). A line together with such a choice of linear map will be called a parametrized line. The signed lattice distance of a parametrized line L from the origin is given by the skew-form L(t) ∧ L ′ (t), where we use the canonical identification of the vector from 0 to L(t) with a vector in T L(t) . Note that the lattice distance does not depend on t. We will write L >0 to denote that a line L has positive lattice distance from the origin (i.e., goes counterclockwise about the origin), or L <0 to denote that it has negative lattice distance from the origin. We will say that a parametrized line L goes to infinity parallel to q if, for any open cone σ ∋ q, there is some t σ ∈ R such that t > t σ implies L(t) ∈ σ, L ′ (t) = q under parallel transport in σ. Similarly for coming from infinity parallel to q, with t > t σ replaced by t < t σ and L ′ (t) = q replaced with −L ′ (t) = q. We let L(∞) and L(−∞) denote the directions in which L goes to and comes from infinity. We use the subscript q to indicate that a line L goes to infinity parallel to q. For example, L >0 q denotes a line which goes to infinity parallel to q with the origin on its left. We say that an unparametrized line goes to infinity parallel to q if it admits a parametrization which does. In general, a line need not go to infinity at all. In fact, one characterization of U being positive is that every line both goes to and comes from infinity, cf. §4.3. We note that the monodromy about the origin in U trop allows lines to wrap around the origin and self-intersect. We say that a line L wraps if it intersects every ray, except possibly one, at least once. It wraps k times if it hits each ray at least k times, except possibly for one ray which it may hit only (k − 1) times.

Example 3.10. If (Y, D) is the cubic surface introduced in Example 3.5, then for any ray ρ ⊂ U trop , U trop \ ρ is isomorphic as an integral linear manifold to an open half-plane. Both ends of any line will go to infinity in the same direction. If we now make a non-toric blowup on some D ρq , then in the new integral linear manifold, any line will self-intersect unless both ends go to infinity parallel to q.

3.7. Some Integral Linear Automorphisms of U trop . Assume that U is positive, so lines go to infinity on both ends. Given a point q in U trop , one can define two operations ν + (q) and ν − (q). Intuitively, both operations correspond to "negating" a vector in the integral linear manifold, but using different choices of charts. These clearly lift to maps ν + and ν − on the universal cover of U trop 0 , which may be viewed as rotation by 180° clockwise or counterclockwise, respectively. In fact, ν ± are integral linear automorphisms of U trop . Proof. This follows from ν ± being integral linear, which is clear since 180° rotations of R 2 are integral linear. We will see in Proposition 5.2 that ν ± are induced by Γ.

3.8. Useful Facts from [Man14]. The following is a restatement of Lemma 3.7 and Corollary 3.8 from [Man14]: Lemma 3.12.
Let L ⊂ U trop be a line which does not wrap. Let u and v be the directions in which L goes to infinity. Let σ L ⊂ U trop be the closed cone which is bounded by u and v and which does not contain any points of L. Then some compactification of U admits a toric model whose non-toric blowups are all along divisors corresponding to rays in σ L . Furthermore, if one restricts to σ L \ ρ u or σ L \ ρ v , then the choices of blowup points here is uniquely determined. [GHK11] constructs a family V → Spec B mirror to U which admits a canonical B-module basis of theta functions {ϑ q } q∈U trop (Z) . [GHK] shows that if U is positive, then it can be realized as a fiber of V, thus giving theta functions on U . Recall from §2.1 that a global monomial is regular function on X whose restriction to some seed X -torus is a monomial. We also call the restriction to a fiber U ⊂ X of such a function a global monomial. §3.6 of [Man14] observes the following (phrased differently): Lemma 3.13. Take σ L as in Lemma 3.12. For any q ∈ σ L , ϑ q is a global monomial. Assume U is positive, and let V denote a generic fiber of the mirror V. 3.9. The Tropicalization Determines the Charge. One natural question to ask is to what extent U trop determines U . We will see in the next section that in many cases, U is uniquely determined up to deformation by U trop . This is not always the case though: for example, there are two degree 8 Del Pezzo's with an irreducible choice of anti-canonical divisor which have the same U trop but are not deformation equivalent. This subsection shows that U trop does at least determine the number of non-toric blowups. Proof. First note that, for n > 1, toric blowups increase n by 1, decrease Tr(H) by 3, and keep the charge constant, so Equation 8 is unaffected by toric blowups and blowdowns. Similarly, non-toric blowups decrease Tr(H) by 1 and increase the charge by 1, so the validity of the equation is also unaffected by non-toric blowups. Since every Looijenga pair is related to a copy of the toric pair (P 2 , D) by some sequence of toric blowups, toric blowdowns, and non-toric blowups, it now suffices to just check this case. We have c(P 2 , D) = 0, n = 3 and Tr(H) = 3, so the equation holds. An similar formula appears in [GHK]: c(Y, D) = 12 − (n + K 2 Y ). Proof. Let Σ Y and Σ Y ′ be the corresponding fans in U trop . There exists some nonsingular common refinement Σ which is the fan for a toric blowup of both (Y, D) and (Y ′ , D ′ ). The intersection matrices for these two toric blowups are the same, since each can be determined from Σ, so the claim follows from Lemma 3.16. Classification Here we give several equivalent classifications for the possible deformation classes of Looijenga pairs. These classifications are based on the intersection matrix H of D, the intersection form Q on D ⊥ Eff ⊂ D ⊥ ∼ = K 2 (see §2.4.1), the monodromy µ of U trop , the properties of lines in U trop , the global functions on U , the properties of the quiver for a corresponding cluster structure, and various other properties. This may be viewed as a classification of rank-2 cluster varieties up to the notion of equivalence given in Definition 2.14. The classification is not totally new-for example, the cases that we refer to as "no lines wrap" or "some lines wrap" are simply the finite-type or acyclic cases, respectively, in the cluster language. However, we do offer new characterizations of these cases. Throughout this section, D will be called minimal if it has no (−1)-components. 4.1. 
The Negative Definite Case. The following are equivalent, and have all appeared (along with some other equivalent statements) in some form in [GHK11], [GHK], or [GHK13a]. • The intersection matrix H = (D i · D j ) is negative definite. • Any developing map δ as in §3.4 embeds the universal cover U trop 0 of U trop 0 into a strictly convex cone in R 2 . • The monodromy satisfies Tr(µ) > 2. • All lines in U trop wrap infinitely many times around the origin, meaning that they hit each ray infinitely many times. • The quadratic form Q is not negative semi-definite. • U and its deformations admit no non-constant global regular functions. • D can be blown down to get a surface Y with a cusp singularity. If D is minimal, D 2 i ≤ −2 for all i, and D 2 i ≤ −3 for some i. See Example 1.9 of [GHK11] for the relationship between µ and the cusp singularity on Y . In fact, much of [GHK11] is devoted to deformations of cusp singularities. 4.2. The Strictly Negative Semi-Definite Case. Once again, the following statements are all equivalent and can be found in [GHK11] and [GHK] (or follow easily). • The intersection matrix H is negative semi-definite but not negative definite. • The monodromy µ is SL 2 (Z)-conjugate to a matrix of the form 1 a 0 1 , with a > 0. • Lines in U trop can be circles, or they can wrap infinitely many times around the origin. • If D is minimal, then D ∈ D ⊥ , meaning that either D 2 i = −2 for all i, or D is irreducible with D 2 = 0. • The quadratic form Q is negative semi-definite but not negative definite (since Q(D) = 0). • (Y, D) is deformation equivalent to a Looijenga pair (Y ′ , D ′ ) which admits an elliptic fibration having D ′ as a fiber. As stated above, if D is minimal then it is either irreducible or consists of n > 1 (−2)-curves. The largest possible n here is 9. This follows from Lemma 3.16, which says that the charge is c(Y, D) = 12 − 3n − Tr(H) = 12 − n. The charge is by definition non-negative, giving us n ≤ 12. Furthermore, the classifications below then imply that some lines do not wrap if c(Y, D) ≤ 2, so then n ≤ 9. A case with n = 9 can be explicitely constructed. 4.3. The Positive Cases. As a converse to the above cases, we have that the following are equivalent: • The intersection matrix H is not negative semi-definite. • The developing map for U trop 0 is not injective. • Lines in U trop wrap at most finitely many times, so both ends of each line go to infinity. • The quadratic form Q is negative definite. • U is deformation equivalent to an affine surface. • U is a minimal resolution of Spec(Γ(U, O U )), which is an affine surface with at worst Du Val singularities. • D supports a D-ample divisor. If any of these conditions hold, we say that U is positive. We have several sub-cases: All Lines Wrap/Positive Non-Acyclic Cases. Theorem 4.1. The following are equivalent: (1) Lines in U trop all wrap, but only finitely many times. (2) Every sheet of the developing map is convex, but the developing map is not injective. Proof. (1)⇔(2) is clear from the definitions. (1)⇔(3) follows immediately from Lemma 3.14 (the ring of global regular functions being two-dimensional is equivalent to positivity). For (1)⇒(5), using the construction of U trop from charts in Remark 3.2, we can easily see that having any D 2 i > 0 with D not irreducible would allow a line to not wrap. On the other hand, having every D 2 i ≤ −2 would mean we are in a negative semi-definite case. So if D is minimal and not irreducible, then D 2 i must be 0 for some i. 
D having more than one additional component would allow a non-convex sheet of the developing map, so the claim follows, except for when D is irreducible. If D is irreducible and D 2 > 4, then the proper transform of D after taking a toric blowup would have positive self-intersection, which we have already ruled out, and D 2 < 1 would mean we are in a negative semi-definite case. For (5)⇒(2), observe that in the D 2 1 = D 2 2 = 0 case, every sheet of any developing map is convex (but not strictly convex). The other cases come from non-toric blowups and toric blow-downs of this, so the sheets of their developing maps will of course still be convex (non-toric blowups make these sheets "more convex"). (6)⇒(7) is also straightforward. For U generic, D ⊥ is generated by classes of the form E i,j1 − E i,j2 (where E i,j denotes the exceptional divisor from a non-toric blowup on D i ), together with a class of the form L − E 1,j1 − E 2,j2 − E 3,j3 , where L is the class of a generic line in P 2 . If we choose all the blowup points on each D i to be infinitely near, and choose the blowup points on different D i 's to be colinear, then D ⊥ is generated by effective divisors with the correct intersections. (7)⇒(1) because Q of type D n or E n implies that Q is negative definite, so by the above characterizations, we are not in an H negative semi-definite case. We also cannot be in a some lines wrap case because, as we see below, Q| D ⊥ Eff in these cases is a direct sum of A ni 's. It now suffices to show that (5)⇒(6) (since (4)⇔(5), this means we are showing that U trop really does determine the deformation type of U in these cases). For the I * 0 case, we have µ −1 = − Id. Such a U trop contains a reflexive polytope with 3 integral points on the boundary, and this implies that U must be an affine cubic surface (cf. Example 5.21 in [Man14]), which we know can be obtained as in Example 3.5. Now for the I * k cases, we can choose a compactification (Y, D) of U with D 2 1 = D 2 2 = −1 and D 2 3 = −1 − k. The divisor C := D 1 + D 2 has C · D 1 = C · D 2 = C 2 = 0, and C · D 3 = 2. By Riemann-Roch, dim |C| ≥ 1. If C is the only singular element of some pencil P 1 ⊂ |C|, then (for U generic in its deformation class) Y \ C is a P 1 -bundle over A 1 , hence has Euler characteristic 2. So then Y has Euler characteristic 5. However, we know from §3.9 that U trop determines the charge c of (Y, D), which in this situation is 6 + k. One checks that the Euler characteristic of a Looijenga pair with n boundary components and charge c is n+ c, which in this case is 9 + k > 5. So |C| must contain other singular curves. These must contain irreducible rational components E 1 , E 2 with E i · D 3 = 1 and E 2 i = −1. Blowing down either of these is a non-toric blowdown and reduces us to the I * k−1 case, so the claim follows by induction. For the IV * case, we have a compactification of U with D = D 1 +D 2 +D 3 , D 2 1 = −1, D 2 2 = D 2 3 = −2. Note that D · D 1 = 1, while D · D 2 = D · D 3 = 0, so dim |D| ≥ 1. Thus, there is some point on D 1 which we can blow up to get a new pair ( Y , D), with exceptional divisor E, such Y admits an elliptic fibration with D being a fiber and E being a section. Such a surface can be obtained by blowing up 9 base-points for a pencil of cubics in P 2 , with E being the exceptional divisor of the final blowup (cf. [HL02]). D then is the proper transform of one of the cubics D in the pencil, so there must have been 3 base-points on each component D i of D. 
Thus, after blowing E down, we see that Y must contian disjoint (−1)-curves hitting each component of D. Blowing down a (−1)-curve hitting, say, D 2 , reduces to the I * 1 case we have already dealt with. A similar argument works for the III * case using a compactification of U with D = D 1 + D 2 , D 2 1 = −1, D 2 2 = −2, and blowing up a point in D 1 to get a surface with an elliptic fibration. The II * case is also similar, using D irreducible with self-intersection 1 and blowing up some point in D to get a surface with an elliptic fibration. Proof. (1)⇔(2) is Lemma 3.12. For (2)⇔(4), note that for some seed vector e i for a seed S, the set {e i ≥ 0} ∩ U trop is the same as the set (v i ∧ ·) ≥ 0, where ∧ is the symplectic form on U trop induced by [·, ·]. The intersection of these positive half-spaces for all non-frozen e i 's is clearly nonempty if and only if S is as in (2). (1)⇒(5) follows from Lemma 3.13. For (5)⇒(1), note that for a global monomial ϑ q , the tropicalization ϑ trop q is positive somewhere, and so Lemma 3.14 implies that the fibers ϑ trop q = d < 0 are lines which do not wrap. (6)⇒(1) because if every line does wrap (possibly infinitely many times), then we have seen that either Q is not negative-definite or Q| D ⊥ Eff is of type D n or E n . For (2)⇒(6), first note that Q is negative definite on D ⊥ by positivity of U . Now, let (Y, D) → (Y , D) be the toric model corresponding to a seed with all non-toric blowups corresponding to rays in one half of the plane N Y . For any curve C in Y , (C · D i )v i = 0 where v i is the primitive vector in N Y corresponding to D i . If C is the image of an irreducible effective curve C ∈ D ⊥ , then C · D i ≥ 0 for all i, and C · D i can only be positive if there is a non-toric blowup point somewhere in C ∩ D i . Thus, each C · D i must actually be 0, so C must have been supported on an exceptional divisor. Thus, D ⊥ Eff is generated by classes obtained by taking the d ′ i blowups to be infinitely near, and then taking the d ′ i − 1 exceptional divisors which do not intersect D. Let C U denote the union of all cones σ L for lines L which do not wrap, where σ L is defined as in Lemma 3.12. We note that the argument for (2)⇔(4) above can be modified to prove the following: This justifies [Man14] calling C U the cluster complex. No Lines Wrap/Finite-Type Cases. Theorem 4.4. The following are equivalent: (1) No Lines in U trop wrap. (2) No sheet of the developing map is convex. (3) The Laurent phenomenon holds for the X -space, meaning that each X i is a global monomial. Furthermore, the global monomials form an additive basis for the global function on U . (4) The inverse monodromy matrix µ −1 is a Kodaira matrix of type I k , II, III, or IV . (5) Cluster structures for U are of finite type, meaning that they have only a finite number of distinct seeds. (6) For some maximal seed, the corresponding quiver (after removing frozen vectors) is of type A k 1 (k ∈ Z ≥0 ), A 2 , A 3 , or D 4 . (7) The cluster complex C ⊆ X trop contains all of U trop , and in fact is all of X trop (assuming that there are no frozen variables). Proof. (1)⇔(3) follows from Lemma 3.13. To see that (1) implies (5), we need Lemma 3.12, which says that for any line L d<0 q which does not wrap, there are only finitely many (−1)-curves hitting boundary divisors corresponding to rays in the cone σ L bounded by L d<0 q (±∞). 
Since no lines wrap, we can cover U^trop by finitely many cones of the form σ_L, and so there are only finitely many (−1)-curves in Y hitting the boundary. Since seeds correspond to certain finite subsets of this collection of (−1)-curves, the claim follows. (5)⇔(6) follows from a well-known result of [FZ03], which says that a cluster algebra is of finite type if and only if the matrix (−|ε_{ij}| + 2δ_{ij})_{i,j∈I\F} is a finite type Cartan matrix. One easily checks that the only quivers of this type which produce rank 2 cluster varieties are those listed in the statement of the theorem, along with types B_2, B_3, and G_2, which are equivalent to types A_3, D_4, and D_4, respectively, in the sense of Definition 2.14. For (5)⇔(7), recall that seeds are in bijection with cones of the cluster complex. For any boundary wall W of any cone in C, both sides of W will always be in C, so if there are only finitely many cones, then C must fill up all of X^trop. Conversely, if there are infinitely many cones, then they must "bunch up" near some ray ρ which is not in C. Table 2 lists the cases where no lines wrap, along with their basic properties. We once again use the notation (d_1, d_2, d_3) to indicate that such a Looijenga pair can be obtained by starting with the toric variety (P^2, D = D_1 + D_2 + D_3), and then blowing up d_1, d_2, and d_3 points on D_1, D_2, and D_3, respectively. Remark 4.6. We suggest here that the appearance of Kodaira's matrices may have a deeper geometric significance. The symplectic heuristic behind [GHK11]'s mirror construction (see their §0.6.1) assumes that U admits a special Lagrangian torus fibration over U^trop, or at least over a deformation of U^trop in which the singularity is factored into several singular points. We expect that, at least in the all-lines-wrap and no-lines-wrap cases, some symplectic deformation or degeneration of U (perhaps U := p_2(A) ⊂ X or something closely related) will indeed admit a special Lagrangian fibration over U^trop. This is known explicitly for the I_k cases (cf. [CU13]), and in cases representing moduli of local systems this can be realized (with the singularity factored) as the Hitchin fibration (as explained to me by Andy Neitzke). Furthermore, we hope that U (or at least some analytic open subset of U) in these cases admits a hyperkähler structure, and that for some rotation of the complex structure, the SYZ fibration over U^trop (or over some neighborhood of 0 ∈ U^trop) will become an elliptic fibration (again a standard part of the Hitchin system picture). Doing this without factoring the singularity in U^trop would of course require that the monodromy is one of Kodaira's monodromies. Some Lines Wrap and Some Do Not. Proposition 4.7. The following are equivalent: (1) Some lines in U^trop wrap, while others do not. (2) Some (but not all) sheets of the developing map are convex. (3) Cluster varieties corresponding to U are acyclic but not of finite type. Proof. (1)⇔(2) is easy, and (1)⇔(3) follows immediately from Theorems 4.2 and 4.4. The equivalence with (4) follows because all the other possibilities have been eliminated by the previous theorems. Cluster Modular Groups In this section, we explicitly describe the action of the cluster modular group Γ on U^trop in every positive rank 2 case. However, keeping track of frozen variables will overly complicate matters and will obscure certain meaningful symmetries.
We therefore define a new group Γ′ for which we drop the requirement that frozen vectors are permuted by Γ (we allow frozen vectors to be mapped anywhere). This may introduce more automorphisms than one wishes to consider, so one could also require that elements of Γ′ do not act trivially on both U^trop and on the set of non-frozen vectors. Γ can be recovered by taking the subgroup of Γ′ which is the stabilizer of the set of frozen vectors (roughly meaning that the corresponding cluster transformations extend over certain partial compactifications). 5.1. The Action on U^trop. Let Aut(U^trop) be the group of orientation preserving integral linear automorphisms of U^trop. As in Proposition 2.18, we have a natural map r : Γ′ → Aut(U^trop). Let κ denote the kernel. Recall that elements of Γ′ are represented by certain cluster transformations, i.e., compositions of mutations and seed isomorphisms. Elements of κ must act trivially on U^trop, so they come from the cluster transformations whose only seed isomorphisms are ones such that if e_i → e_j, then v_i = v_j. What we plan to describe is the image G := r(Γ′) ⊆ Aut(U^trop). Note that if all seeds related to S are minimal, then G = Γ′. Recall ν_± ∈ Aut(U^trop) from §3.7. We will see that at least these elements are always in G. Furthermore, from our descriptions of G below, one can explicitly check that the conjecture holds for the all-lines-wrap and no-lines-wrap cases. Of course, for these cases, we have shown that U^trop determines the deformation class of U. A little more work shows that U^trop is even enough to completely identify the intersection of the cluster complex with U^trop in these cases (ignoring frozen vectors), and it follows directly from this that every automorphism of U^trop in these cases is induced by an element of Γ′. We now note that when considering U^trop with its canonical integral linear structure, mutating with respect to a seed vector e_i for some seed S does not change the positions of any of the v_j's in U^trop except for v_i. This is because the centers of the blowups corresponding to the e_j's, j ≠ i, are preserved by mutation, and the divisor containing the center is the one corresponding to v_j. Thus, we only have to worry about what happens to v_i. This vector is negated with respect to the vector space structure U^trop_S. We now interpret what this means in different cases. As in §3.5.1, we use the notation µ_{i,S} to indicate that we are mutating a seed S with respect to a vector e_i. We let S_{i_1,...,i_k} denote the seed obtained from S by mutating with respect to the seed vectors with indices i_1, then i_2, and so on up through i_k. 5.2. When Lines Do Not Wrap. In the toric case we of course have G = Γ′ = SL_2(Z). We saw in Lemma 3.12 that if a line L does not wrap, then (ignoring frozen vectors) there is a unique seed S for which each v_i is contained in σ_L \ ρ, where σ_L is the cone bounded by L and ρ is either boundary ray of this cone. Assume the v_i's are arranged in counterclockwise order v_1, ..., v_s. Note that any line in U^trop which does not intersect any ρ_{v_i} is also a straight line in U^trop_S. L_{v_1}^{>0} and L_{v_s}^{<0} are two such lines. Thus, µ^X_{e_1} has the effect of applying ν_+ to v_1, while µ^X_{e_s} has the effect of applying ν_− to v_s. Now note that v_2, ..., v_s, v_1′ := ν_+(v_1) are all contained in σ_{L_1} \ ρ_{v_1}, so we can repeat the process, mutating v_2, then v_3, and so on.
Alternatively, we could have done the reverse, mutating v_s first, then v_{s−1}, and so on. Since ν_± are integral linear automorphisms of U^trop by Lemma 3.11, we see that m_− := ν_− ∘ µ_{s,S_{1,2,...,s−1}} ∘ ··· ∘ µ_{1,S} is an element of Γ, and similarly for the reverse, m_+ := ν_+ ∘ µ_{1,S_{s,s−1,...,2}} ∘ ··· ∘ µ_{s,S}. We note that r(m_±) = ν_±. Of course, it might not be necessary to apply all s mutations above before getting a seed isomorphic to the original one. For example, in the type A_2 case of Theorem 4.4, performing a single mutation produces a seed isomorphic to the original. We may thus obtain fractional powers of ν_±. It is not hard to see that all elements of r(Γ′) must be of this form, except in the I_k cases (as we see below). Thus, if not all lines wrap and we are not in an I_k case (k ≥ 0), then G is cyclic. For example, if we are in a case where some lines wrap and others do not (cf. Proposition 4.7), then the monodromy has two eigenlines ℓ_1 and ℓ_2 in R^2, or one eigenline with algebraic multiplicity 2 in the Tr(µ) = −2 cases. Assume for now that Tr(µ) < −2. Then −µ^{-1} has eigenvalues λ and λ^{-1} for some λ ∈ (0, 1), and we can say ℓ_1 is the eigenspace corresponding to λ. We can identify U^trop with a half-space bounded by ℓ_1, with the two outgoing rays of ℓ_1 identified. Let C be the cone bounded by ℓ_1 and ℓ_2 with ℓ_1 as the clockwise-most boundary ray. Then the interior of C is in fact C_U, the intersection of the cluster complex with U^trop; indeed, we see that −µ^{-1} moves vectors in the interior of C counterclockwise, as one expects ν_+ to do in C_U. Let σ_L ⊂ C_U be a cone corresponding to a line L which does not wrap, and let ρ be either boundary ray of σ_L. Then σ_L \ ρ is a fundamental domain for the action of ν_± on C_U. We see that there is a similar action giving a periodic structure to the complement of C with ν_+ moving rays clockwise. The cases where µ is conjugate to [ −1 a ; 0 −1 ] are essentially the same except that λ = 1, ℓ_1 = ℓ_2, and the complement of C_U is just this eigenspace (a single ray in U^trop). So in any case where some lines wrap and others do not, we get G ≅ Z, with ν_± generating a finite index subgroup. For the II, III, and IV cases, −µ^{-1} has finite order, and so G will also have finite order. One can explicitly compute G in these cases to get the groups listed in Table 3. plane, with v_2 = (0, 1) and v_3 = (−1, 1), then x ∈ Z corresponds to the automorphism taking v_2 to (−x, 1) and v_3 to (−x − 1, 1). In particular, we have that ±k corresponds to ν_±. For the IV^*, III^*, and II^* cases, we take d_2′ = 3, d_3′ = 2, and d_1′ = 3, 4, or 5, respectively. In the IV^* case, when we apply the mutation with respect to v_3, we can then compose with the seed isomorphism v_3′ → v_3, v_1′ → v_2, and v_2′ → v_1. This is the only nontrivial element of G in this case, so we have G ≅ Z/2Z. One can check that this non-trivial element is in fact ν_+ = ν_−. In the III^* and II^* cases, we do not even have this element, and G is trivial. We note that there is an orientation reversing automorphism in each of these three cases which, after mutating with respect to v_3, takes v_i′ → v_i for each i. Thus, one can obtain extra, potentially interesting symmetries of the scattering diagram by considering Γ (as in §2.6) in place of Γ. In the I_0^*, III^*, and II^* cases, one can check that ν_± are trivial. Thus, in conjunction with what we have seen in the other cases, we have found that: Proposition 5.2.
ν_± are induced by the cluster modular group Γ′ (which we do not require to preserve frozen vectors) in all the positive cases. We have now described G in all the positive cases. We summarize these findings in Table 3.
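Two small computations may help the reader follow the arguments above; both are standard and only restate facts already used (they are illustrations, not claims from the text). First, the Euler characteristic comparison in the I_k^* step of the proof: since C^2 = 0 and D_1^2 = D_2^2 = −1 force D_1·D_2 = 1, the divisor C is a pair of rational curves meeting in one point, so

\[ \chi(C) = 2 + 2 - 1 = 3, \qquad \chi(Y) = \chi(Y \setminus C) + \chi(C) = 2 + 3 = 5, \]

which contradicts \( \chi(Y) = n + c = 3 + (6 + k) = 9 + k > 5 \). Second, the [FZ03] finite-type criterion invoked in Theorem 4.4, checked for the A_2 quiver (two non-frozen vertices joined by a single arrow):

\[ \epsilon = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad -|\epsilon_{ij}| + 2\delta_{ij} = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}, \]

which is the Cartan matrix of type A_2, so the corresponding cluster structure is of finite type.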
Cerebrospinal Fluid Hypocretin-1 (Orexin-A) Level Fluctuates with Season and Correlates with Day Length The hypocretin/orexin neuropeptides (hcrt) are key players in the control of sleep and wakefulness evidenced by the fact that lack of hcrt leads to the sleep disorder Narcolepsy Type 1. Sleep disturbances are common in mood disorders, and hcrt has been suggested to be poorly regulated in depressed subjects. To study seasonal variation in hcrt levels, we obtained data on hcrt-1 levels in the cerebrospinal fluid (CSF) from 227 human individuals evaluated for central hypersomnias at a Danish sleep center. The samples were taken over a 4 year timespan, and obtained in the morning hours, thus avoiding impact of the diurnal hcrt variation. Hcrt-1 concentration was determined in a standardized radioimmunoassay. Using biometric data and sleep parameters, a multivariate regression analysis was performed. We found that the average monthly CSF hcrt-1 levels varied significantly across the seasons following a sine wave with its peak in the summer (June—July). The amplitude was 19.9 pg hcrt/mL [12.8–26.9] corresponding to a 10.6% increase in midsummer compared to winter. Factors found to significantly predict the hcrt-1 values were day length, presence of snow, and proximity to the Christmas holiday season. The hcrt-1 values from January were much higher than predicted from the model, suggestive of additional factors influencing the CSF hcrt-1 levels such as social interaction. This study provides evidence that human CSF hcrt-1 levels vary with season, correlating with day length. This finding could have implications for the understanding of winter tiredness, fatigue, and seasonal affective disorder. This is the first time a seasonal variation of hcrt-1 levels has been shown, demonstrating that the hcrt system is, like other neurotransmitter systems, subjected to long term modulation. Introduction The hypocretin (hcrt, also known as orexin) neuropeptides regulate several homeostatic functions including the sleep/wake cycle, food intake, energy homeostasis, and arousal [1,2]. The hcrt precursor protein is encoded by a single gene from which the two active neuropeptides, hcrt-1 and hcrt-2, are processed. Hcrt neuropeptides are produced exclusively in a small group of neurons, the hcrt neurons, in the lateral hypothalamus [3]. These hcrt neurons project to and activate most of the central nervous system [4], while integrating signals about metabolic and nutritional status, emotional and motivational state, and expectance of reward to evoke the appropriate level of arousal [5]. Dysregulation of hcrt levels has severe consequences for the organism: loss of hcrt neurons causes the sleep disorder, narcolepsy, which is characterized by excessive daytime sleepiness and decreased sleep quality [6]. Patients with this disorder also experience metabolic disturbances and autonomic dysfunction [7]. Narcolepsy with loss of hypocretin, also called Narcolepsy Type 1 can be diagnosed by measuring hcrt-1 levels in cerebrospinal fluid (CSF) [8]. In Narcolepsy Type 1 hcrt-1 is absent in the CSF or concentrations are very low. Hcrt-1 peptide level in the brain is under complex regulation, and has interestingly been shown to respond to playful activities and social interaction [9][10][11]. Furthermore, recent animal studies indicate that the hcrt system is also under the influence of light [12,13]. In Northern countries (as well as countries in the far South) the day length varies substantially over the year. 
In Denmark it ranges from 7 to 18 hours. The dark winters are associated with tiredness, fatigue, reduced sleep quality, and decreased mood [14][15][16]. These findings led us to speculate whether the hcrt system is affected by seasons and if hcrt activity in humans is reduced during dark winters. We, therefore, investigated hcrt-1 levels in CSF of clinical samples collected from patients at the Danish Center for Sleep Medicine over a period of almost 4 years. Hcrt-1 levels can only be reliably detected in the CSF, most often obtained by lumbar puncture. In contrast to many other national strategies, lumbar puncture is a mandatory part of the clinical evaluation of hypersomnia in the Danish healthcare system. We were, therefore, able to collect a dataset stemming from 227 individuals with normal hcrt-1 levels (measured over a period of 45 months) and including their biometric and clinical data. Further, we have, in collaboration with the Danish Meteorological Institute, obtained data on the local climate conditions matching the time points of individual CSF collections. From this dataset we examined seasonal levels of CSF hcrt-1 and possible correlating factors. Material and Methods A more detailed description of study measures, design, and analysis is provided in the supplementary materials in S1 File. The final dataset can be found in S2 File. Subjects Approval of the study was granted by the Danish Board of Health and the Danish Data Protection Agency (#3-3013-898/1). All human data included in this study were obtained from patients' medical records, including CSF hcrt-1 levels. CSF had been collected and analysed as a part of the clinical evaluation of hypersomnia at the Danish Center for Sleep Medicine (DCSM), Department of Clinical Neurophysiology, Rigshospitalet, Glostrup, Denmark. By approval, no informed consent was given by the participants, as data were obtained from a clinical registry and analysed anonymously. CSF hcrt-1 concentration was determined using the standard radioimmunoassay (RIA, Phoenix Pharmaceuticals, CA, USA). Assay quality was monitored by the internal positive control sample included in the assay. Intra-assay variability was assessed by including a hcrt-1 control from the previous assay in each assay, and additionally, an external reference sample of pooled CSF from healthy individuals was included for normalisation of values between assays and adjustment to the clinical standard level of CSF hcrt-1 [18]. The external reference CSF sample was originally donated by Dr. E. Mignot, Stanford Center for Sleep Sciences and Medicine, Stanford University, USA. Clinical data Patients who had undergone a lumbar puncture at DCSM in the period January 2011 to September 2014 were considered for the present study (n = 382). Exclusion criteria were a diagnosis of "narcolepsy with low hypocretin" or hypocretin level <110 pg/mL (corresponding to "Narcolepsy Type 1" according to the International Classification of Sleep Disorders, Third Edition [19]), intermediate hypocretin 110-220 pg/mL, or no data (i.e. premature termination or absence of patient from clinical examination). Most patients were from the greater Copenhagen area. Climate data Climate data for each day in the study period (day length, average temperature, snow coverage, snow depth, and hours of sunshine) were retrieved from the Danish Meteorological Institute (DMI) upon request. The data were from the Copenhagen area (55°40′N, 12°34′E).
Since Denmark is only 452 × 368 km in size (including a remote island in the Baltic Sea), weather conditions are generally similar across the entire country. Data analysis To study seasonal variation, data on CSF hcrt-1 levels were grouped according to sampling month. Grouped data were fitted with a sine-wave function (wavelength = 12 months) with a nonlinear, least-squares fitting method (Prism 5, GraphPad, CA, USA). To assess the relationship between hcrt-1 levels and relevant variables, we performed a multiple regression analysis (IBM SPSS Statistics 19, IBM Corp., Armonk, NY). Variables considered for the analyses were: hcrt-1 level, BMI, gender, age, diagnosis, leukocyte count, CRP (C-reactive protein) level, MSLT/PSG variables, and several climate factors. Because the climate data were highly correlated (S1 Table), these factors could not be included together in the multiple regression analysis. Instead we analysed them in separate models and compared the models. Separate models included the following climate variables: day length (hours on the actual day and average hours the preceding three weeks), slope of day length change (min/day), sunshine hours (hours the day before CSF sampling and average hours the preceding three weeks), temperature (average °C across 24 h on the day of CSF sampling and average temperature the preceding three weeks), and snow (yes/no). Days with at least 50% snow coverage, or at least 25% snow coverage plus snow the preceding two days, were categorized as days with snow, while the rest were categorized as days with no snow (snow n = 22, no snow n = 205). We accounted for diagnosis by including a factorial variable with 4 categories: Narcolepsy Type 2, idiopathic hypersomnia, sleep apnea, other. In each model, assumptions of linearity, independence of errors, homoscedasticity, unusual points, and normality of residuals were tested. One outlier was removed from the analysis. This was a patient with very high hcrt-1 levels, who had a diagnosis of Narcolepsy Type 2 and major depression. Leukocyte count, CRP (C-reactive protein) levels, MSLT, and PSG data were only available in a subset of patients, so these variables were tested in separate models including the cofactors mentioned above. CSF hcrt-1 levels fluctuate with season A summary of the data extracted from patient hard copy journals and national electronic health records can be seen in Table 1. The table also lists the climate factors included in the study. In total 227 individuals were included in the study. The monthly average of CSF hcrt-1 values was found to vary between months, revealing a seasonal variation in the hcrt-1 levels (Fig 1) with a peak in the Danish summer months and a minimum in winter. A 12 month sine wave predicted the data (r^2 = 0.71) with an amplitude of 19.9 pg/mL [12.8; 26.9] when data from January were excluded. This oscillation corresponds to a 10.6% change, which is close to the magnitude of diurnal variation found in healthy subjects (16). The month with the lowest average was November (354.4 pg/ml, n = 13), and the highest average was found in May (399.3 pg/ml, n = 21), corresponding to an increase of 12.7%. However, January data did not follow this trend, with the average hcrt-1 level in January being higher than expected from the model (Fig 1).
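As an illustration of the sine-wave fit described in the Data analysis section above, the same kind of 12-month fit can be reproduced with standard nonlinear least squares. This is only a sketch: the published fit was done in Prism 5, and the monthly means below are placeholder values, not the study data.

import numpy as np
from scipy.optimize import curve_fit

# Placeholder monthly mean CSF hcrt-1 values (pg/mL), January..December.
# These are NOT the study data; substitute the real monthly averages.
months = np.arange(1, 13)
hcrt_monthly_mean = np.array([400.0, 370.0, 368.0, 375.0, 399.0, 395.0,
                              398.0, 390.0, 380.0, 365.0, 354.0, 360.0])

def seasonal_sine(month, mesor, amplitude, phase):
    # 12-month sine wave: baseline (mesor) plus a sinusoid with a 12-month period.
    return mesor + amplitude * np.sin(2.0 * np.pi * (month - phase) / 12.0)

# Exclude January (month 1) before fitting, as done in the paper.
mask = months != 1
popt, _ = curve_fit(seasonal_sine, months[mask], hcrt_monthly_mean[mask],
                    p0=[375.0, 20.0, 3.0])
mesor, amplitude, phase = popt
print(f"mesor = {mesor:.1f} pg/mL, amplitude = {amplitude:.1f} pg/mL, phase = {phase:.1f} months")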
CSF hcrt-1 levels correlate with day length To elucidate possible factors causing the variation in CSF hcrt-1 levels, we included the following variables in our analysis: BMI, gender, age, diagnosis, leukocyte count, C-reactive protein (CRP) level, multiple sleep latency test (MSLT) sleep onset REM periods (SOREMs), MSLT sleep latency, polysomnography (PSG) total sleep length, day length, slope of day length change, sunshine hours (direct radiation exceeding 120 W/m^2), and presence of snow. As the hcrt system might be dysregulated in Narcolepsy Type 2, despite apparently normal hcrt-1 levels, we decided to include a variable accounting for this diagnosis. Age, gender, and diagnosis had no effect on hcrt-1 levels in any of the models, while BMI had a borderline significant effect with lower BMI levels predicting higher hcrt-1 (Table 2). Our analysis revealed significant effects of day length, sunshine, and temperature, and showed that the model including average day length the preceding three weeks best predicted CSF hcrt-1 values (S2 Table). We included both the values for the day before CSF sampling and also average values for the preceding 3 weeks in order to take a possible longer lasting, modulating influence into account. The final model (including age, gender, BMI, diagnosis, average day length of the preceding three weeks, presence of snow, and number of days from Christmas) statistically significantly predicted CSF hcrt-1 values, F(7,214) = 4.106, p = 0.0003, R^2 = 0.118. In this model, longer day length, presence of snow, and proximity to Christmas all significantly predicted higher levels of hcrt-1 in CSF (p = 11 × 10^-6, p = 0.007, and p = 0.008 respectively; Table 2 and Fig 2A). None of the clinical MSLT or PSG parameters correlated significantly with hcrt-1 concentrations (S3 Table). There were also no predictive values of leukocyte count and CRP level in the subset of patients where this information was available (S4 and S5 Tables). Higher hcrt-1 levels than expected in January When comparing the average hcrt-1 level measured on days with snow in the period December to February, the level was significantly higher than the level measured on days without snow coverage in the same period (p = 0.006; Fig 2B). This effect was present in all winter months, except the beginning of January, where levels of hcrt-1 were higher than expected from the model also on days with no snow and very little sunlight (Fig 3). The level of hcrt-1 did, however, correlate significantly with the number of days between CSF sampling and the Christmas holidays (Table 2 and Fig 3). Discussion Here we provide data showing a seasonal variation in hcrt-1 peptide level in human CSF. To our knowledge, this is the first time it has been shown that hcrt-1 levels exhibit a seasonal rhythm. We found a 10-12% change in the mean hcrt-1 level from winter to summer, a value which is of the same magnitude as seen in the diurnal variation of CSF hcrt-1 in humans [17,20], and thus expected to have functional significance. We furthermore report correlations between hcrt-1 levels and day length. Data from January did not follow the general trend in the data, an effect partly explained by the presence of snow. All the samples were taken within a clearly defined timespan of the day, in which other studies have shown that the hcrt level varies only minimally [17] or insignificantly [11], in order to minimize the impact of the diurnal variation of hcrt-1.
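A minimal sketch of the final regression model reported above (average day length over the preceding three weeks, presence of snow, days from Christmas, plus age, gender, BMI, and diagnosis as cofactors). The input file and column names are assumptions for illustration; the published analysis was run in SPSS, so this is only an approximate re-implementation.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-patient table (one row per lumbar puncture);
# the file name and column names are assumptions, not from the paper.
df = pd.read_csv("hcrt_patients.csv")

# hcrt-1 level explained by mean day length over the preceding 3 weeks,
# snow on the sampling day, days from the Christmas holidays, and cofactors.
model = smf.ols(
    "hcrt1_pg_ml ~ daylength_3wk_mean + snow + days_from_christmas"
    " + age + C(gender) + bmi + C(diagnosis)",
    data=df,
).fit()
print(model.summary())  # compare F-statistic, R^2, and coefficient p-values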
Our data show an overall seasonal change with large individual variation, suggesting that besides daily fluctuations, the hcrt system is subject to a more long-term modulation of baseline signalling, as is the case for other neurotransmitter systems such as the serotonin system [21,22]. This is also supported by studies showing experience-dependent synaptic plasticity and remodeling of hcrt neurons in mice [23,24]. Curiously, we did not find any correlations between hcrt-1 levels in CSF and the included measures of sleep length and daytime sleepiness. It is known from existing literature and general clinical experience that low hcrt-1 in CSF predicts a short MSLT sleep latency and several SOREMs. What is clear from our data is that for patients with a CSF hcrt-1 level >220 pg/ml such a correlation is no longer present. There could be several explanations for this observation. It might be that the MSLT is not sensitive enough to pick up subtle changes in daytime sleepiness resulting from having a hcrt-1 level in the low end of the normal range. It is also possible that severe daytime sleepiness and REM disturbances do not occur at all before hcrt-1 levels become abnormally low. Climate factors correlate with hcrt-1 levels Copenhagen is located around 56° northern latitude, which gives rise to large variations in day length with about 7 hours in the winter and up to 18 hours during summer. Taking the weather (cloud occurrence) into account, the actual hours of daily sunshine can vary even further. Our data show that both day length and hours of sunshine predict CSF hcrt-1 levels, and the model including day length predicted the hcrt-1 levels slightly better than the model including sunshine. Interestingly, average day length data and average sunlight levels over three weeks predicted CSF hcrt-1 levels better than data from the day before testing only, suggesting that the lumbar CSF hcrt-1 values do not reflect fast fluctuations of central hcrt activity in response to day-to-day weather conditions, but rather reflect a long-term average over several weeks. This is supported by a previous study showing that hcrt-1 measurements from the same individual taken 1-2 weeks apart were highly correlated [20]. On top of this baseline, a 5-10% diurnal fluctuation is however still seen [17,20], so a shorter acting clearance of hcrt-1 from the CSF is also present. This has also been demonstrated in dogs, where an increase in CSF hcrt-1 following sleep deprivation was not sustained beyond 24 hours [10]. Hcrt-1 levels and light A possible explanation for our finding of seasonal changes in hcrt-1 levels could be that daylight levels influence hcrt-1 signalling. [Fig 2 caption: A) The relationship between CSF hcrt-1 levels and average day length the preceding three weeks, divided into groups dependent on the presence of snow. In the samples taken on days without snow, there was a significant correlation (linear regression p = 0.0001) between daylight and the hcrt-1 level, which was not found in the samples taken on days with snow. B) The effect of snow on hcrt-1 levels in winter months (Dec-Feb). The average hcrt-1 level is significantly higher (Student's t-test, p = 0.009) when CSF sampling was performed on days with snow than on days without snow. doi:10.1371/journal.pone.0151288.g002] The independent effect of snow we observe in our dataset could also be explained as a consequence of increased light exposure as a result of reflection from a snow-covered surface.
This lighting effect would be expected to have the largest impact in periods with very little daylight, and this is also what we see in the dataset. As can be seen in Fig 2, the difference in hcrt-1 levels between days with snow and days without is greater in periods with very little sun. An influence of light on hcrt neurons has already been suggested by animal studies. It has been shown that in a diurnal species, the grass rat, a light pulse stimulates immediate-early gene activity in hcrt neurons [25], whereas dim light housing conditions lower hcrt immunoreactivity in the brain [13]. In the nocturnal mouse, in contrast, dark pulses (arousal cues for nocturnal species) can activate hcrt neurons [26]. Despite hcrt neurons being primarily active in the dark phase in mice, it has also been shown that light activates mouse hcrt neurons during positively reinforced tasks [12]. This finding is consistent with the lack of the arousing effect of light in humans suffering from narcolepsy [27]. Since no direct pathway from the retina to the hcrt neurons has been found, this is likely an indirect effect. Hcrt-1 levels and arousal Another possible explanation for our finding is that the overall activity and arousal level during the day is higher in the summer months and on days with snow, causing an increased release of hcrt-1 and thus higher CSF levels. The hcrt-producing neurons link forebrain structures involved in the processing of emotion and motivation, such as the amygdala, with brainstem regions, such as the locus coeruleus, which regulate wakefulness and reward [5], and cues associated with rewards and arousal stimulate the hcrt neurons [28,29]. For example, it has been shown that yard play produces a substantial increase in CSF hcrt-1 level in normal dogs, whereas comparable treadmill locomotion did not increase hcrt-1 level beyond baseline [9]. It is therefore possible that increased purposeful behaviour with longer days is the underlying mechanism behind our finding. Many studies have indeed concluded that physical activity levels change with seasonal changes in weather conditions [30]. The patterns are heavily dependent on local weather conditions, thus local data are crucial: an actigraphy study of 730 Danish children found that daytime activity was higher in spring and sleep length was shorter compared to both autumn and winter [31]. This is supported by a study from Norway reporting the same pattern, with higher activity levels in spring [32]. However, if the amount of physical activity were the primary driver of the observed seasonal variation, we would expect higher hcrt-1 levels in spring compared to fall independent of day length. This is not the case in our dataset (Fig 1, compare March-April to September-October where day lengths are similar). Since we do not have any measures of activity in our cohort, we cannot make any final conclusions regarding this hypothesis. Hcrt-1 and social interaction Increased levels of hcrt-1 in humans have been found during social interactions and in connection with social-induced positive emotions [11]. Indeed, hcrt neurons receive abundant input from the limbic system [33] and several papers have provided experimental evidence for an involvement of the hcrt system in emotion and emotional memory [28,34,35]. It is often speculated that activation of hcrt neurons by the limbic system maintains wakefulness during emotional arousal.
It is therefore possible that seasonal changes in social interaction patterns also account for some of the variation seen in our dataset, most notably the increased hcrt-1 levels in January following the Christmas vacation. Social interaction between a patient and relatives has already been shown to increase hcrt activity [11], thus it could be interesting to study whether larger social gatherings also increase hcrt activity and, if so, to what extent. Possible consequences of seasonal fluctuation in hcrt-1 levels It is well established that hcrt neurons can excite serotonin neurons [5,36]. Serotonin-related conditions such as depression show a clear seasonal pattern in humans, which is consistent with plasma serotonin (5-HT) levels undergoing marked changes throughout the year, with maximum values during the summer [21]. Dampening of the natural diurnal hcrt-1 variation in CSF has been found in depressed subjects [17], and people who have attempted suicide have reduced levels of hcrt-1 in their CSF [37], suggesting that a poorly functioning hcrt system could play a role in mood disorders. It is, however, still controversial whether hcrt-1 levels are lower in depressed patients, as stress-induced depression is sometimes linked to an increase in hcrt activity [38]. Decreased day length and dark winters increase the risk of seasonal affective disorder (SAD) [15,39,40]. It has previously been shown that SAD patients have lower physical activity levels and a blunted 24-h activity rhythm compared to healthy controls. These abnormalities were completely reversed by bright daylight therapy [41]. Given that animal data show an influence of light on hcrt signalling, it is possible that the seasonal changes in hcrt-1 levels at northern latitudes play a role in the etiology of SAD. Limitations A limitation of this study is that the subjects included are not from the normal healthy population. For ethical and practical reasons, the CSF hcrt-1 levels were obtained as a part of a clinical evaluation of hypersomnia. All individuals included must therefore have experienced some sort of subjective sleep disturbance. However, neither diagnosis nor any of the clinical parameters included correlated significantly with hcrt-1 levels, indicating that our finding represents a general trend. Conclusion In conclusion, this study demonstrates a clear seasonal variation in human CSF hcrt-1 levels, with a peak in the summer months and lowest levels during winter. This shows that the activity of the hcrt system changes over the year. We found a significant correlation with day length and presence of snow. Interestingly, an unknown factor caused hcrt-1 levels to be higher in January than expected from the model. Further, cumulative day length over three weeks better predicted hcrt-1 levels compared to day length on the day of CSF sampling. This is suggestive of the presence of long-term modulating effects on baseline activity in the hcrt system. These findings add valuable knowledge to our understanding of the role and regulation of the hcrt system, and they have several possible interpretations. Seasonal changes in leisure activities, light exposure, and social interaction could all possibly influence the activity of the hcrt system. Our findings have implications for understanding the mechanisms of seasonal changes in arousal and mood, and could point towards the hcrt system as an important mediator or modifier of daylight effects on other neurotransmitter systems such as the monoaminergic systems.
Seasonal variations in hcrt-1 levels should further be taken into account when interpreting CSF hcrt-1 values in clinical practice.
The CHEOPS mission The CHaracterising ExOPlanet Satellite (CHEOPS) was selected on October 19, 2012, as the first small mission (S-mission) in the ESA Science Programme and successfully launched on December 18, 2019, as a secondary passenger on a Soyuz-Fregat rocket from Kourou, French Guiana. CHEOPS is a partnership between ESA and Switzerland with important contributions by ten additional ESA Member States. CHEOPS is the first mission dedicated to searching for transits of exoplanets using ultrahigh precision photometry on bright stars already known to host planets. As a follow-up mission, CHEOPS is mainly dedicated to improving, whenever possible, existing radii measurements or providing first accurate measurements for a subset of those planets for which the mass has already been estimated from ground-based spectroscopic surveys. The expected photometric precision will also allow CHEOPS to go beyond measuring only transits and to follow phase curves or to search for exo-moons, for example. Finally, by unveiling transiting exoplanets with high potential for in-depth characterisation, CHEOPS will also provide prime targets for future instruments suited to the spectroscopic characterisation of exoplanetary atmospheres. To reach its science objectives, requirements on the photometric precision and stability have been derived for stars with magnitudes ranging from 6 to 12 in the V band. In particular, CHEOPS shall be able to detect Earth-size planets transiting G5 dwarf stars (stellar radius of 0.9 R⊙) in the magnitude range 6 ≤ V ≤ 9 by achieving a photometric precision of 20 ppm in 6 hours of integration time. In the case of K-type stars (stellar radius of 0.7 R⊙) of magnitude in the range 9 ≤ V ≤ 12, CHEOPS shall be able to detect transiting Neptune-size planets achieving a photometric precision of 85 ppm in 3 hours of integration time. This precision has to be maintained over continuous periods of observation for up to 48 hours. This precision and stability will be achieved by using a single, frame-transfer, back-illuminated CCD detector at the focal plane assembly of a 33.5 cm diameter, on-axis Ritchey-Chrétien telescope. The nearly 275 kg spacecraft is nadir-locked, with a pointing accuracy of about 1 arcsec rms, and will allow for at least 1 Gbit/day downlink. The sun-synchronous dusk-dawn orbit at 700 km altitude enables having the Sun permanently on the backside of the spacecraft, thus minimising Earth stray light. A mission duration of 3.5 years in orbit is foreseen to enable the execution of the science programme. During this period, 20% of the observing time is available to the wider community through yearly ESA calls for proposals, as well as through discretionary time approved by ESA's Director of Science. At the time of this writing, CHEOPS commissioning has been completed and CHEOPS has been shown to fulfill all its requirements. The mission has now started the execution of its science programme.
Introduction In March 2012, the European Space Agency (ESA) issued a call for a small mission opportunity. This new class of mission (S-class) in the portfolio of the science programme of the Agency was introduced to provide the community with additional launch opportunities without interfering with the existing programme based on M- and L-class missions. This translated into strict boundary conditions on the potential mission(s). The selection would be based on scientific excellence, the possibility for fast development, and a total cost to the Agency, including launch, not exceeding 50 MEuros. The CHEOPS (CHaracterising ExOPlanet Satellite) proposal was submitted in response to the call by a Consortium of research institutes located in ESA member states. It was subsequently selected by ESA's Space Programme Committee in November 2012 out of a total of 26 competing submissions. Following the discovery in 1995 of the first exoplanet [30] orbiting a solar-like star and the subsequent detection of over 4000 additional planets, the need for physical and chemical characterisation of these objects is growing. For this, a sample as large as possible of planets orbiting bright stars and for which mass and radius are precisely measured is needed. This need triggered in 2008 the idea of developing a space mission dedicated to searching for exoplanetary transits by performing ultra-high precision photometry on bright stars already known to host planets. Subsequently, a study was conducted in Switzerland in 2010-2011 which concluded that such a concept was indeed feasible but that it would exceed the financial capabilities of Switzerland. A Consortium was built and a proposal submitted in response to ESA's call for small missions. The follow-up nature of CHEOPS, with a single star being targeted at a time, makes this transit mission unique compared to its successful precursors COROT [3], Kepler [23], and TESS [35], or its successor PLATO [34].
This difference is the basis for an original science programme (see Section 2) in which the focus is not on the discovery of additional exoplanets, but rather on the characterisation of a set of the most promising objects for constraining planet formation and evolution theories and for further studies by future large infrastructures (e.g. JWST, Ariel, ELTs). This resulted in several challenging requirements on photometric precision and sky visibility (see Section 3) which drove the design of the mission. With its unique characteristics, CHEOPS is complementary to all other transit missions as it provides the agility and the photometric precision necessary to re-visit sufficiently interesting targets for which further measurements are deemed essential [5]. The boundary conditions imposed by ESA on the S-missions briefly mentioned above (see Section 4) translated into several trade-off decisions being taken while selecting and adapting the platform (see Section 5), designing the payload (see Section 6), and developing the ground segment (see Section 9). In this sense, CHEOPS is possibly not the ultimate follow-up mission that could have been flown (if such a thing exists), but it is arguably the best that could be built within the ESA framework under the conditions given. In the laboratory, CHEOPS has met or exceeded all the specifications that could be measured. It was then successfully launched on 18 December 2019 from Kourou, French Guiana, as a secondary passenger on a Soyuz-Fregat rocket. Commissioning ended in March 2020, demonstrating that CHEOPS meets all the requirements and providing the green light for the science phase. With the successful completion of commissioning, CHEOPS has not only met all the technical requirements but also the programmatic ones, namely the overall cost and schedule. A remarkable achievement. The CHEOPS science programme The science observing time on CHEOPS is foreseen to fill a minimum of 90% of the nominal lifetime of the mission following successful commissioning of the satellite, with the remaining 10% split between activities related to spacecraft operations (e.g. safe mode and recovery, anomaly investigation, instrument software update, etc.), and a monitoring and characterisation campaign designed to monitor the performances of the CHEOPS instrument throughout the mission. The underlying rules on how the science time is to be used are defined in the CHEOPS Science Management Plan, which foresees an 80%:20% split between the Core or Guaranteed Time Observing (GTO) Programme that is under the responsibility of the CHEOPS Science Team (see Section 2.1), and the Guest Observers (GO) Programme (see Section 2.2), under the responsibility of ESA and through which the community can conduct investigations of their choice. The CHEOPS Science Team defines the first, prioritised target list covering observations for the GTO programme for up to the duration of the nominal mission in advance of the definition of the GO programme. These targets are placed on a reserved list that cannot be observed as part of the GO Programme. An update of the reserved target list is allowed throughout the mission, with some restrictions. The GO programme is then built out of annual calls issued by ESA. The targets proposed and approved by an ESA-appointed Time Allocation Committee are added to the reserved target list and can only be observed by the proposing team.
The guaranteed time observing programme science The GTO is composed of a set of scientific themes each including several observational programmes. This comprehensive scientific effort aims at exploring the diversity in planetary systems through measurements at a signal-to-noise sufficient for constraining theoretical models. The ultimate goal of the mission is to allow for a better understanding of planet formation and evolution as well as of the prospects for finding planets suitable for harbouring life. The GTO themes described in this section have been assembled by the CHEOPS Science Team and participating CHEOPS Consortium Board members over three years preceding the launch. As such, it represents the diversity of scientific interests and expertise present in a group of over 40 scientists who met regularly 3 to 4 times a year. The GTO is technically structured in different science topics to provide easy reading and tracking of the whole CHEOPS science. Each science topic is called a theme and is made of a series of specific programmes. Each programme has its scientific objective, but may share targets with other programmes. The number of orbits allocated to each theme has evolved over time to match the scientific needs of each programme and is expected to continue to change with time. While the number of orbits allocated to different themes may shift, the total number remains bound to the 80% allocated share of the GTO. The following provides a short description of each theme. Search for transits The aim is to search for transits of planets discovered by other techniques, in particular among those detected by radial velocity measurements. Monitoring these systems around the predicted transit times of their planets offers a straightforward way of obtaining both mass and radius for a sample of super-Earths and Neptunes orbiting bright stars. At the heart of the original CHEOPS proposal, it was considered the only path forward before PLATO [34] to find the nearest transiting planets, including rocky bodies within the habitable zone of their host star. Today, with NASA's TESS mission in operation [35], the context has changed significantly leading to an optimisation of the target list with adjustments foreseen following TESS discoveries. At the time of writing, roughly 15% of the GTO orbits are dedicated to this theme. Improve mass-radius relation The mass-radius relation is the first step towards the characterisation of planetary bulk properties [40]. The knowledge of the planetary composition, in turn, is a key element to constrain planet formation models, as it can be used to demonstrate, for example, transport of material in the proto-planetary disc [37]. In this context, the structure of low-mass planets is the most relevant. The goal of this theme is to determine the mass-radius relation of planets, focusing on objects with a small radius (and/or low mass if this one is already known), rare objects (extreme in density, unusual radius, etc.), and planets in multi-planetary systems. This theme opens up fantastic opportunities for synergy with TESS as it provides follow-up options if needed. At the time of writing, roughly 25% of the GTO orbits are dedicated to this theme. Explore This theme includes a set of programmes related to planet detection. 
They aim at (1) exploring the architecture of systems hosting small planets in relatively long-period orbits via transit timing variation (TTV); (2) studying in detail dust clumps in edge-on debris disks around young stars; (3) detecting new planetary systems around bright stars, focusing on both multi-planet systems and systems hosting hot Jupiters; (4) enlarging the parameter space of both planets and host stars, with particular emphasis on hot stars; (5) discerning planet migration scenarios using TTV; (6) searching for exo-Trojans. At the time of writing, roughly 15% of the GTO orbits are dedicated to this theme. Characterise atmospheres High-precision broadband visible photometry of exoplanet occultations and phase-curves has proven particularly insightful for the understanding of atmospheric processes, especially when coupled with infrared measurements. This is especially true for hot Jupiters, to understand whether the exoplanet flux observed at visible wavelengths is due to thermal emission leaking into the shorter wavelengths or to reflected light due to high-altitude condensates or Rayleigh scattering. One example is the hot Jupiter Kepler-7b, which shows a prominent occultation and phase-curve signal in the Kepler data but no signal at all in the infrared, as measured by Spitzer at 3.6 and 4.5 microns [12,19]. This points to Kepler-7b harbouring high-altitude clouds or hazes. Such an inference would not have been possible without (broadband) observations at visible wavelengths. Other examples include a recent study [39] which combines phase-curves of multiple hot Jupiters observed by TESS and shows a possible trend between equilibrium temperature, day-to-nightside temperature contrast, and recirculation efficiency. Gaidos et al. [17] even showed that combining TESS and CHEOPS broadband observations could achieve the distinction between thermal emission and reflected light for some targets, due to the differences in these facilities' wavelength-dependent sensitivity. At the time of writing, roughly 25% of the GTO orbits are dedicated to this theme. Search for features Planets close to their host star are tidally distorted into an ellipsoidal shape. Potentially, this effect is detectable in the transit light curve, which would allow the measurement of the planet's Love number, providing further insight into the planet's internal structure [1,18]. The main challenge is to separate ellipsoidal deformation from stellar limb darkening effects. Another potentially visible dynamical effect is tidal dissipation resulting in a secular shrinking of the orbit. Measuring orbital changes would allow us to obtain a direct measure of the quality factor Q of a star [21]. Unexpected features (asymmetry, bumps, etc.) observed in transit light curves could also lead to the detection of moons and rings. At the time of writing, roughly 5% of the GTO orbits are dedicated to this theme. Ancillary science Programmes with relevance to the analysis of exoplanet data, to the interpretation of properties of exoplanetary systems, and particular questions in planetology have been grouped under the theme "Ancillary Science".
It includes stellar physics programs (the study of stellar micro-variability, the derivation of precise limb darkening laws as a function of stellar temperature) affecting the measurement of exoplanet parameters, as well questions such as the frequency of planets around evolved stars, or the detection of rings, jets, dense comas and even atmospheres around Centaurs/TNOs in the Solar System. This science is used as a filler programme using otherwise unattributed orbits. Community access to CHEOPS: the guest observers programme The GO programme is administered by ESA, and is open to the world-wide scientific community, regardless of nationality or country of employment and including members of the CHEOPS Consortium. The majority of the observing time is available through annual announcements of opportunity (AOs) soliciting observing proposals. Evaluation and assessment of the proposals, together with recommendations for the award of observing time, are made by an ESA-appointed CHEOPS Time Allocation Committee that works independently of the CHEOPS Mission Consortium. Proposals are selected based on scientific merit, taking into account the suitability of CHEOPS for the proposed observations: they can cover any science topic that can be shown to be both addressable by the performance capabilities of CHEOPS and compatible with the mission's constraints. In order to allow new targets to be proposed by the Community at any time during the mission, up to 25% of the open time (up to 5% of the CHEOPS science observing time) will be allocated as discretionary time, as part of the Discretionary Programme. This will be overseen by ESA. Proposals for the Discretionary Programme need to meet the merit criteria of those submitted for annual AOs, and in addition comprise a single target of high scientific interest that has either been discovered or declared to be of high scientific interest, since the time of the most recent annual call. The first AO for observations to be made in the first year in orbit came out in March 2019, with the announcement of successful proposals made in July 2019. The second AO is foreseen to come out in the fourth quarter of 2020. The Discretionary Programme opened shortly after the end of the In-orbit Commissioning campaign. Requirements and estimated performances As outlined in Section 2, the main science objectives of CHEOPS rest on the ability of the spacecraft to perform high-precision photometric measurements of specific stars already known to host planets. Since CHEOPS is intrinsically a follow-up mission, it is essential that the spacecraft covers as large a fraction of the sky as possible so as to maximise the number of potential targets available. Provided the transit time is known, CHEOPS has no restriction on the orbital period of the planet it can observe. However, given the difficulty of detecting small mass planets at large distances, it was clear that the number of planets with orbital periods larger than 50 days available for follow-up would be small. From these different considerations, a number of requirements on the necessary performance of CHEOPS have been derived and used in the design of the mission. In particular, the photometric precision and sky coverage were the main drivers of the design. We review both of them below. Photometric precision and stability CHEOPS will observe different types of stars of different magnitudes. However, requirements on photometric precision and stability have been derived for magnitudes ranging from 6 to 12 in the V band. 
While stars brighter than magnitude 6 and fainter than 12 can be observed by CHEOPS, no precision or stability requirements have been imposed on the measurements of these stars. The following are the top-level science requirements related to photometric precision and stability. Bright stars (Science requirement 1.1): CHEOPS shall be able to detect Earth-size planets transiting G5 dwarf stars (stellar radius of 0.9 R⊙) with V-band magnitudes in the range 6 ≤ V ≤ 9 mag. Since the depth of such transits is 100 parts-per-million (ppm), this requires achieving a photometric precision of 20 ppm (goal: 10 ppm) in 6 hours of integration time (at least, a signal-to-noise ratio of 5). This time corresponds to the transit duration of a planet with a revolution period of 50 days. Faint stars (Science requirement 1.2): CHEOPS shall be able to detect Neptune-size planets transiting K-type dwarf stars (stellar radius of 0.7 R⊙) with V-band magnitudes as faint as V = 12 mag (goal: V = 13 mag) with a signal-to-noise ratio of 30. Such transits have depths of 2500 ppm and last for nearly 3 hours, for planets with a revolution period of 13 days. Hence, a photometric precision of 85 ppm is to be obtained in 3 hours of integration time. Photometric stability (Science requirement 1.3): CHEOPS shall maintain its photometric precision for bright and faint stars during a visit (not counting interruptions), with a duration of up to 48 hours (including interruptions). Estimated performances To check whether the requirements above would be met, the overall performance of the instrument has been estimated in a semi-analytical way, considering results obtained from simulations and measurements with the aim of characterising the total noise for a set of benchmarks and comparing them with the science requirements (see [11] and [15]). The sources of the different noise contributors have first been classified as being either astrophysical or instrumental in nature. The total noise budget results from the combination of all these contributors. The identified noise sources (see below) can be assumed to be independent of each other and thus added in quadrature. The total expected noise is then N_tot = sqrt( Σ_i N_i^2 ), where N_i represents a particular noise source as listed below:
1. Astrophysical noise:
- photon noise,
- zodiacal light,
- cosmic rays,
- stray light parasitic illumination.
2. Instrumental noise:
- point spread function in combination with the pointing jitter and the flat field uncertainties,
- read-out noise (CCD plus analog chain random noise),
- dark current shot noise and dark current variation with temperature,
- CCD gain and quantum efficiency stability,
- analog electronics stability,
- timing uncertainty,
- quantisation noise of the front-end electronics.
Note that the calculation of the total noise does not account for stellar intrinsic variations (flicker noise, stellar oscillations, etc.), contamination due to background stars, severe pixel defects, etc. The stellar flicker noise, although not included in this calculation, is discussed in [36]. For a quantitative estimation of the noise budget, the stars are considered to be in the visual magnitude range between V = 6 and V = 12, and spectral types between G0 (T_eff ∼ 5900 K) and M0 (T_eff ∼ 3800 K). Figure 1 shows the results of the expected total noise after 3 hours (blue curves) or 6 hours (red curves) of integration. These estimations can be compared to the actual measurements obtained during in-orbit commissioning (Section 11.2).
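For reference, the signal-to-noise figures quoted in requirements 1.1 and 1.2 follow directly from the stated transit depths and photometric precisions; this only spells out the arithmetic using values taken from the text above:

\[ \mathrm{S/N} = \frac{\text{transit depth}}{\text{photometric precision}}: \qquad \frac{100~\text{ppm}}{20~\text{ppm}} = 5 \ \ \text{(bright stars)}, \qquad \frac{2500~\text{ppm}}{85~\text{ppm}} \approx 29 \ \ \text{(faint stars, consistent with the required ratio of 30)}. \]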
Among the different noise sources, the parasitic Earth stray light, i.e. the sunlight reflected from the Earth's surface, is the one whose contribution can vary the most because it depends upon the pointing direction and on the Sun's position with respect to the Earth at the time of observation. The Earth stray light can vary, in one orbit, from zero (the telescope is flying over a dark region of the Earth's surface) to thousands of photons per pixel per second (the telescope is pointing close to the illuminated Earth limb). Even if the post-processing of the images allows for correction of more than 99.5% of the background contamination, the variation of the stray light flux along one orbit can dramatically increase the total noise. In Fig. 1, the noise floor is shown by the red solid curve labelled SL = 0. This curve was calculated for an M0 star considering 6 hours of uninterrupted observations and no stray light contamination during the whole observation. For bright targets, the instrumental noise dominates the total noise, while for the fainter ones the photon noise is the main contributor. Therefore, the curve SL = 0 represents the minimum noise expected for a CHEOPS standard target (note that for earlier spectral types the photon noise increases). The red dashed curve also accounts for 6 hours of uninterrupted observations but for a G0 star and a maximum stray light variation of 2 photons/pixel/second along the orbit. The impact of adding this amount of stray light variation to the noise budget is that its associated noise starts to dominate the noise budget for stars fainter than magnitude 9. For faint stars to be observable with a high signal-to-noise ratio, the stray light flux variation has to be very low, as shown in the rightmost dashed blue curve (3 hours of integration time, M0 star and a maximum stray light variation of 0.75 photons/pixel/second). In this case, the photon and instrument noises shape the noise budget for almost the whole magnitude range except at the faint end, where it is the stray light variation that dominates, contributing 70 ppm to the total noise for a magnitude 12 star. The leftmost dashed blue curve shows an example (3 hours integration time, G0 star, 10 photons/pixel/second stray light variation) where the stray light contribution to the noise budget dominates over the other noise sources in all cases. Finally, the solid red curve sets the noise ceiling for 6 hours integration time (G0 star, SL = 33 ph/px/s), where all other noise contributors are negligible compared to the stray light noise. It is therefore evident that small variations in the stray light level along the orbit can significantly increase the total noise. Given the impact that stray light variations can have on the noise budget, the stray light contribution should be accounted for when scheduling an observation. Special care has been taken to guarantee that the schedule solver avoids planning an observation when high stray light variations are expected to impact the images (i.e. observations are scheduled to fall within the green region of the diagram). Orbit and sky coverage Ultimately, the number of targets that CHEOPS will be able to follow depends upon the fraction of the sky that can be observed while meeting the photometric precision described above (see Section 3.1).
At the time the mission was proposed, most planets orbiting stars with magnitudes in the V = 6-12 range were detected using radial velocity techniques, the host stars of the huge number of planets discovered by the Kepler spacecraft being significantly fainter. Another source of targets was expected to be ground-based transit detections, especially from the Next Generation Transit Survey (NGTS) located at ESO's Paranal Observatory in the Southern hemisphere. With these considerations in mind, the following are the two top-level requirements defined for sky coverage. Planets discovered by radial velocity measurements: 50% of the whole sky shall be accessible for 50 (goal: 60) cumulative (goal: consecutive) days per year and per target, with time spent on-target and integrating the target flux longer than 50% of the spacecraft orbit duration. Planets discovered by ground-based transit measurements: 25% of the whole sky, with 2/3 in the southern hemisphere, shall be accessible for 13 days (cumulative; goal: 15 days) per year and per target, with time spent on-target and integrating the target flux longer than 80% of the spacecraft orbit duration. To comply with this requirement on sky coverage, a Sun-synchronous, dawn-dusk orbit with a local time at the ascending node of 6 am was chosen, with a possible altitude ranging between 650 and 800 km. Later, the choice converged on a final altitude of 700 km (actual orbit: 7078.848 km mean semi-major axis, altitude 700.713 km, with an orbital period of 98.725 minutes). In fact, this orbit represents a trade-off between meeting the sky coverage requirement, minimising stray light by having the line of sight as much as possible over the dark side of the Earth, and reducing the radiation environment. Finally, the availability of a suitable launch opportunity and compliance with space debris mitigation regulations also played a role in the orbit definition. While the selected orbit also minimises the Earth occultations of the sky, another restriction, associated with the strict control of the temperature of the CCD, further limits the sky coverage. The CCD is cooled passively by radiating to deep space any excess heat. To allow for this, the Sun must at all times remain outside a cone of half-angle 120° centred on the line of sight of the telescope. This requirement further limits the pointing of the telescope, making it impossible to observe close to the ecliptic poles. Finally, during the observation of a target, the flow of usable measurements (images) can be interrupted due to: the target being occulted by the Earth during part of the satellite's orbit; the stray light contamination being too high; or the rate of cosmic ray hits during the passage through the South Atlantic Anomaly being too high. These interruptions will appear as gaps in the light curve. To conserve bandwidth, most of the data taken during these times will not be down-linked to the ground. With all this considered, the sky visible to CHEOPS can be calculated. Figure 2 shows a map of the annual sky coverage. (Fig. 2 caption: Orbits with more than 50% interruptions were discarded for computing this map. December, September and June are marked as a reference at the top of the plot to indicate when a certain region in the sky is observable. Zodiacal constellations are over-plotted in grey for reference around the ecliptic (thick black line), together with a few other useful constellations. In particular, the Galactic plane can be visualised as crossing Sagittarius and following the neck of Cygnus in the Summer sky, while crossing the ecliptic close to Taurus in the Winter sky. Note that the Kepler fields are essentially out of reach for CHEOPS.) Project implementation approach The CHEOPS mission is a partnership between ESA's Science Programme and Switzerland, with important contributions from Austria, Belgium, France, Germany, Hungary, Italy, Portugal, Spain, Sweden, and the United Kingdom. All these ESA member states constitute the CHEOPS Consortium. Their contributions to the mission in terms of hardware or software are estimated to match the cost-capped ESA budget of 50 MEUR foreseen for small missions.
Hence, the total available CHEOPS budget is slightly over 100 MEUR. This amount includes designing, building, launching, and operating the CHEOPS satellite for a nominal period of 3.5 years. In response to the CHEOPS development challenges, driven by the limited (and cost-capped) budget and by the short development time, a number of measures were adopted in setting up the project organisation and in defining the implementation approach (see also Section 5). The key measures, for different project areas, are summarised below. 1. Project organisation: small-size teams were deployed to maintain close coordination as well as to enable a faster decision process. 2. Technology readiness: the mission had to be compatible with the re-use of an existing "off-the-shelf" platform and had to include a payload based on available technologies (Technology Readiness Level of 5-6 or higher on the ISO scale), ruling out complex development activities and focusing on implementation and flight qualification aspects. 3. Realism & stability of requirements: both the accelerated development schedule and the mission cost ceiling required realistic and stable requirements from the very start of the project. The requirements were consolidated by the System Requirements Review (SRR) and, later in the project, significant efforts were made to avoid modifying or adding requirements. 4. Industrial implementation approach: as described in [32], and in contrast to the M- and L-class missions, a single Invitation to Tender was issued for CHEOPS, covering both a parallel competitive study phase (A/B1) and the implementation phase (B2/C/D/E1, including responsibility for the Launch and Early Operation Phase - LEOP - and In-Orbit Commissioning - IOC). The prime contractor was selected shortly after the SRR. 5. Early mission concept definition: the industrial procurement approach described above (single tender with a ceiling price covering also the implementation phase) is only feasible when the mission concept and the space segment requirements and early design are mature enough to rule out major changes in later phases. A concurrent engineering approach (in the form of a phase 0/A study performed at the Concurrent Design Facility at ESTEC) was applied in order to achieve the required mission concept and S/C requirements maturity in less than 6 months from proposal selection. 6. Definition of interfaces: the early definition of clear and stable interfaces (instrument-platform, spacecraft-ground, spacecraft-launcher) was requested from all parties as an essential pre-condition to ensure that the different teams could work in parallel and meet their challenging development schedules. Frequent interface technical meetings have been organised, with ESA, as mission architect, playing an important coordination role. 7. Review cycle: the CHEOPS project has followed the standard ESA review cycle.
However, in order to maintain compatibility with the stringent schedule constraints, the duration of the reviews was compressed compared to larger missions, adapting the number of panels and reviewers to streamline the process. This implementation approach proved effective: it has allowed the selection of the prime contractor and the start of the implementation phase (B2/C/D/E1) in April 2014, after only 1.5 years from mission proposal selection, the completion of the system Critical Design Review (CDR) in May 2016 (2 years later), and the completion of the satellite-level test campaign in December 2018 (about 6 years after the proposal selection). Figure 3 summarises the key project development milestones. It should also be noted that the newly designed CHEOPS payload was delivered to the prime contractor in April 2018, approximately five and a half years after mission selection and less than 4 years after the instrument-level Preliminary Design Review (PDR). The satellite was declared flight-ready in February 2019, less than 3 years after CDR [33]. The successful launch took place on December 18, 2019, nine months after the Qualification & Acceptance Review (QAR). CHEOPS being launched as a secondary passenger, this launch date was entirely driven by the readiness of the prime passenger. Mission design The CHEOPS mission is designed to operate from a dawn-dusk, Sun-synchronous orbit, at an altitude of 700 km. As mentioned above, this orbit was selected during the assessment phase to maximise sky accessibility, to minimise stray light, and to reduce the radiation environment, while ensuring the largest possible number of shared launch opportunities as well as compatibility with existing platforms qualified for Low Earth Orbit (LEO). The nominal mission design lifetime is 3.5 years, with a possible mission extension to 5 years. No consumables are used for nominal operations. Thus, the dominant factors limiting the lifetime of the mission are considered to be linked to radiation damage to the detector as well as overall component degradation due to exposure to cosmic radiation and ageing in general. The platform includes a compact, monopropellant propulsion module, required to perform an initial launcher dispersion manoeuvre, to enable collision avoidance manoeuvres, and to comply with the space debris mitigation regulations by re-entering the S/C within 25 years from the end of operations. The spacecraft design is described in the following section. The instrument-to-platform interfaces are based on isostatic mounting of the instrument Baffle Cover Assembly (BCA) and of the Optical Telescope Assembly (OTA), both accommodated on the top panel of the platform; the instrument is thermally decoupled from the rest of the spacecraft (with the exception of two instrument electronic units installed inside the platform); the optical heads of two star trackers are mounted directly on the OTA structure (to minimise misalignment effects induced by thermo-elastic distortion). The spacecraft configuration was driven by the installation of the instrument on top of the platform, behind a fixed Sun-shield also supporting the Solar Arrays, and with compact dimensions, so as to fit within different launch vehicle adapters (in particular under ASAP-S on Soyuz and under VESPA on VEGA) and to guarantee the capability to point the line of sight within a half-cone of 60 deg centred around the anti-Sun direction. It should be noted that the Soyuz launcher was selected after the satellite CDR.
Spacecraft design The satellite design is based on the use of the AS-250 platform, an Airbus Defence & Space product line designed for small and medium-size missions operating in LEO. Airbus Spain is the prime contractor [2]. The spacecraft configuration (see Fig. 4) is characterised by a compact platform body, with a hexagonal-prismatic shape and body-mounted solar arrays, which also maintain the instrument and its radiators in the shade for all nominal pointing directions within a half-cone of 60 deg centred around the anti-Sun direction. The configuration has been optimised to be compatible with both the ASAP-S and VESPA adapters for a shared launch respectively on Soyuz or VEGA: the spacecraft is just above 1.5 m tall and has a footprint remaining within a circle of 1.6 m in radius. The total wet mass is close to 275 kg. Such dimensions and mass are in line with the key requirement of maintaining compatibility with different small and medium-size launch vehicles as an auxiliary or co-passenger. In fact, the spacecraft was designed to be compatible with launch environment requirements enveloping different launchers. This approach proved pivotal for the CHEOPS project, considering that the launcher selection was finalised in 2017 (Soyuz, under ASAP-S). The hexagonal-prismatic platform features vertical beams, corner joints, and lateral panels which can be opened to facilitate the equipment integration (Fig. 4, right side). The three lateral panels in the anti-Sun direction are equipped with radiators procured from Iberespacio (ES). The key avionics units (On-Board Computer, Remote Interface Unit, Power Control & Distribution Unit) and the On-Board Control Software (adapted to the mission's specific needs) are inherited from the AS-250 product line. Part of the Attitude and Orbit Control System (AOCS) equipment also belongs to the AS-250 line, with the exception of the reaction wheels (provided by MSCI, Canada), the absence of a GPS receiver and of a Coarse Sun Sensor, and the down-scaling of the magneto-torquers (Zarm, DE). The two Hydra star trackers, provided by Sodern (FR), monitor the position of stars in the sky to determine the orientation of the spacecraft; to minimise misalignment with respect to the instrument line of sight induced by thermo-elastic distortion, their optical heads have been mounted directly on the instrument optical telescope assembly. Furthermore, the AOCS uses the instrument as a fine guidance sensor. The measured difference between the real centroid position of the target star and the expected position is fed back to the AOCS to improve the pointing stability, in particular for longer observations. The requirement on the pointing precision in this mode has been set to 4 arcsec RMS over a 48-hour observing period. In practice, during commissioning a pointing accuracy of the order of 1 arcsec could be achieved. During observations the instrument line of sight is aligned to the apparent position of the targeted star. Rotation of the platform around the line of sight was implemented so that the instrument radiators, needed to maintain the thermal balance of the instrument, are always directed towards deep space, thereby receiving as little infrared and reflected light from the Earth as possible. Finally, for safety reasons and to minimise stray light, the platform is only allowed to point the instrument in directions lying within a half-cone of 60 deg centred around the anti-Sun pointing direction.
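As a minimal sketch of the anti-Sun pointing constraint described in this section (equivalently, the requirement that the Sun stays more than 120° away from the line of sight), a candidate pointing can be checked with simple vector geometry. The function below is illustrative only: it works on unit vectors and ignores the other constraints (Earth occultations, stray light, South Atlantic Anomaly) handled by the actual mission planning.

```python
import numpy as np

def pointing_allowed(target_unit_vec, sun_unit_vec, exclusion_half_angle_deg=120.0):
    """Return True if the Sun lies outside a cone of the given half-angle
    centred on the telescope line of sight (pointing at the target)."""
    cos_angle = np.clip(np.dot(target_unit_vec, sun_unit_vec), -1.0, 1.0)
    sun_angle_deg = np.degrees(np.arccos(cos_angle))
    return sun_angle_deg > exclusion_half_angle_deg

# Example: a target exactly at the anti-Sun direction (Sun angle 180 deg) is
# observable, while a target 90 deg away from the Sun is not.
sun = np.array([1.0, 0.0, 0.0])
print(pointing_allowed(-sun, sun))                       # True
print(pointing_allowed(np.array([0.0, 1.0, 0.0]), sun))  # False
```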
The electrical power is provided by three solar-array panels: two lateral panels and a central one. The total geometric area is about 2.5 square metres. The Photo-Voltaic Assembly (Leonardo, IT) is based on 3G30 solar cells, which are installed on 3 sandwich panels with a Carbon Fibre Reinforced Plastic skin; the assembly is sized for an average power of just below 200 W (in nominal mode) and takes into account ageing and degradation effects over the mission lifetime. The CHEOPS spacecraft includes a compact, mono-propellant Propulsion Module (PM) from ArianeGroup (DE), inherited from the Myriade Evolution design and including a hydrazine tank with a capacity of 30 litres from Rafael (IS), 4 small 1 N thrusters, 1 pressure transducer and 2 pyro-valves (one of them for passivation at end of life). The PM has been assembled and tested as a separate sub-system before integration on the satellite. Its mechanical interface, directly on the Satellite Interface Ring, has been designed to ensure maximum modal decoupling between the PM and the rest of the S/C. The propellant tank has been sized to provide a total delta-V in excess of the 110 m/s required to perform: a) the launcher dispersion correction manoeuvre, b) collision avoidance manoeuvres, and c) the final de-orbiting at the end of the operational phase. The telecommunication sub-system is based on a redundant S-band transceiver (COM DEV, UK) [2,32], with two sets of RX and TX antennas provided by RYMSA (ES) and located respectively at the +Z and -Z ends of the S/C (see Fig. 4). The CHEOPS payload The CHEOPS payload consists of a single instrument: a high-performance photometer measuring light variations of target stars with ultra-high precision [4]. The photometer operates in the visible and near-infrared range (0.4 μm to 1.1 μm) using a back-illuminated CCD detector run in Advanced Inverted Mode Operation (AIMO). The instrument is composed of four different units. The telescope together with the baffle is mounted on top of the platform, while the Sensor Electronic Module (SEM) and the Back End Electronics (BEE) are hosted inside the platform body. The telescope design is based on an on-axis Ritchey-Chrétien optical configuration with a 320 mm diameter primary mirror. Considering the central obscuration of the primary mirror, the effective collecting area of the system is 76,793 mm². Optical telescope assembly (OTA) The OTA hosts the optics as well as the detector and the read-out electronics of the instrument. The optical configuration consists of a Ritchey-Chrétien telescope and a Back End Optics (BEO) to re-image the telescope focal plane on the detector and to provide an intermediate pupil where a pupil mask is placed for stray-light rejection. In order to keep the baffling system in front of the telescope long enough, a very fast (almost F/1) primary mirror feeds the secondary, resulting in an intermediate F/5 telescope focal plane. The BEO slows the final beam to about F/8.38, corresponding to an effective focal length of 2681 mm at the nominal wavelength of 750 nm. Because of the short focal length of the primary mirror and the resulting high sensitivity to the secondary mirror alignment, special care has been devoted to the procedures for the Assembly, Integration and Verification (AIV) of the Ritchey-Chrétien configuration. This has been accomplished through the realisation of a demonstration model that is optically equivalent to the flight one.
The AIV procedures have been tested and refined on the model [7,8] and later applied to the actual flight telescope [6]. In the development phase, a holographic device able to smear the light of the observed star uniformly over a square region was considered [26], but it was discarded because it was not compatible with the timescale of the project and with the technology readiness level of such an approach. The 0.32° field of view translates into a plate scale of 1 arcsec/px on the detector (13 μm pixels, 1k × 1k, AIMO). Figure 5 illustrates a CAD cut view of the OTA/BCA assembly, including the telescope (TEL) group with the primary and secondary mirrors. Figure 6 shows the flight telescope with its mirror installed (right panel) and the fully assembled and integrated instrument just prior to its delivery in the CHEOPS Laboratory at the University of Bern (left panel). Finally, Fig. 7 shows the instrument after integration on the platform at Airbus Defence and Space in Madrid (Fig. 7 caption: picture of the payload after integration on the platform with the star tracker optical heads (red covers); image credit Airbus DS Spain). The focal plane module hosts the CCD and the read-out analogue and digital electronics. The detector is a single frame-transfer, back-side-illuminated E2V AIMO CCD47-20. The image section of the CCD has an area of 1024 × 1024 pixels, while a full-frame image including the covered margins is 1076 columns and 1033 rows, as represented in Fig. 8 (schematic view of the CCD elements). The side and top margins of the CCD consist of 16 covered columns of pixels (called dark pixels) and 8 columns of "blank" pixels (they are not real pixels but electronic registers) at each side. On the side opposite to the output amplifier, there are 4 virtual columns of overscan pixels (left side margin in the figure). These pixels are not physical pixels of the CCD but are generated by 4 additional shift-read cycles. Due to the restricted telemetry capabilities of CHEOPS, it is not possible to download all full-frame images. Instead, window images of 200 × 200 pixels centred on the target star, plus the corresponding covered margins, are cropped from the full-frame image. Note that, to further save bandwidth, the on-board software will crop the corners of the window images so that circular window images (of diameter 200 pixels) are actually sent to the ground. Figure 8 shows a schematic of the CCD, the full-frame image, and the window image. The nominal operating temperature of the CCD was initially set at 233 K, and its temperature is stabilised to within 10 mK while the read-out electronics is stabilised to within 50 mK. During in-orbit commissioning it was decided to lower this operating temperature to 228 K to reduce the number of hot pixels (see Section 11). The excess heat generated by the focal plane module is radiated to deep space using two passive radiators located on top of the OTA. The requirement for temperature stabilisation comes from the need for a very stable and low-noise CCD read-out. Low operating temperatures minimise the read-out and dark noise in the system, while temperature stability minimises dark-noise variation and especially the CCD gain variability, changes in quantum efficiency, as well as the stability of the analogue electronics overall (e.g. CCD bias voltage stability). To improve photometric performance by mitigating the effect of pointing jitter, avoiding saturation for bright targets, and reducing the impact of imperfect flat-field correction, CHEOPS has been purposely defocused.
The size of the point-spread function (PSF), that is, the radius of the circle enclosing 90% of the energy received on the CCD from the target star, was estimated during the design phase to lie between 12 px and 15 px. The actual value as measured during in-orbit commissioning is 16 px (see Section 11.1), a value reasonably close to the expected range. Note that de-focusing the telescope also has some drawbacks. During construction, the divergence of the beam once out of focus made aligning the focal plane to the desired PSF size more challenging. Precise photometry is also more difficult to obtain in very crowded fields and for very faint stars, for which the light is spread over many pixels. Baffle and cover assembly (BCA) The BCA is key to the stray light mitigation strategy adopted for the payload. The baffle design has been adapted from the CoRoT baffle but scaled down in size and adjusted for the on-axis design of the CHEOPS telescope. The baffle consists of a cylindrical aluminium tube with a number of circular vanes and a black coating for stray-light rejection. The baffle is terminated by a cover assembly which protects the optics from contamination during assembly, integration, and tests as well as during the launch and early operations phase. The cover release is a one-shot mechanism based on a spring-loaded hinge and a launch lock mechanism. The launch lock makes use of a Frangibolt actuator design. Figure 6 shows an image of the instrument: the fully assembled BCA/OTA and a view of the telescope with the cover open after mirror installation. On the left side of the BCA the hinge can be seen (see also Fig. 7); the Frangibolt is enclosed on the right-hand side. The BCA and OTA are separately mechanically mounted on the top deck of the platform. The BCA-to-OTA interface is sealed by a Vetronite seal in order to close off the complete optical cavity. The baffling system of the instrument, including the OTA baffling, is designed to reject stray light impinging at an angle greater than 35° with respect to the optical axis by several orders of magnitude. The level of stray light suppression required of the baffling system is in the range of 10⁻¹⁰ to 10⁻¹², depending upon the incidence angle. To meet this stray light rejection requirement, cleanliness is of key importance, as dust will scatter light into the optical path. Therefore all the assembly, integration, and tests were performed in a class 100 clean room. The target optics cleanliness in orbit is 300 ppm. We note that the desired stray light rejection factor was evaluated by simulations only, as measurements were not feasible given the magnitude of the rejection factor. Sensor electronics module (SEM) The SEM is located inside the platform and controls the focal plane module. It is physically decoupled from the FPM to minimise the heat load on the telescope assembly. The SEM hosts the Sensor Control Unit and a Power Conditioning Unit. It interfaces with the Back End Electronics (BEE) and is commanded by it. The functions and characteristics of the SEM are summarised in [5]. For a high-precision photometer, the CCD gain (the ratio between the number of electrons per pixel and the number of counts per pixel) needs to remain extremely stable.
As part of the flight-model CCD selection process and, later, the instrument calibration, high-precision colour-dependent flat fields and the characteristics of the detection chain (gain, full-well capacity, read-out noise) have been measured in the laboratory at the Universities of Geneva and Bern [9,38]. The instrument distortion has also been characterised, and the variation of the point-spread function across the field explored [9]. Figure 9 shows the CCD mounted inside the cryostat used for the flight unit characterisation. The CCD gain sensitivity depends upon temperature and operating voltage. Both dependencies were studied and characterised. In the case of temperature, the sensitivity is of the order of 1 ppm/mK. The dependency on the operating voltages is slightly more complex and depends on which voltage is considered; generally, these contributions are between 5 and 40 ppm/mV (see [11] for details). This led to the need for ensuring very stable temperatures and voltages for the CCD during measurements, which turned out to be a major challenge for the instrument. Predictions of the CCD gain sensitivity contribution to the overall noise budget (together with the temperature dependence of the quantum efficiency), based on calibration measurements, show that the contribution is of the order of 3 to 10 ppm. For details on the instrument calibration, the reader is referred to [11]. In summary, the SEM/FPM represents the camera of the CHEOPS instrument, which is controlled by the BEE as a higher-level computer. Back end electronics (BEE) The BEE is the main computer of the instrument. On the one hand, it provides power to the entire payload and data interfaces to the platform on-board computer; on the other hand, it interfaces with the SEM. Similarly to the SEM, it contains a Data Processing Unit (DPU) and a Power Supply and Distribution Unit that provides conditioned power for itself and the SEM. The DPU hardware is based on the GR712, which contains two LEON3 processors and provides the SpaceWire interface to the SEM and MIL-1553 interfaces towards the spacecraft. The DPU mass memory from 3D-Plus provides a FLASH memory in a 4 Gbit × 8 bit configuration. For the effective operation of the processor, four components are used to provide 32-bit access and error detection and correction. The BEE hosts the main instrument software, the In-Flight Software (IFSW), which provides high-level control of the SEM and FPM. Additionally, its main functions are to perform the data handling, the centroiding of the stellar image used by the AOCS, and the data compression. Flight software and data processing This section describes the basic observation sequence and IFSW functionality. The data processing is generally performed by the IFSW running on the BEE, but some processing (e.g. windowing) is already performed by the SEM. Nominal observations A normal science observation by CHEOPS will always use the same automated on-board procedure called "nominal science". Once the instrument is in the appropriate mode ("pre-science"), for which the CCD is switched on and stabilised to the operational temperature together with the front-end electronics, the main instrument computer parameters are updated for the specific observation, and the S/C has slewed to the target direction, the following steps will be performed: 1.
Target acquisition The instrument acquires full-frame images, identifies all the stars in the region of interest (ROI), and performs pattern matching against the uploaded star map pattern using the angle method. A second, magnitude-based, algorithm is used for bright stars. Once located, the measured offset between the target star and the line of sight of the telescope is communicated to the AOCS system and a pointing correction is made. This process continues until the location offset is smaller than the predefined target acquisition threshold or until a maximum number of iterations has been reached. Once the target is successfully acquired, the IFSW makes an autonomous transition to the next step. For more details on the target acquisition see [24,25]. 2. Calibration frame 1 The telescope takes one full-frame image 3 . This image is sent to the ground to characterise the stellar field. 3. Science observation The instrument observation mode is changed to "window mode" and images are taken with a cadence equal to the exposure time (except for exposure times shorter than 1.05 s). For each image, the instrument computes the centroid in a configurable ROI at the centre of the sub-window (default 51×51 px) and communicates the position offset to the AOCS system. Note that after step 1 the star should be located very close to the centre of the sub-window. After the requested number of window images has been acquired, the instrument makes the transition back to "pre-science" mode. 4 On-board science data processing CHEOPS observes one star at a time, with the intention of downloading all the acquired raw images without on-board data processing. However, the exposure time determines the cadence at which subsequent images are taken and therefore ultimately sets the total amount of data to be downloaded. This total amount may occasionally exceed the daily available data down-link rate of 1.2 Gb/day, making on-board data processing unavoidable. To reduce the total amount of data to be down-linked to the ground, we crop the full-frame images and generate circular "window images" of 200 px diameter centred on the target star, as shown in Fig. 8. For images taken with exposure times of roughly 30 seconds or longer, no further data reduction is necessary and all the raw "window images" can be downloaded at a cadence equal to the exposure time. For images taken with exposure times shorter than 30 seconds, we reduce the total data amount by stacking on board all individual images acquired within 60 seconds, co-adding them pixel-by-pixel. Only the stacked images are downloaded. For example, if the exposure time is 15 seconds, four images will be stacked on board and the resulting stacked image down-linked at a cadence of one stacked image every 60 seconds. To mitigate the loss of information associated with this process, small "imagettes" of radius 25 px, centred on the target star, are cropped from the individual "window images" before stacking and are down-linked individually. This is done to facilitate, for example, the correction of cosmic ray hits inside the PSF. Figure 10 sketches the on-board handling of the science data: panel A represents the full-frame image and the elements that are cropped out of it; panel B shows the circular window image, the margins' structure (see Section 6.1) and the "imagette" that is extracted in case stacking is necessary; and panel C shows the data structure that is down-linked in case of stacking.
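A minimal sketch of the cadence and stacking rules just described, assuming only the 30 s threshold and the 60 s stacking period quoted above; the real on-board bookkeeping (margin handling, imagette extraction, compression) is simplified away here.

```python
import numpy as np

STACK_THRESHOLD_S = 30.0   # exposures shorter than this are stacked on board
STACK_PERIOD_S = 60.0      # frames acquired within 60 s are co-added pixel-by-pixel
WINDOW_DIAMETER_PX = 200   # circular window image cropped around the target
IMAGETTE_RADIUS_PX = 25    # small cut-outs kept at full cadence when stacking

def plan_downlink(exposure_time_s):
    """Number of co-added frames per down-linked image and the resulting
    cadence of the down-linked (stacked) window images."""
    if exposure_time_s >= STACK_THRESHOLD_S:
        return 1, exposure_time_s                       # raw images, no stacking
    n_stacked = int(STACK_PERIOD_S // exposure_time_s)  # e.g. 4 frames for 15 s
    return n_stacked, n_stacked * exposure_time_s

def stack(frames):
    """Co-add a list of window images pixel by pixel."""
    return np.sum(frames, axis=0)

print(plan_downlink(30.0))  # (1, 30.0) -> every raw window image down-linked
print(plan_downlink(15.0))  # (4, 60.0) -> one stacked image per minute, plus imagettes
```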
Note that if no stacking is performed, "imagettes" are not extracted as they are not needed. The sequence of steps performed on board on the science images can be summarised as follows. If the exposure time is shorter than 20-30 s (the final numbers are still being evaluated), the stacking of images is required and the following steps are performed: 2a. the pixels within a small region centred on the target star are extracted; these "imagettes" are preserved at full cadence (i.e. equal to the exposure time); 2b. the windowed images and corresponding margins are co-added pixel-by-pixel. Finally, the following steps are performed in all cases. 3. The (stacked) image, the (stacked) side margins, the individual "imagettes" (if stacking was performed) plus additional housekeeping information (temperatures, voltages, etc.) are collected in a file and compressed using arithmetic compression. This file is called the "compression entity". The compression achieved depends upon the entropy of the image, which is a function of numerous factors such as, for example, the number of stars in the background, the exposure time, the size of the image, etc. In practice, the compression factors actually measured range between 2.5 and 3.3. 4. This compression entity is sent to the S/C mass memory via PUS service 13. The detailed treatment of all side margins, imagettes, etc. is highly configurable and can be adjusted to match the available bandwidth and data reduction needs. (Fig. 10 caption: This figure illustrates the on-board data processing. Panel A shows a full-frame image (1024×1024 pixels plus CCD margins). Due to the limited bandwidth capabilities, all full-frame images cannot be down-linked to the ground. Therefore, circular window images with the target at their centre are cropped. The same is done to the corresponding section of the margins (details in panel B). If the exposure time of the image is longer than 30 seconds, all window images and margins are sent to the ground without any further on-board manipulation. However, if the exposure time is shorter than 30 seconds, images have to be stacked on board. In that case, small "imagettes" containing only the PSF of the target star are also cropped, but they are down-linked to the ground without stacking. For example, as illustrated in panel C, if the exposure time is 15 seconds, one stacked window image resulting from co-adding four images will be down-linked, together with the corresponding stacked margins and the four "imagettes".) Model philosophy and verification approach The CHEOPS verification approach was defined to remain compatible with the programmatic boundaries applicable to the project. The main drivers for the definition of the CHEOPS verification approach were the schedule constraint (the spacecraft had to be ready for launch by the end of 2018) and the need to allow for independent verification of the instrument and the platform to the largest possible extent. Also, a complete qualification and acceptance cycle was required for the instrument, taking into account the new design and limited heritage at unit level, while the platform was benefiting from a considerable level of flight heritage. The platform equipment underwent an early confirmation of qualification against the CHEOPS environment or the definition of any required punctual delta-qualification, while the platform functional test specification and test procedures are based on the AS-250 product line, with adaptations for CHEOPS specifics.
The instrument functional verification at spacecraft level was focused on the verification of the interfaces with the platform and on functional testing. Instrument optical performances were not verified at satellite level, but regular health checks were performed during the satellite test campaign. Accounting for the drivers defined above, the model philosophy was defined accordingly. This approach enabled an independent verification of the instrument and the platform as stand-alone elements, complemented by regular interface tests. The instrument models set the timing for the satellite verification, while the platform models were ready with enough margin to maintain the tight schedule. The availability of the instrument EMs and EQMs (electronic units) proved essential to anticipate integration activities and to test interfaces ahead of the availability of flight units. To support the verification of the radio-frequency compatibility between the spacecraft and the CHEOPS ground segment, a representative radio frequency suitcase, including the Honeywell S-band Transceiver EM, was used to perform Radio-Frequency Compatibility Tests with all foreseen ground stations. The ground segment The CHEOPS Ground Segment is composed of the Mission Operation Centre (MOC) located at Torrejón de Ardoz (INTA, ES), the Science Operation Centre (SOC) at the University of Geneva (CH), and two Ground Stations located respectively at Torrejón (main station) and Villafranca (backup station) (ES). The MOC and SOC are national contributions from the Mission Consortium (respectively from Spain and from Switzerland). All operations, including the Launch and Early Orbit Phase (LEOP) and In-Orbit Commissioning (IOC), have been executed from the MOC. ESA has provided the Mission Control System and the spacecraft simulator as part of the satellite procurement contract and also provides Collision Avoidance services through the European Space Operations Centre (ESOC) located in Darmstadt (DE). The Mission Planning System, driven by the essentially time-critical science observation requirements, has been provided by the Consortium and is part of the SOC. As for the other elements of the mission, the operational concept of CHEOPS reflects the fast-track and low-cost nature of the mission, following two basic principles: 1) maximum reuse of existing infrastructure and operational tools, and 2) high levels of both on-board autonomy and automation in the operations, to minimise the required manpower. The responsibility for LEOP and in-orbit commissioning lay with ESA as the mission architect. During LEOP, the Kiruna and Troll ground stations were made available by ESA to complement the Torrejón and Villafranca stations, providing additional passes and enabling an early acquisition of the spacecraft after the separation from the launcher. The nominal duration of the LEOP was 5 days, concluding when the spacecraft was safely in the nominal operational orbit and ready for starting payload operations. A 2.5-month IOC phase followed the launch, with nominal science operations starting on April 18, 2020. After commissioning, the responsibility for the mission operations was handed over to the Consortium, with an ESA representative following the operational aspects. ESA will remain directly involved in critical decisions, such as, for example, collision avoidance manoeuvres and the de-orbiting of the CHEOPS satellite at the end of the mission.
Mission operations centre (MOC) The MOC is hosted and operated by the Instituto Nacional de Técnica Aeroespacial (INTA), located in Torrejón de Ardoz near Madrid, Spain (Fig. 11). The MOC responsibilities for the CHEOPS mission include mission operations and spacecraft disposal at the end of the mission, including: -Mission control planning rules; -Implementation of the activity plan; -Spacecraft and instrument control and monitoring; -Orbit and attitude determination and control; -Management of failures and anomalies; -Spacecraft deactivation (decommissioning phase). The MOC test and validation philosophy reflects the small-mission character of the mission by keeping the level of documentation minimal and by demonstrating capabilities through simulations, tests, etc. For the same reasons, the MOC has been automated to a large extent to limit the operations costs. Science operations centre (SOC) The Science Operations Centre (SOC) is under the responsibility of the University of Geneva and is physically located at the Department of Astronomy of the University. Seven additional institutes or industrial partners from the CHEOPS Mission Consortium countries have made important contributions to the SOC. The ultimate objective of the CHEOPS SOC is to enable the best possible science with the CHEOPS satellite by delivering the following elements: -Mission planning after spacecraft commissioning; -Science operations visibility, reporting, and knowledge management; -Support to the CHEOPS science team and, in exceptional cases, indirect support to guest observers; -Science operations system study, design, and requirements; -Mission planning tool development; -Quick-look analysis to monitor the performance of the instrument; -Development and maintenance of the data reduction pipeline to process raw data and deliver science-ready data products (images and light curves); -Mission data archiving; -Science data distribution. The CHEOPS operational concept is based on a weekly cycle where the SOC generates the sequence of activities to be executed on board and the MOC uplinks at once the corresponding commands to the spacecraft for a week of autonomous in-flight operations. Typically, around 30 individual targets are observed weekly for durations ranging from one hour to one week (the median visit duration is about 8 hours). From a mission planning perspective, the key element resides in the stringent time-critical nature of most observations that have to be scheduled (transits and occultations). To maximise the scientific return of the mission, the SOC uses a genetic algorithm to optimise the short- and long-term planning of observations. The goodness of these schedules is evaluated with a merit function that accounts primarily for the scientific priority of the planned observations and the GTO/GO time balance, and to a lesser degree for the filling factor, the on-source time, and the completion rate of individual observation requests. An overall filling factor in excess of 95% is readily obtained with this approach. The Data Reduction Pipeline [20] is run automatically once triggered by the processing framework. There is no interaction with external agents and there is no interactive configuration of the pipeline.
The complete processing can be separated into 3 main steps: 1) the calibration module, which corrects the instrumental response; 2) the correction module, in charge of correcting environmental effects; and 3) the photometry module, which transforms the resulting calibrated and corrected images into a calibrated flux time series, or light curve (see [20]). Each of these modules consists of successive processing steps which are run sequentially, as the output of one step is used as an input to the next one. In addition to the reduced light curves, their associated contamination curve, and the calibrated and corrected images, the pipeline generates a visit processing report. This report allows the user to get direct insight into the performance of each step of the data reduction. A Project Science Office (PSO) has been established to serve as an interface between the Science Team and the SOC to, for example, verify and adapt to the proper format the target list and other information passed to the SOC. The PSO also provides the instrument reference files derived from the observations of the Monitoring and Characterisation Programme. The PSO is also the interface to ESA for the Announcements of Opportunity, which are under the responsibility of ESA, and provides support to the AOs. Finally, an Instrument Team tracks the contributions to the noise budget and establishes the Instrument Operations Plan. Derived from the data taken during the on-ground calibration activities, the Instrument Team provides a set of instrument reference files that are mainly used for the data reduction inside the SOC. An updated set of instrument reference files is provided at the end of the IOC, once the initial instrument performance has been assessed in flight. The operational role of the Instrument Team is to resolve instrument anomalies and to implement changes to the on-board software resulting from the follow-up activities or from requests from the SOC or the PSO. Launch and early orbit phase (LEOP) At 8:54:20 UT on December 18, 2019, CHEOPS was successfully launched as a secondary passenger on a Soyuz-Fregat rocket from Kourou in French Guiana (see Fig. 12). The primary payload in terms of mass was the first of the second generation of Cosmo-SkyMed dual-use radar reconnaissance satellites for the Italian government. The 2.2-metric-ton satellite, manufactured by Thales Alenia Space for the Italian Space Agency, separated from the Fregat upper stage 23 minutes after launch. CHEOPS, with its wet mass of 273 kg, entered its intended near-polar dusk-dawn Sun-synchronous orbit following separation 2 hours and 24 minutes after launch. Finally, three additional CubeSats were also on board as auxiliary payloads: EyeSat, a 3U CubeSat (5 kg) student satellite, and ANGELS, a 30-kilogram technology miniaturisation test satellite (both launched for CNES), as well as ESA's OPS-SAT, which will test and validate new techniques in mission control and on-board satellite systems. The execution of LEOP and IOC operations, under ESA responsibility as Mission Architect, was delegated to the platform contractor Airbus Defence & Space Spain. All activities have been performed from the mission operation centre at INTA near Madrid (Spain), supported by the Troll (Antarctica) and Kiruna (Sweden) ground stations in addition to the stations of Torrejón and Villafranca in Spain. At 11:43 UT on day 1, the first telemetry acquisition by the Troll Ground Station in Antarctica arrived at the mission operation centre as planned.
This started 4.5 days of activities aimed at ensuring that the satellite could be put into safe mode to await the start of in-orbit commissioning activities in early January. As a first measure to prevent condensation from early out-gassing, the temperature of the focal plane assembly was increased above the nominal operation value. Two-way Doppler measurements for orbit determination were started, showing that the error in orbit injection was less than 300 m in the semi-major axis. Star trackers were started and configured and overall power convergence was achieved. On day 2, the normal mode of the attitude and orbit control system was achieved and the main survival-redundant equipment checks were performed, while the pyro-valves for orbit control manoeuvres were opened. On day 3, all ground stations and satellite equipment were declared healthy, with prime units in use. The calibration orbit control manoeuvre was successfully executed. The automation system for the INTA ground stations (Torrejón and Villafranca) and for the CHEOPS operation centre was tested. On day 4, the satellite was in its final operational orbit after 3 orbit control manoeuvres (one for calibration and two for correction). The difference in semi-major axis between the targeted and achieved orbit is less than 30 m. No additional manoeuvres for orbit maintenance will be needed during CHEOPS's lifetime. Finally, on day 5 all the nominal units as well as all the redundant units critical for the survival of the satellite were declared working and healthy. A total of 198 g of propellant was used during this phase. LEOP was completed on 22 December without recording major anomalies and with all subsystems using the nominal equipment chain. The spacecraft was declared ready for in-orbit commissioning and put in safe mode on December 22, 2019, with only basic spacecraft maintenance taking place until the start of the activities at the mission operation centre on January 7, 2020. In-orbit commissioning (IOC) The IOC activities started at the MOC on January 7, 2020. Teams from ESA, Airbus, INTA, and the Universities of Bern and Geneva were assembled at the MOC during the first few weeks of these activities, which were divided into four phases: 1. IOC-A: Instrument switch-on (January 7 - January 29, 2020) This first phase was dedicated to switching on both the nominal and redundant chains of the instrument. Further activities were dedicated to confirming the thermal control performance of the instrument. Furthermore, the calibration of the dark current and the CCD pinning curves were performed while the cover of the instrument was still closed. This phase was completed on January 29 with the successful opening of the cover. 2. IOC-B: Monitoring & Characterisation and AOCS performance verification (January 30 - February 26, 2020) After opening the cover, the first images were taken without the payload in the tracking loop, and the offset between the telescope's line of sight and the star trackers was measured and corrected. Detailed calibration of the star trackers' optical heads with respect to the spacecraft's reference frame was performed based on instrument data, but without enabling payload measurements to enter the AOCS control loop, in order to allow for accurate pointing even without instrument measurements. Subsequently, observations with the payload in the loop for tracking the target star were performed. This has led to a full characterisation of the pointing capabilities of the system.
Several additional detailed characterisations were performed during this phase, aimed at defining the point-spread function (see Section 11.1), the extent of stray light as a function of the angle of incidence, the amount of dark current, the number of hot pixels, and the gain stability, as well as updating the location of the South Atlantic Anomaly. While overall the IOC activities went rather smoothly, not surprisingly there were many issues that arose which required analysis and additional measurements. As an example, one can mention the surprising measurement of significant stray light on the detector even though the cover of the telescope was still closed. In this configuration, only dark images were expected, as the optical cavity was supposed to be completely shielded from the outside. After considerable investigation and many additional measurements with different telescope pointings, the root cause could be identified. The small hole in the cover that was used to illuminate the CCD one last time before shipping the satellite to Kourou, to verify that the detector was still working, was leaking even though it had been taped closed. It was concluded that either the tape fell off during launch or that it was more transparent than expected in the infrared. As no leaks from anywhere else in the system could be detected, the problem was no longer relevant after the opening of the cover. A more difficult issue, which is still present, was the evidence of a much larger number of hot pixels than expected based on the CoRoT data (of order 70% more). Also, most of these hot pixels being telegraphic, their correction is somewhat more difficult. To reduce their number, the operating temperature of the CCD was reduced in several tests from 233 K to 228 K and finally to 223 K. Because the latter, lower temperature might create temperature stability issues in some extreme pointings, and because all calibrations had been carried out at 233 K, it was decided to operate the CCD at the intermediate temperature of 228 K, which resulted in a decrease in the number of hot pixels by a factor of 3. Point-spread function (PSF) As mentioned in Section 6.1, the telescope has been deliberately defocused to mitigate the effect of the jitter in the satellite pointing and the saturation of pixels for bright stars. A key activity during IOC-B was the exact measurement of the actual size and shape of the PSF, defined as the region where 90% of the total energy received from the star in the form of light is deposited. Detailed knowledge of the PSF is important to ensure the required photometric performance. Figure 13 shows the actual CHEOPS PSF as measured during IOC-B. The measured PSF shows an expected trigonal deformation, which originates from the strain on the mirror stemming from its three-point fixation mechanism. Such a mechanism was used to ensure that the telescope could withstand high loads, as the launch vehicle was not known at the time of the design. (Fig. 13 caption: CHEOPS has a de-focused PSF where 90% of the total energy is inside a radius of 16 pixels. The PSF flux distribution in white light is shown as measured during the in-orbit commissioning.) To some extent, engineering choices have been driven by the stability of the final PSF rather than by its symmetry. Being now measured in the absence of gravity, the PSF appears, as expected, significantly more symmetric than the one measured during calibration in the laboratory.
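The PSF size used throughout this section is the radius enclosing 90% of the energy received from the star. A minimal way to measure such a radius from an image is a radial growth curve, sketched below on a synthetic, ring-like (defocused) PSF; background subtraction and detector effects, which the real analysis has to handle, are ignored here.

```python
import numpy as np

def encircled_energy_radius(image, centre_xy, fraction=0.9):
    """Smallest radius (in pixels) of a circle around centre_xy that
    contains the requested fraction of the total flux in the image."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - centre_xy[0], yy - centre_xy[1])
    order = np.argsort(r.ravel())
    cumulative = np.cumsum(image.ravel()[order])
    target = fraction * image.sum()
    return r.ravel()[order][np.searchsorted(cumulative, target)]

# Toy defocused PSF: a broad ring-like flux distribution; the 90% radius
# comes out near the outer edge of the ring.
yy, xx = np.mgrid[0:200, 0:200]
rr = np.hypot(xx - 100, yy - 100)
psf = np.exp(-((rr - 12.0) ** 2) / (2 * 3.0 ** 2))
print(f"r(90%) = {encircled_energy_radius(psf, (100, 100)):.1f} px")
```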
It was also noted that the measured PSF size of 16 px is larger than the expected range of 12 px to 15 px. Finally, following the predictions obtained by comparing thermo-elastic models [27] with the laboratory measurements, the actual PSF is much smoother than the ones recorded on the ground. This smoothness, coupled with the spreading of the light over a larger area, translates into a factor of 4.5 reduction in the intensity of the brightest spots of the PSF as compared to the laboratory measurements. A larger and smoother PSF has two consequences, depending upon the magnitude of the target stars. For bright stars it is extremely positive, as it reduces the risk of pixel saturation, while for faint stars it makes distinguishing signal from noise more difficult. As a consequence, for very faint targets, hot pixels or cosmic rays can lead to a degradation in the pointing accuracy when using the instrument as a fine guidance sensor. As the pointing without this feedback loop is very accurate on its own, it was decided not to use the payload in the loop for stars fainter than magnitude 11. Photometric precision and stability As mentioned in Section 3.1, the requirement for bright stars (science requirement 1.1) called for a photometric precision of 20 ppm (goal: 10 ppm) in 6 hours of integration time for G5 dwarf stars with V-band magnitudes in the range 6 ≤ V ≤ 9 mag. To illustrate the verification of this requirement, we show the light curve obtained over 47 hours of observation of HD 88111, a star of magnitude V = 9.2 and effective temperature T_eff = 5330 K, for which Gaia [16] provides a radius of 0.9 R⊙. This star was chosen as a well-known stable star and hence ideally suited to verify the achievable precision. The exposure time was 30 seconds without stacking of images, and the photometry was obtained using a circular aperture of 30 px in radius (Fig. 14). The photometric precision and stability are estimated by finding the transit depth that can be detected with a signal-to-noise ratio of 1. This is essentially the same method used to calculate the Kepler combined differential photometric noise value [10]. For a six-hour period of observation, the achieved photometric precision is 15.5 ppm, well within the precision requirement outlined in Section 3.1. A similar precision is obtained after analysis of any six-hour period during the 47 hours of observation. This precision is achieved without any detrending and therefore reflects the intrinsic stability of CHEOPS. The star TYC 5502-1037-1 was chosen to test the faint end of the photometric precision of CHEOPS. This is a V = 11.9 magnitude star, with an effective temperature of T_eff = 4750 K and a radius of R = 0.7 R⊙. Estimating the photometric precision of this observation was not as straightforward as for HD 88111. The first observation made was inadequate for a precision analysis, as the window location was chosen too close to one of the margins of the CCD and a hot pixel appeared in the PSF during the visit. A second 3-hour observation was made later on. A precision of 75 ppm was achieved for this observation, which makes this case compliant with science requirement 1.2. Note that, as in the case of HD 88111, no detrending of the data was performed. In summary, measurements taken during commissioning demonstrate that CHEOPS meets the photometric precision requirements for both bright and faint stars.
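A minimal sketch of the precision estimate described above, under the simplifying assumption of purely white noise: the transit depth detectable at a signal-to-noise ratio of 1 over a 6-hour window then reduces to the point-to-point scatter divided by the square root of the number of points in the window. The synthetic light curve below (30 s cadence, 400 ppm point-to-point scatter, chosen so that the result lands near the ~15 ppm regime discussed above) is illustrative only; the actual analysis follows the Kepler CDPP-like approach cited in the text.

```python
import numpy as np

def precision_ppm(flux, cadence_s, window_hours=6.0):
    """Depth detectable at S/N = 1 in a window of the given duration,
    approximated as the standard error of the windowed mean, in ppm."""
    normalised = flux / np.median(flux)
    n_in_window = int(window_hours * 3600 / cadence_s)
    scatter = np.std(normalised, ddof=1)
    return 1e6 * scatter / np.sqrt(n_in_window)

# Synthetic 47 h light curve with 400 ppm point-to-point white noise at 30 s cadence.
rng = np.random.default_rng(1)
cadence = 30.0
n_points = int(47 * 3600 / cadence)
flux = 1.0 + 400e-6 * rng.standard_normal(n_points)
print(f"6-hour precision: {precision_ppm(flux, cadence):.1f} ppm")
```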
The determination of the actual detailed photometric performance of CHEOPS is ongoing work that will be carried out based on the analysis of actual science targets by the CHEOPS Science and Instrument Teams over the coming months. Finally, even though stray light was not an issue in the results reported in this paper, its effects were carefully studied by means of dedicated observations. The function measuring the rejection of stray light, the point source transmission (PST), could be estimated by observing close to the Moon and was found to be in the range 10^-9 to 10^-12 for incoming light with angles of incidence greater than 35 degrees. These measured values are within the error bars of simulations of the optical system carried out early on during the design phase of the instrument. It is worth mentioning that unexpected stray light was detected in some images taken along a line of sight close to the un-illuminated Earth limb. After analysis, this was attributed to atmospheric glow. Since this effect is unpredictable, it is nearly impossible to correct, and affected images have to be discarded.

KELT-11b: radius determination of a bloated planet

During IOC-D a few stars known to host planets were targeted by CHEOPS as part of the end-to-end validation of the operational process. The giant planet KELT-11b was among these targets. Discovered by the KELT survey in 2016 [31], the radius of the planet measured by transit photometry is R_p = 1.37 (+0.15/−0.12) R_Jup, while its mass derived from radial velocity data is M_p = 0.195 ± 0.018 M_Jup. The planet orbits the evolved subgiant star HD 93396 of magnitude V = 8 with a period of P = 4.736529 ± 0.00006 days. As evidenced by these data, KELT-11b is a so-called bloated giant planet, having about 20% of the mass of Jupiter but a radius larger by almost 40%. To analyse the data we used pycheops version 0.7.0. pycheops has been developed specifically for the analysis of CHEOPS data and uses the qpower2 algorithm [29] for fast computation of transit light curves. Optimisation of the model parameters is done using lmfit, and emcee [14] is used to sample the posterior probability distribution (PPD) of the model parameters. The observed data comprise 1500 flux measurements in a photometric aperture with a radius of 29 arcsec from images with exposure times of 30 s covering the transit of KELT-11b on 2020-03-09. We excluded 8 exposures that provided discrepant flux measurements and 101 exposures observed in a narrow range of spacecraft roll angle for which there is excess scatter in the flux (∼ 200 ppm) caused by scattered moonlight. Our model for the observed flux, f(t), is of the form f(t) = F(t; c) · T(t; D, W, b, T_0, h_1, h_2) · S(t; S_0, ω_0) + ε(σ_w), where F(t; c) is a scaling factor, T(t; D, W, b, T_0, h_1, h_2) is the transit model computed using qpower2, and S(t; S_0, ω_0) is a model of the intrinsic stellar variability. The random noise ε(σ_w) is assumed to be white noise with a variance σ_i² + σ_w², where σ_i is the error bar on a flux measurement f(t_i) provided by the CHEOPS data reduction pipeline. The vector of "de-trending" coefficients c is used to compute a linear model for instrumental effects, F(t; c) = B · c, where the matrix of basis vectors B includes the estimated contamination of the photometric aperture derived from simulated images of the CHEOPS field of view (lc contam), plus 6 functions of the form sin(nφ) and cos(nφ), where n = 1, 2, 3 and φ is the spacecraft roll angle.
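To illustrate the linear de-trending term F(t; c) = B · c described above, the sketch below assembles a basis matrix from an aperture-contamination vector plus first-, second-, and third-order roll-angle harmonics and fits the coefficients by least squares. This is a plain-NumPy illustration with assumed array names, not the actual pycheops implementation.

```python
import numpy as np

def detrending_basis(roll_angle_rad, contam, n_harmonics=3):
    """Basis matrix B for the instrumental model F(t; c) = B . c:
    an estimated contamination vector plus sin/cos harmonics of the
    spacecraft roll angle (n = 1, 2, 3)."""
    cols = [contam]
    for n in range(1, n_harmonics + 1):
        cols.append(np.sin(n * roll_angle_rad))
        cols.append(np.cos(n * roll_angle_rad))
    return np.column_stack(cols)

# Toy usage: recover the coefficients c from simulated residual flux
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2.0 * np.pi, 1500)        # spacecraft roll angle
contam = rng.normal(0.01, 0.001, 1500)           # simulated aperture contamination
B = detrending_basis(phi, contam)
true_c = np.array([1.0, 2e-4, -1e-4, 5e-5, 0.0, 0.0, 0.0])
resid_flux = B @ true_c + 2e-4 * rng.standard_normal(1500)
c_hat, *_ = np.linalg.lstsq(B, resid_flux, rcond=None)
print(c_hat)                                      # close to true_c
```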
The stellar noise is assumed to have a power spectrum of the form S(ω) = √(2/π) S_0 / [(ω/ω_0)^4 + 1]. The likelihood for a given set of model parameters {θ}, L(f(t); {θ}; S_0, ω_0, σ_w), is calculated using celerite [13]. The parameters of the model for the transit at time T_0 of a star with radius R_⋆ by a planet of radius R_pl in a circular orbit of semi-major axis a and inclination i are D = (R_pl/R_⋆)² = k² (depth), W = (R_⋆/(π a)) √((1 + k)² − b²) (width), and b = a cos(i)/R_⋆ (impact parameter). We fixed the value of the orbital period for this analysis. We assume uniform priors on cos(i), log(k) and log(a/R_⋆). The stellar limb darkening is modelled using the power-2 law with parameters h_1 = 0.715 ± 0.011 and h_2 = 0.442 ± 0.05 taken from [28]. Next to the transit signal, we observe stellar variability with an amplitude of approx. 200 ppm, correlated over timescales between 30 minutes and 4 hours. We attribute these variations to the effects of stellar granulation, as their amplitude and frequency behaviour is in excellent agreement with the empirical relations derived from Kepler data [36]. We model them using the GP described above, with the following priors imposed on the hyper-parameters: log S_0 = −23.4 ± 0.6, log ω_0 = 5.6 ± 0.3 and log σ_w = −10.4 ± 3.7. These values were found from a fit to the residuals from an initial least-squares fit. The time scale and amplitude of the stellar variability for models with these priors are similar to those seen in other stars of a similar spectral type to KELT-11 [22]. The best-fit model for these KELT-11 data is shown in Fig. 15 and the model parameters with their standard errors (calculated from the mean and standard deviation of the PPD) are given in Table 1 (the de-trending basis vectors are normalised so that the value of each coefficient corresponds to the amplitude of the instrumental noise correlated with the corresponding parameter). The RMS residual from the fit is 198 ppm. With the value for k = 0.0463 and its uncertainty 0.0003 as shown in Table 1, and adopting a value for the radius of the star KELT-11 of R_⋆ = 2.807 ± 0.036 R⊙, we obtain a radius for the planet KELT-11b (using R⊙ = 696342 km) of R_pl = k R_⋆ = 90500 ± 1747 km, which translates (taking R_Jup = 69911 km) into R_pl = 1.295 ± 0.025 R_Jup. This value is consistent with the value obtained earlier [31] but with an error bar 5 times smaller. We note that the error bar provided includes contributions from both the error associated with the measurement of k and the one associated with R_⋆: ΔR_pl = |∂R_pl/∂k| Δk + |∂R_pl/∂R_⋆| ΔR_⋆ = 0.0084 R_Jup + 0.0167 R_Jup = 0.025 R_Jup. This shows that it would still be possible to significantly reduce the error on the planet radius by improving on the estimates of the stellar properties.

Summary and conclusions

CHEOPS is the first small-class mission in ESA's science programme. As such, it is a demonstrator showcasing the ability of ESA and its Member States to develop fast, low-cost science missions. As the first of its kind, CHEOPS had to pave the way in many respects in order to maintain the aggressive schedule and remain within budget. At the end of 2018, after only 6 years from the initial mission proposal selection, 4 years from the satellite PDR and 2.5 years after the satellite CDR, the fully integrated CHEOPS satellite had completed all planned tests. The satellite QAR was passed in February 2019, marking flight readiness.
Finally, CHEOPS was successfully launched as a secondary passenger on a Soyuz-Fregat rocket from Kourou (French Guiana) in December 2019. At the end of March 2020, in-orbit commissioning was completed and the satellite was declared to meet all its requirements. The responsibility for the routine science operations was handed over to the CHEOPS Consortium by ESA. The mission reached this milestone within schedule and budget. As of April 18, 2020, the routine science operations, including Guaranteed Time Observations and Guest Observations, have begun. Early results support what was already surmised from the commissioning results, namely that CHEOPS is indeed the precision photometric machine it was designed to be.
MBD4 Interacts With and Recruits USP7 to Heterochromatic Foci

ABSTRACT MBD4 is the only methyl-CpG binding protein that possesses a C-terminal glycosylase domain. It has been associated with a number of nuclear pathways including DNA repair, the DNA damage response, the initiation of apoptosis, transcriptional repression, and DNA demethylation. However, the precise contribution of MBD4 to these processes in development and relevant diseases remains elusive. We identified UHRF1 and USP7 as two new interaction partners for MBD4. Both UHRF1, an E3 ubiquitin ligase, and USP7, a de-ubiquitinating enzyme, regulate the stability of the DNA maintenance methyltransferase, Dnmt1. The ability of MBD4 to directly interact with and recruit USP7 to chromocenters implicates it as an additional factor that can potentially regulate Dnmt1 activity during cell proliferation. J. Cell. Biochem. 116: 476–485, 2015.

MBD4 is the only methyl-CpG binding protein that possesses a C-terminal glycosylase domain. It has been associated with a number of nuclear pathways including DNA repair, the DNA damage response, initiation of apoptosis, transcriptional repression, and DNA demethylation [Hendrich et al., 1999; Cortellino et al., 2003; Screaton et al., 2003; Rai et al., 2008; Ruzov et al., 2009; Meng et al., 2011; Thillainadesan et al., 2012]. A naturally occurring frameshift mutation in MBD4, which results in a truncated protein lacking its intervening region and glycosylase domain, occurs in human colon and other carcinomas that exhibit microsatellite instability (MSI), generally associated with defects in mismatch repair (MMR) [Bader et al., 1999; Riccio et al., 1999; Bader et al., 2007]. However, mutant mice targeted to create an Mbd4 null mutation did not exhibit increased tumorigenesis, reduced survival rates, or increased MSI, although a 2-3 fold increase in C:T mutations at CpG sites was observed [Wong et al., 2002]. Loss of MBD4 function also does not affect MMR-dependent tumorigenesis [Sansom et al., 2004]; however, MBD4 plays a role in mediating the apoptotic response resulting from exposure to DNA damaging agents or inactivation of the maintenance methyltransferase, Dnmt1 [Ruzov et al., 2009; Loughery et al., 2011]. Depletion of DNMT1 in cancer cell lines can result in decreases in MMR protein levels, including MBD4, possibly mediated by their physical interaction [Ruzov et al., 2009; Loughery et al., 2011; Laget et al., 2014]. MBD4 strongly associates with heterochromatin [Hendrich and Bird, 1998; Ruzov et al., 2009], and has also been shown to physically interact with and recruit the MMR protein, MLH1 (MutL homolog 1), to heterochromatin sites during MMR-dependent apoptosis [Cortellino et al., 2003; Ruzov et al., 2009]. These findings suggest that MBD4 may have additional roles [Cortellino et al., 2003], possibly through unknown protein associations of MBD4 that can contribute to genome stability. Indeed, MBD4 interacting proteins such as Fas-associated death domain protein (FADD), MLH1, and DNA methyltransferase 1 (DNMT1) potentially link genome surveillance and DNA repair with apoptosis during cell proliferation and DNA replication [Screaton et al., 2003; Ruzov et al., 2009]. UHRF1 (Ubiquitin-like, with PHD and RING finger domains 1, also known as Np95 and ICBP90) interacts with and recruits DNMT1 to hemi-methylated DNA to facilitate methylation of daughter strands [Bostick et al., 2007; Sharif et al., 2007].
It also has a binding specificity for 5-hydroxymethylcytosine (5hmC)-containing DNA that is similar to its affinity for 5-methylcytosine DNA [Rajakumara et al., 2011]. UHRF1 is strongly linked to heterochromatin replication and formation [Papait et al., 2007, 2008], which may depend on its specific localization during cell proliferation to the chromocenters (via modified histones) that undergo large-scale reorganization and progressive clustering at heterochromatin regions [Papait et al., 2008; Nishiyama et al., 2013]. UHRF1 is preferentially expressed in cells undergoing proliferation [Papait et al., 2008], during which UHRF1 forms a complex with Dnmt1 and USP7 (ubiquitin specific peptidase 7, herpes virus-associated, also known as HAUSP) [Felle et al., 2011; Qin et al., 2011; Ma et al., 2012]. USP7 regulates the stability of UHRF1 via its deubiquitylase activity [Felle et al., 2011; Qin et al., 2011; Ma et al., 2012], and modulates the enzymatic activity of Dnmt1 on the UHRF1 platform [Bostick et al., 2007; Sharif et al., 2007; Felle et al., 2011; Qin et al., 2011]. However, the nuclear distribution pattern of USP7 per se is generally diffuse [Holowaty et al., 2003; van der Horst et al., 2006] and is similar to that of MLH1 [Ruzov et al., 2009], which contrasts with the strong heterochromatin and chromocenter-associated expression of UHRF1 during cell proliferation [Dunican et al., 2013]. USP7 can be recruited and relocated to nuclear foci by its protein partners [Everett et al., 1997; Daubeuf et al., 2009; Zaman et al., 2013], and previous studies have suggested a direct interaction between USP7 and UHRF1 [Felle et al., 2011; Ma et al., 2012]. However, this interaction may be transient, and stabilization of the trimeric USP7/UHRF1/DNMT1 complex has been suggested to depend on their mutual engagement on chromatin [Felle et al., 2011]. In this study we performed biochemical isolation and identification of MBD4-associating proteins, and we found that MBD4 specifically interacts with UHRF1 and USP7. We characterized a novel interaction domain in the intervening region of MBD4 and demonstrate a direct role for MBD4 in recruiting USP7 to chromocenters.

NUCLEAR EXTRACT PREPARATION AND IMMUNOPRECIPITATION

Nuclear extract was prepared from HEK293T cells according to [Pradeepa et al., 2012]. HEK293T cells were transfected with SF-TAP-hMBD4 and were lysed with a hypotonic lysis buffer (0.05% NP-40, 10 mM HEPES, 1.5 mM MgCl2, 10 mM KCl, 5 mM EDTA, and complete protease inhibitor cocktail (Roche), pH 7.4, 30 × 10^6 cells/ml). Cytosolic fractions were discarded and the separated cell nuclei were lysed in a nuclear extract buffer (20 mM HEPES, 300 mM NaCl, 20 mM KCl, EDTA-free complete protease inhibitor cocktail (Roche), pH 7.4, 30 × 10^6 cells/ml) with or without MNase (Nuclease S7; Roche), as indicated in the Results section. A final concentration of 5 mM EDTA was used to stop the chromatin digestion if MNase was added, and the sample was centrifuged at 20,000 g for 30 min twice to obtain post-nuclear supernatants. 50 μl of Sepharose beads covalently conjugated to FLAG-specific mAb (Sigma) or GFP beads (ChromoTek) were added to samples and incubated for 2 h with rotation at 4°C. Beads were washed three times with ice-cold nuclear extract buffer containing 0.05% NP40, and once with pure ice-cold PBS. Bound proteins were eluted by boiling in sample buffer, and the eluted samples were loaded on a large 10% SDS-PAGE gel (BioRad) and separated, followed by Western blot analysis.
MASS SPECTROMETRY (MS) ANALYSIS

For MS analysis, gels were stained with Colloidal Blue (NuPAGE, Invitrogen). Several chunks of bands of diverse molecular weights were excised from the experimental lane and from the control lane at the corresponding molecular weights. The gel chunks were sent to the St. Andrews Mass Spectrometry service for analysis. Each gel chunk was cut into 1 mm cubes. These were then subjected to in-gel digestion, using a ProGest Investigator in-gel digestion robot (Genomic Solutions, Ann Arbor, MI) with standard protocols [Shevchenko et al., 1996]. The MS/MS data file generated was analyzed using the Mascot 2.1 search engine (Matrix Science, London, UK) against the NCBInr database Feb 2011 (12852469 sequences) with no species restriction.

TRANSIENT TRANSFECTION, RECIPROCAL PULL-DOWN ASSAY AND WESTERN BLOT ANALYSIS

Lipofectamine 2000 (Invitrogen) was used in accordance with the manufacturer's instructions. For reciprocal pull-down assays, cell nuclear lysates (without chromatin digestion) from 293T cells that were transiently transfected with plasmids containing FLAG-UHRF1 or GFP-Mlh1 were mixed with purified 6HIS-TF-UB human MBD4 mutants that had been pre-incubated with Ni-NTA agarose (Invitrogen) overnight at 4°C on a rotating wheel. After 1 h incubation at 4°C, the mixture containing agarose beads was loaded onto a column, followed by extensive washes (4×) with RIPA buffer and then pure ice-cold PBS. The Ni-NTA agarose was then immediately boiled in loading buffer for western blotting analysis. Western blots were probed with the following primary antibodies: anti-FLAG antibody (Sigma; mouse F1804, rabbit F7425), anti-GFP (Roche, mouse 11814460001), anti-mCherry (Chromotek, RFP antibody [5F8]), anti-6HIS antibody (gift from Anne Seawright, MRC Human Genetics Unit).

YEAST TWO-HYBRID

The Y2H assays were performed as previously described [Dellaire et al., 2002]. The full-length mouse Mbd4 was cloned into a GAL4 BD-bait construct pGBKT7 (Clontech). Yeast strains carrying each plasmid were mated with a strain pretransformed with a mouse embryonic day 11.5 cDNA library cloned into pGADT7 (Clontech). Bait and library clone interactions were identified by β-galactosidase assays and appropriate dropout selections, and confirmed by restreaking. Confirmed colonies were picked for colony PCR and plasmid rescue, followed by sequencing confirmation.

MBD4 FORMS COMPLEXES WITH UHRF1 AND USP7

In order to identify MBD4 interaction partners, we carried out affinity purification of FLAG epitope-tagged MBD4 from 293T cells using nuclear extracts prepared with micrococcal nuclease (MNase) digestion (Fig. 1A). We identified UHRF1 and USP7 as two novel interacting proteins by mass spectrometry of the excised regions (Fig. 1A, two brackets). To verify the specific interaction between MBD4 and the two protein partners UHRF1 and USP7, and to clarify whether their associations are chromatin-interaction dependent, we performed immunoprecipitations (IPs) with FLAG epitope-tagged human MBD4 in MNase-digested nuclear extracts, as well as with GFP epitope-tagged mouse MBD4 in nuclear extracts without chromatin digestion (Fig. 1B, UHRF1 left & USP7 right). In addition, a reciprocal IP was carried out with FLAG epitope-tagged human UHRF1 in MNase-digested nuclear extracts (Fig. 1B, UHRF1 lower lane). The presence of UHRF1, USP7 as well as MBD4 in a reciprocal IP was tested by Western blot analysis.
MBD4 IP showed coprecipitation of UHRF1 and USP7 in both conditions, with and without chromatin digestion (Fig. 1B, UHRF1 upper two lanes), suggesting these interactions are direct and independent of chromatin binding. Human and mouse MBD4 exhibited similar affinity for UHRF1 (Fig. 1B, UHRF1 upper three lanes) and USP7 (Fig. 1B, USP7), and MBD4 showed a similar interaction by IP with human and mouse versions of UHRF1 (Fig. 1B, UHRF1 upper three lanes), suggesting their interaction is specific and conserved between human and mouse. (Fig. 1B caption: Immunoprecipitation of ectopic proteins, FLAG epitope-tagged (MNase-treated nuclear extract) or GFP epitope-tagged (untreated nuclear extract), with anti-FLAG or anti-GFP antibodies; co-precipitated proteins were detected by Western blot analysis. 3% of the input (lane 1) and the antibody-coupled IP (lane 2) were loaded; the migration of the molecular weight markers is indicated on the left.) In addition, a reciprocal IP of UHRF1 precipitated MBD4 (Fig. 1B, UHRF1 lower lane). A co-IP of MBD4 and Mlh1 confirmed their positive interaction (Fig. 1B, positive control), while the negative control p75 was not precipitated with MBD4 (Fig. 1B, negative control), consistent with previous reports [Pradeepa et al., 2012]. Collectively, our data show that MBD4 can interact with UHRF1 and USP7.

MBD4 INTERACTS WITH THE UHRF1 COMPLEX THROUGH ITS INTERVENING REGION

To identify the protein domains of MBD4 mediating the association with the UHRF1 complex, we purified recombinant MBD4 proteins containing the MBD domain (amino acids 1-156), the MBD and the intervening region (amino acids 1-408), the glycosylase domain and its known upstream interaction region (amino acids 408-580), and the intervening region and the downstream interaction region (amino acids 156-455) (Fig. 2A & B). The four truncated versions of MBD4 were designed to represent the MBD, the intervening region, and the glycosylase domains of MBD4, and to overlap with each other to minimize the interacting regions that may be responsible for partner protein associations (Fig. 2A). These recombinants were 6xHis-tagged fusion proteins that carry a thermostable protein called 'Trigger Factor' as well as ubiquitin at their C-terminal region (Fig. 2B, upper), which allows for the isolation of soluble fusion proteins [Thapa et al., 2008]. The four versions of human MBD4 were cloned into the prokaryotic expression vector, induced and purified from Escherichia coli; individual protein elutions were collected for the respective mutants (Fig. 2B, lower). Reciprocal in vitro pull-down experiments were performed using the purified proteins to identify the domains interacting with the FLAG epitope-tagged UHRF1 complex purified from 293T cells (Fig. 2C). GFP epitope-tagged Mlh1 was selected as a positive control, because the region of MBD4 responsible for its association with Mlh1 was identified in a previous yeast two-hybrid (Y2H) study [Millar, 2002]. Mutant 3 and particularly mutant 4 of MBD4 efficiently co-precipitated Mlh1 (Fig. 2D, upper left). An IP with Mlh1 also indicated an interaction with MBD4 recombinants 2, 3, and 4 (Fig. 2D, upper right). These data reveal that the interaction region of MBD4 responsible for the association with Mlh1 resides in the intervening region and glycosylase domain of MBD4 (Fig. 2D, lower cartoon). Amino acids 410-455 correspond to the overlapping region between mutants 3 and 4 and likely represent the minimum requirement for the association (Fig. 2D, lower cartoon).
This was consistent with the previous Y2H study in which amino acids 415-420 of MBD4 were mapped as the minimum interaction region with Mlh1 [Millar, 2002]. We then tested the interaction region of MBD4 with UHRF1 (Fig. 2E). Mutants 2 and 4 of MBD4 efficiently co-precipitated UHRF1 in vitro, while mutants 1 and 3, containing the MBD and glycosylase domain respectively, were not capable of interacting with UHRF1 (Fig. 2E, upper left). Reciprocal IP of UHRF1 strongly co-precipitated mutant 2 of MBD4; by contrast, binding of the other mutants, containing the MBD and glycosylase domains, was very weak (Fig. 2E, upper right). Therefore, we were able to map the interaction region of MBD4 to its intervening region (Fig. 2E, lower). The intervening region of MBD4 does not contain an obvious functional domain [Meng et al., 2011]. In addition, our data show that MBD4 recombinant 4 successfully co-precipitated UHRF1 (similar to the binding of MBD4 recombinant 2) (Fig. 2E, upper left), which contrasts with the very weak binding affinity in the reciprocal assay (Fig. 2E, upper right). This supports the view that the intervening region of MBD4 constitutes a major protein interaction region. The naturally occurring MBD4 truncation at amino acid 313 in MMR-deficient human carcinomas presumably has the potential to affect the protein interaction profile of MBD4, in addition to losing the catalytically active C-terminal glycosylase domain (Fig. 2A, black arrow). Taken together, our in vitro studies show that MBD4 can directly interact with UHRF1, and that the intervening region of MBD4 can mediate the association.

MBD4 TIGHTLY COLOCALIZES WITH UHRF1 AT CHROMOCENTERS DURING HETEROCHROMATIN REPLICATION AND FORMATION

Previous studies have found that MBD4 and UHRF1 respectively localize to heterochromatic sites in mouse cells [Hendrich and Bird, 1998; Papait et al., 2007; Karagianni et al., 2008; Ruzov et al., 2009; Nady et al., 2011; Dunican et al., 2013; Gelato et al., 2014], suggesting they may be involved in heterochromatin regulation and maintenance. To determine whether MBD4 occupies the same cellular space as UHRF1 at heterochromatin, and whether their association affects particular cellular phenotypes such as chromatin organization, we performed co-transfection followed by immunofluorescence (IF) microscopy to determine their potential co-localization and subcellular distribution (Fig. 3). We used mouse CMT93 cells, a colon cancer cell line that has prominent heterochromatin sites as evidenced by DAPI staining, to test our hypothesis that MBD4 and UHRF1 might associate with each other at heterochromatin sites. Ectopic GFP epitope-tagged mouse MBD4 protein exhibited a strong signal that was coincident with DAPI bright spots in the nucleus (Fig. 3A, left & white arrows in right magnification); this resembles the endogenous mouse MBD4 distribution [Hendrich and Bird, 1998; Ruzov et al., 2009]. The DAPI bright spots correspond to methylated satellite DNA, a natural methylated ligand for MBD4 [Ruzov et al., 2009]. Additional non-heterochromatic staining of ectopic MBD4 was also observed in CMT93 cells (Fig. 3A, MBD4 light green staining). FLAG epitope-tagged human UHRF1 showed a condensed nuclear distribution that also co-localized with DAPI bright dots (Fig. 3B, left & right magnification), resembling the location of the endogenous UHRF1 protein [Dunican et al., 2013; Gelato et al., 2014]. In CMT93 cells, ectopic expression of either MBD4 or UHRF1 induced minor heterochromatin clustering (Fig.
3A & B, white arrows); a cellular phenomenon previously reported for overexpression of UHRF1 or MeCP2 [Brero et al., 2005;Papait et al., 2007Papait et al., , 2008. UHRF1 has been demonstrated to regulate cell proliferation [Jenkins et al., 2005], and is expressed at high levels in proliferating cells [Papait et al., 2007]. Consistently and intriguingly, co-overexpression of UHRF1 and MBD4 resulted in marked large-scale reorganization events occurring at heterochromatic sites (Fig. 3C, upper & lower magnification), manifested by either bridging (Fig. 3C, left), fragmentation (Fig. 3C, middle), or major clustering (Fig. 3C, right); possibly implying heterochromatin reformation (Fig. 3C, left) or perhaps replication (Fig. 3C, middle) occurs at their co-localization sites. The cells with such cellular phenotypes are concomitant with a marked increase in cell size (Fig. 3C vs. A & B). In all cases, ectopic MBD4 tightly co-localized with UHRF1 at chromocenters in CMT93 cells (Fig. 3C i, Figure 1B). Specifically, MBD4 and UHRF1 tightly occupied the same cellular space which may perturb chromocenter dynamics during heterochromatin replication and formation (Fig. 3C, lanes of MBD4, UHRF1 & Merge in i, ii & iii, black circle or square). Decondensing heterochromatin or remodeling at chromocenters was manifested by less intense DAPI bright spots, but did not lead to changes in the tight co-localization of MBD4 and UHRF1 (Fig. 3C, middle, DAPI lane & ii vs. i, iii, DAPI lane, white square or circle). Interestingly, we observed some moderate co-localization of MBD4 and UHRF1 that may be indicative of dividing cell nuclei (Fig. 3Ciii, white arrows); forming ring-like clusters that are adjacent to their strong staining at chromocenters (Fig. 3Ciii, black arrows). This may represent an intermediate transition, in which MBD4 and UHRF1 are involved in large-scale reorganization of heterochromatin (Fig. 3C iii, black arrows in DAPI lane). MBD4 DIRECTLY RECRUITS USP7 TO CHROMOCENTERS Consistent with our finding that MBD4 directly interacts with USP7, we also identified USP7 as an interactor in a Y2H assay using mouse Mbd4 protein as bait (Fig. 4A), supporting the possibility that the interaction between MBD4 and USP7 is direct. The cDNA fragment of USP7 was aligned to USP7 coding sequence, and it overlaps with Cterminal TRAF-like domain of USP7 (Fig. 4A, vertical gray shadow), suggesting that this may be one route through which MBD4 directly interacts with USP7 at the vicinity of the TRAF-like binding domain of USP7. Despite different cellular systems and conditions used, previous studies have reported a diffused distribution of USP7 within cell nuclei as well as some cytoplasmic staining [Holowaty et al., 2003;van der Horst et al., 2006]. We determined the subcellular localization of ectopic expression of MCherry epitope-tagged USP7 in the CMT93 cell model, and our data is consistent with the previously reported nuclear distribution of USP7 as diffused (Fig. 4B, left). More specifically, we characterized that the prominent DAPI bright spots were largely excluded from the nuclear distribution of USP7 (Fig. 4B, left & right magnification, white arrows), implying USP7 may require additional heterochromatin-associating interaction partner(s) to be recruited to the chromocenters where UHRF1 is tightly bound. 
Indeed, a number of studies have shown that USP7 can be recruited and relocated to particular nuclear loci via physical interactions with its protein partners, to effectively participate in cellular processes such as DNA damage and repair, apoptosis, and the innate immune response [Everett et al., 1997; Daubeuf et al., 2009; Zaman et al., 2013]. Moreover, MBD4 has been reported to possess a recruitment function by which it leads to the re-distribution of diffuse MLH1 to accumulate at DAPI bright spots that are associated with heterochromatic chromatin in MEF cells [Ruzov et al., 2009], and we have observed the same phenomenon in the CMT93 cell model (Supplementary Figure 1A). This suggested to us that MBD4 might possess the ability to participate in the UHRF1/Dnmt1/USP7 trimeric complex by facilitating the recruitment of USP7 to the chromocenters. To address this question, we studied the subcellular localization of USP7 in the presence of ectopic MBD4 (Fig. 4C). GFP epitope-tagged MBD4 was detected exclusively at chromocenters (Fig. 4C, green staining), to which UHRF1 was shown in Fig. 3 to be tightly bound. Strikingly, chromocenter-binding MBD4 was able to recruit USP7 to chromocenters in all cotransfected CMT93 cells tested (Fig. 4C, MBD4, USP7 and Merge, white arrows & Supplementary Figure 1C). (Fig. 4 caption, panel C: GFP-MBD4 recruits mCherry-USP7 exclusively to chromocenters in all co-transfected CMT93 cells; in all the immunostaining images in C, the cells exhibit marked heterochromatin remodeling. The insets on the right (B) or below (C i, ii, iii) correspond to magnifications of the areas indicated by the two parallel white lines. White arrows indicate triple co-localization of MBD4, USP7 and DAPI bright spots, while black arrows indicate co-localization of MBD4 and USP7 with diminishing or disappearing DAPI bright spots. Scale bars, 10 μm.) In agreement with the above observations of co-localization of MBD4 and UHRF1 in Fig. 3, the MBD4-induced USP7 relocation and their co-localization at chromocenters also lead to a degree of heterochromatin reorganization that was not evident in the single transfections (Fig. 4C, upper, left & right vs. middle DAPI lane & disappearing heterochromatin spots indicated by black arrows in 4C ii & iii). We observed heterochromatin clustering and diminution (Fig. 4Ci DAPI lane, white arrows), chromocenter linkage and fragmentation (Fig. 4Cii DAPI lane, white arrows), and marked heterochromatin remodeling at chromocenters (Fig. 4Ciii DAPI lane, white arrows). Collectively, our data show that MBD4 directly interacts with and recruits USP7 to chromocenters during interphase, where MBD4 and UHRF1 can also be tightly co-localized, and this is concomitant with large-scale reorganization of heterochromatin.

DISCUSSION

During cell proliferation, UHRF1 may act as a recruitment platform to facilitate faithful inheritance of DNA methylation patterns [Bostick et al., 2007; Sharif et al., 2007]. Mounting evidence indicates that USP7 can regulate the protein stability of UHRF1 as well as Dnmt1 through its deubiquitylase activity [Sharif et al., 2007; Ma et al., 2012]. In this report, we have shown that MBD4 can directly interact with and recruit USP7 to the UHRF1 platform at heterochromatin-associated chromocenters.
Importantly, regulation of the interaction between UHRF1 and the deubiquitylase USP7 has been shown to be cell cycle dependent [Ma et al., 2012]; UHRF1 is expressed and protected by USP7 from auto-ubiquitinylation during the G1 and S phases of the cell cycle, and M phase-specific phosphorylation of UHRF1 expels USP7 from the UHRF1 platform [Ma et al., 2012], leading to proteasomal degradation of UHRF1 and perhaps Dnmt1 as well [Felle et al., 2011; Ma et al., 2012]. MBD4 may have a role in this process via its interaction with all three components. Recent evidence indicates that the MBD4 protein is essential for cell survival following oxidative stress [Laget et al., 2014]. MBD4 and DNMT1 can be recruited to sites of oxidation-induced DNA damage, where they may participate in DNA repair or cell death pathways [Ruzov et al., 2009; Laget et al., 2014]. Although the cell models and physiological conditions are different, our MBD4 interaction data suggest that UHRF1 and USP7 might also participate in these pathways. Our current study provides a foundation for future functional studies of these new MBD4 interactions and their roles in related pathways of cell proliferation, stress response or the DNA methylation machinery. The intervening region within MBD4 was previously viewed as a functional desert [Meng et al., 2011]; our characterization strongly suggests that this region may possess significant potential for novel protein interactions, which may augment MBD4's well-characterized methyl-CpG binding and glycosylase repair functions [Millar, 2002; Screaton et al., 2003; Meng et al., 2011]. Motifs and primary sequences of the intervening region are poorly conserved between lower and higher vertebrates, suggesting that additional protein structures and/or motifs acquired in the latter may attract new functional interactions. This might link to the recurrent frameshift mutations in MBD4 found in a number of human cancers exhibiting MMR deficiency, which result in a human MBD4 protein truncation at amino acid 313 that would presumably deconstruct the interaction function of the intervening region in addition to the loss of the glycosylase domain of MBD4. Recent studies have documented the overexpression of the MBD4 partner proteins UHRF1 and USP7 in a variety of human cancers, which often correlates with a poor outcome [Unoki et al., 2009, 2010; Mudbhary et al., 2014]. In addition, MBD4 activation has been shown to be a consequence of RON overexpression that results in reprogrammed DNA methylation at specific target genes, which is associated with metastasis and poor patient outcomes [Cunha et al., 2014]. Future studies will be required to address whether the MBD4 protein interactions with UHRF1, USP7, and the previously identified partner DNMT1 have novel functions with respect to targeting specific methylation patterns at defined loci as well as in maintaining genome-wide methylation patterns at chromocenters. This interaction may also impact nuclear organization, chromatin remodeling, and histone modifications in relevant cancers [Jones, 2012; Cunha et al., 2014].
Gene set analysis for longitudinal gene expression data

Background: Gene set analysis (GSA) has become a successful tool to interpret gene expression profiles in terms of biological functions, molecular pathways, or genomic locations. GSA performs statistical tests for independent microarray samples at the level of gene sets rather than individual genes. Nowadays, an increasing number of microarray studies are conducted to explore the dynamic changes of gene expression in a variety of species and biological scenarios. In these longitudinal studies, gene expression is repeatedly measured over time such that a GSA needs to take into account the within-gene correlations in addition to possible between-gene correlations. Results: We provide a robust nonparametric approach to compare the expressions of longitudinally measured sets of genes under multiple treatments or experimental conditions. The limiting distributions of our statistics are derived when the number of genes goes to infinity while the number of replications can be small. When the number of genes in a gene set is small, we recommend permutation tests based on our nonparametric test statistics to achieve reliable type I error and better power while incorporating unknown correlations between and within genes. Simulation results demonstrate that the proposed method has greater power than other methods for various data distributions and heteroscedastic correlation structures. This method was used for an IL-2 stimulation study and significantly altered gene sets were identified. Conclusions: The simulation study and the real data application showed that the proposed gene set analysis provides a promising tool for longitudinal microarray analysis. R scripts for simulating longitudinal data and calculating the nonparametric statistics are posted on the North Dakota INBRE website http://ndinbre.org/programs/bioinformatics.php. Raw microarray data are available in Gene Expression Omnibus (National Center for Biotechnology Information) under accession number GSE6085.

Background

Molecular biology, which is targeted at studying biological systems at a molecular level, has provided rich information on individual cellular components and their contributions to biological functions over the last 50 years. Our understanding of genes and their functions has been accelerated in the last decade by microarray experiments, which identify genes that are induced or repressed in a specific biomedical condition [1][2][3]. The multiplicity and heterogeneity of these gene expression profiles revealed that even a simple biological process or a molecular function in a cell requires the cooperation of hundreds or even thousands of genes. Nonetheless, decoding this kind of gene interaction and networking in a biological process is hampered by the complexity of biological systems. Instead of looking at individual genes, researchers started to interpret biological phenomena in terms of groups of genes, or gene sets. For example, Segal et al. (2004) mined a large number of cancer expression profiles and deduced 456 cancer-related modules (gene sets), which were selected by combining the expression data with knowledge of transcriptional pathways and gene ontology [4]. The development of new statistical tools enables us to test whether a gene set is activated in the microarray dataset of interest. An important contribution was made by Subramanian et al. (2005), who proposed gene set enrichment analysis (GSEA) to assess the significance of a set of genes.
Their idea is that the genes that cooperate in a biological function have similar patterns in transcriptional levels, such that the statistical power of assessing a gene set is higher than that of individual genes [5]. GSEA relies on permutation tests to identify the significant gene sets that have distinct gene expression between treatment groups. It works in three steps. First, all genes are ranked according to their statistics for the treatment effect. For example, a t-statistic can be used to compare two classes of samples. A score is assigned to each gene set using a weighted Kolmogorov-Smirnov-like statistic that sums up the ranks of the genes. Second, the class labels of the samples are permuted a number of times, and gene set scores are calculated for each new label assignment. The permutation of sample labels preserves the inherent correlation between genes. Because the permutation is conducted under the null hypothesis of no treatment differences, the P value of each observed score can be determined empirically from the null score distribution. Third, if more than one gene set is tested, the P values should be adjusted for multiple tests. GSEA is often applied to hundreds of gene sets, for which the false discovery rate (FDR) is recommended. Ever since GSEA was introduced, it has drawn wide attention from the biomedical and biostatistical communities. A number of alternative and extended versions of the gene set analysis (GSA) method have been proposed in the last few years that use a variety of scoring systems and randomization procedures to resample data [6,7]. For instance, Efron et al. proposed a GSA method which is based on a more powerful statistic, maxmean, to score gene sets [8]. In the case of two sample classes, maxmean is the larger in absolute value of the average of the positive t-statistics and the average of the negative t-statistics. Before the permutation test, the maxmean score should be restandardized by centering and scaling its mean and standard deviation using randomized gene sets. Despite their enormous success, all these aforementioned GSA methods have limited applications in microarray samples with dependence. A permutation test has to rely on the assumption of sample independence. This assumption presents a barrier to extending GSA to the fast-growing area of longitudinal microarray experiments, which repeatedly profile the gene expression of the same object over time. Longitudinal microarray experiments allow researchers to investigate the dynamic behavior of biological processes, such as cell cycles, cell proliferation, oncogenesis, and apoptosis. The temporal component is an inherent part of the study. Such time course experiments pose novel challenges for statistical analyses because effective methods have to take into account both a large number of genes and within-gene correlations. Most of the analyses in the literature carry out repeated measures analysis of individual genes followed by FDR control [9][10][11][12]. It is desirable to apply repeated measures analysis methods, such as a linear mixed effects model (LME) or generalized estimating equations (GEE), to gene sets. Tsai and Qu (2008) assessed subsets of genes by applying a non-parametric time-varying coefficient model [13]. The within-gene correlation was taken into account by the quadratic inference function (QIF) that is derived from GEE. Both LME and GEE achieve their asymptotic distributions when the number of replications goes to ∞.
However, the large sample size assumption is usually not applicable due to the high cost of microarray experiments. Rather, there is often a relatively large number of genes in a gene set compared to the sample size, a curse-of-dimensionality problem. An effective GSA method should also be robust against deviations from the normal distribution because gene expression data may be heavily skewed, and the normal or log-normal distribution does not provide a close fit to the data [14,15]. Furthermore, to allow variability between genes, heteroscedastic correlation structures should be assumed for different genes. In this paper we propose a GSA method for assessing the expression patterns of gene sets from longitudinal microarray data. The method employs a couple of novel nonparametric statistics that work for small sample sizes as long as we maintain a relatively large number of genes in a set (large p, small n). The method is robust with respect to non-normality and heteroscedastic correlation structures. To ensure broad applicability, unbalanced designs are allowed in our model. For example, unbalanced data may occur when the data are pooled from different versions or manufacturers of arrays. The genes in a signal transduction pathway are often highly correlated in that the expression of one gene is regulated by other genes in this pathway. To ensure an unbiased analysis, we need to take into account the correlation among genes. Permutation methods have been widely used in GSA to provide a robust test that preserves between-gene correlations. For example, Tsai and Chen (2009) used a permutation test with Wilks' Λ statistic for their multivariate analysis of GSA [16]. To take into account the correlations among genes within a gene set, we also present a permutation-based test for our proposed statistics. The outline of this paper is as follows. Our main results are presented in the Results and Discussion section. In the subsection Model and Hypotheses, we describe the model and assumptions. In the subsection Simulation study, we present the simulation results of type I error estimates and power analysis for our proposed methods. In the subsection Results on real data, we describe an application of our method to a recent longitudinal microarray study in which the gene expression profiles of murine T cells in the presence or absence of interleukin-2 (IL-2) were repeatedly collected. A number of functional gene sets were tested to investigate IL-2 signaling over time. The test statistics and their asymptotic results for a large number of genes but small replications are provided in the subsection Test statistics of the Methods section. The subsection Permutation tests describes the permutation-based test with our proposed nonparametric statistics. Finally, we provide mathematical proofs for the asymptotic results of our test statistics in the Appendix.

Model and hypotheses

In a longitudinal design for microarray studies, global transcriptional levels of each object were repeatedly measured at multiple time points under various conditions, such as different drug doses, genotypes, and chemical environments. Our goal is to find whether the transcription levels of a set of genes show a dynamic pattern that differs between conditions. We enumerate all the conditions using i = 1, ..., I and refer to them as treatments.
If the number of genes in a gene set is relatively small compared to the number of sample replications, the methods for repeated measures analysis, such as LME and GEE, are able to test the variation among treatments under certain distributional assumptions. Both LME and GEE provide efficient model parameter estimates when the assumed covariance matrices can be estimated consistently. However, when the number of genes plus the number of time points is much larger than the number of replications, consistent estimates of the large covariance matrices are no longer available, especially if multiple large covariance matrices need to be estimated when empirical evidence suggests heteroscedasticity is present. We will focus our effort on the latter case. For a gene set, let X_ikl = (X_i1kl, ..., X_iJkl)' be the transcriptional levels of the k-th gene (or probe) of the l-th replicate in treatment i, where k = 1, ..., K, l = 1, ..., n_ik, and i = 1, ..., I. The expression of this gene is measured at J time points, with subscript j enumerating the j-th repeated measurement. Denote μ_ik = E(X_ikl) and Σ_ik = Var(X_ikl) = (σ_{i,k,jj'})_{J×J} to be the gene-specific mean and covariance matrix. Each individual gene has its own transcriptional activity; therefore, each gene has its unique correlation structure. The heteroscedastic covariances for different treatments and different genes allow us to take into account the different mechanisms by which different genes respond to a treatment. This is more realistic than assuming a common covariance matrix in that many of the genes are not responsive to a specific stimulus while the responsive genes could exhibit different temporal dependence. An example is that a stimulus-specific regulator gene or transcription factor tends to be activated at the early stage of the stimulus and the downstream genes of the regulator will respond at a later stage. We leave the joint distribution of X_ikl unspecified and assume the observations from different treatments or replicates are independent. Let μ̄_i = K^(-1) Σ_k μ_ik be the mean expression profile for the i-th treatment, and let A be the I × J matrix with i-th row μ̄_i'. The hypothesis of no effect for the contrast of the treatments can be stated as

H_0(treatment): L_1 A 1_J = 0_p,   (0.1)

where L_1 is a p × I contrast matrix with full row rank, 1_J is the J-dimensional vector of ones, and 0_p is a p-dimensional vector of zeros. The contrast matrix is convenient for assessing the effect of a specific treatment factor if the treatment consists of multiple factors. A typical contrast matrix for a single treatment factor with I levels is the (I − 1) by I matrix L_1 = (1_{I−1} | −diag(I − 1)), where the first column is 1_{I−1}, a column vector of ones, and the remaining columns are −diag(I − 1), the negative of the identity matrix of dimension (I − 1). For I = 3, the above L_1 is L_1 = (1, −1, 0; 1, 0, −1), where the semicolon separates the two rows. This particular contrast matrix basically specifies that all the treatment means, averaged over the whole time period and over all genes, are identical. Differences could arise if the mRNA transcription of some genes is activated or inhibited by the treatment. Genes could have distinct expression trends over time. The hypothesis of no effect for a contrast among the treatment-by-time interactions can be expressed as

H_0(interaction): L_2 Vec(A') = 0_q,   (0.2)

where the Vec() function transforms a matrix into a vector by concatenating all columns, P_I is the projection matrix I^(-1) 1_I 1_I', and L_2 is a q × (IJ) contrast matrix with full row rank.
An example of the contrast matrix is the Kronecker product M_I ⊗ M_J, which specifies that all interactions are zero, where M_I = (1_{I−1} | −diag(I − 1)). For example, with I = 3 and J = 4, the Kronecker product contrast matrix for the interaction effect is the 6 × 12 matrix M_I ⊗ M_J, with M_I of dimension 2 × 3 and M_J of dimension 3 × 4. We present a summary of the notation used in the rest of the manuscript. Denote σ²_{i,k,j} = Var(X_ijkl). We consider a couple of novel nonparametric statistics for hypothesis testing. A linear mixed effects model (LME) and generalized estimating equations (GEE) are often used for testing hypotheses (0.1) and (0.2) by assuming an appropriate correlation structure. The statistics for both LME and GEE achieve their asymptotic distributions when the number of samples goes to infinity. Thus, theoretically, LME and GEE are not suited to large p, small n problems such as microarray data. This motivated us to propose new statistics that converge to their limiting distributions when the number of genes goes to infinity. The statistics should be robust for non-normal distributions, heteroscedastic correlation structures, and unbalanced experimental designs. Two novel Wald statistics are proposed for the null hypotheses (0.1) and (0.2) in the Methods section. Their asymptotic properties are proved in the Appendix.

Simulation study

This section presents our simulation study to evaluate the proposed nonparametric test statistics (NP) in various settings. First, we calculate the estimated type I error rate at level 0.05 for our nonparametric statistics. The type I error will be examined for samples generated from normal, exponential, Poisson and Cauchy distributions after introducing within-subject correlations. Second, we will compare the power of the NP statistics with the linear mixed-effects model (LME) and generalized estimating equations (GEE). The type I error and the power analysis are used to validate our NP statistics. Third, we will calculate the estimated type I error and power of the permutation test with our statistics for correlated genes and compare the results with GEE on data from normal, exponential, and Poisson distributions. All calculations and simulations were carried out with R programming and the results are based on 1000 iterations. The LME and GEE methods were implemented using the gls and geese functions from the R packages nlme and geepack, respectively [17,18].

(a) Type I error rate analysis based on asymptotic distribution with simulated data

In this section, we evaluate the specificity of our proposed test (NP) based on type I error rates for simulated data from various distributions. The number of time points per gene we simulated is either 2 or 5. As a balanced design is only a special form of an unbalanced design, here we only consider an unbalanced design in which four fifths of the genes have 4 replications and the remaining one fifth of the genes have 6 replications. First, we examined the proposed test statistic for no gene expression variation across treatments. A data matrix X of n rows and J columns was randomly generated, with each row representing observations from the same gene over J time points. Here n is the sum of the number of replications for all genes across all treatment groups. The rows were generated from an identical distribution such that the null hypothesis of no expression changes across treatments is satisfied. To allow a wide variety of data types, we use normal, exponential, Poisson, and Cauchy distributions to generate random samples. For normal, exponential, and Poisson distributions, the mean of the random data was set to 2.
The normal distribution was given a standard deviation of 1. The Cauchy distribution had a location parameter of 0 and a scale parameter of 1. Unstructured within-gene correlations were then generated from a uniform distribution on (0, 0.5). Identical unit variance is used for data under the null hypotheses. We used the Cholesky decomposition (via the R function chol) to produce the triangular factor h of the covariance matrix Σ. Thus the data matrix Y = Xh has the desired covariance structure and is used for the subsequent data analysis. The matrix Y had equal means across rows. However, at different time points (across columns), the values from the same gene could vary. Table 1 gives the estimated type I error rates for data with unstructured correlation using the asymptotic distribution of the test statistic for treatment. For normal, exponential, and Poisson distributions, the error rates for at least 5 genes and 2 or 5 time points were in high agreement with the expected level α = 0.05. The error rate for the Cauchy distribution failed to converge to 0.05 as the number of genes increases. This happens because the test requires finite fourth central moments while the Cauchy distribution does not have finite moments. The next test was concerned with the interaction of treatment and time effects. Under the null hypothesis of no interaction, we generated random data as follows. Given the value X_ij for probe i at the j-th time point, the random observation at the (j + 1)-th time point can be obtained by

X_{i,j+1} = r X_{i,j} + ε_{i,j},   (0.3)

where ε_ij is a random variable with mean 2(1 − r). Thus the mean of X_{i,j+1} is 2, which is the same as that of X_ij. For the Poisson distribution, we first generated the mean values with the iterative algorithm (0.3), and then used the means to generate random integer numbers. An unstructured correlation was introduced to the repeated measures for each gene in a similar way as for the test of no treatment effects. The type I error rates at level 0.05 are shown in Table 2. Normal, exponential, and Poisson distributions had error rates close to 0.05 when the number of genes was above 50. The Cauchy distribution did not converge to 0.05.

(b) Power analysis based on asymptotic distribution with simulated data

To evaluate the proposed NP statistics, we calculated the estimated power curves for three methods, NP, LME and GEE. Data were simulated for 4 treatment groups and 3 replicates. As shown in Tables 1 and 2, a number of genes of 50 or above achieves the expected error rates. Therefore, we used 50 genes for all of the power analyses in this subsection. Each gene was repeatedly measured at 4 time points. Random data were generated in a similar way as for the type I error simulation study. A log-normal distribution was assumed, so the data were first generated from a normal distribution and then exponentially transformed. An unstructured correlation was introduced between time points for each gene as described in the Simulation study subsection. For LME and GEE, gene expression levels were modeled as the response variables with treatment group and time as fixed effects. The variable subject, which provides measurements for all genes at all time points, is modeled as a random effect. An unstructured correlation structure cannot be estimated in LME and GEE model fitting because the number of replications is small. In this part of the simulation, a compound symmetry correlation structure was assumed for LME and a working independence correlation structure was used for GEE.
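Before turning to the power results, the sketch below reproduces the data-generation recipe used in the type I error study above: i.i.d. rows from a chosen distribution post-multiplied by a Cholesky factor of a covariance matrix with unit variances and Uniform(0, 0.5) off-diagonal correlations. The paper's simulations were run in R; this NumPy version and its function names are illustrative only.

```python
import numpy as np

def correlated_null_data(n_rows, n_times, dist="normal", rng=None):
    """(n_rows x n_times) matrix of repeated measures with identical row means
    (null hypothesis) and an unstructured within-row correlation."""
    rng = np.random.default_rng(rng)
    corr = rng.uniform(0.0, 0.5, size=(n_times, n_times))
    corr = (corr + corr.T) / 2.0
    np.fill_diagonal(corr, 1.0)
    # clip eigenvalues so the matrix is positive definite before factorising
    eigval, eigvec = np.linalg.eigh(corr)
    corr = eigvec @ np.diag(np.clip(eigval, 1e-3, None)) @ eigvec.T
    chol = np.linalg.cholesky(corr)            # lower-triangular factor
    if dist == "normal":
        x = rng.normal(2.0, 1.0, size=(n_rows, n_times))
    elif dist == "exponential":
        x = rng.exponential(2.0, size=(n_rows, n_times))
    elif dist == "poisson":
        x = rng.poisson(2.0, size=(n_rows, n_times)).astype(float)
    else:
        raise ValueError("unknown distribution")
    return x @ chol.T                          # impose the correlation

y = correlated_null_data(n_rows=4 * 50, n_times=5, dist="exponential", rng=0)
print(y.shape)   # (200, 5): 50 genes x 4 replicates, 5 time points
```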
First, we conducted a power analysis for the treatment effect. The means of the normal distributions differ between the treatment groups under the alternative hypothesis, and the standard deviation of the normal distribution for each gene is randomly generated from a uniform distribution on (0, 3). The mean differences Δ between groups range from 0 to 2.5 to generate the power curves. Thus in each experiment, the logarithm of the mean of treatment group 2 is Δ higher than that of group 1, that of group 3 is Δ higher than that of group 2, and so on. The three power curves for NP, LME, and GEE are shown in Figure 1. NP outperformed LME and GEE for all Δ. When Δ = 0.7, NP has 91% power, whereas LME has 60% power and GEE has 70% power. Next, we conducted a power simulation analysis for the test of no treatment and time interaction. The results were similar to those for the treatment effect, so we do not present them here.

(c) Type I error and power analyses for the permutation test

We further conducted a simulation study for the permutation test with our NP statistics by generating random data that had both within-gene correlation over time and between-gene correlation within a gene set. Random data were generated for two treatment groups with three time points. In order to show the effects of sample size on the power, the number of replicates for a group varied from 5 to 50. Random data were generated in the same way as for the power analysis of the NP statistics described earlier, except that an AR(1) correlation structure with correlation coefficient 0.5 was introduced for the gene-gene relationship. Gene sets with 20, 50 and 100 genes were generated following normal, exponential and Poisson distributions. Since the linear mixed effects model is not valid for exponential or Poisson distributions, we compare the permutation-based NP statistics with GEE. For this part of the simulation, gene expression levels were modeled as the response variables while fixed effects of treatment, time, treatment by time interaction, and gene index are included in the GEE model. The variable subject is modeled as a random effect and an AR(1) correlation structure was assumed for GEE. The type I error estimates are reported in Table 3 and power estimates are given in Figure 2 as the mean difference Δ between the two treatment groups increases. It is clear from Table 3 that GEE has inflated type I error when the number of replications in each treatment group is small. The permutation test on our NP statistic has a very reliable type I error rate. The result of the power comparison in Figure 2 shows that the permutation test with our NP test statistic consistently has higher power in all simulation settings. This happens because the NP test statistics are particularly suitable for large p, small n settings. GEE has lower power even though the specification of the AR(1) structure for GEE gives it some advantage. In real data analysis, exploring and finding the correct correlation structure for GEE is itself a challenge. The differences in performance seem to be less evident when sample sizes are small (the last column of the plots in Figure 2). However, we remark that in this case the power of GEE was most likely overestimated because of the type I error inflation (see Table 3). As the number of genes increases, the powers of both the permutation NP test and GEE increase.
They both show better performance for normally distributed data than for data from exponential and Poisson distributions, owing to the skewness of the exponential distribution and the larger variation associated with the Poisson distribution relative to the normal data we generated. Results on real data. We apply the proposed method to a recent time course microarray study of the mouse immune response. Figure 1: The power curve of the NP statistic based on the asymptotic distribution compared to LME and GEE. The empirical powers of the NP statistics for the test of no treatment effect are compared to LME and GEE; the powers were estimated at level 0.05, and Δ is the log-scale mean difference between successive treatment groups. Cytotoxic T cells respond to the cytokine molecule Interleukin-2 (IL-2) [19]. Gene expression profiling under IL-2 stimulation has identified approximately 3000 IL-2-regulated genes in human T cells [2,[20][21][22][23]. A time course microarray study was carried out at Sandia National Laboratories to investigate genes activated by IL-2 during T cell proliferation and differentiation [24]. The murine T cell line CTLL-2 was cultured in the presence or absence (control) of IL-2 stimulation. Each treatment group has 3 independent cell cultures. For each culture, cells were harvested at 2 time points, 4 h and 8 h, for microarray processing with the Affymetrix Mouse Genome 430 2.0 Array. The light intensities of gene expression were log-transformed and quantile-normalized before being analyzed by the proposed gene set method [25]. We used the C2 collection of gene sets from the Molecular Signature Database (MSigDB) of the Broad Institute. The C2 collection is curated from various sources such as online pathway databases, the biomedical literature, and the knowledge of domain experts [26]; it contains 1892 gene sets. Since our previous simulation studies showed that at least 50 genes are required for a gene set to achieve sufficient statistical power and an appropriate type I error rate, 548 of the 1892 gene sets, each consisting of at least 50 genes, were selected. The distribution of the number of genes in the 548 gene sets is shown in Figure 3. To identify the gene sets regulated by IL-2, we used NP to test for the interaction of treatment and time and for the main effect of IL-2 treatment. The P value of each gene set was converted to a false discovery rate (FDR) with the R package fdrtool [27,28]. With an FDR threshold of 5%, 285 gene sets showed a significant treatment×time interaction, whose biological implications need to be investigated further. Of the remaining 263 gene sets, 20 sets were identified as significantly differentially expressed by the treatment effect test; thus, 283 gene sets in total were responsive to IL-2. The 20 gene sets selected for the treatment effect are reported in Table 4; a total of 1,760 distinct genes are involved in these 20 gene sets. T lymphocyte activation by IL-2 culminates in many cellular processes, including blastogenesis, cell cycle progression, DNA replication, and mitosis [2]. Many of the selected gene sets are known to participate in these complicated biological functions. The gene set VANASSE BCL2 TARGETS consists of genes that are differentially expressed in murine CD19+ B cells overexpressing Bcl-2, a key gene regulating apoptosis. This is consistent with the antiapoptotic effects of IL-2 that promote T cell proliferation [29].
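The study converts per-gene-set P values to FDR with the R package fdrtool. As a rough, hedged illustration of that filtering step (using a Benjamini-Hochberg style adjustment as a stand-in rather than fdrtool's density-based estimator, and with a hypothetical P value array), the conversion could look like this:

```python
import numpy as np

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted P values; a simple stand-in for an FDR estimate
    (the paper itself uses the density-based estimates from the R package fdrtool)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / (np.arange(m) + 1)
    # enforce monotonicity from the largest P value downward
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    fdr = np.empty(m)
    fdr[order] = np.minimum(ranked, 1.0)
    return fdr

# Hypothetical gene-set P values from the interaction test
pvals = np.array([0.0004, 0.012, 0.03, 0.2, 0.8])
fdr = bh_fdr(pvals)
print((fdr < 0.05).sum(), "gene sets pass a 5% FDR threshold")
```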
Conclusions. With the fast advancement of high-throughput genomics technology and the increased complexity of array ... Test statistics. (a) Heteroscedastic test of no treatment effect. To test H0(treatment), we consider a Wald-type test statistic W_A = (L_1 D_A)' (L_1 V̂_A L_1')^{-1} (L_1 D_A), where D_A = (X̄_{1···}, ..., X̄_{I···})', and W_A converges to a Chi-square distribution when the number of genes goes to infinity (see Appendix). The degrees of freedom of the limiting distribution equal the rank of the matrix L_1. Figure 2: Power comparisons for the permutation test of no treatment effect compared with GEE. The power curves for the permutation tests of the treatment effect are given here; the powers were estimated at level 0.05, G is the number of genes, n is the number of replicates in the two treatment groups, and Δ is the mean difference between the treatment groups. (b) Heteroscedastic test of no treatment and time interaction effect. The test statistic for no contrast effect among the interactions of treatment and time is W_{AB} = (L_2 D_{AB})' (L_2 V̂_{AB} L_2')^{-1} (L_2 D_{AB}), where D_{AB} = (X̄_{11··}, X̄_{12··}, ..., X̄_{ij··}, ..., X̄_{IJ··})' and V_{AB} is the estimated covariance matrix of D_{AB}. The estimated covariance of X̄_{ij··} and X̄_{i1 j1··} is located at the ((i − 1)J + j)-th row and ((i1 − 1)J + j1)-th column of V_{AB}; if i ≠ i1 the value is zero, and if i = i1 the value is the estimated within-gene covariance. W_{AB} also converges to a Chi-square distribution when the number of genes goes to infinity (see Appendix), with degrees of freedom equal to the rank of the matrix L_2. Permutation tests. The nonparametric statistics given in (0.4) and (0.5) take into account the within-gene correlations among multiple time points. The correlations among genes within a gene set are unknown; we are not able to incorporate them into our statistics unless the genes are ordered in such a way that the correlations between genes diminish at a certain rate as their distance increases. It is unrealistic to make such an assumption for a gene set whose member genes have no known ordering. Furthermore, it is possible that all genes in a gene set are highly correlated: for example, if gene A is a transcription factor and the other genes in the set are its downstream genes regulated by A in a pathway, all genes will be highly correlated. Failing to incorporate between-gene correlations would bias our statistics, so we use a permutation-based test with the proposed nonparametric statistics to avoid this bias. Specifically, we performed 400 permutations of the treatment group labels of the subjects. For each permutation, we randomly assign n_i subjects, with measurements from all genes at all time points, to group label i, where i = 1, ..., I. We do not permute the genes or time points, so their original correlations are preserved. The proposed statistics are calculated for a given gene set for each permuted sample; all statistics are then ranked, and the percentage of permuted statistics greater than the statistic from the raw data gives the P value. It is worth noting that the asymptotic distributions of our test statistics are applicable only when the number of genes is large, whereas the permutation tests can be applied even when the number of genes is small. The limit of N a' V_{AB} a = N Σ_{i,j,j1,k} a_{ij} a_{ij1} σ_{i,k,jj1} / (n_{ik} K²) exists since it is a nonnegative quadratic form and (Σ_{ik} n_{ik}^{-1})(Σ_{ik} n_{ik}) / K² converges due to the bounded nature of n_{ik}. The asymptotic normality in (0.8) can then be shown by Lyapounov's Theorem.
The remaining bounding inequalities follow from Hölder's inequality, and the last equality holds due to the finite moment condition. This completes the proof.
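To make the Wald-type statistics and the permutation procedure described earlier concrete, here is a minimal, hedged Python sketch. The contrast matrix, the covariance estimator, and the data layout are simplified assumptions for illustration, not the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def wald_stat(data, labels, L):
    """Wald-type statistic W = (L d)' (L V L')^{-1} (L d) on per-group means.
    `data` has shape (subjects, genes, time points); `labels` gives each subject's group.
    This is a simplified stand-in for the heteroscedastic NP statistics in the text."""
    groups = np.unique(labels)
    subj = data.mean(axis=(1, 2))                      # per-subject summary
    d = np.array([subj[labels == g].mean() for g in groups])
    v = np.array([subj[labels == g].var(ddof=1) / (labels == g).sum() for g in groups])
    Ld = L @ d
    return float(Ld @ np.linalg.solve(L @ np.diag(v) @ L.T, Ld))

def permutation_test(data, labels, L, n_perm=400):
    """Permute treatment labels of whole subjects (genes and time points are kept
    intact, preserving their correlations) and return the permutation P value."""
    w_obs = wald_stat(data, labels, L)
    w_perm = np.array([wald_stat(data, rng.permutation(labels), L)
                       for _ in range(n_perm)])
    return (w_perm >= w_obs).mean()

# Toy example: 2 groups x 6 subjects, 50 genes, 3 time points, no real effect
labels = np.repeat([0, 1], 6)
data = rng.normal(size=(12, 50, 3))
L = np.array([[1.0, -1.0]])          # contrast between the two group means
print(permutation_test(data, labels, L))
```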
7,201.6
2011-07-03T00:00:00.000
[ "Biology", "Computer Science" ]
What do we know about cosmography. In the present paper, we investigate the cosmographic problem using the bias-variance trade-off. We find that both the z-redshift and the y = z/(1+z) redshift can give estimates with small bias, meaning that cosmography can describe the supernova data accurately. Minimizing the risk suggests that cosmography up to the second order is the best approximation. Forecasting the constraints from future measurements, we find that future supernova and redshift-drift observations can significantly improve the constraints and thus have the potential to solve the cosmographic problem. We also explore the value of cosmography for the deceleration parameter and the equation of state of dark energy, w(z). We find that supernova cosmography cannot give stable estimates of them; however, much useful information is obtained, for example that cosmography favors a complicated dark energy with varying w(z), with the derivative dw/dz < 0 at low redshift. Cosmography is therefore helpful for modeling dark energy. Introduction. Cosmic accelerating expansion is a landmark cosmological discovery of recent decades. To date, a number of dynamical mechanisms have been proposed to explain this mysterious cosmological phenomenon, but its physical nature is still unknown. The theoretical attempts include dark energy, modified gravity, and violation of the cosmological principle. The first paradigm is based on the belief that an exotic cosmic component called dark energy exists, in the form of the cosmological constant [1] or a scalar field [2,3], and possesses a negative pressure that drives the cosmic acceleration. Modified gravities do not need an exotic component but rather a modification of the theory of general relativity [4,5]. Violation of the cosmological principle usually takes the form of the inhomogeneous Lemaître-Tolman-Bondi void model [6][7][8]. Different from the above dynamical templates, cosmic kinematics is a more moderate approach to understanding this acceleration: it only assumes a homogeneous and isotropic universe on large scales. In this family, kinematic parameters independent of cosmic dynamical models become essential. For example, the scale factor a(t) directly describes how the universe evolves over time, and the deceleration parameter immediately maps the decelerating or accelerating expansion of the universe. Collecting such kinematic parameters, the authors in Refs. [9,10] created cosmography via a Taylor expansion of the luminosity distance in the redshift z. Mathematically, this expansion should be performed around a small quantity, i.e., low redshift. Using the standard theory of complex variables, Cattoën and Visser [11] demonstrated that the convergence radius of the Taylor expansion in redshift z is at most |z| = 1; for high redshift, z > 1, the expansion fails to converge. Nevertheless, many observations focus on the high-redshift region.
For example, the supernovae in the joint light-curve analysis (JLA) compilation span redshifts up to 1.3, and the cosmic microwave background (CMB) even reaches back to the very early universe at z ∼ 1100. To make the expansion legitimate at high redshift, Cattoën and Visser introduced an improved redshift parametrization y = z/(1 + z) [11]. Cosmography in the y-based expansion is thus mathematically safe and useful even at high redshift, because 0 < y < 1. Later, other redshift parametrizations were also proposed [12]. When confronted with observational data, the cosmography study encounters some difficulties. Initially, SNIa data were used to fit the cosmography [13,14]; then some auxiliary data sets [15] were also considered. The output indicated that a shorter series truncation leads to smaller errors but a worse estimation, while more terms lead to a more accurate approximation but bigger errors. That is, cosmography faces a dilemma between accuracy and precision. The question naturally centers on where the "sweet spot" is, i.e., the most optimized series truncation. In previous work [13,15,16], it was found that estimation up to the snap term is meaningless in the light of the F-test. For the y-redshift, the expansion yields bigger errors on the parameters [13]. In spite of the different observational data sets used, most of the results were consistent. Recent work [17] investigated cosmography using baryon acoustic oscillations (BAO) only; from a simulated Euclid-like BAO survey, it was found that future BAO observations also favor a best cosmography truncated at the jerk term. Because it only requires the homogeneity and isotropy of the universe, cosmography has frequently been used to deduce or test cosmological models. Recently, it was used to test the ΛCDM model [18], but it turned out that the parameter j_0 = 1 is ambiguous for different orders of expansion and is not enough for a test. Reconstructing dark energy in f(R) gravities, one finds extra free parameters that cannot be constrained by cosmography. That analysis was based on mock data generated with a uniform magnitude error of σ_μ = 0.15. In the following, we will test the constraint from such mock data with flat errors. Following the work in Ref. [18], these mock data were generated assuming the same redshift distribution as the Union 2.1 catalog [19], but with the fiducial model taken from the best fits to the JLA data. Although cosmography has been widely investigated, many questions remain. On the one hand, we do not present repetitive work using more data, but numerically extract more detailed information about the accuracy and precision involved in the convergence problem; the new approach we use is the bias-variance trade-off. Moreover, we investigate whether future measurements can solve this serious convergence issue. On the other hand, before using cosmography, we should make clear what information it can and cannot provide. Although many types of observational data have been used to fit cosmography, our goal in this paper is to understand the convergence problem from another side, i.e., geometric versus dynamical measurements. Future surveys with high precision may present a different constraint. To address the above questions, we rely on a future WFIRST-like supernova observation and a dynamical survey of the redshift drift.
Different from geometric measurements, the redshift drift measures the secular variation of ȧ(t) [20], whereas geometric observations usually measure an integral of ȧ(t). Interestingly, this concept is also independent of any cosmological model, requiring only a Friedmann-Robertson-Walker universe. Taking advantage of the capability of the E-ELT [21][22][23], numerous works agree that this future probe can provide an excellent contribution to understanding the cosmic dynamics, such as dark energy [24,25] or modified gravity models [26]. More importantly, it can be extended to test the fundamental Copernican principle [27] and the cosmic acceleration [28]. However, studies of the redshift drift in kinematics have been scarce. This paper is organized as follows: in Sect. 2 we introduce cosmography; in Sect. 3 we present the observational data; according to the goals introduced above, we analyze the problem of cosmography in Sect. 4 and explore its value in Sect. 5; finally, in Sect. 6 a conclusion is drawn and a discussion presented. Cosmography. Cosmography is an artful combination of kinematic parameters via a Taylor expansion under the hypotheses of large-scale homogeneity and isotropy. In this framework, it is appropriate to introduce the cosmographic parameters of interest. The Hubble parameter connects the cosmological models with observational data. The deceleration parameter q directly represents the decelerating or accelerating expansion of the universe. The jerk parameter j and the snap parameter s are often used as geometrical diagnostics of dark energy models [29,30]; in particular, the jerk has been a traditional tool to test the spatially flat cosmological constant dark energy model, in which j(z) = 1 at all times. The lerk parameter l is a higher-order parameter describing the cosmic expansion. With the above preparation, the Hubble parameter in cosmography can be expressed as the Taylor series [9,14] H(z) = H_0 [1 + (1 + q_0) z + (1/2)(j_0 − q_0²) z² + ⋯], where the subscript "0" indicates that the cosmographic parameters are evaluated at the present epoch. Using the differential relation between the luminosity distance and the Hubble parameter, the luminosity distance in cosmography can be conveniently written as the series [11,14] d_L^cos(z) = (c z / H_0)[1 + C_1 z + C_2 z² + C_3 z³ + ⋯], where C_1 = (1 − q_0)/2, C_2 = −(1 − q_0 − 3q_0² + j_0)/6, and the higher-order coefficients depend on (q_0, j_0, s_0, l_0). As introduced above, cosmography at high redshift, z > 1, fails to converge. To solve this problem, the y-redshift, y = z/(1 + z), was introduced [11]. For the new redshift, we can also write y = 1 − a(t), with the scale factor normalized to unity today. Obviously, 0 < y < 1 for the current observational data. One benefit of the y-redshift is that it extends the expansion to the high-redshift region. This is important for the study of cosmography, because the z-expansion of the luminosity distance is theoretically valid only for redshift z < 1. With the y-redshift, many observational data, such as supernovae at higher redshift and even the CMB, can be used to fit and study cosmography. For example, the maximum supernova redshift z = 1.3 in the JLA compilation reduces to y = 0.56, and the CMB corresponds to y = 0.999, which guarantees the safe use of the early CMB data. As described in Ref. [11], the expansion can even be extrapolated back to the big bang. The other physical significance is that the y-redshift can also describe the future universe, though it breaks down at y = −1. In the y-redshift space, the luminosity distance is given by an analogous series in y, with coefficients that again depend on the cosmographic parameters. In the following analysis, one test we should perform concerns the improvement brought by the y-redshift. In our cosmographic study, we also need the help of the dynamical redshift drift.
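Before turning to the redshift drift, the cosmographic expansions quoted above can be sketched numerically. This is only a rough illustration with assumed parameter values (H_0 = 70 km/s/Mpc, q_0 = -0.5, j_0 = 1), not the paper's best fits.

```python
import numpy as np

def dl_cosmographic(z, H0=70.0, q0=-0.5, j0=1.0, order=2, c=299792.458):
    """Luminosity distance [Mpc] from the cosmographic series quoted above,
    truncated at `order` (1: q0 term only, 2: adds the jerk term)."""
    C1 = 0.5 * (1.0 - q0)
    C2 = -(1.0 - q0 - 3.0 * q0**2 + j0) / 6.0
    series = 1.0 + C1 * z
    if order >= 2:
        series += C2 * z**2
    return c * z / H0 * series

def mu(z, **kw):
    """Distance modulus corresponding to the cosmographic luminosity distance."""
    return 5.0 * np.log10(dl_cosmographic(z, **kw)) + 25.0

z = np.array([0.1, 0.5, 1.0])
y = z / (1.0 + z)                      # improved expansion variable
print(mu(z, order=1))
print(mu(z, order=2))
```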
The story starts from the redshift. In an expanding universe, we observe at time t_0 a signal emitted by a source at t_em. The source's redshift can be written in terms of the cosmic scale factor as 1 + z(t_0) = a(t_0)/a(t_em). Over the observer's time interval Δt_0, the source's redshift becomes 1 + z(t_0 + Δt_0) = a(t_0 + Δt_0)/a(t_em + Δt_em), where Δt_em is the time interval for the source to emit another signal; it satisfies Δt_em = Δt_0/(1 + z). As a consequence, the observed redshift variation of the source is Δz = a(t_0 + Δt_0)/a(t_em + Δt_em) − a(t_0)/a(t_em). Taking the first-order approximation, the physical interpretation of the redshift drift is exposed: Δz ≈ [ȧ(t_0) − ȧ(t_em)] Δt_0 / a(t_em), where a dot denotes the derivative with respect to cosmic time. Obviously, the secular redshift drift monitors the variation of ȧ during the evolution of the universe, whereas distance measurements commonly extract information via an integral of a variant of ȧ. Theoretically, the Hubble parameter, a function of ȧ, may be more effective in probing the cosmic expansion; currently, however, it is obtained only indirectly in observational cosmology, from the differential ages of galaxies [31][32][33], from the BAO peaks in the galaxy power spectrum [34,35], or from the BAO peaks in the Lyα forest of quasars (QSOs) [36]. The redshift drift, by contrast, is a direct measurement of the cosmic expansion and can be realized via multiple methods [37]. In terms of the Hubble parameter H(z) = ȧ(t_em)/a(t_em), the drift simplifies to Δz = [(1 + z) H_0 − H(z)] Δt_0. What we should highlight is its independence of any prior and of any dark energy model. With regard to this unique advantage, many analyses have demonstrated that the redshift drift is not only able to provide much stronger constraints on dynamical cosmological models [26,38], but also to address some crucial cosmological problems [39,40]; it even allows us to test the Copernican principle [27]. Observationally, it is convenient to probe the spectroscopic velocity drift, Δv = c Δz/(1 + z), which is of the order of several cm s⁻¹ yr⁻¹; the signal naturally accumulates as the observational time Δt_0 increases. Using the Taylor series of the Hubble parameter in Eq. (6), the drift can be expressed directly in terms of the cosmographic parameters, and the same can be done in the y-redshift. Recently, a Taylor expansion of the redshift drift was also provided in varying-speed-of-light cosmology [41]; it reduces to the classical case when that non-mainstream scenario is switched off. In this paper, we mainly use the redshift drift to provide a numerical constraint on cosmography, to test its constraining power on cosmic kinematics. Observational data. In this section, we introduce the data used in our calculation. The current data we use are the distance moduli from the JLA compilation. In order to test whether future SNIa observations can alleviate or terminate the tiresome convergence problem, we produce mock data for the Wide-Field InfraRed Survey Telescope-Astrophysics Focused Telescope Assets (WFIRST-AFTA). The dynamical redshift drift is forecast for the E-ELT. The parameters are estimated through a Markov chain Monte Carlo method, by modifying the publicly available code CosmoMC [42]. As introduced in Sect. 2, cosmography is independent of dynamical models; we therefore fix the background variables and relax the cosmographic parameters as free parameters in our calculation. Current supernova. One important reason why supernova data are widely used is their abundance.
In this paper, we use the latest supernova JLA compilation of 740 SNIa from SDSS and SNLS [43]. The data are presented as tabulated distance moduli with errors. In this catalog, the redshift spans z < 1.3, and about 98.9% of the samples lie in the redshift region z < 1. In our calculation, we also use the full covariance matrix. Future supernova. In the study of cosmology, forecasting the constraints of future observations on a cosmological model is quite useful for theoretical research, and estimating the uncertainty of the observational variable is the core task. In previous cosmography studies, mock data were usually produced for a conceptual telescope or satellite [15], or extrapolated from current observational data [44]. To be more reliable, in the present paper we use a current program. WFIRST-2.4 not only holds tremendous potential for several key scientific programs, but also enables a survey with more supernovae and a more uniform redshift distribution; one of its scientific goals is to measure the cosmic expansion history. According to the updated report of the Science Definition Team [45], we obtain 2725 SNIa over the range 0.1 < z < 1.7 with a redshift bin Δz = 0.1. The photometric measurement error per supernova is σ_meas = 0.08 magnitudes. The intrinsic dispersion in luminosity is assumed to be σ_int = 0.09 magnitudes (after correction/matching for light-curve shape and spectral properties). The other contribution to the statistical error is gravitational lensing magnification, σ_lens = 0.07 × z mag. The overall statistical error in each redshift bin is then σ_stat = (σ_meas² + σ_int² + σ_lens²)^{1/2} / √N_i, where N_i is the number of supernovae in the i-th redshift bin. According to the estimation, a systematic error per bin is also included, and the total error per redshift bin is σ_tot = (σ_stat² + σ_sys²)^{1/2}. In our simulation, the fiducial models are taken from the best-fit values of the current supernova data for the cosmographic models. We should note that although we have considered the various error sources, it is still difficult to provide the total covariance matrix of the future WFIRST-like supernova data, which may inevitably lead to an underestimation of the errors of the cosmographic parameters. Nevertheless, this forecast is helpful for studying whether future observations can improve the convergence problem. Redshift drift. As suggested by Loeb [37], the redshift drift can be probed via the wavelength shift of the QSO Lyα absorption lines, the emission spectra of galaxies, and some other radio techniques. The largest ground-based optical/near-infrared telescope, the E-ELT, will provide continuous monitoring of the Lyα forest in the spectra of high-redshift QSOs [46]. These spectra are not only immune to the noise of peculiar motions relative to the Hubble flow, but also contain a large number of lines in a single spectrum [47]. According to the capability of the E-ELT, the uncertainty of the velocity drift can be modeled as [21,47] σ_Δv = 1.35 (2370/(S/N)) √(30/N_QSO) [(1 + z_QSO)/5]^q cm s⁻¹, with q = −1.7 for 2 < z < 4 and q = −0.9 for z > 4, where the signal-to-noise ratio S/N is assumed to be 3000, the number of QSOs is N_QSO = 30, and z_QSO is the QSO redshift in the range 2 < z < 5. Following previous work [23,24,26,38], we obtain mock data assumed to be uniformly distributed among the redshift bins z_QSO = [2.0, 2.8, 3.5, 4.2, 5.0], under the fiducial model given by the best fits to the JLA data. Unless stated otherwise, the observational time is set to 10 years. Problem of the cosmography. The convergence problem has always been a top priority in the study of cosmography.
According to the requirements of a Taylor expansion, we perform the calculations for data at z < 1 and y < 1, respectively. In this section, we analyze the convergence issue in current observational data and forecast the constraints from future measurements. To ensure the physical meaning of the constraints, we apply a prior on the Hubble parameter in our calculations for both the z-redshift and the y-redshift. Convergence issue in current data. Using the JLA compilation, we obtain the cosmographic parameters up to the fourth order. The corresponding results are shown in Figs. 1, 2 and 3 and Table 1. For the z-redshift, we can roughly compare the data and the cosmographic models via the residuals in Fig. 3. On the one hand, most of the data are located at low redshift and fit the models well; on the other hand, some of the data at high redshift present somewhat larger residuals with respect to the cosmographic models. Thus, more low-z data make the cosmography study more precise. From the constraints in Table 1, we note that all of the constraints on the parameter q_0 at the 1σ confidence level are negative, which indicates a recent accelerating expansion. However, some recent work has tried to find a slowing down of the acceleration, and this novel possibility has attracted much attention [48][49][50][51][52][53], including the recent work of [54]. Fig. 3: Residuals between the cosmographic distance modulus at different orders and the observational SNIa data; the vertical coordinate Δμ(z) = μ_cos(z) − μ_obs(z) denotes the residuals. Using dark energy parameterizations, it was found that the cosmic acceleration may already have peaked and that the expansion may be slowing down, with the deceleration parameter becoming q_0 > 0. In recent work [55,56], a model-independent analysis of this interesting subject was presented using the powerful Gaussian process technique; it was found that no slowing down is detected within 2σ C.L. from current data. Moreover, we analyzed the inconsistency in Ref. [56] and further deduced what physical condition should be satisfied by the observational data [57]. These results are consistent with the cosmographic constraints from the JLA data. Comparison between Figs. 1 and 2 shows that the degeneracies among the parameters in the y-redshift are similar to those in the z-redshift. The difference is that the y-redshift gives much bigger errors on some parameters. Taking the parameter l_0 as an example, we find that its absolute value shows an increasing trend; moreover, its relative error in the y-redshift, 2478.44/2056.97, is also bigger than that of the z-redshift, 158.08/149.54. This is consistent with results in previous work, namely, that the y-redshift brings about worse constraints. In recent years, much work has focused on the question of which series truncation fits the data best. In previous work [13,15,16], the F-test was introduced to find the answer, by favoring one model and assessing the alternative. Although it showed that an expansion up to the jerk term is a better description of the observational luminosity distance, the cosmographic problem remains vague, and it is difficult to escape the accuracy-precision dilemma of cosmography. We should underline that a small error does not mean a credible description, and a large error is not necessarily a bad thing.
For further analysis of this issue, we recommend the bias-variance trade-off [58]: risk = bias² + variance = Σ_i [μ_cos(z_i) − μ̄(z_i)]² + Σ_i σ²(μ_cos(z_i)), where μ_cos(z_i) is the reconstructed cosmographic distance modulus for a given series truncation, μ̄(z_i) is the fiducial value, and σ(μ_cos(z_i)) is the uncertainty of the reconstruction. Obviously, the bias-variance trade-off reveals more detailed information: the bias² term describes the accuracy (the deviation from the true values), while the variance conveys the precision (the errors) of the constraint. Theoretically, minimizing the risk corresponds to a balance between bias and variance. In cosmology, this promising approach has been widely utilized as an effective way of obtaining information on the dark energy equation of state w(z) [59,60]. In order to investigate the influence of the fiducial model on the risk, we consider, respectively, a fiducial ΛCDM model with Ω_m = 0.305 and a wCDM model with w = −1.027, from the combination of JLA and complementary probes [43]. Accuracy. Accuracy refers to the deviation from the true value, expressed by the squared bias. In Fig. 4 we show the bias² of the current data on the basis of the fiducial ΛCDM model. First, we find that all of the biases are small, which indicates that the cosmographic models fit the JLA data well; cosmography is sufficiently accurate to describe the observational JLA data. This is not difficult to understand, since about 99% of the JLA data are at low redshift. Thus, applying cosmography to the JLA data is a very useful strategy. Second, we see that the squared bias increases slightly at higher order. Finally, and importantly, we find that the z-redshift and the y-redshift both favor the second order, which indicates that an expansion up to the jerk term is in best agreement with the true values. Precision. Precision is statistical in nature and represents the errors. The variance, as the sum of the squared errors, is independent of the fiducial model. In Fig. 5, we plot the variance of the cosmographic model in the z-redshift and the y-redshift. First, the variances at low order in these two redshift spaces are both small, almost zero, which indicates that current observational data can measure the parameters q_0 and j_0 precisely. Second, we note that the variance at the third order starts to increase rapidly, especially for the y-redshift, which means that current data cannot give a physical measurement of the s_0 term, let alone higher orders. Nevertheless, the data carry enough information for us to infer whether the universe will continue to accelerate or will slow down. Third, we should admit that the variance in the y-redshift space at the fourth order is larger than that of the z-redshift. Risk. The risk is used to balance the squared bias and the variance, and to find which series truncation best describes the observational data. Because of the model dependence of the squared bias, in this section we also investigate the influence of different fiducial models on the final risk analysis. In Fig. 6, we plot the risk for the fiducial ΛCDM model and the wCDM model. From the comparison between the two panels, we first find that the risk is only weakly affected by the fiducial model. Both cases favor cosmography up to the j_0 term as the better choice to describe the current JLA data. This is consistent with our previous work via the F-test [15], and it shows that the risk analysis is a stable and scientific tool to analyze the convergence problem.
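As a rough illustration of the bias-variance decomposition defined above, the risk for each truncation order could be tabulated as in the sketch below; the arrays are hypothetical placeholder numbers, not the paper's actual reconstructions.

```python
import numpy as np

def risk(mu_cos, mu_fid, sigma_cos):
    """Risk = bias^2 + variance, summed over the redshift bins, as defined above.
    mu_cos: reconstructed cosmographic distance moduli for one truncation order,
    mu_fid: fiducial distance moduli, sigma_cos: reconstruction uncertainties."""
    bias2 = np.sum((mu_cos - mu_fid) ** 2)
    variance = np.sum(sigma_cos ** 2)
    return bias2, variance, bias2 + variance

# Hypothetical numbers for three truncation orders on the same redshift grid
mu_fid = np.array([38.3, 42.1, 44.2])
reconstructions = {
    1: (np.array([38.5, 42.5, 44.9]), np.array([0.02, 0.05, 0.08])),
    2: (np.array([38.3, 42.2, 44.3]), np.array([0.03, 0.09, 0.20])),
    3: (np.array([38.3, 42.1, 44.2]), np.array([0.10, 0.60, 1.50])),
}
for order, (mu_cos, sig) in reconstructions.items():
    b2, var, r = risk(mu_cos, mu_fid, sig)
    print(f"order {order}: bias^2={b2:.3f} variance={var:.3f} risk={r:.3f}")
```

With these placeholder numbers, the second order minimizes the risk, which mirrors the qualitative conclusion drawn in the text.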
From the bias-variance trade-off, we conclude that the JLA data are so precise that the cosmographic models in the z-redshift and the y-redshift can both give estimates with small bias. Forecasting. The above analysis shows that cosmography at high order suffers from an unphysical estimation, i.e., a large variance. We anticipate that future observations will be able to give tighter constraints on cosmography, with improved observational precision, thus relaxing the convergence problem. In this section, we forecast the constraints from future WFIRST and redshift-drift data on cosmography. In order to test the constraint from mock distance moduli with flat errors σ_μ = 0.15, we also generate data following the work in Ref. [18]. The comparison in Table 1 shows that future measurements can improve the constraints. For example, compared with σ_{l_0} = 2478.44 from the JLA sample, the redshift drift gives a more robust constraint on the parameters, e.g., σ_{l_0} = 1299.17, almost a factor of two improvement over the current JLA data. Because future measurements mainly focus on the high-redshift region, we compare the variances for the y-redshift in Fig. 7. On the one hand, we find that all the future observations can significantly improve the constraints at low order; in particular, the redshift drift can yield an error σ_{q_0} ∼ 10⁻⁵ for the first-order model. On the other hand, the future observations, including the mock data with σ_μ = 0.15, all improve the constraints at high order dramatically, with variances much smaller than for the current data. Thus, the bias-variance trade-off is effective for assessing the cosmographic problem, and future observations have the potential to solve it. Values of the cosmography. In previous work, cosmography has been widely used to reconstruct some special cosmological parameters, because of its model independence. In this section, we investigate its value, reporting what information we can obtain from cosmography and what we cannot. Deceleration parameter. The deceleration parameter is important for its direct sensitivity to the cosmic expansion: its negative (positive) sign immediately indicates accelerating (decelerating) expansion. However, it is not currently a directly observable quantity, and most studies have been performed with various parameterized forms of q(z); a model-independent analysis is therefore welcome. In the right panel of Fig. 8, we plot the deceleration parameter reconstructed as a function of the y-redshift using the best-fit values from the supernova data. We find that q(y) for the various series truncations is quite different; the reconstructions depend strongly on the order of the Taylor expansion. Therefore, it is difficult to obtain a model-independent and stable estimate of the deceleration parameter via cosmography. In order to find the reason why a stable estimate cannot be obtained, we also compare the cosmographic distance modulus and the Hubble parameter for different orders. The distance moduli are almost indistinguishable at all redshifts, which indicates that cosmography fits the observational data well. For the Hubble parameter, however, we can only obtain a relatively stable estimate at low redshift, y < 0.2; at higher redshift the different orders gradually deviate from each other. When we extract the deceleration parameter, we only obtain similar estimates at redshift y ≈ 0.1.
This comparison tells us that although the cosmographic models fit the data well, their contradictions become more and more prominent as our demands on the study of the cosmic expansion increase. Therefore, it is difficult to obtain a more detailed expansion history via cosmography. In fact, Fig. 8 implies that a dynamical measurement may be useful to resolve this contradiction. In our previous work [15], we found that the inclusion of Hubble parameter data can lead to stronger constraints on the cosmographic parameters. In Ref. [61], the authors also showed that distance indicators cannot directly measure q_0 with both accuracy and precision, whereas the redshift drift possibly can. Therefore, it is reasonable to anticipate that the inclusion of the dynamical redshift drift could give a much more stable evaluation of the cosmic expansion history. Dark energy equation of state. In previous work, cosmography was often used to reconstruct dynamical cosmological models. For example, with the two cosmographic parameters (q_0, j_0) one can derive the constant equation of state (EoS) dark energy model [62]; however, this requires assuming a background model. In order to obtain an undistorted map of dark energy, our study considers the standard cosmological model with a general dark energy component, H²(z) = H_0² { Ω_m (1 + z)³ + (1 − Ω_m) exp[ 3 ∫_0^z (1 + w(z̃))/(1 + z̃) dz̃ ] }, imposing no particular form of dark energy beyond a general w(z). Solving this relation for the EoS, we obtain w(z) = [ (2/3)(1 + z) H H′ − H² ] / [ H² − H_0² Ω_m (1 + z)³ ], where the prime denotes the derivative with respect to redshift z. We note that the denominator may vanish when H(z)² = H_0² Ω_m (1 + z)³, which can lead to a singularity in the EoS reconstruction. In Fig. 9, we plot the reconstruction of dark energy for the different cosmographic series. In order to investigate the influence of the matter density parameter, we vary Ω_m from 0.25 to 0.35. On the one hand, we find that all cosmographic orders favor a cosmological-constant-like EoS at the present epoch; on the other hand, a reliable estimate of w(z) is difficult to obtain. In addition, we find that w(z) shows a sharp change at redshift z ∼ 0.8, independent of the matter density parameter. Usually, it is difficult to determine model-independently whether the EoS is constant or varying. A model-independent analysis of the derivative of the EoS can, however, be carried out with cosmography. In Fig. 10, we plot the derivative w′(z) obtained from cosmography with different series, again considering the parameter Ω_m over a wide range. First, we find that w′(z = 0) is not zero in the different cosmographic models, which indicates that a constant-EoS dark energy model may be inappropriate. Moreover, w′(z) at low redshift is generally negative. Thus, cosmography may suggest a varying EoS and a more complicated dark energy candidate; a linear EoS such as w(z) = w_0 + w_a z may be improper. According to the above picture, we infer that cosmography may favor a dark energy with w(z) = −1 + w_a z + w_b z² + ⋯, where w_a < 0 and w_b ≠ 0. However, an accurate determination of the derivative w′(z) requires more data, because it also strongly depends on the cosmographic series truncation. Conclusion and discussion. In the present paper, we analyze the problem of cosmography using the bias-variance trade-off, and we investigate its value. To solve the convergence issue in cosmography, an improved redshift y = z/(1 + z) was introduced. Using the bias-variance trade-off, we find that the y-redshift produces bigger variances at high orders.
For the low cosmographic orders (i.e., the first and second order), the y-redshift does not bring about bigger errors, but a nearly identical variance to that of the z-redshift. For the JLA data, we find that most of the data are distributed in the low-redshift region with high precision; therefore, the z-redshift is sufficiently accurate to describe the data. Although the y-redshift does not give a smaller bias than the z-redshift for the JLA data, it still ensures the correctness of cosmography at high redshift. Minimizing the risk suggests that an expansion up to the j_0 term is the best choice for current supernova data, regardless of whether the z-redshift or the y-redshift is used. We also test the influence of the fiducial model on the risk analysis; the comparison demonstrates that this influence is negligible. Although a previous F-test also obtained a similar result, our paper is not merely a repetition with more data: our analysis provides deeper insight into the convergence issue. First, it can tell us whether the convergence problem lies in the accuracy or in the precision, and it provides more objective information about how serious the divergence problem is; if the crux lies in the accuracy, the convergence problem may still not be solved even when more data are included. Second, in previous work most of the focus was on the pursuit of a "sweet spot", which has masked the physical meaning of the y-redshift. In our study, we not only find that it is influenced by the distribution of the data, but also forecast whether future observations can solve the convergence problem. Our analysis in Fig. 8 and Sect. 5.1 also indicates that dynamical measurements are a potential clue to solving this problem. Our forecast finds that future WFIRST and redshift-drift observations can significantly improve the constraints. Therefore, the inclusion of dynamical measurements such as Hubble parameter data, the redshift drift, etc., may be able to improve the constraints in both accuracy and precision with high significance. As studied in our previous work [15], inclusion of H(z) data leads to stronger constraints on the cosmographic parameters. This is helpful for understanding or solving the convergence issue of supernova data, because a dynamical probe like the canonical redshift drift provides a direct measurement of the cosmic expansion history, whereas the distance measurement is geometric. As studied in Ref. [63], the luminosity distance determines the EoS w through a multiple-integral relation that smears out much information. The redshift drift, in contrast, not only directly measures the change of the Hubble parameter, but can also be realized via multiple wavebands and methods [37,64]. Moreover, it is immune to extra systematic errors and does not need photometric calibration. Recently, a test at the German Vacuum Tower Telescope demonstrated that laser frequency combs also have the long-term calibration precision and accuracy required to realize the redshift-drift experiment [65]. Our investigation also promotes the study of the value of cosmography. In previous work, most attention was focused on specific models, whereas our analysis presents an almost undistorted map of dark energy and is not limited to extrapolations within particular models. Leaving the dark energy w(z) free, we find that cosmography cannot give reliable estimates of q(z) and w(z). However, we find that it does not favor a constant EoS, but a complicated w(z), such as w(z) = −1 + w_a z + w_b z² + ⋯, where w_a < 0 and w_b ≠ 0.
These estimates are useful for modeling dark energy. Cosmography has been a useful tool with great potential for studying cosmology. Dark energy is usually reconstructed either through parameterizations, such as Chevallier-Polarski-Linder [66,67] and Jassal-Bagla-Padmanabhan [68], or through non-parametric methods, such as Gaussian processes [55,69] and principal component analysis [60,70]; cosmography is another model-independent method for assessing dark energy models. Moreover, cosmography has also been widely used in other fields, for example to test the constraining power of supernova data [71]. Therefore, cosmography is an important method for studying cosmology, and our study provides a straightforward and scientific reference. Of course, we will also devote ourselves to improving the cosmography study in future work. We would like to study the influence of including BAO and CMB data in cosmography. Throughout previous work, we find that many different observational data sets and combinations favor a best cosmography at second order; in future work we will explore their subtle relations to understand cosmography further. Moreover, we are also interested in alleviating the cosmographic problem by proposing other physical redshift parametrizations.
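As a supplementary illustration of the w(z) reconstruction discussed in Sect. 5.2, the sketch below evaluates the reconstruction formula from a second-order cosmographic H(z); it assumes the flat background written above, and the parameter values (H_0 = 70, q_0 = -0.55, j_0 = 1, Ω_m = 0.30) are illustrative, not the paper's best fits.

```python
import numpy as np

def H_cosmographic(z, H0=70.0, q0=-0.55, j0=1.0):
    """Hubble parameter from the second-order cosmographic expansion quoted in Sect. 2."""
    return H0 * (1.0 + (1.0 + q0) * z + 0.5 * (j0 - q0**2) * z**2)

def w_reconstructed(z, Om=0.30, H0=70.0, dz=1e-4, **kw):
    """EoS from w(z) = [(2/3)(1+z) H H' - H^2] / [H^2 - H0^2 Om (1+z)^3],
    with H'(z) evaluated by a central finite difference."""
    H = H_cosmographic(z, H0=H0, **kw)
    Hp = (H_cosmographic(z + dz, H0=H0, **kw) - H_cosmographic(z - dz, H0=H0, **kw)) / (2 * dz)
    num = (2.0 / 3.0) * (1.0 + z) * H * Hp - H**2
    den = H**2 - H0**2 * Om * (1.0 + z) ** 3
    return num / den

for z in (0.0, 0.3, 0.6):
    print(z, round(float(w_reconstructed(z)), 3))
```

With q_0 = -0.55 and Ω_m = 0.30 the sketch returns w(0) = -1 exactly, as expected for a cosmological-constant-like present-day EoS; deviations at higher z reflect the series truncation rather than new physics.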
8,127
2017-06-29T00:00:00.000
[ "Physics" ]
Investigation of the Energy Efficiency of Electrohydrodynamic Drying under High Humidity Conditions. Electrohydrodynamic (EHD) drying is an emerging drying technology that is based primarily on the phenomenon of ionic discharge between two electrodes. The aim of this study was to experimentally investigate the energy efficiency of EHD drying based on the corona discharge current. Moreover, we aimed to compare the energy efficiency of EHD drying with that of other available drying technology options under high humidity, which mimics Indonesia's environment as a potential target for implementation, given that it is of interest to investigate the feasibility of EHD drying in a real environment. The results show that the dominant parameters governing the EHD drying rate are the corona discharge current and the moisture content zones of the target objects. The findings of this study show that EHD drying technology could effectively replace the conventional drying process in developing countries with high humidity environments that face energy consumption challenges. Introduction. Electrohydrodynamic (EHD) drying is an emerging technology that has recently gained attention owing to its potentially wide applications in agriculture, especially for drying [1]. In addition to low energy consumption [2], EHD drying also offers various advantages with respect to the quality of the dried product: lower shrinkage [3,4], preserved color [3][4][5] and nutrient content [6], and a higher rehydration ratio [3]. The working principle of EHD drying is based primarily on the phenomenon of ionic discharge. Two electrodes are required for the set-up, with a high potential difference applied between them. The ions generated are transported to the collecting electrode under the influence of the electric force. As the ions move to the other electrode along the electric field, they collide with air molecules, creating a flow of air called an ionic wind [7,8]. The ionic wind disturbs the saturated boundary layer around the target objects and causes water molecules to evaporate from them. Many researchers have previously attempted to unravel the physics of EHD drying as a conjugate phenomenon interlinking many physical parameters with drying. It is anticipated that an understanding of the relationships between parameters would aid the industrial implementation of EHD drying. However, many studies have been mostly solitary, offering only a partial understanding of the phenomenon: the effect of applied voltage on the drying performance [9], the effect of humidity and pressure [10][11][12], the effect of cross-wind [12][13][14], the effect of AC current on drying [15], and analyses of the drying of various agricultural products [16][17][18], to name a few. The effect of electrode geometries on the drying performance has also been widely investigated [3-5, 8, 19, 20]. In those studies, the shape and arrangement of the emitter and collecting electrodes were examined; such research typically compares different emitter electrode shapes, such as needles and plates, and two different collecting electrodes, namely plate and mesh [19]. Recently, Martynenko et al. raised a hypothesis on the importance of the corona discharge current to the drying performance [21]. The authors argued that the mass transfer is directly proportional to the ionic wind velocity, which is proportional to the current density.
Conveniently, the corona discharge current is easily measured; therefore, it can act as a promising parameter to offer insights into the design considerations of EHD drying. In addition, in the same publication [21], which was revised more recently [22], the same group presented a diagram that neatly displays how various factors relate to each other and how they eventually determine the mass transfer in EHD drying. Martynenko et al. [21] and Kudra and Martynenko [22] provided critical insights for the future development and applications of EHD drying. However, their analysis has not yet been linked to the energy aspect of EHD drying, whereas energy efficiency is a major factor to consider when it comes to potential implementation in real environments. The importance of investigating the energy efficiency of EHD drying is highlighted in a recent review of research on EHD drying [23]. Therefore, the objective of this research was to investigate the energy efficiency of EHD drying based on the corona discharge current. Several factors were also considered, especially the effect of different moisture contents of the samples on the drying rate. Additionally, it is imperative to consider an energy scenario in a real environment. In this study, we selected Indonesia as a potential target for implementation. Mimicking the condition of high humidity, we then carried out an energy analysis by comparing EHD drying with other available off-the-shelf drying technology options, such as hot air grain dryers, to assess its feasibility. Investigation of the Effect of Corona Discharge Current Value on Drying Speed and Energy Efficiency of Drying in EHD Drying. The aim of this experiment was to investigate the effect of the corona discharge current on the drying rate and energy efficiency of EHD drying. Figure 1(a) shows a schematic of the experimental system for the EHD drying experiments. The voltage electrode had 16 needles spaced 20 mm apart. The interelectrode distance (distance from the tip of the needle to the ground electrode) d was 20 mm. Multiple needle-like electrodes were chosen among various electrode types, such as single-needle and wire electrodes [18,24], because multiple needle-like electrodes can be used in industrial settings where a wider area is needed for drying. A Faraday cage was used to prevent stray electromagnetic waves from affecting the measurement. The corona discharge current generated by the EHD was measured using a digital multimeter (PC773, Sanwa Electric Instrument Co., Ltd., Tokyo, Japan). Samples were prepared from paddy rice. The experimental procedure for the EHD drying experiment was as follows: (1) the dry weight (W_dry) of the samples was measured; the dry weight was defined as the weight of the sample after a desiccant (silica gel) had absorbed moisture until the measured weight no longer changed, and the sample dry weight was 10 g; (2) the samples were soaked in water for 24 h to moisten them until the moisture content was approximately 30% (Figure 1(c)); (3) the water-impregnated samples were evenly spread on the ground electrode to form a circle 7.5 cm in diameter, the sample being partially spread over two layers with no gaps; and (4) a high voltage was applied while maintaining the temperature and humidity and measuring the change in weight of the sample (W_wet). The moisture content based on wet weight, MC_wb, was derived as MC_wb = (W_wet − W_dry)/W_wet × 100 (%). The applied voltages ranged from 0 kV (natural drying) to 4.6 kV∼5.8 kV.
The corona discharge started at around 4.4∼4.5 kV. Drying experiments were conducted in 0.1 kV increments from 4.6 kV to 5.2 kV and in 0.2 kV increments from 5.2 kV to 5.8 kV. Experiments were conducted by changing the applied voltage, and the drying rate (%/h) relative to the measured corona discharge current value was derived for each moisture content zone (high-moisture content zone, 26%∼25%; medium-moisture content zone, 21%∼20%; low-moisture content zone, 16%∼15%). This range was chosen because, in Indonesia, the moisture content of paddy rice immediately after harvest is about 25% (rainy season) to 22% (dry season) [25], and the purpose of grain drying is to reduce the moisture content to 15%∼14%, which is suitable for storage. The drying rate is a unit used in the field of grain drying to express the speed of drying; it is calculated as the decrease in moisture content per unit time. The drying rate per unit of electricity (%/mW·h) was obtained as an indicator of the energy efficiency of drying as 3600/(T·V·A), where T is the measured time needed to evaporate 1% of moisture content (s), V is the applied voltage (kV), and A is the mean value of the measured current (μA), so that V·A is the discharge power in mW. At 5.8 kV, the maximum value of the corona discharge current was 12.22 μA, approaching the order of 10⁻⁵ A, which is the current at which the discharge shifts to a glow discharge. The temperature and humidity were kept at 23.1 ± 1.6 °C and 50.0 ± 2.0% relative humidity (RH) during the experiment on the effects of the corona discharge current on the drying rate. Taking the environment of Indonesia as an example of that of a developing country, the experiment compared the drying performance of EHD drying and conventional drying methods under high humidity conditions. In the postharvest drying of paddy, the drying process is carried out at a drying rate of approximately 0.8 to 1.3%/h to prevent cracking, which is caused by very rapid moisture movement within the grain [26]. Based on the corona discharge current experiment, the voltage was set so that the drying rate at low moisture content (16%∼15%) would be within this range. In the low moisture content zone, evaporation of the water content from the inside of the paddy is considered the main factor. A sample of paddy rice that had been moistened for 24 h was used; the moisture content of the treatment sample was between 15% and 25%. To simulate a high-humidity environment, the temperature and humidity within the experimental set-up were maintained at 23.4 ± 1.4 °C and 68.8 ± 0.9% RH in the experiment comparing the efficiency of EHD drying and sun drying. Figure 2 shows the relationship between the drying rate and the corona discharge current in each moisture content zone. In all moisture content zones, the drying rate increased with increasing corona discharge current, but the increase was moderate. In particular, when the moisture content was low (16%∼15%), the drying rate remained almost unchanged above a corona discharge current of approximately 2.6 μA. Effects of Corona Discharge Current on the Drying Rate. The results also suggest that, in the high-moisture content zones, the relationship between the corona discharge current and the drying rate fits the square-root approximation reported by Kudra and Martynenko [22]; in the low-moisture content zones, however, the relationship was almost constant. This result suggests that the moisture content zone of the drying targets plays a significant role in determining the drying rate.
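To keep the two quantities defined above straight (wet-basis moisture content and the drying rate per unit of electricity), here is a small Python sketch; the numerical values are illustrative examples, not measurements from the paper.

```python
def moisture_content_wb(w_wet, w_dry):
    """Wet-basis moisture content MC_wb in %, from the wet and dry sample weights."""
    return (w_wet - w_dry) / w_wet * 100.0

def drying_efficiency(t_per_percent_s, voltage_kV, current_uA):
    """Drying rate per unit of electricity in %/(mW*h).
    voltage_kV * current_uA gives the discharge power in mW;
    t_per_percent_s is the time (s) needed to remove 1% of moisture content."""
    power_mW = voltage_kV * current_uA
    energy_mWh = power_mW * t_per_percent_s / 3600.0
    return 1.0 / energy_mWh

# Illustrative example: 10 g dry weight rehydrated to ~13 g, dried at 5.0 kV and 2.8 uA,
# taking ~45 minutes to remove 1% of moisture content
print(round(moisture_content_wb(13.0, 10.0), 1), "% MC (wet basis)")
print(round(drying_efficiency(45 * 60, 5.0, 2.8), 2), "%/(mW*h)")
```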
Figure 3 shows the relationship between the drying rate per unit power and the corona discharge current; here the corona discharge current is the average value within each moisture content zone. The results support the facts that the energy required for drying increases at low moisture content and that evaporation of internal moisture is the main factor in the late stage of drying. The smaller the corona discharge current, the higher the energy efficiency with which the drying rate is promoted. This suggests that the minimum corona discharge current that can satisfy the desired drying rate is the optimal corona discharge current value in terms of energy efficiency, because the increase in the drying rate becomes slower as the corona discharge current increases. The result (cf. Figure 2) shows that the drying rate depends on the moisture content zone of the target objects under the same corona discharge current. The results also clearly show that the energy efficiency is higher when the corona discharge current is smaller. This means that maximizing the total corona discharge current might not be the best strategy for developing EHD drying in low-energy contexts, such as those of developing countries with high humidity environments. The results might differ depending on the target materials, and future research should investigate the energy efficiency for other target materials. Figure 4 shows the change in moisture content over time in the EHD drying experiment conducted under conditions simulating the Indonesian environment. When the applied voltage was 5 kV, the average value of the corona discharge current was 2.81 μA. During the experiment, the drying rate at low moisture content (16%∼15%) was 1.17%/h, which was within the drying rate range of 0.8%∼1.3%/h recommended to prevent cracking. Figure 4 shows that EHD drying under the Indonesian environment took about 6.2 hours to dry from 25% to 15% moisture content. Under the condition of about 70% RH, the drying rate in the low moisture content zone (16%∼15%) stayed within the range of 0.8%∼1.3%/h required to prevent rice cracking, indicating that the drying rate is sufficient for practical use. Table 1 shows the comparison of drying rates between the EHD drying experiment under conditions that mimic the Indonesian environment and the sun drying experiment in Indonesia. For EHD drying, the average drying rate was evaluated between 25% and 15% moisture content, the maximum drying rate between 26% and 25%, and the minimum drying rate between 16% and 15%. Table 1 shows that EHD drying maintained a consistently high drying rate, while the drying rate of sun drying varied greatly depending on the time of day. The distribution of drying speeds reflects this time-of-day difference: a previous study reported that the drying speed is rapid in the morning and slow in the afternoon [18]. The energy required to dry 1 kg of water was 239 kJ/kg in the EHD drying experiment under Indonesian environmental conditions when the evaluation range was a moisture content of 22%∼15%. Table 2 summarizes the results of experiments in which the energy consumption for drying was measured using commercially available products [27,28]; for the hot air grain dryer, the average energy required to dry 1 kg of water is 4,531 kJ/kg water. These results indicate that EHD drying might be able to accelerate the drying process with approximately 1/20 of the energy of the grain dryer while achieving the same drying rate.
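The roughly 1/20 figure quoted above can be cross-checked with a line of arithmetic; the sketch below simply divides the reported specific energies (239 kJ/kg for EHD under the humid-condition test versus 4,531 kJ/kg for the hot-air grain dryer).

```python
# Specific energy needed to evaporate 1 kg of water, as reported above
ehd_kj_per_kg = 239.0        # EHD drying, Indonesian-like conditions (22% -> 15% MC)
hot_air_kj_per_kg = 4531.0   # hot-air grain dryer (average of eight trials, Table 2)

ratio = ehd_kj_per_kg / hot_air_kj_per_kg
print(f"EHD uses about {ratio:.1%} of the grain dryer's energy (~1/{1/ratio:.0f})")
```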
Comparison between EHD Drying and Sun Drying. Drying experiments were conducted with a 10 g sample accumulated on a 100 mm square ground electrode. The same current density enables a 1 kg sample to be accumulated on a 1 m square when the EHD system is simply extended horizontally. Even assuming that the EHD device can be stacked at a pitch of 50 mm without gaps, the processing capacity would be limited to 80 kg within a space 1 m wide, 2 m deep, and 2 m high. Considering that a small grain dryer (for example, the KDR9N-SA, Kubota Corporation, Japan [29]) has a capacity of 300∼900 kg of grain, the current laboratory-level EHD device can process only 9% to 27% of the grain handled by a grain dryer of similar size. Although there is a decrease in energy efficiency with industrialization, the comparison implies that the current EHD device is inferior to the grain dryer in terms of processing capacity. This suggests that there is room for investigating ways to increase the density of the grain bed, such as appropriate electrode configurations, to increase the capacity of the EHD dryer. For sun drying, by contrast, an accumulation of 2∼4 cm is assumed because sun drying occurs through solar radiation [25]. Given that the EHD dryer can accelerate the drying process even with such accumulation, EHD drying is a technology that can fully replace sun drying in terms of processing volume.

Conclusion

In this study, we examined whether a grain drying method using EHD drying technology could be used for industrial application at agricultural sites of developing countries with high-humidity environments, where the energy efficiency of drying is important from the viewpoint of power supply. We measured the drying rate and its energy efficiency with respect to the corona discharge current and confirmed that the drying rate increased, and the energy efficiency decreased, as the corona discharge current increased. The results also show that the drying rate varies depending on the moisture content zone of the target objects. The results suggest that the dominant parameters for promoting EHD drying are the corona discharge current and the moisture content zone of the target objects. As an example of a developing country, the experiment was conducted under conditions assuming the environment of Indonesia. It was confirmed that EHD drying could maintain a high drying rate in a stable manner, as opposed to sun drying, where the drying rate varies by nearly four times depending on the time of day. In addition, it was confirmed that drying could be promoted with approximately 5% of the energy while exhibiting the same drying rate as that of a grain dryer using both hot air and far-infrared rays. The EHD drying technology is therefore effective, from the viewpoint of low energy consumption, as a technology to replace the conventional drying process in developing countries having high-humidity environments with energy problems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Additional Points

Practical Application. Drying is an effective way to store foods such as grains. However, drying by heat requires high energy, and drying by sunlight is highly influenced by the environment.
This study examined whether a grain drying method using electrohydrodynamic (EHD) drying technology could be used for industrial application at agricultural sites of developing countries having high-humidity environments, where the energy efficiency of drying is important from the viewpoint of power supply. It was confirmed that EHD drying could maintain a high drying rate in a stable manner, as opposed to sun drying, where the drying rate varies by nearly four times depending on the time of day. The result implies that EHD drying technology may become an effective approach for developing countries having high-humidity environments with energy problems, from the viewpoint of low energy consumption, as a technology to replace the conventional drying process.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Table 2. Energy required to evaporate water, measured with commercially available drying products.
Research | Target | Drying method | Energy required to evaporate water (kJ/kg water)
Durance and Wang [27] | Tomato | Air drying | 29,900
Durance and Wang [27] | Tomato | Vacuum microwave drying | 8,600
Billiris et al. [28] | Rice | Hot wind | 4,531 (average of eight trials)
Cognitive reserve and TMEM106B genotype modulate brain damage in presymptomatic frontotemporal dementia: a GENFI study

Frontotemporal dementia (FTD) shows substantial phenotypic variability. In a multicentre study, Premi et al. explore the effect of cognitive reserve and TMEM106B genotype in modulating grey matter volume in presymptomatic FTD. Environmental as well as genetic factors affect rates of brain atrophy, suggesting a possible strategy for delaying disease onset.

Introduction

Frontotemporal dementia (FTD) is a neurodegenerative disorder characterized by neuronal loss in the frontal and temporal lobes (Hodges et al., 2004; Rohrer et al., 2011; Warren et al., 2013). It presents clinically with behavioural symptoms, deficits of executive functions and language impairment, and in some cases, with motor neuron disease, progressive supranuclear palsy or corticobasal syndrome (Seelaar et al., 2011). Up to 40% of cases have a family history of dementia, with an autosomal dominant inheritance in around a third of patients (Stevens et al., 1998). Mutations within microtubule-associated protein tau (MAPT) (Hutton et al., 1998), granulin (GRN) (Baker et al., 2006; Cruts et al., 2006), and chromosome 9 open reading frame 72 (C9orf72) (DeJesus-Hernandez et al., 2011; Renton et al., 2011) are proven major causes of genetic FTD, accounting for 10-20% of all FTD cases. MAPT mutations lead to FTD with neuronal tau inclusions, while GRN and C9orf72 are associated with intraneuronal TAR DNA-binding protein 43 inclusions (Baborie et al., 2011). Recently, it has been demonstrated in the Genetic Frontotemporal Dementia Initiative (GENFI) study that grey matter and cognitive changes can be identified 5-10 years before the expected onset of symptoms in adults at risk of genetic FTD (Rohrer et al., 2015), and even earlier for those with C9orf72 expansions. However, there is wide variation in the age at onset within families, and possible modifiers of disease progression (including genetic and environmental factors) have yet to be investigated. Such modifiers will be important for several reasons: to properly define biomarkers that can stage presymptomatic disease and track disease progression, to correctly identify individuals most suitable for clinical trials, and to reduce heterogeneity and increase the statistical power of analyses of such trials.

Cognitive reserve and genetic factors have both been proposed as moderators of the onset of disease. Cognitive reserve is a theoretical concept proposing that certain lifetime experiences, including education, individual intelligence quotient, degree of literacy, and occupational attainment, increase the flexibility, efficiency, and capacity of brain networks, thereby allowing individuals with higher cognitive reserve to sustain greater levels of brain pathology before showing clinical impairment (for a review, see Stern, 2009). In healthy individuals, higher educational attainment (Arenaza-Urquijo et al., 2013) as well as cognitive enrichment (Sun et al., 2016) have been related to greater volume and greater metabolism in frontotemporal regions, thus likely enhancing brain performance (Barulli and Stern, 2013). Genetic modifiers of disease expression also exist, affecting the phenotype and prognosis. TMEM106B has been identified as a genetic modifier in FTD, modulating the age at disease onset in frontotemporal lobar degeneration-TDP-43 disease (Cruchaga et al., 2011; Gallagher et al., 2014; van Blitterswijk et al., 2014).
The TMEM106B rs1990622 TT genotype is detrimental and associated with earlier age at disease onset (Cruchaga et al., 2011) and greater functional impairment in frontal regions in presymptomatic GRN mutation carriers (Premi et al., 2014). Conversely, the role of this polymorphism in C9orf72 mutation carriers is still unclear, as a detrimental effect of the TMEM106B rs1990622 CC genotype on disease onset and death has been suggested (Deming and Cruchaga, 2014; Gallagher et al., 2014; van Blitterswijk et al., 2014). In this study, we aimed to evaluate modifiers of structural brain changes in presymptomatic mutation carriers from a large international cohort of subjects at risk for genetic FTD, investigating the effect of (i) pathogenetic mutation, i.e. MAPT, GRN and C9orf72 carriers versus non-carriers; (ii) cognitive reserve, as measured by years of formal schooling; and (iii) TMEM106B rs1990622 genotype, and their interaction, on grey matter volume. We hypothesized that, individually and together, these three factors would modulate the degree of structural atrophy.

Participants

Data for this study were drawn from the GENFI multicentre cohort study (Rohrer et al., 2015), which consists of 13 research centres in the UK, Italy, The Netherlands, Sweden, and Canada. Inclusion and exclusion criteria have been previously described (Rohrer et al., 2015). Local ethics committees approved the study at each site and all participants provided written informed consent according to the Declaration of Helsinki. For the aim of the present work, we considered participants at 50% risk of carrying a GRN, C9orf72 or MAPT mutation based on having a first-degree relative who was a known symptomatic mutation carrier. Between January 2012 and April 2015, 365 participants were recruited into GENFI, of whom 294 were at risk and 71 symptomatic. Of the 294 at-risk participants, 22 did not have a T1-weighted MRI scan suitable for volumetric analysis. Included at-risk subjects underwent a careful recording of demographic data, including years of formal schooling (education), past medical history, and a standardized clinical and neuropsychological assessment, as previously published (Rohrer et al., 2015). Genotyping was then performed for the TMEM106B rs1990622 (C/T) single nucleotide polymorphism according to standard procedures (Premi et al., 2014) (at the individual sites in 70.6% of cases, and at the University of Brescia, Italy, in the remaining 29.4%). Genotype was not available for 41 participants, and so the final analysis was performed on 231 participants: 108 presymptomatic mutation carriers [genetic status (GS) = 1; 61 with GRN, 33 with C9orf72 and 14 with MAPT mutations] and 123 non-carriers (GS = 0). Participants (GS = 0 and GS = 1) came from 77 families (15 with MAPT, 33 with GRN, and 29 with C9orf72 mutations). TMEM106B genotype distribution was comparable between groups (GS = 0 versus GS = 1, Pearson χ² test, P = 0.958), as well as among GS = 1 subgroups (i.e. GRN, C9orf72 and MAPT mutation carriers, P = 0.419). Demographic characteristics of GS = 1, subgrouped on the basis of mutation type, and GS = 0 are reported in Table 1. Mean age of GS = 1 was 45.9 years (range 20.5-70.5 years) and of GS = 0 was 48.3 years (range 19.4-85.7 years). No significant differences were found in age, gender, years of education, and neuropsychological tests between the groups.

Statistical analysis

We fitted a linear mixed effect interaction model (Galecki and Burzykowski, 2013).
We assessed the main effect of three factors on grey matter: (i) the presence of pathogenetic mutation (GS, coded as GS = 1 for GRN, MAPT or C9orf72 mutation carriers and GS = 0 for mutation non-carriers); (ii) the role of cognitive reserve as measured by years of formal education; and (iii) the TMEM106B rs1990622 C/T genotype (coded as CC, CT or TT). The relationship between each factor and grey matter volume was labelled as b1, b2, and b3, respectively (Fig. 1, dark blue lines). Furthermore, we considered the two-way interaction effects between each pair of factors, i.e. GS and education, labelled as b4 (red line), GS and TMEM106B genotype, labelled as b5 (orange line), and education and TMEM106B genotype, labelled as b6 (green line) (Fig. 1). Finally, we considered the three-way interaction effect, i.e. GS, education, and TMEM106B genotype, on grey matter (b7) (Fig. 1). These main and interaction effects were adjusted by fixed covariates, namely age and gender. Moreover, we considered two random effect factors, study site and pedigree, which permitted analysis of the correlations of subjects in the same cluster (centres of subjects' enrolment or individual families).

To overcome the complexity of multiple comparison corrections, we first carried out data reduction of the grey matter parcellation data. We proposed a graph-Laplacian Principal Component Analysis (gLPCA) to obtain a low-dimensional representation of the grey matter parcellation, which incorporated graph structure (Jiang et al., 2013). We did not apply standard principal component analysis (PCA), widely used to obtain a low-dimensional representation, as the imaging data present spatial structure and high left-right correlation (Belkin and Niyogi, 2001). gLPCA has several advantages: (i) it is modelled on the representation of the data; (ii) it can be easily calculated, presenting a compact closed-form solution; and (iii) it allows noise removal. Once data reduction was obtained, bivariate correlations between principal component (PC) scores and each grey matter measure were computed. Finally, we fitted the mixed-effect interaction models with grey matter, summarized by the first PC scores, as outcome variable. Statistical analysis was performed via R packages (www.r-project.org) and in-house R scripts.

Results

By gLPCA using the skeleton graph between grey matter measures (Supplementary Fig. 1), the first PC (PC1) was selected to summarize the grey matter volume data. Frontal, parietal and temporal regions were the areas that contributed most to graph construction and PC1 scores, based on correlations between PC1 scores and grey matter measures (Supplementary Table 1). Fitting the linear mixed-interaction model with fixed covariates (age and gender) and random effects (study site and pedigree), a significant direct effect of GS and years of education on the grey matter outcome (PC1 scores) was observed (P = 0.002 and P = 0.02, respectively), while no effect of TMEM106B genotype on grey matter was detected. We did not find any significant two-way interaction between the considered variables, but did find a three-way interaction on grey matter (P = 0.007) (Table 2). The data are summarized in Fig. 2. On the x-axis, years of education (i.e. cognitive reserve) are reported, and on the y-axis grey matter volume (PC1). Years of education had a significant direct effect on grey matter volume, independently of GS.
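As an illustration only, the following Python sketch shows one way to specify a linear mixed model with the three-way interaction, the fixed covariates, and the two random-effect factors described above. The original analysis was run with R packages and in-house scripts, so this is not the authors' code; the file name and column names (gm_pc1, GS, education, tmem106b, age, gender, site, pedigree) are hypothetical stand-ins, and pedigree is treated here as nested within site.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("genfi_presymptomatic.csv")  # hypothetical input file

model = smf.mixedlm(
    # main effects, all two-way terms, and the three-way interaction (b1..b7),
    # adjusted for age and gender
    "gm_pc1 ~ GS * education * C(tmem106b) + age + C(gender)",
    data=df,
    groups="site",                                # random intercept per study site
    vc_formula={"pedigree": "0 + C(pedigree)"},   # random intercept per family, nested in site
)
result = model.fit(reml=True)
print(result.summary())
```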
We found that the greater the years of education, the greater the grey matter volume (in both GS = 0 and GS = 1), suggesting that cognitive reserve was able to exert an effect, by increasing grey matter volume, in presymptomatic at-risk participants carrying pathogenetic mutations as well as in non-carriers. In comparison to non-carriers (GS = 0, red line), mutation carriers (GS = 1, blue line) showed a significant decrease of grey matter volume, confirming the effect of pathogenetic mutations in shaping progressive atrophy before the onset of symptoms (Rohrer et al., 2015). The TMEM106B genotype did not exert a direct effect on grey matter volume, and did not affect grey matter in non-carriers (Fig. 2, red line, GS = 0 with TMEM106B CC, CT or TT). However, in those individuals carrying pathogenetic mutations (GS = 1), the TMEM106B polymorphism modulated the slope of the relationship between education and grey matter volume (GS * Education * TMEM106B): a steeper slope was found in TMEM106B TT carriers compared with CT carriers, which in turn was greater than that of CC carriers, with a dose-dependent effect (Fig. 2, green and purple lines). Considering the contribution of each mutation separately, the effect of TMEM106B genotype (GS * Education * TMEM106B) was mainly driven by C9orf72 mutation carriers, the subgroup of patients with the greatest atrophy (Supplementary Table 2 and Supplementary Fig. 2).

Discussion

Autosomal dominant FTD presents with significant inter- and intra-familial variability among individuals bearing the same pathogenetic mutation. This suggests the presence of environmental, genetic and/or epigenetic modifiers influencing the age at disease onset and clinical phenotype (Borroni and Padovani, 2013). The effect of genetic modifiers and environmental factors that might trigger the onset of neurodegeneration in carriers of the pathogenetic mutation (which is present at birth but only manifests symptoms in mid-late adulthood) is of great interest. In this view, the pathogenesis of inherited FTD may be a model of 'Latent Early-Life Associated Regulation' (LEARn), in which latent expression of associated genes is triggered by environmental and non-environmental factors (Maloney et al., 2012; Maloney and Lahiri, 2016), with neurodegeneration being modulated by lifetime exposure to one or more environmental factors as well as by genetic background. In the present study, we aimed at identifying modulating factors of neuronal loss in presymptomatic subjects bearing pathogenetic mutations within the GRN, MAPT and C9orf72 genes through the surrogate marker of volumetric MRI. We analysed the effect of (i) pathogenetic mutations; (ii) cognitive reserve as measured by years of schooling; and (iii) TMEM106B genotype on grey matter volume in the large GENFI cohort. Indeed, we chose grey matter volume as an endpoint measure as it correlates well with indexes of disease severity (Premi et al., 2016) and progression (Brambati et al., 2007; Lam et al., 2014). First, we confirmed, as previously reported (Rohrer et al., 2015), that pathogenetic mutations are detrimental to grey matter volume years before the expected age at disease onset, being associated with smaller volumes, i.e. greater atrophy, as compared to siblings who did not inherit the mutation. C9orf72 repeat expansion carriers had a greater degree of atrophy (Rohrer et al., 2015) as compared to GRN mutation carriers and MAPT mutation carriers, with the latter being the smallest group in our sample.
Furthermore, we demonstrated that cognitive reserve is associated with brain structure and modulates neuronal loss years before the onset of symptoms. The TMEM106B polymorphism, on the other hand, only modulated grey matter volume in those with an autosomal dominant mutation and with the lowest education. The duration of formal schooling, as a proxy of cognitive reserve, was associated with greater grey matter volume in both non-carriers and mutation carriers. This might suggest that subjects with higher educational attainment were able to counteract the detrimental effect of a pathogenetic mutation better than their counterparts with lower education. However, this effect was also found in the group that did not carry mutations: those with higher educational attainment had greater grey matter volume than those with low education. Hence, the finding is not specific to mutation carriers, suggesting a broader effect of cognitive reserve on maintaining and ameliorating brain functioning. Interestingly, even if presymptomatic mutation carriers already had mild structural changes (Rohrer et al., 2015), the relationship between years of education and structural changes was comparable to that observed in non-carriers (direct correlation, the higher the education the greater the grey matter volume) (Sole-Padulles et al., 2009; Rzezak et al., 2015), rather than that reported in symptomatic FTD (inverse correlation, the higher the education the lower the grey matter volume) (Borroni et al., 2009). The concept of cognitive reserve was originally proposed to explain the lack of a direct relationship between the degree of brain pathology and the severity of the clinical manifestations that should supposedly result from such damage in neurodegenerative disorders such as Alzheimer's disease (Stern et al., 1992, 1994) and FTD (Borroni et al., 2009; Premi et al., 2012, 2013). Cognitive reserve represents the hypothesized capacity of the adult brain to compensate for the effects of a disease or injury that would be sufficient to cause clinical dementia in an individual with less cognitive reserve (Stern, 2002). Herein, we propose that high education might postpone the onset of dementia in those subjects at risk of developing FTD (Akbaraly et al., 2009; Craik et al., 2010; Pettigrew et al., 2013). These findings extend previous results obtained in healthy subjects (Sole-Padulles et al., 2009; Rzezak et al., 2015), and might represent a possible strategy to delay the onset of inherited FTD. In contrast to educational attainment, TMEM106B genotype did not have any effect in mutation-free individuals, but this genetic trait might represent an additional non-modifiable risk factor in mutation carriers. The literature has widely shown that TMEM106B variants are genetically associated with frontotemporal lobar degeneration-TDP-43 pathology and are considered a major risk factor for this disease (Chen-Plotkin et al., 2012; Busch et al., 2013; Nicholson and Rademakers, 2016). It has been suggested that the TMEM106B polymorphism might modulate progranulin plasma levels, thus affecting the age at onset of symptoms in GRN mutation carriers and explaining in part the reported variability (Cruchaga et al., 2011). Furthermore, presymptomatic GRN mutation carriers bearing the TMEM106B TT genotype showed greater functional brain damage than those with CT/CC TMEM106B genotypes (Premi et al., 2014).
In frontotemporal lobar degeneration-TDP-43 due to C9orf72 mutations, the relationship is less clear, and it has been suggested that TMEM106B might affect disease pathology, but with an opposite association (Gallagher et al., 2014; van Blitterswijk et al., 2014): two independent groups analysed the association of TMEM106B variants with disease risk, age at onset, and age at death in C9orf72 expansion carriers, and the CC genotype (protective in GRN carriers) was found to be associated with earlier onset and earlier death in C9orf72 expansion carriers (Deming and Cruchaga, 2014; Gallagher et al., 2014; van Blitterswijk et al., 2014). This effect may be an example of the general phenomenon of epistasis, in which a genetic variant is beneficial on some genetic backgrounds but deleterious on others (Gallagher et al., 2014; Busch et al., 2016). In particular, it has been hypothesized that, whereas in GRN-related TDP-43 pathology TMEM106B is related to endosomal-lysosomal dysfunction and to perturbation of the progranulin pathway, in C9orf72 knockdown mice TMEM106B over-expression may produce a phenotypic rescue effect (Busch et al., 2016). However, further studies are needed to elucidate its mechanism of action. Another possibility is that TMEM106B is simply in linkage disequilibrium with the actual associated variant, and that when different populations are examined, the allele associated with disease modulation is different. In the present work, a moderating, dose-dependent effect of TMEM106B rs1990622 genotype together with educational attainment was observed on grey matter volume in presymptomatic subjects carrying pathogenetic mutations. This finding supports the idea that epigenetic modifications in TMEM106B might occur. Epigenetic mechanisms, mostly mediated by DNA methylation, have been shown to be important in other neurodegenerative disorders (Piaceri et al., 2015) and to be influenced by socioeconomic status, which is strongly associated with cognitive reserve (Tehranifar et al., 2013). Thus, it could be hypothesized that TMEM106B, a gene containing a number of methylation sites (http://genome.ucsc.edu), might exert its effect on structural changes in at-risk subjects via cognitive reserve. Future studies, however, need to be performed to confirm this hypothesis. We indeed found a detrimental effect of the CC genotype, but the present results were mainly driven by subjects carrying C9orf72 mutations, in which CC is the risk genotype (Supplementary Table 2). We acknowledge that there are limitations to this work. First, the correlation between the factors considered herein and age at disease onset would benefit from longitudinal follow-up and independent studies to confirm the present results. The lack of assessment of leisure activities prevents us from characterizing the entire spectrum of cognitive reserve proxies (Nucci et al., 2012), and we are aware of possible biases determined by different school systems across the involved countries. Finally, the role of TMEM106B genotype should be further evaluated in each genetic group separately. In conclusion, our findings indicate that, even several years before the onset of symptoms, brain changes in inherited FTD may be modulated by environmental and genetic factors. In the absence of effective pharmacotherapeutic treatments for counteracting the onset of symptoms in pathogenetic mutation carriers, high education may represent a large-scale strategy to be considered by national health system policies.
TMEM106B genotype needs to be considered as an additional non-modifiable trait affecting brain pathology, and each FTD mutation should be analysed individually. Future clinical trials in genetic FTD should take into account both education level and TMEM106B genotype to define subjects with greater brain damage, who represent those at higher risk of developing FTD at an earlier age.
The Nature of Entrepreneurship and its Determinants: Opportunity or Necessity?

Within the institutional theory of North (1990, 2005), the objective of this study is to analyse the impact of economic and institutional factors, formal and informal, on the entrepreneurial activity of nations, particularly on Total Entrepreneurial Activity (TEA). In order to evaluate the simultaneous influence of economic and institutional factors on entrepreneurial activity, a multiple regression approach is used with cross-country data sets. The results show that TEA is negatively related to the infrastructural capacity and political stability of a country, and positively related to government spending and to freedom of expression and corporate associations (Voice & Accountability) at the country level. The relationship between TEA and GDP per capita is also tested. Our results confirm a convex relationship between the two variables, giving evidence that entrepreneurial activity is mostly necessity driven rather than motivated by opportunity.

INTRODUCTION

Emerging and increasingly topical, the concept of entrepreneurship, owing to its transversality, heterogeneity and subjectivity (Davidson, 2006), is far from gathering academic consensus (Berglann, Moen, Røed, & Skogstrøm, 2011; Martin, Picazo, & Navarro, 2010). Indeed, due to the growing interest in the subject of entrepreneurship, such interdisciplinarity should maintain its dynamic influence in the near future (Davis, 2008). In fact, the contribution of various scientific fields to the construction of the concept of entrepreneurship is perceptible, such as psychology (Shaver and Scott, 1991; Fishbein and Ajzen, 1975), economics (Schumpeter, 1934; Cantillon, 2001; Marshall, 1961; Knight, 1921; Schumpeter, 1942), sociology (Reynolds, 1991; Thornton, 1999) and management (Stevenson and Jarillo, 1990; Sahlman and Stevenson, 1991; Timmons and Spinelli, 2004; Stevenson, 2000). The interdisciplinary nature helps us to decode the different levels of analysis of entrepreneurship: individual-level, corporate-level and country-level. Within psychology, the analysis of entrepreneurship focuses on the individual level (Shaver and Scott, 1991); in management the level of analysis focuses on the corporate level, and in economics the interest in the analysis of entrepreneurship is concentrated at the country level (Nandram and Samsom, 2008). Despite different levels of analysis, the connection between them seems obvious, to the extent that the individual perception of the entrepreneur towards the environment that surrounds him or her is a key factor for the success of the company (Bruno and Tyebjee, 1982), and inherently entrepreneurship (at the individual and/or corporate level) enhances the economic development of a country (Lari & Ahmadian, 2012). Indeed, it is because of the different degrees of analysis and its multidisciplinary nature, as well as the heterogeneity of its determinants, that Verheul et al. (2002) suggest the eclectic theory of entrepreneurship to reach a more comprehensive concept. Therefore, it is important to understand the wide range of determinants that help to explain, at the country level, the greater or lesser propensity for entrepreneurship and to detect trends that are related to entrepreneurship by necessity or opportunity.
Knowing that the study of entrepreneurship at the country level is not as well developed as it is at the individual or corporate level, we focus on the country level in order to decode a set of determinants and to measure their contribution in explaining entrepreneurial activities. The aforementioned heterogeneity led us to study a wide range of determinants and to understand which ones have more relevance in explaining entrepreneurship at the country level. The value added of our study lies in the variety of the determinants used to understand the nature of entrepreneurship, that is, whether it is necessity or opportunity driven. Additionally, we test the convexity hypothesis between entrepreneurship and per capita income and determine the threshold level that drives the shape of this relationship. This paper is organized in four main sections: literature review, methodology, estimation results and conclusions. In the literature review, we revisit the concept of entrepreneurship, discussing the two types of entrepreneurship motivated by necessity or by opportunity and explaining the determinants of these types of entrepreneurship. The methodology section describes the nature of the data and the estimation techniques, and the results section analyses the obtained estimates. The final section presents the main conclusions, emphasizing the most important aspects of this research.

LITERATURE REVIEW

The current academic divergence (Agca et al., 2012), which has prevailed for many decades (Cole, 1942), arises, according to Iversen, Jorgensen, & Malchow-Moller (2008), from difficulties in the conceptualization and definition of theoretical models to measure entrepreneurship. In this vein, also because the phenomenon of entrepreneurship is complex, dynamic and has diversified purposes (Bruyat and Julien, 2001), there are different concepts associated with economics, management and psychology perspectives. The awareness that entrepreneurship is essential for economic growth (Naudé, 2010), assumed to be the "main vehicle of economic development" (Anokhin et al., 2008: 117), seems to indicate the prevalence of the economic perspective over the management or psychology ones. In fact, extensively studied since its inception by many economists such as Knight, Schumpeter, Kirzner, Baumol and Marshall, among many others, the effect of entrepreneurship on economic variables such as employment, innovation and wellbeing (Acs et al., 2008) can justify its importance in this area. Despite the current academic divergence, some consensus prevails, in line with the Schumpeterian doctrine, that entrepreneurship is manifested through the relentless pursuit of business opportunities through innovation and creativity (Bjørnskov & Foss, 2008). The impact of entrepreneurship on the economy has been studied at the level of the company, sector or region, to the detriment of comparative analysis between nations (Stel, Thurik, & Carree, 2005). The level of economic development of a country is an important factor in explaining its entrepreneurial activity (Carree, Stel, Thurik, & Wennekers, 2007; Wennekers, Stel, Thurik, & Reynolds, 2008). However, several authors confirm an inverse relationship between GDP per capita and entrepreneurial activity (Stel et al., 2005). Some authors only partially verify this inverse relationship, describing instead a convex curve between entrepreneurship and gross domestic product per capita (Acs, Audretsch, & Evans, 1994; Wennekers & Thurik, 1999).
Entrepreneurship by necessity, as opposed to entrepreneurship by opportunity/capacity, may explain the inverse relationship between the two variables (Reynolds, Camp, Bygrave, Autio, & Hay, 2001). Therefore, we should differentiate necessity-driven entrepreneurship from opportunity-driven entrepreneurship. The former stems from the belief that the creation of self-employment grants its promoter greater utility, and it is usually a result of employment that has been lost or of a saturated labor market (Block and Wagner, 2010). The latter, opportunity-driven entrepreneurship, relates to the identification of an opportunity arising from an innovative idea (Valdez et al., 2011). The value added generated by entrepreneurship due to necessity is residual and ephemeral to the economy. On the contrary, entrepreneurship by opportunity generates higher and longer-lasting value added, due to its innovative nature associated with technology-based activities.

In parallel to this relationship between GDP per capita and entrepreneurship, the institutional theory of North (1990, 2005), which confirms the contribution of institutions to long-term economic development, is the basic reference for the study of entrepreneurship (Díaz-Casero et al., 2012; Bjørnskov and Foss, 2008; Veciana and Urbano, 2008; Álvarez and Urbano, 2011; Salimath and Cullen, 2010). In agreement with this theoretical stream, we must distinguish between the informal role of institutions in the creation of ideas, beliefs, attitudes and personal values and the formal role, which includes a set of political-legal rules, economic rules and contractual procedures. If, on the one hand, the informal role of institutions through governance has an impact on entrepreneurial activity (McMillan & Woodruff, 2002), its formal role has, on the other hand, also been shown to influence entrepreneurship. Several authors have confirmed the existing relationship between some indicators of economic freedom (published by The Heritage Foundation) and entrepreneurship measured by Total Entrepreneurial Activity (Díaz-Casero et al., 2012; Bjørnskov and Foss, 2008; McMullen et al., 2008). According to Acs & Armington (2004), Wennekers, Stel, Thurik, & Reynolds (2005) and Álvarez & Urbano (2011), the factors of competitiveness also have a significant impact on the entrepreneurial activity of a country. In the early studies on entrepreneurship, the factors explaining its performance were mainly economic (Grilo & Thurik, 2005). However, given the weak explanatory power of economic factors in the process of entrepreneurship (Freytag & Thurik, 2007), several authors have also suggested cultural dimensions to improve the degree of explanation (Wennekers et al., 2007; Hofstede et al., 2004; Osman et al., 2011), such as education, religion, language, ethnic factors, the role of women in the labor market or Hofstede's cultural index, among others.

METHODOLOGY

This section explains the methodological aspects of the study used to identify the relevant determinants of the total entrepreneurial activity of a country (TEA).

The sample

The Global Entrepreneurship Monitor (2011) reports data on TEA for a set of 54 countries. However, due to missing values on the explanatory variables, the sample is reduced to 36 countries; this set of countries is presented in Table 1.

Estimation technique

Focus will be given to understanding the influence of different factors on the rate of entrepreneurship, as measured by Total Entrepreneurial Activity (TEA) and published by the Global Entrepreneurship Monitor (GEM) in 2011.
To this end, we use a multiple regression analysis based on cross-country data. The estimation approach makes it possible to compare the relative effect of the various independent variables on the variable of interest (TEA). The cross-section multi-country model will be estimated initially by OLS, and its relevance will be checked through the usual diagnostic tests. Adaptations and corrections will be made to the regression model in case violations of some basic assumptions are detected.

Variables of control

Entrepreneurship is a multidisciplinary concept; therefore, a vast number of explanatory variables can be used to explain its behavior. For this reason Table 2 reports the main factors that are likely to influence the decision to undertake a business activity. To optimize the chosen model, we follow the backward mode of estimation, starting with the whole set of explanatory variables and sequentially eliminating the variables with no statistical significance after performing an F-test on the joint significance of the population parameters. In doing so we have reached a parsimonious model that includes the four most relevant explanatory variables, explained in Table 3. (For instance, the 'Political Stability and Absence of Violence' indicator captures perceptions of the likelihood that the government will be destabilized or overthrown by unconstitutional or violent means, including politically-motivated violence and terrorism.) Another task of our study is to verify the typology of entrepreneurship, whether it is driven by opportunity or by necessity. In doing so we analyze the relationship between GDP per capita (GDP pc) and TEA through a linear regression to verify the convexity hypothesis.

Models to estimate

When all the usual assumptions on the error term and the explanatory variables are fulfilled, the OLS (ordinary least squares) estimation approach can be used to estimate our cross-section model; for comparison, we also compute the GLS estimator. Model 1 relates TEA to the four explanatory variables described in Table 3. Our second goal is to understand the relationship between TEA and income per capita with the purpose of identifying whether entrepreneurship is motivated by opportunity or by necessity. We propose a log-log model (Model 2), i.e. ln(TEA) = α0 + α1 ln(GDP pc) + u, which allows us to obtain the elasticity between the two variables. A negative elasticity in this relationship will provide evidence in favor of entrepreneurship motivated by necessity: the lower the standard of living in a country, the higher the necessity for people to create their own jobs. Finally, in an attempt to verify the convexity hypothesis described by some authors between TEA and GDP pc, we tested a quadratic function (Model 3), i.e. a specification including both GDP pc and its square, TEA = γ0 + γ1 GDP pc + γ2 (GDP pc)² + v. From this regression we are able to determine the threshold level of income beyond which the shape of the relation inverts its initial negative tendency (the turning point of a quadratic of this form lies at GDP pc = −γ1/(2γ2)).

Descriptive statistics

Table 4 reports some elementary descriptive statistics on the variables used in the estimation approach. Observing the data we can highlight some relevant aspects. The dependent variable, Total Entrepreneurial Activity (TEA), represents the percentage of the population able to develop a professional activity that is actively involved in the creation of a business, either in the starting phase of business activity or up to 42 months after the birth of a business unit (Bosma, Wennekers, & Amoros, 2012). On the basis of the observed data, the values of TEA vary between 3.7 (minimum value for Slovenia) and 23.7 (maximum value for Chile).
The variable 'Infrastructure' is an index that varies between 1 and 7 points and describes the quality of the infrastructure of a country in three major areas: transport, energy and communications. On the basis of the observed data, the values of this variable vary between 3.16 (minimum value for Algeria) and 6.27 (maximum value for France). The higher the value, the better the structural facilities in a country. The values of the variable 'Political Stability' range from a minimum (for the Islamic Republic of Iran) to a maximum of 1.382 (for Finland); the higher the value, the higher the political stability and the lower the level of potential violence in a country. According to the coefficient of variation (see Table 4), the higher variability is shown by the variables 'Voice & Accountability' and 'Political Stability', and the lower dispersion is shown by the variable 'Infrastructure'. The full set of cross-section data is reported in Table I in the Appendix.

ESTIMATION RESULTS

Since homoskedasticity is not rejected, OLS is appropriate; the GLS estimation results are also reported in column 2 for the sake of comparison. Both estimation approaches reveal similar results. As can be seen, the OLS estimation reveals satisfactory results, in particular: all population coefficients of the variables show high statistical significance at the 1% level; the goodness of fit is reasonable, showing that 67% of the variation in entrepreneurial activity is explained by the control variables; through the conventional White test, the null of homoskedasticity is not rejected, and therefore the estimates are efficient; through the Chow test the model is stable; and finally the RESET test shows that the model specification is appropriate.

Interpreting the marginal impacts of the covariates we can conclude the following. Every one point increase in the variable 'Infrastructure' (ranging from 1 to 7 points) is responsible for a 24.0% decrease in TEA, everything else remaining constant. This evidence seems to be in line with the claim that less infrastructure in transport, energy and telecommunications leaves more space for developing entrepreneurial activities in these sectors. With respect to the variable 'Voice & Accountability' (ranging from -2.5 to 2.5 points), the evidence shows that a 1 point increase in this index is associated with a 70.9% increase in TEA, everything else staying unchanged. This is an expected result, since more freedom of expression and fewer obstacles to doing business are necessary conditions for developing more entrepreneurial activity. The impact of the variable 'Government Spending' (ranging from 0 to 100) is estimated to be lower in magnitude. Assuming a one point increase in this scale, it is predicted that TEA increases by only 1.4%, everything else being unchanged. Keeping in mind that higher values of this scale indicate less state intervention, more space is left for the private sector to develop business activities. 'Political Stability and Absence of Violence' (ranging from -2.5 to +2.5 points) has a negative impact on TEA. It is estimated that a one point increase in this scale is responsible for a 44.1% decrease in entrepreneurial activity, everything else being constant. This evidence is interesting and in line with the claim that less political stability, accompanied by violence, creates conditions of self-defence, therefore promoting self-employment activities driven by necessity.
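For illustration only, the following Python sketch (using pandas and statsmodels; the file name and column names are hypothetical, and the exact functional forms used by the authors are not reproduced here) shows how a cross-country OLS of Model 1 and the log-log and quadratic specifications of Models 2 and 3 can be estimated, including the turning point implied by the quadratic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tea_cross_country.csv")  # hypothetical cross-country data set

# Model 1 (sketch): TEA on the four retained indicators; log(TEA) is used here
# so that coefficients read approximately as percentage effects.
m1 = smf.ols(
    "np.log(TEA) ~ infrastructure + voice_accountability + gov_spending + political_stability",
    data=df,
).fit()

# Model 2: log-log specification; the slope is the TEA-income elasticity.
m2 = smf.ols("np.log(TEA) ~ np.log(gdp_pc)", data=df).fit()

# Model 3: quadratic in GDP per capita to test the convexity hypothesis.
m3 = smf.ols("TEA ~ gdp_pc + I(gdp_pc**2)", data=df).fit()
g1, g2 = m3.params["gdp_pc"], m3.params["I(gdp_pc ** 2)"]
turning_point = -g1 / (2 * g2)  # income level at which the relation changes sign

print(m1.summary())
print("elasticity (Model 2):", m2.params["np.log(gdp_pc)"])
print("turning point (GDP pc):", turning_point)
```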
Model 2 of Table 5 shows the results of the relationship between TEA and income per capita using a log-log specification and a set of 53 countries. As has been explained, this simple relation helps us to understand the nature of entrepreneurship, whether it is motivated by necessity (negative correlation) or by opportunity (positive correlation). Looking at the evidence, we observe a negative relation between the two variables, confirming that entrepreneurship in this group of countries is necessity driven. Countries with lower per capita income show a higher propensity to develop businesses motivated by necessity rather than by opportunity, and vice versa. More specifically, every one percent increase in GDP per capita is associated with a 0.26% fall in TEA, and this relationship is statistically significant at the 1% level. Table 5 (53 countries) also shows that, along with income per capita, the variable 'Government Spending' is statistically significant. It was not possible to find any statistical significance for the other explanatory variables due to collinearity problems. In this regression it is expected that a one point increase in the scale of 'Government Spending' induces only a 1% increase in entrepreneurial activity, everything else assumed constant. On the other hand, it is predicted that when income per capita increases by 1%, TEA decreases by 0.135%, which is lower (in absolute value) than the elasticity estimated in Model 2.

DISCUSSION AND CONCLUSIONS

In this study we estimate cross-section models using two samples of countries: initially a set of 36 countries, applied to the full model to identify the most relevant factors explaining entrepreneurial activities, and later a set of 53 countries to test the hypothesis of a convex relationship between entrepreneurial activity and income per capita. In the regressions we ran, we found that the variables 'Infrastructure' and 'Political Stability' have a negative impact on entrepreneurial activity (TEA), and in a separate regression we found evidence of an inverse relationship between TEA and income per capita. Combining all these results, we can assert the prevalence of entrepreneurial activities motivated by necessity rather than by opportunity. In particular, the negative impact of 'Infrastructures' on TEA is in line with the study of Fontenele (2010) and constitutes one of the main explanations that entrepreneurship is necessity driven. It reflects the idea that, in developing countries, entrepreneurship is particularly important in light of the pressing needs of the populations and higher unemployment. Our evidence also supports the convex relationship between the two variables, corroborating other studies by Acs et al. (1994) and Wennekers & Thurik (1999). Our evidence also reveals a positive impact of the variables 'Voice & Accountability' and 'Government Spending' on TEA, corroborating the findings of Lecuna (2014) and Díaz-Casero et al. (2012). This result suggests that greater freedom of citizens and the predominance of economic liberalism, with less State presence in the economy, induce higher entrepreneurial activity. The underlying idea of less government spending is that less State in the economy will bring more opportunities for the private sector and raise entrepreneurial activity (Bjørnskov & Foss, 2008). The prevalence of entrepreneurship motivated by necessity is associated with self-employment activities involving less skilled labor and casual work of short duration.
This kind of activity is also characterized by low value added, acting mostly in the non-tradable sector. But most importantly, this kind of activity is less innovative and makes little use of technology. On the contrary, entrepreneurship by opportunity is the kind of activity related to skilled labor, driven by innovation and improvements in technical progress.
GIS Applied to the Hydrogeologic Characterization – Examples for Mancha Oriental Aquifer (SE Spain)

Introduction

The population on planet Earth, according to FAO forecasts, will increase from 6 billion to 8.1 billion inhabitants by 2030, and this will coincide with an increase in water demand to meet human needs. Fresh water has ceased to be an inexhaustible resource and has become a rather limited and scarce one. Earth's hydrosphere has an approximate volume of 1.38 × 10⁹ km³ of water, which has remained virtually constant since its formation over 3 billion years ago. This volume of water is distributed into four groups:
1. The vast majority is in the oceans, at 97.6% of the total (1,350 × 10⁶ km³).
2. In second place is solid water, in glaciers, at 1.9% (26 × 10⁶ km³).
3. Third is groundwater, with 0.5% of the total, which is 7 × 10⁶ km³.
4. The remainder of the water on Earth (0.03% of the total) is divided among lakes (0.017%), soil (0.01%), the atmosphere (0.001%), the biosphere (0.0005%) and rivers (0.0001%).

Ocean water is salt water, and the glaciers are difficult to utilize because they are located far from major populated areas. Therefore, groundwater is the largest volume of freshwater available to man. The volume of groundwater is 4,000 times greater than that of rivers and 30 times higher than the rest of the liquid water on the surface of the continents. In addition, groundwater has characteristics that make it especially attractive for combating the processes of drought and desertification. Unlike surface water, groundwater does not evaporate, there are no major seasonal variations, and flow is very slow, so it is difficult to contaminate.

Groundwater is held in a naturally occurring reservoir called an aquifer, a geologic formation capable of storing, receiving and transmitting water so that man can easily take advantage of economically significant quantities to meet needs. The water may be contained in any geological formation (e.g. river gravel, karstified limestone, porous sandstone and so on). Like all scarce resources, groundwater management must be approached from a dual perspective (knowledge and sustainability):

Knowledge: There must be a sound understanding of the hydrogeological aquifer system to be managed. This should include a detailed analysis of the hydraulic aspects (geology, hydraulic parameters and groundwater flow) and should be contrasted with the hydrochemical aspects of the water contained (origin of the substances dissolved in groundwater and hydrochemical changes due to movement along groundwater flow paths).

Sustainability: Groundwater pumping in a region (an aquifer) should not exceed the water received, that is, the available resources. If users pump a volume of groundwater for short-term needs (e.g. drought conditions) beyond the available resources, they use the aquifer reserves. Then, the aquifer must be given time to recover (either by saving water or by allowing recharge to increase during periods of more rainfall). Otherwise the resources suffer overexploitation, putting the aquifer at risk of becoming depleted.
It is obvious that the water volumes involved should be determined as precisely as possible. In managing groundwater resources, Geographic Information Systems (GIS) are tools capable of storing and managing spatial hydrogeological data by spatial referencing in digital formats. The correlation of all data with location is the key feature of GIS, which provides the ability to analyze and model hydrologic processes and produce results in maps and in digital formats. Thus, GIS can be considered a support system for decision making and an ideal tool for monitoring certain hydrogeological processes with socio-economic impacts (Goodchild et al., 1996). Figure 1 shows a diagram of a GIS aquifer system modeling tool (case study of the Mancha Oriental System). This scheme integrates a) a block of hydrogeological data maps of the study area, which supplies data on urban and industrial groundwater use and the general hydrological information (surface and groundwater hydrology) necessary to carry out the integration and interpretation of some results, and b) a block of data from remote sensing imagery. Remote sensing allows for the classification of crops and their relationship to water supply for irrigation, the mapping of wells, and the assessment of recharge from rainfall. All this information is transferred, using intersection tools, to software that simulates groundwater flow in the Mancha Oriental aquifer. The main goal is to show the methods used in some GIS applications in hydrogeological studies. This chapter is divided into two sections: the first describes some examples of hydrogeological characterization, and the second a method for calculating groundwater abstraction. To demonstrate these applications, one of the largest aquifers in southern Europe (in terms of area), the Mancha Oriental System, has been chosen.

GIS & hydrogeological characterization

In general, among the Earth Sciences and particularly in hydrogeology, sources of data tend to be points (wells, water points, lithological columns, etc.) defined by a geographic location (UTM or geographic coordinates) and attributes (topographic top or bottom of a hydrogeologic entity, groundwater level, hydraulic parameters, concentration of a chemical compound). This type of data, usually measured in the field, must be spatially distributed in a continuous manner such that a value is given for any point within the space. To achieve this, interpolation or spatial estimation is used. This method derives an interpolation function that provides estimates for a point in space based on the points measured. GIS tools incorporate algorithms which perform these operations with discrete entities (vector) and generate spatially continuous entities (raster, line models, etc.). In addition to expanding the geodatabase and adding values, these techniques create a foundation for spatial modeling (Peña Llopis, 2006). The most commonly used spatially continuous entities are raster maps, which are characterized by a two-dimensional numerical matrix or digital image. Each element of the matrix, called a picture element or pixel, has an attribute assigned to it in the database.
The only requirements are for maps to have attribute values referenced to the same coordinate system and the same number and arrangement of pixels in order to perform algebra operations with them (e.g. isopachs: difference between the top and bottom raster maps of a geologic formation; calculation of storage volume changes: difference between raster maps of groundwater levels for different dates, multiplied by the storage coefficient, etc.).

Theoretical foundations

Consider a spatial domain and a series of points (which we will refer to as points of observation) P_i, with i = 1, 2, …, n, having coordinates x_i, at which a variable Z has been measured, giving the observed values Z_i. Interpolation or spatial estimation aims to find the value of Z (estimated values) at any point in the space; to do so, an interpolation function must be obtained. The interpolation function should have certain characteristics: a) accuracy: the estimated value at the points of measurement should coincide with the measured value; b) spatial continuity; c) differentiability: the interpolation should be "smooth"; and d) stability with respect to the location of the variable as well as its value, such that small variations in the data do not provoke large variations in the interpolation. Because of these characteristics, especially the last condition, there is no universal interpolator and there is always another interpolation method which can be applied (Samper & Carrera, 1996). Most GIS software offers two families of interpolation methods: deterministic and stochastic.

Deterministic methods

This type of method is characterized by associating a mathematical function, the interpolation function, with the measured or observed values, in which these points are considered to be without error. Following the nomenclature used so far, this mathematical function can be written as f(x) = Σ_{i=1}^{n} C_i · f_i(x), where for each x a value Z(x) is estimated through the function f(x), which is defined by the sum, over all n points of observation, of the product between a base function f_i(x) and coefficients C_i. For example, in a simple exact interpolation the observed or measured values Z_i coincide with the C_i values, multiplied by a weighting factor given by the function f_i(x). The deterministic interpolation functions differ from one another in the way f_i(x) and C_i are evaluated. There are various deterministic interpolation techniques; the most commonly used methods are presented here (ESRI, 1997; Samper & Carrera, 1996).

Nearest neighbor (Thiessen polygons, polygons of influence). This method assigns the value of each measured or observed point to each pixel or node of the interpolated area. For each point to be estimated, the Euclidean distance to all observation points is calculated and the value of the closest observation is assigned. The result is a map of polygons, each with an interpolated value (Fig. 2A). This method is often used for regular grids and/or dense observed data, or to find areas of influence.

Interpolations based on weighting functions. The estimation or interpolation in this type of method is performed by a weighted average of the observed values. A weight is assigned to each point of observation. The selection criterion is that the weighting function depends exclusively on the distance (d): the weight decreases with increasing distance between points. The most common strategy for generating this criterion is the inverse of the distance raised to some exponent (a).
w_i = 1 / d_i^a    (3)

This exponent shows the "speed" with which the weight of a point of observation decreases with distance from the point of estimation. At times the number of points of influence is restricted, or a radius or maximum distance is assigned for considering points of observation. This interpolation method is exact and is commonly applied, with the only disadvantage being the creation of the feared "bull's-eyes" (Fig. 2B). Polynomial interpolation. In this method the interpolating function is a polynomial whose order can vary. The polynomial can be fitted a) exactly or b) by least squares. The first approach aims to solve the system of equations defined by the n points of observation. If there are many points of observation, the fit of higher order polynomials can become unviable, giving unrealistic interpolations with exaggerated variation among the values (Fig. 2C). In fact, by default these methods limit the polynomial to third order and only use the points in a nearby group. One special case of polynomial interpolation is linear interpolation, wherein the interpolation function is a first order polynomial which depends directly on the position of the observed values. It is an exact method and does not take into account the spatial distribution of the variable, with the result of smooth surfaces. It is an easy method and is often used, above all when not a lot of data is available and the aim is to study the spatial variation of a certain variable. In general, this interpolation method is not used for spatial estimates of realistic structures (topography, groundwater levels, etc.) but rather to determine the trend of the data (Fig. 2C). Spline functions. Within polynomial interpolation, this general method generates a different series of expressions for each subdomain into which the whole interpolation space has been divided, with continuity requirements imposed, especially on the contours common to more than one subdomain. The results of this interpolation tend to be surfaces with small changes in level (Fig. 2D).
Stochastic methods
This methodology is based on the premise that the variable to be interpolated is a random function associated with probabilistic distribution laws. This type of method gives a measure of the error of the interpolation based on the data. There are two classes of stochastic interpolation: a) non-parametric, which are not exact because the errors are assumed to be independent, and b) parametric, wherein the interpolation function depends on certain parameters calculated as a function of the observed data (IDW or Kriging) (Samper & Carrera, 1996). The most common method, Kriging, which is available in most GIS software packages, is explained below. Kriging was created within a new discipline, geostatistics, as a result of the problems presented by deterministic interpolation in the Earth sciences due to the uncertainty and variability of data (Cassiraga, 1999). The starting hypothesis of geostatistics is that the data under study have a spatial correlation structure and can be regarded as one realization out of an infinite number of possible realizations. For this reason, geostatistics is called the science of regionalized variables. For spatial estimation using Kriging, the steps below should be followed, among others, and the variable to be interpolated should meet the criteria of normality and stationarity (Johnston et al., 2001). The first step is structural analysis, with the objective of estimating the semivariogram.
The semivariogram relates the Euclidean distance among the points of observation to the variability of the measured values (Samper & Carrera, 1996). First, it should be established whether the variable is stationary, whether there is a trend in the data, etc. The semivariogram (an estimator of spatial variability) is expressed as

γ(h) = [1 / (2 N(h))] Σ [Z(xi + h) − Z(xi)]²

where Z(xi) are the experimental data, h is the distance between points of observation (the variogram step or lag), N(h) is the number of pairs separated by the vector h found in the data set (the sum running over those pairs), and xi, xi + h are experimental points in an n-dimensional space. At first, an experimental semivariogram is computed from the observed data. A theoretical function with similar behaviour can then be fitted to it in order to calculate a weighting matrix for each point, and the statistical error affecting the interpolation can be calculated. The semivariogram is composed of a series of elements (Fig. 3A): Range: the distance beyond which the spatial correlation is practically null (area of influence); Sill: the value that the semivariogram takes at the Range; Nugget: the value of the semivariogram where it intersects the vertical axis. The experimental variogram cannot be used directly for the geostatistical application; it must be fitted to a theoretical model (Fig. 3B). There are different theoretical variogram models available, with the most popular being the stationary or spherical semivariogram. Once the theoretical semivariogram has been chosen, the Kriging technique performs the spatial estimation of the data. There are diverse Kriging techniques corresponding to different methodological hypotheses: Simple: hypothesis of a stationary variable with a known mean and covariance; Ordinary: hypothesis of a stationary variable with an unknown mean and known covariance; In a neighbourhood (by blocks): quasi-stationary hypothesis; Residual: non-stationary hypothesis with a known drift, from which residuals are derived so that ordinary Kriging can be performed; Universal: non-stationary hypothesis with a polynomial drift set a priori.
MOS case study
The Mancha Oriental System (MOS) is located in the SE of the Iberian Peninsula and is one of the largest aquifers in Spain (7,260 km²) (Fig. 4). The area has a semiarid Mediterranean climate. Average rainfall is 350 mm/year and the mean annual temperature is 13-15°C; the continental nature of the climate is clear from the extreme temperatures that occur. The area is characterized as a high plain (about 700 m a.s.l. mean altitude) surrounded by gentle relief, interrupted only by a valley carved by the action of the Júcar River. From a hydrogeologic perspective, the MOS is formed by the superposition of three limestone aquifer hydrogeologic units (UHs): UH2: Tertiary, UH3: Upper Cretaceous and UH7: Middle Jurassic. These UHs are separated by aquitards/aquifuges that comprise UH1 (upper and lower), UH4, UH5 and UH6. The impermeable base and southwest boundary of the area of study is composed of marl, clay and gypsum from the Lower Jurassic, belonging to UH8 (Fig. 4). Over the last 30 years the progressive transformation of approximately 100,000 ha from dry to irrigated farmland has translated into an acceleration of socioeconomic development due to the widespread use of groundwater resources. Groundwater abstractions in the MOS exceed 400 Mm³/yr, of which 98% is used for irrigated agriculture and the rest to supply a population of 275,000 inhabitants (Estrela et al., 2004). Groundwater pumping is not sustainable given the available resources, estimated at 320 Mm³/yr by the Júcar Water Authority.
Therefore, two major impacts are occurring: (a) the quantity of available groundwater is decreasing, seen as a continuous decline in the regional water level and a decrease in aquifer discharge to the Júcar River; and (b) the quality is also affected, as researchers have found a significant increase in nitrate concentrations in groundwater (Moratalla et al., 2009). In this context, the MOS is an ideal case study for testing and validating the usefulness of GIS techniques for understanding the aquifer system and planning for sustainable management. Below is a description of the interpolation methods applied to the following variables: a) the elevations of the top and bottom of the aquifer units, b) hydraulic parameters, c) groundwater level data, and d) groundwater chemistry. The approach is to explore the data with histograms and spatial trend analysis in order to understand the behaviour of each variable in space and to establish whether the data are consistent or anomalous. After analyzing the data, a variable is selected to be interpolated, and the type of interpolation to apply to produce raster maps or continuous spatial entities is chosen.
Hydrostratigraphic framework
Any attempt at making a coherent hydrogeological model should begin by understanding, with a certain amount of precision, the geometric configuration. In addition to information on the surface geology, lithologic columns should be gathered and analyzed from water sampling points, and the materials encountered should be classified within the defined hydrogeologic units (Murray & Hudson, 2002). Using this information, layers of geographically located points (X, Y coordinates) were created, with the topographic heights of the upper (top) and lower (bottom) limits of each hydrogeologic unit that behaves as an aquifer stored as attributes. Using geostatistical interpolation models developed on theoretical and applied foundations, GIS software (ArcMap® 9.3) was used to build the continuous geographic entities, i.e., raster maps of the surfaces corresponding to the top and bottom of each hydrogeologic aquifer unit. The result is the three-dimensional structure of the hydrogeologic system (Fig. 5A). These 3D geologic models (Fig. 5), constructed using GIS tools, became the foundation for the numerical simulation models in later steps.
Hydrodynamic characterization
Transmissivity, permeability and storage coefficient are hydraulic parameters that must be quantified for an aquifer because they are needed to estimate the progression of groundwater levels, groundwater flow through a section of the aquifer, contaminant transport time, the degree of aquifer homogeneity and the numerical parameterization of groundwater flow models (Mace, 2000). Generally, estimating these parameters requires pumping tests at specific points. These provide point geographic entities defined by their coordinates, with the hydraulic parameters measured in the well as attribute values. It is also useful to have prior knowledge of the spatial behavior of the variable and to establish a suitable model for the interpolation (i.e., the type of distribution function of the variable). In the case of the Mancha Oriental System, to determine the spatial distribution of the parameters mentioned, their logarithms (e.g., the logarithm of transmissivity, log-T) have been used, because these variables tend to have a log-normal distribution.
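As a minimal sketch of the structural-analysis step applied to log-transformed transmissivity, the snippet below computes an experimental semivariogram with numpy from made-up well coordinates and log-T values; fitting a theoretical model and performing the Kriging itself would normally be done with the geostatistical tools of the GIS software mentioned above or a dedicated geostatistics library.

```python
import numpy as np

def experimental_semivariogram(xy, z, lags):
    """gamma(h) = 1/(2 N(h)) * sum [z(x_i + h) - z(x_i)]^2, binned by separation distance."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    iu = np.triu_indices(len(z), k=1)                 # each pair counted once
    dist, halfsq = d[iu], 0.5 * (z[iu[0]] - z[iu[1]])**2
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        sel = (dist >= lo) & (dist < hi)
        gamma.append(halfsq[sel].mean() if sel.any() else np.nan)
    return np.array(gamma)

# Made-up well coordinates (km) and log-transmissivity values
rng = np.random.default_rng(1)
xy = rng.uniform(0, 50, size=(60, 2))
logT = 2.5 + 0.02 * xy[:, 0] + rng.normal(0, 0.3, 60)   # synthetic trend plus noise
lags = np.arange(0, 30, 5)
print(experimental_semivariogram(xy, logT, lags))
```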
When the logarithm of transmissivity is used, the value estimated by the Kriging method is the absolute optimum and the semivariogram better represents the structure of the spatial variability (Samper & Carrera, 1996). Once the structure of the spatial variability of log-T was studied, ordinary Kriging interpolation models were applied (Figs. 5B and 3A).
Characterization of groundwater flow
As is the case with the aquifer hydraulic parameters, data on the height of the groundwater levels are also point data. The groundwater level attribute of this point geographic entity is compiled in the inventory of water points by subtracting the depth to the water level in the well from the topographic elevation of the point. These measurements should be performed for a specific date and in the least amount of time possible. The raster maps obtained from the data on groundwater level height are called groundwater contour maps (isopiestic lines). These maps serve to determine how groundwater flow behaves and where the recharge and/or pumping areas are located, in addition to supporting calculations of gradients, flow and permeability (Fig. 5C). By crossing groundwater contour maps for different dates, maps of groundwater level decline (drawdown) can be obtained for that period. In addition, the Variation in Saturated Thickness (VST) can be calculated between those dates.
Hydrochemical characterization
The chemical composition of groundwater is conditioned by a multitude of factors. Among them, the most important are: a) the chemical composition and disposition of the materials with which the water is in contact, b) the time of contact with these materials, c) temperature, d) pressure, e) the presence of gases, and f) the level of water saturation in relation to the different dissolved salts (Custodio & Llamas, 1983). Although the composition of groundwater is continually changing, anthropogenic factors can significantly influence it. In fact, changes in land use are considered the most influential factor in groundwater pollution. Ions such as NO3, SO4, Na and Cl can come from agricultural fertilizers, livestock waste and waste from industry and urban centers. Nitrate is accepted as the most common contaminant in groundwater (Gulis et al., 2002; Jalali, 2009). In Europe, the objective is for water bodies to achieve "good" chemical and ecological status according to Directive 2006/118/EC of the European Parliament and Commission (DOCE, 2006). This directive describes the protection of groundwater from pollution and deterioration and the establishment of a pollution prevention and reduction plan by 2015. In addition, water bodies should be in good quantitative and qualitative status, especially in reference to nitrate content, which should not exceed 50 mg/l. Thus, establishing the spatial distribution of NO3 concentrations in groundwater within the aquifer is of vital importance. To accomplish this, point analyses of groundwater in wells and springs must be performed. Using the advanced interpolation capacities provided by GIS tools, a complete geostatistical study can be performed to establish the areas most contaminated by nitrate (Fig. 5D).
GIS & groundwater abstractions
Intensive use of groundwater for irrigation in arid and semiarid regions has often been the main driver of socioeconomic development over the past four decades (Shah, 2005). However, poor management of pumped volumes of water has led to negative consequences in terms of the quality and quantity of available groundwater resources and associated ecosystems.
Controlling groundwater withdrawals over a wide area of intensive irrigation is not easy. The largest volume of water used for agriculture has been extracted through tens of thousands of pumping wells which generally have no measurement system and which, in many cases, do not meet legal requirements or whose very location is unknown. Various methods of calculating groundwater abstractions have been known for years, but all of them are very expensive or inaccurate when applied to large areas (Brown et al., 2009). In this scenario, the data provided by satellites (remote sensing) and the computerized processing of these geo-referenced data (GIS) represent a new approach to monitoring and quantifying groundwater abstractions, with the following characteristics: instantaneous observations are available over large areas, there are several images throughout the year, there is information not visible to the naked eye, data distributed in both space and time are available, the information is not conditioned by the legal or administrative characteristics of the pumping wells, and satellite image acquisition and processing is very low-cost compared to traditional methods.
Theoretical foundations
The methodology for determining groundwater pumping for irrigation follows these steps. First, the irrigated crops are identified and classified by multitemporal analysis of images obtained by multispectral sensors on satellite platforms, comparing the phenological evolution of the crops with the evolution of the Normalized Difference Vegetation Index (NDVI; González-Piqueras, 2006). Then, the area covered by crops is quantified by introducing the data into a Geographic Information System (GIS) and overlaying it with the areas or boundaries of interest. Based on the surface area of each crop and knowledge of its water requirements, the theoretical amount of water needed for those crops to reach the stage of development seen in the images is calculated. When agricultural practices are known, a correction factor is applied to translate this theoretical amount into the water actually applied to each crop. Finally, all the information generated (spatially and temporally distributed) is integrated in a Geographic Information System (see Figure 1) and is used to establish relationships among all the elements of the water balance (Brown et al., 2009).
Multitemporal analysis of satellite images and crossing with vector cartography
The term "Remote Sensing" has different definitions, but the most commonly used is "a group of techniques that analyze data obtained by multispectral sensors located on airplanes, spatial platforms or satellites." The sensors (on satellites) that observe the surface of the Earth are instruments that register the radiation from the Earth and the atmosphere and transform it into a signal that can be managed in analog or digital format (Calera et al., 2006). The sensors do this by detecting the electromagnetic signal from the Earth and the atmosphere at certain wavelengths and converting it into an established physical magnitude. The energy values detected, quantified and coded by the sensors are usually stored in a two-dimensional number matrix or digital image (raster). Each element of the matrix, called a picture element or pixel, has a digital value assigned to it (digital levels), usually registered as a byte in binary code (2⁸ = 256 values, from 0 to 255). These values represent the energy associated with the wavelength to which the detector is sensitive.
According to Chuvieco (2002), each satellite scene can be used to extract four types of information, each with its respective resolution (Table 1): 1. Spatial, derived from the organization and presence of elements on the surface of the Earth in three dimensions; 2. Spectral, dependent upon the observed and measured energy; 3. Temporal, associated with changes over time at a specific spatial location; and 4. Radiometric, related to the conversion of the voltage collected by the receiving instrument into quantized values and then into digital levels. The information received from the sensor is treated digitally to obtain a geo-referenced representation of the land. Once the effects of the atmosphere are removed, the radiation values received correspond with those measured at the surface (see more detailed information in Chuvieco (2006) or Calera et al. (2006)). The source of radiant energy is solar radiation reaching the land surface after moving through the atmosphere. The radiation the sensor obtains is that which emerges from the land surface in the corresponding region of the spectrum, when the emissions due to temperature are considered negligible. The electromagnetic spectrum is the continuous succession of these frequency values (wavelengths); conceptually it can be divided into bands in which electromagnetic radiation has a similar behaviour (Fig. 6). Three basic elements can be distinguished as the components comprising all forms of the landscape on the Earth's surface: soil, water and vegetation. The behaviour of these elements in different regions of the electromagnetic spectrum can be observed in Figure 7. The energy reflected (reflectivity) by bare soil in the solar spectrum has a fairly uniform response, showing a flat curve that rises towards longer wavelengths. It is important to note that bare soil can present different curves according to its chemical composition, moisture content, organic material content, etc. In the optical spectrum, water shows a strong contrast between the reflectivity in the visible (about 5%) and in the infrared, where water absorbs almost all the radiation at these wavelengths (Fig. 7). This effect is used to delineate the water-soil boundary. Similarly to soil, the characteristic reflectivity curve for water can vary as a function of factors such as depth, suspended materials, roughness, etc. (Calera et al., 2006). The shape of the reflectivity curve of vegetation as a function of wavelength is well defined (Fig. 7): low reflectivity in the visible (about 10%) with a relative maximum in the green region, and high reflectivity in the near infrared which gradually decreases towards the middle infrared. The strong contrast between the reflectivity in the red and in the near infrared is informative: the higher the contrast, the more vigorous the vegetation, either due to greater land cover or greater photosynthetic activity (Calera et al., 2006). This spectral behaviour is the foundation for the development of indices whose objective is to highlight active vegetation against the other components (soil, water, dry vegetation). From the reflectivity of each band (quantitative information distributed and geo-referenced in space) a relationship with biophysical characteristics can be established (biomass, fraction of plant cover, etc.). This allows quantitative, spatio-temporal monitoring of processes on the Earth's surface (Bastiaanssen et al., 2000; Calera et al., 2001; González-Piqueras, 2006).
Nonetheless, reflecting the spatial and temporal variability of plant cover is complicated if the reflectivity values of the different spectral bands are used separately. To unify this process, Vegetation Indices have been developed, one of the most important being the NDVI (Rouse et al., 1973). The NDVI is defined as

NDVI = (NIR − R) / (NIR + R)

where NDVI is the Normalized Difference Vegetation Index, NIR is the reflectivity in the near infrared band, and R is the reflectivity in the red band. GIS tools have the capacity to characterize dynamic processes if they contain spatially referenced information which is repeated over time, in addition to the ability to study spatial changes over the land surface. Mathematical operations can be performed between the different sensor bands (digital images) to obtain quantitative information for each satellite scene acquired on a specific date. In this way, a temporal series is available for establishing the progression of a variable, for example the NDVI. Multitemporal analysis stems from the availability of a time sequence of images, so these scenes must meet a set of requirements such as geometric co-registration (the ability to superimpose images with the highest precision possible) and radiometric normalization (Calera et al., 2005). With this information and digital classification tools, each pixel of the image from each date can be assigned a class defined through an automated process. There are two methods of classification: a) supervised and b) unsupervised (which does not require the intervention of an "interpreter"). The difference between the two lies in the method of obtaining the spectral reference classes used to assign a class to each pixel. Supervised classification stems from a priori knowledge of specific land uses located in space, which are called training plots. These serve to establish the spectral reference classes. There are several methods and procedures for assigning a class to each pixel, but the most commonly used is the maximum likelihood algorithm. Without getting into the details of this method, the algorithm is based on a multivariate statistical analysis that assigns each pixel to the class it most closely resembles. Other classification methods that can be used as alternatives or complements are decision trees (expert systems). These procedures are based on separating the pixel values of a layer into homogeneous groups and subgroups. Another method, called contextual filtering, can also be applied; this considers not only the spectral characteristics of an individual pixel but also those of neighboring pixels (Calera et al., 2006). Once the classified map has been obtained, the spectral classes can be used to select the classes that are of interest from a hydrogeologic point of view. In our case study, these are the crops irrigated with groundwater. Therefore, it is important to know the area of irrigated crops and their spatial distribution. One of the most commonly used GIS techniques for this is the overlay of vector and raster cartography, which is the only way to obtain this information for rasterized areas. There are two types of overlay, depending on the pixel value (ESRI, 1997). When the pixel has a real value (for example a precipitation map, groundwater level, NDVI, etc.), statistics are calculated by areas, obtaining statistical values from the raster within the selected polygons (mean, minimum, maximum, etc.). The other case is when the attribute of each pixel is a discrete value defining a series of classes (i.e. a raster map of the classified land uses).
In this case (tabulate areas), the result is the area of each class within the selected vector polygons (ESRI, 1997). Once the areas of each irrigated crop have been determined and the irrigation water supply requirements are known, it is possible to calculate the volume of water used to irrigate the crops in the area on an annual basis (see the summary in Castaño et al., 2009).
MOS case study
The use of the groundwater resources of the MOS above its recharge capacity has led to several quantitative impacts: a steady decline in groundwater level, reduced aquifer discharge to the Júcar River and aquifer pollution. In fact, the quantitative analysis performed on the Júcar River Basin (Estrela et al., 2004) for the European Water Framework Directive (EU, 2000) clearly indicates that the environmental objectives set are not being reached at the present time and there is a certain risk of not meeting them by 2015. In this situation, quite common in semi-arid river basins, it is particularly important to quantify the groundwater balance precisely in order to determine aquifer sustainability. The information provided by the multispectral images becomes critical because these data sets are the only consistent and objective information on crops and can replace the data from agricultural statistics. In this regard, the MOS is an ideal case study for testing and validating the adequacy of remote sensing and GIS techniques for calculating groundwater abstractions in agricultural basins in semi-arid climates. Below is a description of several studies in the MOS to classify irrigated crops and quantify the groundwater consumption required for ideal phenological development.
Calculation of groundwater withdrawals
The development of a method to calculate groundwater abstractions has been briefly described in the section on the theoretical foundations. In addition to knowing the method, one must have previous knowledge of the study area in order to choose the type of satellite image, for example of the crops and natural vegetation (phenological development), soils, climate, relief, etc. In this study, due to the characteristics of the study area, the ideal images for thematic cartography are those from Landsat 5 TM and Landsat 7 ETM+ (Tables 1 and 2). Using this information, the number of scenes required can be established, as well as the bands to use for differentiating the crops of interest. For example, at least two images are necessary to establish a time series and to identify the non-irrigated crops, one in May or June (maturation process) and another in July (harvest). If more Landsat scenes on specific dates are included, spring irrigated crops can be identified. Generally, a minimum of 16 images is used for performing the classification. If the temporal progression of the spectral response is considered a discriminating element for crops (phenological development), the NDVI band is the most useful. This is obtained by performing mathematical operations on the images for the same dates using bands 3 and 4 of the Landsat sensor (Table 2 and Figure 8). The next step is to choose the classes to use in the classification, as a function of those which can be differentiated using the spectral bands in the images, such that they meet the objectives of the study (Spring Irrigation, Summer Irrigation, Spring-Summer Irrigation, Alfalfa, Bare soil, Dry farming crops, Shrubs, Forest). For the classification, a study of the training plots ("ground truth") is required.
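The following is a minimal sketch of the NDVI calculation and per-class area tabulation described in this section, using made-up red and near-infrared reflectance rasters; the simple thresholding used here is purely illustrative and stands in for the maximum likelihood classification of the multitemporal NDVI series actually used in the study.

```python
import numpy as np

# Made-up red and near-infrared reflectance rasters (e.g., Landsat bands 3 and 4)
rng = np.random.default_rng(2)
red = rng.uniform(0.05, 0.30, size=(100, 100))
nir = rng.uniform(0.10, 0.60, size=(100, 100))

ndvi = (nir - red) / (nir + red)               # NDVI as defined in the text

# Purely illustrative thresholding into classes; a real study would use the
# maximum likelihood classification of a multitemporal NDVI series instead
classes = np.digitize(ndvi, bins=[0.2, 0.5])   # 0: bare/dry, 1: moderate, 2: dense cover

pixel_area_ha = 30 * 30 / 10_000               # assuming 30 m Landsat pixels
labels, counts = np.unique(classes, return_counts=True)
for lab, n in zip(labels, counts):
    print(f"class {lab}: {n * pixel_area_ha:.1f} ha")
```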
This ground-truth information is used to perform the classification using the maximum likelihood algorithm, decision trees and contextual filters. Combining these classifiers makes it possible to deal with the sources of error that can arise, e.g. with dry-farmed crops, because several classes can show similar spectral responses in a given image. For example, in a rainy spring, cereals grown under irrigation or dry farming can be difficult to differentiate; using additional scenes together with decision trees can then help in the classification process. Contextual filters can be used to eliminate errors on plot boundaries or isolated pixels that belong to a different class than the rest of the plot. The result is a raster map with the classification of all irrigated crops (Fig. 9). Once the maps have been classified, the spectral classes can be used to select the classes of interest from a hydrogeologic point of view. The area of irrigated crops must be determined (divided into spring, summer and spring-summer irrigated crops) as well as their spatial distribution (overlay tools). Information on the irrigation needs of the crops present in the MOS is provided weekly through the Irrigation Assessment Service (SAR) by the Agronomic Institute of Technology of the Province of Albacete (ITAP) using the method proposed by Allen et al. (1998). For each agricultural year, the institution groups the irrigation needs for each crop and publishes them in the annual monitoring reports (http://www.itap.es). The theoretical irrigation needs represent the minimum water consumption for sustaining the crops of interest. To determine the true water needs, correction coefficients must be applied to the theoretical irrigation volumes. Field work to quantify the agricultural practices applied in the region should be done to perform this calculation. Thus, the irrigation efficiency can be used to calculate the correction coefficient that transforms the theoretical quantity of water necessary into the true values applied to each crop in the area. Generally the true amount of groundwater abstraction is higher than the theoretical irrigation needs. Therefore, the water consumption for the different types of irrigated crops is calculated by applying the following equation:

Vr = A × D

where Vr is the annual volume of irrigation water for each type of crop (m³), A is the area of each type of irrigated crop (ha), D is the irrigation needs of each type of crop after applying the correction coefficient (m³/ha), and the calculation can be aggregated over any spatial unit i (hydrogeologic domain, municipality, …). In this way groundwater consumption can be calculated for the MOS or any part of it. Estimating the amount of water required for irrigation is critically important in times of water shortage and especially in the current situation of increasing water demands with increasing populations. The use of GIS tools in this endeavor greatly increases the accuracy and efficiency of these types of study. This chapter is meant to be a summary of some methods used in the case study of the Mancha Oriental System, but they can be applied to other systems worldwide at risk of groundwater overexploitation or as a preventative measure to protect natural resources.
Author details
David Sanz, Santiago Castaño and Juan José Gómez-Alday, University of Castilla-La Mancha / Remote Sensing and GIS Group, Albacete, Spain
8,940.4
2012-10-31T00:00:00.000
[ "Environmental Science", "Geography", "Geology" ]
Cloud platform to improve efficiency and coverage of asynchronous multidisciplinary team meetings for patients with digestive tract cancer Background Multidisciplinary team (MDT) meetings are the gold standard of cancer treatment. However, the limited participation of multiple medical experts and the low frequency of MDT meetings reduce the efficiency and coverage rate of MDTs. Herein, we retrospectively report the results of an asynchronous MDT based on a cloud platform (cMDT) to improve the efficiency and coverage rate of MDT meetings for digestive tract cancer. Methods The participants and cMDT processes associated with digestive tract cancer were discussed using a cloud platform. Software programming and cMDT test runs were subsequently conducted to further improve the software and processing. cMDT for digestive tract cancer was officially launched in June 2019. The doctor response duration, cMDT time, MDT coverage rate, National Comprehensive Cancer Network guidelines compliance rate for patients with stage III rectal cancer, and uniformity rate of medical experts’ opinions were collected. Results The final cMDT software and processes used were determined. Among the 7462 digestive tract cancer patients, 3143 (control group) were diagnosed between March 2016 and February 2019, and 4319 (cMDT group) were diagnosed between June 2019 and May 2022. The average number of doctors participating in each cMDT was 3.26 ± 0.88. The average doctor response time was 27.21 ± 20.40 hours, and the average duration of cMDT was 7.68 ± 1.47 min. The coverage rates were 47.85% (1504/3143) and 79.99% (3455/4319) in the control and cMDT groups, respectively. The National Comprehensive Cancer Network guidelines compliance rates for stage III rectal cancer patients were 68.42% and 90.55% in the control and cMDT groups, respectively. The uniformity rate of medical experts’ opinions was 89.75% (3101/3455), and 8.97% (310/3455) of patients needed online discussion through WeChat; only 1.28% (44/3455) of patients needed face-to-face discussion with the cMDT group members. Conclusion A cMDT can increase the coverage rate of MDTs and the compliance rate with National Comprehensive Cancer Network guidelines for stage III rectal cancer. The uniformity rate of the medical experts’ opinions was high in the cMDT group, and it reduced contact between medical experts during the COVID-19 pandemic. Introduction Multidisciplinary team (MDT) meetings can provide more reasonable treatment plans for cancer patients, which could prolong their survival and improve their quality of life (1)(2)(3)(4)(5)(6)(7).MDT meetings are the gold standard for cancer treatment decisions and are widely used for diagnosing and treating different tumors (8).However, MDT meetings are usually hosted weekly in many hospitals.Different specialists must regularly participate at the same time and place (9)(10)(11), which is timeconsuming and economically ineffective.Brauer et al. 
(12) retrospectively analyzed 470 patients with benign and malignant pancreatic and digestive tract diseases, which led to an MDT discussion.They focused on institutional resource utilization for MDT meetings, estimating a cost of 2,035 USD and a total time expenditure of 16.5 hours weekly.Therefore, MDTs are used only in settings that require critical decisions (12).However, MDT meetings are mandatory in the United Kingdom to improve the prognosis of patients with cancer (13).Many cancer patients benefit from MDT meetings; however, balancing MDT efficacy and coverage rate remains challenging. Internet-based communication has been widely used in the medical care of cancer patients.Telemedicine has been a part of the care of cancer patients during the COVID-19 pandemic (14,15).Using web conferences to discuss complex or rare cancer cases is reliable and effective for decision-making (16,17).Virtual multidisciplinary approaches could improve MDT workflow efficiency, shorten the preparation time of MDTs, reduce the meeting time, and yield the same survival results as those in the literature (18,19).However, few tumor types and cases use web conferences and virtual multidisciplinary meetings; multidisciplinary experts must simultaneously discuss these meetings, which undoubtedly affects the MDT's coverage rate and efficiency (16)(17)(18)(19)(20)(21).Asynchronous communication content has been used between care team members of breast cancer patients, which may improve physicians' clerical burden and reduce unnecessary interruptions (22). We propose an asynchronous MDT based on a cloud platform (cMDT) to develop a treatment plan for digestive tract cancer that maximizes the MDT coverage rate for cancer patients and improves MDT efficacy.In the current quality improvement project, we conducted a feasibility study on the implementation of this Internet-based MDT platform, aiming to 1) demonstrate the performance of cMDT in creating a treatment plan for digestive tract cancer, 2) investigate the barriers to implementation, and 3) quantify the burden and compliance with cMDT from the clinicians' perspective. Study setting The formation of the cMDT included four steps. Step one: Establishment of the cMDT.The doctors and administrative staff discussed the following questions: How many groups will be involved in an MDT for digestive tract cancer, and who will be the members of each group?How to perform the cMDT workflow?After four rounds of discussion (one round of discussion every 20 days) from October-December 2018, the participants reached a consensus and proceeded to the next step. Step two: From January-February 2019, the programmers wrote programs according to the consensus of the cMDT, which was discussed in the first step. Step three: The cMDT performs test runs and further improves the software and process of the cMDT according to the test run results from March-May 2019. Step four: The cMDT for digestive tract cancer was officially launched in June 2019. 
Data collection
We defined digestive tract cancer patients diagnosed for the first time in our hospital from March 2016 to February 2019 as the control group and those diagnosed from June 2019 to May 2022 as the cMDT group. Patient characteristics, the number of doctors participating in each cMDT, the doctor response time (the interval between the MDT invitation being sent to a doctor and that doctor starting the MDT), the time of cMDT (total time spent by all MDT participants on a patient), the coverage rate of MDT (the ratio of the number of digestive tract cancer patients who received MDT to the number of digestive tract cancer patients who were diagnosed for the first time), the compliance rate with the National Comprehensive Cancer Network (NCCN) guidelines for stage III rectal cancer (the ratio of the number of stage III rectal cancer patients whose treatment plan was consistent with the NCCN guidelines to the number of stage III rectal cancer patients who were diagnosed for the first time) and the uniformity rate of medical experts' opinions (the ratio of the number of digestive tract cancer patients whose treatment opinions were uniform to the number of digestive tract cancer patients who received MDT) were collected.
Statistical analysis
The ages of the patients are presented as means ± standard deviations. The coverage and compliance rates are expressed as percentages. We used the χ² test or Fisher's exact test to compare categorical variables between the control and cMDT groups. A t-test was used to compare the ages of the patients in the control and cMDT groups. The χ² test was used to compare the coverage rate of MDT between the control and cMDT groups. The χ² test was also used to compare the compliance rate with NCCN guidelines for patients with stage III rectal cancer between the control and cMDT groups. SPSS 24.0 software was used for the statistical analyses.
Ethics approval and consent to participate
This study was approved by the Ethics Committee of Mianyang Central Hospital, Sichuan Province, China (approval number: S-20230340-01). Anonymized patient data from this study were analyzed, and informed consent was not needed.
Composition of cMDT
The cMDT has a part-time secretary responsible for the MDT's operation. The digestive tract cancer cMDT was divided into four groups: esophageal cancer, gastric cancer, hepatobiliary-pancreatic, and colorectal cancer. Every team has a group leader who hosts offline, face-to-face discussions. The cMDT of each patient included three types of doctors: surgeons, oncologists (chemoradiotherapy), and radiologists. Based on the patient's condition, other medical experts, including pathologists, nurses, nutritionists, physicians, and intervention doctors, could be invited to participate in the cMDT. Two professionals of the same specialty were included in each group, serving as roles A and B. All the cMDT participants had at least 10 years of work experience (Figure 1).
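As an illustration of the coverage-rate comparison described in the Statistical analysis subsection above, the following sketch reproduces the χ² test on a 2 × 2 table built from the counts reported in the Results; the study itself used SPSS 24.0, and scipy is shown here only as an equivalent open-source alternative.

```python
import numpy as np
from scipy.stats import chi2_contingency

# MDT coverage: patients with vs without MDT in the control and cMDT periods,
# using the counts reported in the Results section
table = np.array([[1504, 3143 - 1504],    # control group
                  [3455, 4319 - 3455]])   # cMDT group

coverage = table[:, 0] / table.sum(axis=1)
chi2, p, dof, _ = chi2_contingency(table)
print(f"coverage control {coverage[0]:.2%}, cMDT {coverage[1]:.2%}, "
      f"chi2={chi2:.1f}, p={p:.2e}")
```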
cMDT software system
The cMDT software system on the cloud platform includes four parts: a participant pool, an automatic trigger, patient information, and the invited medical experts' opinions. The participant pool included all the medical experts on the cMDT. The automatic trigger automatically enters a patient into the cMDT system the first time that patient is diagnosed with digestive tract cancer. Patient information included name, sex, age, diagnosis, medical history, and imaging and laboratory examinations. The invited medical experts' opinions included their individual opinions and the summary opinion (Figure 2).
Processes of cMDT
The cMDT system had three test runs. During the test runs, improvements were made to the proposed system. We found that forming only one group that included all medical experts resulted in the invitation information not being pushed accurately; therefore, we divided the single group into four groups. Because nobody judged the uniformity of the opinions once all the invited medical experts had provided them, we added the secretary's summary comments. We found that some differences in expert opinions could be resolved through simple communication; therefore, we added a WeChat discussion. When medical experts are on vacation or on business trips, they cannot give their opinions in a timely manner; therefore, the number of medical experts in each discipline in each cMDT group was increased from one to two, serving as mutually substitutable roles A and B. Because more than 50% of the medical experts could not give their opinions within 24 hours, the cMDT secretary reminded them to do so 48 hours after the invitation was sent. Consequently, doctor participation compliance significantly improved, and the number of doctors who needed to be notified manually decreased from 56% to 5%.
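Before turning to the final process, the sketch below gives a simplified, hypothetical data model mirroring the four parts of the cMDT software system described above (participant pool, automatic trigger, patient information, experts' opinions); all class and field names are illustrative assumptions, not the actual implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Expert:
    name: str
    specialty: str        # surgeon, oncologist, radiologist, ...
    role: str             # "A" or "B" (mutually substitutable)

@dataclass
class PatientRecord:
    name: str
    sex: str
    age: int
    diagnosis: str
    medical_history: str = ""
    examinations: dict = field(default_factory=dict)   # imaging / laboratory results

@dataclass
class Opinion:
    expert: Expert
    text: str

@dataclass
class CMDTCase:
    patient: PatientRecord
    invited: list[Expert] = field(default_factory=list)
    opinions: list[Opinion] = field(default_factory=list)
    summary: Optional[str] = None      # secretary's summary comment

def auto_trigger(patient: PatientRecord) -> Optional[CMDTCase]:
    """Create a cMDT case the first time a digestive tract cancer is diagnosed."""
    digestive_groups = {"esophageal", "gastric", "hepatobiliary-pancreatic", "colorectal"}
    return CMDTCase(patient) if patient.diagnosis in digestive_groups else None
```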
The final process is demonstrated in Figure 3. When an inpatient is diagnosed with digestive tract cancer, the patient is automatically imported into the cMDT cloud platform by the software system, including the patient's medical history, examination and test results, and pathological results. After the patient has completed the necessary imaging, laboratory, and pathological tests, the doctor in charge initiates a cMDT invitation to the other medical experts in the cMDT software system. The system then pushes a message with the patient's name, age, diagnosis, department, and bed number to the mobile phones of the invited medical experts (roles A and B). The invited medical experts asynchronously checked the cloud platform for the patient's medical history, imaging, and laboratory examinations and provided treatment opinions. Roles A and B are competitive; only the one who enters the system first can provide an opinion. If neither participant A nor B gave an opinion within 48 hours after the invitation was sent, the secretary of the cMDT reminded them by phone to complete the invitation promptly. After all the invited medical experts provided their opinions, the secretary reviewed and summarized them. If the opinions of all the invited medical experts were consistent, the treatment plan for the patient was determined. If the opinions of all the invited medical experts were inconsistent, the secretary initiated an online discussion through the WeChat group. If opinions still differed after the online discussion, the team leader organized face-to-face discussions to reach a consensus (Figure 3). After the official launch, the average doctor response time was 27.21 ± 20.40 hours (range 1 to 98 hours). The average duration of cMDT was 7.68 ± 1.47 minutes (from 5 to 16 minutes), 16.46 ± 3.57 minutes (from 12 to 31 minutes), and 35.52 ± 6.89 minutes (from 25 to 48 minutes) in the cMDT system, the WeChat discussion and the face-to-face discussion, respectively. According to the cMDT system, 84.98% of the doctors responded while at work and 15.02% while off-duty, and more surgeons (25.73%) responded off-duty than other doctors (10.34%). The average number of doctors participating in each cMDT was 3.26 ± 0.88 (range 3 to 8). Among the 11,263 doctor participations in the cMDT, only 3.18% (358) required a reminder 48 hours after the message was initiated.
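A compact sketch of the escalation logic described for the final process above is shown below; the 48-hour reminder threshold and the consistent-opinion, WeChat and face-to-face steps follow the text, while the function and class names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

REMINDER_DELAY = timedelta(hours=48)     # secretary reminds experts by phone after 48 h

@dataclass
class Invitation:
    expert: str                          # hypothetical expert identifier (role A or B)
    sent_at: datetime
    opinion: Optional[str] = None        # free-text treatment opinion

    def needs_reminder(self, now: datetime) -> bool:
        return self.opinion is None and now - self.sent_at >= REMINDER_DELAY

def resolve(opinions: list[str], wechat_consensus: Optional[str] = None) -> str:
    """Escalation logic described in the text: consistent opinions fix the plan;
    otherwise the secretary opens a WeChat discussion; if that still differs,
    the group leader organizes a face-to-face meeting."""
    if len(set(opinions)) == 1:
        return opinions[0]
    if wechat_consensus is not None:
        return wechat_consensus
    return "escalate to face-to-face discussion"

# Illustrative use with made-up opinions
now = datetime(2021, 5, 3, 9, 0)
inv = Invitation("surgeon_A", sent_at=now - timedelta(hours=50))
print(inv.needs_reminder(now))                      # True -> secretary phones the expert
print(resolve(["neoadjuvant chemoradiotherapy",
               "neoadjuvant chemoradiotherapy"]))   # consistent -> plan determined
```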
Discussion
In this quality improvement project, we developed and implemented a web-based MDT in oncology for digestive tract cancer patients. Using a cloud platform on which multidisciplinary professionals can conveniently present their opinions, the modified MDT enables most patients in busy oncological practices to be covered by a standardized and individualized decision-making procedure. For patients whose medical conditions require further discussion, the platform also provides a mechanism for the traditional MDT to reach a consensus on medical decisions. The significantly increased coverage rate of the asynchronous cMDT and the improved compliance with clinical practice guidelines demonstrate the benefit of this modified mode of MDT, as it improved the efficiency and effectiveness of patient care. The UK Department of Health defines an MDT as "a group of people from different healthcare disciplines that meet at a given time (whether physically in one place or by video or teleconferencing) to discuss a given patient, and who are each able to contribute independently to the diagnostic and treatment decisions about the patient" (23). Due to the simultaneous participation of multidisciplinary experts, improving the effectiveness and efficiency of such meetings is challenging. Previous studies have shown that the average length of patient discussions is 2-3 minutes (24,25). Time pressure and excessive caseload affect the quality of MDT decisions (24,26,27). A survey of 1269 MDT members showed that streamlined discussions enhance efficiency and ensure high-quality discussion of complex cases. However, there is also a lack of consensus about the methods by which streamlining can be achieved (28). Another study showed that streamlining the MDT creates additional time within the meeting to discuss more complex clinical cases while allowing all members of the team an opportunity to discuss all patients if needed (29). Another study supported tumor-specific guidance for
streamlined MDT discussions (30,31).This study used the cMDT software system and the platform for asynchronous MDT because experts gave their opinions at different places and times.After digestive tract cancer patients (excluding emergency surgical patients) are automatically imported into the cMDT system, the doctor in charge can advise on radiotherapy, chemotherapy, and surgery only after the patient has completed the MDT.Those who were included in the expert pool had 10 years of work experience to ensure professional opinions from each medical expert.A secretary reminder system and mutual replacement of roles A and B were arranged to ensure the timely implementation of the cMDT.This ensures that the patient's diagnosis or treatment plan is reasonable.The use of the cMDT system is feasible, because of the limited manual reminders, and the fact that few doctors need assistance with the operation.The cMDT software system improved the coverage rate of MDT for digestive tract cancer, and the uniformity rate of the medical experts' opinions was high.Previous studies (32)(33)(34) have shown that a low compliance rate with the NCCN guidelines leads to a worse prognosis in cancer patients.An MDT can increase the compliance rate with NCCN guidelines.We chose the compliance rate with the NCCN guidelines for stage III rectal cancer patients as an observation indicator because the treatments included neoadjuvant, surgical, and adjuvant treatments.The use of a cMDT increased the compliance rate with the NCCN guidelines for treating stage III rectal cancer, especially for neoadjuvant treatment; moreover, the use of a cMDT has improved the efficiency of MDT therapy (35,36).Artificial intelligence (AI) clinical decision support systems (CDSSs) have also been used in MDT for breast cancer, and treatment concordance between the AI CDSS Watson for Oncology (WFO) and a multidisciplinary tumor board occurred in 93% of breast cancer patients.These results suggest that WFO offers an AI computing methodology that may be an effective decision support tool in cancer therapy (27).AI cloud computing is being embedded correctly into infrastructure to help automate routine processes and streamline workloads.Computer-aided diagnosis (CAD) is considered a way to reduce heavy workloads and provide a second opinion to radiologists, as it aids identification and classification of pulmonary nodules as malignant or benign and clarifies the stage of lung cancer (37,38).AIbased in vitro diagnostics have been used in detection and disease severity assessment for cardiovascular diseases, COVID-19, and oral cancer (39).The integration of AI in radiotherapy not only autocontours the gross target volume and normal tissue but also plays a role in online adaptive radiotherapy, which saves considerable time for radiation oncologists and physicists and holds the potential for more personalized and efficient cancer care (40)(41)(42)(43).Virtual tumor boards were piloted for breast oncology and neurooncology, with an optimistic capacity for helping clinicians care for patients with complex needs and address barriers (44).Digital technology could help individuals better connect among the members of multidisciplinary teams.WeChat, QQ, Whatchat, etc. (APPs) with a group chat function can allow members of multiple disciplines to discuss simultaneously or asynchronously in the group through voice or text (45,46).Additionally, some video conferencing software (Tencent Meeting, ZOOM, Teams, etc.) 
allows team members of multiple disciplines to have online discussions simultaneously (47,48).In this study, medical experts still held opinions, but at different times and places, unlike classic MDTs.Doctors can freely arrange their MDT.Because surgeons operate on patients during work time, more surgeons respond off-duty.This approach can avoid the delay caused by waiting for all MDT members to arrive and can save time from the medical department to the MDT location, saving doctors time. cMDT reduces the MDT preparation time, saving physicians' time.The progress of MDT is time-consuming and cumbersome; for example, Stahl (49) reported that oncologists took as long as 2 hours to prepare a complex case for review in nearly 47% of health systems.Other specialties, such as radiologists and pathologists, may spend up to 6 hours preparing diagnostic images for a single MDT meeting (34).Digital tumor board solutions have been used to reduce the overall preparation time of MDT for breast cancer, digestive tract cancer, and ear, nose, and throat cancer (50)(51)(52).In the last few years, technological developments in the surgical field have been rapid and are continuously evolving.One of the most revolutionizing breakthroughs was the introduction of the internet of things (IoT) concept within surgical practice (53).IoT technology has been used in laparoscopic surgery and can aid in intraoperative, real-time decision-making (54,55).The IoT is also used for remote monitoring of surgical patients, as it allows doctors and nurses to remotely understand the postoperative condition of patients and provide personalized interventions in a timely manner (56,57).In this study, almost no patient data needed to be prepared by doctors, except for a few pathological data points.All patient information including data from electronic medical records, laboratory information systems, picture archiving, communication systems, and digital pathology systems, was automatically imported into the cMDT system.The invited medical experts could view all patient information in the cMDT system. cMDT reduced contact between medical experts during the COVID-19 pandemic.This study began in October 2018, and the pandemic began in December 2019.This study improved MDT coverage and efficiency and objectively reduced contact between medical experts during the pandemic.Restricting movement and gatherings have played a role in reducing COVID-19 transmission rates (58) and have changed the form of MDT.The survey results demonstrated a 63% decrease in the number of MDTs continuing with face-to-face meetings, with the majority making changes, including limiting attendees, social distancing, the use of face masks, and the use of virtual software.There was a decrease in the number of patients discussed, and the quality of the discussion was also limited (59).During the COVID-19 pandemic, virtual multidisciplinary team meetings were held for cancer patients (36,60,61).cMDT reduced contact between medical experts and ensured quality, providing a new idea for cancer MDT during respiratory infectious disease pandemics. 
This study has several limitations. First, this was a single-center study and not a randomized controlled study, with some results compared to previous data. Second, there were no long-term follow-up data, such as progression-free survival and overall survival data for these patients, which means that the effect of cMDT on outcomes remains to be clarified. Third, cMDT reduces academic communication between doctors of different specialties, especially young doctors (because all cMDT participants must have more than 10 years of work experience), which is not conducive to the growth of young doctors. Engaging young doctors in WeChat discussions or face-to-face discussions may compensate for this disadvantage.
Conclusion
An asynchronous cMDT based on a cloud platform can increase the MDT coverage rate and the guideline compliance rate for patients with stage III rectal cancer, thereby saving doctors time. The uniformity rate of the medical experts' opinions was high in the cMDT group. In addition, it reduced contact between medical experts during the COVID-19 pandemic.
TABLE 1 Demographics and characteristics of the control and cMDT groups.
TABLE 2 Compliance rates with guidelines for stage III rectal cancer.
5,152
2024-01-15T00:00:00.000
[ "Medicine", "Computer Science" ]
Understanding tumor ecosystems by single-cell sequencing: promises and limitations Cellular heterogeneity within and across tumors has been a major obstacle in understanding and treating cancer, and the complex heterogeneity is masked if bulk tumor tissues are used for analysis. The advent of rapidly developing single-cell sequencing technologies, which include methods related to single-cell genome, epigenome, transcriptome, and multi-omics sequencing, have been applied to cancer research and led to exciting new findings in the fields of cancer evolution, metastasis, resistance to therapy, and tumor microenvironment. In this review, we discuss recent advances and limitations of these new technologies and their potential applications in cancer studies. Introduction A single cell is the ultimate unit of life activity, in which genetic mechanisms and the cellular environment interplay with each other and shape the formation and function of such complex structures as tissues and organs. Dissecting the composition and characterizing the interaction, dynamics, and function at the single-cell resolution are crucial for fully understanding the biology of almost all life phenomena, under both normal and diseased conditions. Cancer, a disease caused by somatic mutations conferring uncontrolled proliferation and invasiveness, could in particular benefit from advances in single-cell analysis. During oncogenesis, different populations of cancer cells that are genetically heterogeneous emerge, evolve, and interact with cells in the tumor microenvironment, which leads to host metabolism hijacking, immune evasion, metastasis to other body parts, and eventual mortality. Cancer cells can also manifest resistance to various therapeutic drugs through cellular heterogeneity and plasticity. Cancer is increasingly viewed as a 'tumor ecosystem' , a community in which tumor cells cooperate with other tumor cells and host cells in their microenvironment, and can also adapt and evolve to changing conditions [1][2][3][4][5]. Despite the dramatic advances, substantial limitations and challenges still exist in single-cell sequencing. The first challenge lies in the technological noise introduced during the amplification step. Notable allelic dropouts (i.e., amplification and sequencing of only one allele of a particular gene in a diploid/multiploid cell) and non-uniform genome coverage hinder the accurate detection of single nucleotide variants (SNVs) at the genome or exome level. These problems can be partially alleviated by the LIANTI (linear amplification via transposon insertion) method [40], which implements a linear genomic amplification by bacterial transposons and reportedly reaches improvements in genome coverage (~97%), allelic dropout rates (< 0.19) and false negative rates (< 0.47). Similarly, in single-cell RNA sequencing (scRNA-seq), lowly expressed genes are prone to dropout and susceptible to technological noise even when detected, although they often encode proteins with important regulatory or signaling functions. These technological issues are more profound for scRNA-seq technologies designed to offer higher throughput [81,84]. Although many computational methods are available to model or impute dropout events [92,94,95], their performances vary and may introduce artificial biases. Much effort is needed to fully address this challenge. The second challenge is that only a small fraction of cells from bulk tissues can be sequenced. 
Bulk tissues consist of millions of cells, but present studies can often only sequence hundreds to thousands of single cells because of technological and economic limitations [9-11, 20, 25, 124-126]. To what extent the sequenced cells represent the distribution of cells in the entire tissue of interest is not clear. A plausible solution to address this challenge would be to further improve the throughput of cellular captures, e.g., MARS-seq [82] and SPLiT-seq [86], or alternatively to combine bulk and single-cell sequencing together and then conduct deconvolution analysis [127]. Deconvolution analysis for bulk RNA-seq data uses cell-type signature genes as inputs [128][129][130], which can be substituted by single-cell sequencing results, although critical computational challenges still exist, such as collinearity among single cells. If marker genes for known cell types are orthogonal to each other, the proportions of each cell type in a bulk sample can be reliably estimated. However, collinearity of gene expression exists widely among single cells, which complicates the deconvolution process. At present, successful deconvolution of bulk RNA-seq data based on scRNA-seq-defined signatures has been reported only in cases where orthogonal molecular signatures and fine cluster structures are well balanced [131]. The wide usage of scRNA-seq based deconvolution will hinge upon the availability of comprehensive single-cell clusters and the development of general methods for selecting orthogonal signatures for each cell type. Spatial information of single cells in the tissue is often lost during the isolation step and thus single-cell sequencing data typically do not show how cells are organized to implement the concerted function within a tissue of interest. Many new techniques have been developed to keep or restore the spatial information of sequenced single cells such as fluorescence in situ hybridization (FISH), singlemolecule fluorescence in situ hybridization (smFISH), laser capture microdissection, laser scanning microscopy, including two-photon laser scanning microscopy, and fluorescence in situ sequencing [21,30,[87][88][89][90][91][132][133][134][135][136][137][138][139][140][141][142][143]. However, at present all of these techniques have inherent limitations and only apply to specific spatial architecture. For example, while FISH-based technologies can map the spatial distribution of a set of selected genes upon which the spatial information of single cells subject to RNA-seq can be reconstructed via probabilistic inference, the methods are limited to two dimensions and the inference is primarily dependent on the availability of marker genes that can properly discriminate the spatial characteristics with sufficient resolutions. Other conditions for valid marker genes include accurate and robust estimation of their expression levels, but this requirement can be greatly compromised by inherent dropout in scRNA-seq protocols. Accurate restoration of single cell spatial positions via FISH-based inference also requires replicable tissues for parallel FISH and scRNA-seq, which can be only approximately fulfilled on model organisms. For human cancers, however, such requirements usually cannot be met and spatial-recording methods have thus been proposed. With laser capture microdissection, single cells are obtained simultaneously when their spatial information is recorded. 
However, the cellular throughput of such methods is extremely limited due to operation difficulties, and the biological interpretation of the recorded spatial positions are confined because adjacent cells cannot be properly dissected for scRNA-seq, whereas sequenced cells are often distantly distributed. Low molecular throughput is also problematic with these recently developed in situ sequencing methods. Typically, only tens or hundreds of known genes can be in situ labeled or sequenced, far from the requirement of fully understanding the molecular landscapes of single cells of interest. Furthermore, the replicability of such complicated experiments also imposes barriers for their practical applications to human samples. Because single-cell sequencing captures individual cells at a particular time point, other factors such as cell cycle and functional state must be considered. By contrast, these factors are often ignored in bulk sequencing due to the average effect. Cell cycle phases can be discerned by phase-specific expression analysis [144][145][146], but cell types and cell states can be hard to distinguish. Sometimes even cancer cells cannot be easily distinguished from normal cells, although inferred DNA copy numbers are often used for this purpose [22,47,51]. More robust methods are needed for cell type determination in silico. Compared to traditional bulk sequencing technologies, which characterize samples via a gene-by-sample matrix, single-cell sequencing adds a cellular layer between genes and samples, which results in a gene-by-cell-by-sample data structure. Addition of the cellular dimension allows simultaneous characterization of samples at both the molecular and cellular level. However, bioinformatics and algorithmic methods for single-cell sequencing data analysis are generally developed for gene-by-cell data, which essentially have the same structure with the gene-by-sample matrices. Although methods exploiting the cellular dimension for phenotype classification have been proposed [147], tools sufficiently employing all the molecular, cellular, and sample information of the new data structure are still needed. Given the maturation of single-cell sequencing technologies, especially scRNA-seq, the scale of datasets of one study soon increases from hundreds to tens of thousands and even millions of cells. For large programs, e.g., the Human Cell Atlas project [148], the volume of data demands more robust computer hardware and software. Although a few down-sampling or convolution-based methods have been proposed to manage large-scale scRNA-seq data for clustering and differential expression analysis [149][150][151], efficient and effective algorithms are of pressing need to circumvent these difficulties. Complexity of tumor ecosystems Cancer is known for its heterogeneity, at the inter-and intra-tumor levels [152]. Within a tumor, different spatial sites have different composition of cancer cell clones (Fig. 2), which results in spatial heterogeneity [152]. As cancer cells evolve, temporal variations also arise during the course of cancer genesis and progression, causing temporal heterogeneity [152]. In addition to cancer cells, tumors are also infiltrated with stromal, immune, and other cell types. The diversity of these cells forms the basis of the heterogeneity of the tumor microenvironments [1,4,153]. The complex and dynamic nature of cancer heterogeneity within tumors is analogous to ecosystems. 
Thorough understanding of the composition, interactions, dynamics, and operating principles of tumor ecosystems is key to understanding cancer evolution and the emergence of drug resistance. Multi-region sampling coupled with bulk sequencing is a plausible approach to investigating intra-tumor heterogeneity on the genome scale [36,154,155]. However, although this approach reveals intra-tumor heterogeneity, it cannot directly dissect the cellular composition of tumors. Computational deconvolution techniques could help infer the cellular composition of tumors, but such analyses are limited to a few known cell types [128-130]. Single-cell sequencing represents a quantum technological leap, as it allows the most precise dissection of the complex architecture of tumors while capturing rare cell types. Here, we review recent progress on understanding tumor ecosystems using single-cell sequencing technologies (Table 1). Because of the spatial heterogeneity, bulk sequencing from a specific specimen will produce an average signal of thousands of cells with unknown composition, which forms a hidden confounding factor that interferes with the interpretations of cancer research and diagnosis. Single-cell sequencing inherently has the power to dissect the cellular composition of tissues, providing a powerful tool to advance cancer studies. Decomposition of clonal and sub-clonal tumor structure Early success of single-cell sequencing applications in cancer research came from the studies of clonal and sub-clonal structure of primary tumors. DNA-based single-cell sequencing has been applied to breast [7,20,21,26,156,157], kidney [158], bladder [159], and colon tumors [39,160,161], glioblastoma [162], and hematological malignancies such as acute myeloid leukemia and acute lymphoblastic leukemia [11,33,163-165]. These studies demonstrated the existence of common mutations among different cancer cell clones in individual cancer patients, which provided evidence for the origin of common cancerous cells and subsequent clonal evolution. Meanwhile, the application of scRNA-seq in glioma [22,51,166] demonstrated that cell differentiation of neural stem cells also contributes to tumor heterogeneity, thus supporting a cancer stem cell model. Notably, a recent study of intra-tumor diversification of colorectal cancers [42] integrated single-cell technologies and tumor organoid culture to show that cancer cells had several times more somatic mutations than normal cells. The authors of this study also observed that most of the mutations occurred during the final dominant clonal expansion, contributed by mutational processes absent from normal controls. In addition to canonical mutations, transcriptomic alterations and DNA methylation were cell-autonomous, stable, and followed the phylogenetic tree of each cancer. The study by Roerink et al. [42] provided a paradigm of cancer evolution by characterizing clonal and sub-clonal tumor structures, and indicated the potential dynamics of cancer progression. These findings exemplify the unique power of single-cell sequencing to characterize the diversity of cancer cells, resulting in different evolutionary models between cancers. In particular, in some cancers single-cell data challenged the cancer stem cell model by showing that continued proliferation and clonal expansion formed the majority of tumor cells, whereas in other cancers scRNA-seq data supported the cancer stem cell model by demonstrating the contribution of cell differentiation to tumor heterogeneity.
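Whether a given cell is treated as malignant in analyses like these often rests on copy-number profiles inferred from the expression data themselves, as noted earlier [22,47,51]. The sketch below illustrates the basic idea behind such inference, averaging relative expression over genes ordered along the genome so that large gains or losses stand out. It uses simulated data and hypothetical parameter choices and is not a reimplementation of any published tool.

```python
# Toy illustration: derive a smoothed copy-number-like signal per cell from expression,
# by averaging relative expression over genes ordered along the genome.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_genes = 100, 1000          # genes assumed sorted by genomic position

# log-normalized expression; simulate a chromosome-arm gain in half of the cells
expr = rng.normal(0.0, 1.0, size=(n_cells, n_genes))
malignant = np.arange(n_cells) < 50
expr[np.ix_(malignant, np.arange(300, 500))] += 0.7   # hypothetical amplified region

# 1) center each gene on the cohort average (a proper reference would be normal cells)
rel = expr - expr.mean(axis=0)

# 2) moving average over neighbouring genes to suppress gene-level noise
window = 51
kernel = np.ones(window) / window
cnv_like = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, rel)

# 3) cells with a consistently elevated smoothed signal in the region look "amplified"
region_score = cnv_like[:, 300:500].mean(axis=1)
print("mean region score, simulated malignant cells:", region_score[malignant].mean().round(2))
print("mean region score, simulated normal cells:  ", region_score[~malignant].mean().round(2))
```

In practice a reference set of confidently normal cells and per-chromosome processing would be used, but even this crude smoothing shows why broad CNAs leave a detectable footprint in expression data.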
Copy number alterations (CNAs) and point mutations of cancer cells were subject to different evolutionary modes, with the former preferring punctuated evolution and the latter preferring gradual accumulation. Outstanding disparities need to be resolved before consistent models of cancer genesis and evolution can be applied to a wide range of cancers. Studies with larger sample size and higher molecular and cellular resolution are needed to reconcile various cancer evolution models. Sequencing analysis of single-cell-derived organoids could provide a template for investigating cancer evolution, but this should be extended to larger samples and other cancer types. Monitoring cancer progress through characterization of circulating tumor cells Circulating tumor cells (CTCs) are extremely rare in blood (1 in 10⁶), with only tens of cells captured from a typical blood draw [60]. The application of bulk sequencing to such limited input material for genomic exploration is difficult, hindering the analysis of cancer cell migration via blood. Single-cell sequencing has transformed the ability to characterize CTCs and has been used to identify the metastatic potential of CTCs in cancer metastasis models and to monitor abnormal signaling pathways for drug-resistance prediction. By characterizing mutation profiles of CTCs, their tissue sources can be matched to the positions of primary and metastatic tumors [13,16,24,167,168]. This type of analysis holds great potential in early cancer detection and real-time monitoring of disease progression with or without treatment. Furthermore, the origin and destination of CTCs could be further explored to reveal the dissemination conditions of specific tumors. The application of DNA-based single-cell sequencing to CTCs in colon cancer [161], melanoma [169], lung cancer [170], and prostate cancer [171,172] revealed that the copy number profiles of CTCs are highly similar to primary and metastatic tumors but point mutation profiles show much greater variations, consistent with punctuated evolution of CNAs and gradual evolution of point mutations observed within tumors.
Table 1 Recent progress of cancer studies based on single-cell sequencing
Technology | Study topic | References
Single-cell DNA-seq | Heterogeneity of cancer clones | [18,33,40,45,157]
Single-cell DNA-seq | Mutation profiles of CTCs | [16,43]
Single-cell RNA-seq | Expression patterns of cancer cells upon treatment | [19,29,46,165]
Single-cell RNA-seq | Expression heterogeneity and dynamics of cancer cells | [22,37,38,41,44]
Single-cell RNA-seq | Expression patterns of CTCs | [31]
Single-cell RNA-seq | Heterogeneity of tumor microenvironment | [47-52,55-59]
Single-cell RNA-seq, mass cytometry | Heterogeneity of tumor microenvironment | [53,54]
Single-cell DNA-seq and RNA-seq | Integrated analysis of cancer cells | [20,166]
Single-cell epigenomics | Epigenomics of cancer cells | [187]
Single-cell multi-omics | Multi-omics analyses of the same cancer cells | [32]
Single-cell-derived organoids | Diversification of cancer cells | [42]
Spatial single-cell sequencing | Spatial heterogeneity and metastasis of cancer cells | [21,30]
Single-cell DNA-seq | Amplification methods | [7,8,39,40,172,206-208]
A recent integrative analysis of colon, breast, gastric, and prostate cancers by single-cell DNA sequencing compared the mutation profiles between primary tumor cells and CTCs, and revealed convergent evolution of CNAs from primary cancer tissues to CTCs [16].
Remarkably, CNAs affecting the oncogene MYC and the tumor suppressor gene PTEN were observed only in a minor proportion of primary tumor cells but were present in all CTCs spanning multiple cancer types. These observations suggest that the potential of primary tumor cells to transit into CTCs are quite uneven, or otherwise strong selection pressure exists upon CTCs during the metastasis process. To resolve the detailed molecular mechanisms involved in the generation of CTCs in primary tumors to colonization in metastasis sites, it will be important to temporally trace the variations of CTCs during cancer progression from primary tumors to metastasis in both a research and clinical setting. Furthermore, scRNA-seq has been used in the study of CTCs in melanoma [173], breast [167], pancreatic [126,174], and prostate cancers [31], revealing specific transcriptional signatures of CTCs relative to their primary and metastatic tumors. Extracellular matrix proteins were specifically expressed by CTCs, and plakoglobin appeared to be a key regulator of CTC clusters with survival advantages distinct from individual CTCs. Furthermore, abnormal signaling pathways for drug resistance prediction can be monitored using scRNA-seq of CTCs, as illustrated by the Miyamoto et al. study [31], in which scRNA-Seq profiling of 77 CTCs from 13 prostate cancer patients revealed extensive heterogeneity of the androgen receptor gene at both expression and splicing levels. Activation of non-canonical Wnt signaling was observed in the retrospective study of CTCs from patients treated with an androgen receptor inhibitor, indicating the potential resistance to therapy. Despite enviable progress, CTC studies remain limited by difficulties in the detection and enrichment of CTCs from blood. How to effectively obtain insight into the generation, progress, metastasis, and response to therapies of the entire tumor through the characterization of CTCs is still an elusive question. Interrogating the genesis and evolution of therapy resistance Chemotherapy and targeted therapies have been important weapons to combat cancers, but drug resistance is common for most tumors. Due to the complexity of cancer drug resistance, the underlying mechanisms remain poorly understood for most human cancers, which hampers the development of new approaches to overcome drug resistance. An important question to address is whether drug resistance arises from rare pre-existing subclones with drug-resistant phenotypes prior to treatment (intrinsic resistance) or, alternatively, is acquired through induction of new mutations conferring drug-resistance (acquired resistance). Acquired versus intrinsic resistance has been studied for decades in bacteria, which are single-cell systems [175], but remains elusive in most human cancers. Single-cell sequencing can be used to resolve tumor heterogeneity, reconstruct the evolutionary trajectories of cancer cells, and identify rare subclones, and has therefore been a promising method to address drug resistance [19,25,29,47,165]. The recent study by Kim et al. [20] of triple-negative breast cancers treated with neoadjuvant chemotherapy employed both single-cell DNA-and RNA-sequencing to resolve the genesis and evolution of drug-resistant clones. Using DNA data from 900 cells and RNA data from 6862 cells, CNAs in drug-resistant subclones were found to be pre-existing and adaptively selected while their expression profiles were acquired through transcriptional reprogramming in response to chemotherapy. 
These results suggest a model of drug-resistance acquisition involving both intrinsic and acquired modes of evolution. According to the newly proposed model, drug resistance-associated CNAs are acquired in rare tumor clones during several short evolutionary bursts at the earliest stages of tumor progression and then subject to gradual evolution. Following anti-tumor therapies, the selective pressure will result in two fates for tumor cells: clonal extinction and persistence, during which the pre-existing rare drug-resistant tumor clones will persist and become the major clones. The transcriptional programs of the persisting clones will converge on a few common pathways associated with the therapy-resistance phenotypes. Both genomic mutations and transcriptional reprogramming could be relevant in understanding therapy resistance as they might exert different modes of evolution for changes at individual levels. It remains unclear how different mechanisms coordinate with one another; therefore, more powerful technologies, such as single-cell multi-omics, are needed to address these questions. Dissecting the tumor microenvironment to understand cancer immune evasion and metastasis The tumor microenvironment represents all components of a solid tumor that are not cancer cells. Besides the genetic and non-genetic heterogeneity among tumor clones, heterogeneity among tumor-infiltrating stromal and immune cells in the microenvironment also plays vital roles in tumor growth, angiogenesis, immune evasion, metastasis, and responses to various therapies. With bulk DNA sequencing, the genomes of these cells in the microenvironment are indistinguishable from those of normal tissues and thus often interfere with the detection of tumor CNAs and point mutations by altering tumor purity. With bulk RNA sequencing, the mRNAs of these cells are intermingled with those of tumor cells, which makes it difficult to untangle the expression signals by tumor cells from those by microenvironment cells. The variable compositions of tumor microenvironment often become 'dark matter' that confounds subsequent analyses. Although pathway analysis may indicate major types of infiltrated cells, the results are not sufficiently detailed to provide insights into the underlying mechanisms of tumor phenotypes. Computational deconvolution analysis can infer tumor-infiltrating cell types based on tumor bulk RNA-seq data [128][129][130]. However, these algorithms are limited by the availability of gene signatures specific to individual cell types and the collinearity among gene signature profiles. The majority of these limitations are overcome by single-cell sequencing. With scRNA-seq, the immune landscapes of melanoma [47], glioblastoma [176], breast [52,55,56], head and neck [48], colorectal [50], liver [49], kidney, [54,58] and lung [53,57,59] cancers have been depicted at unprecedented resolution. New immune cell subtypes with distinct functions or states have been identified, and genes specifically expressed in rare immune cells have been linked to tumor immune evasion. For example, results from a recent single cell study of lung cancers by 10X Genomics [59] revealed that tumor-enriched B cells can be further grouped into six clusters, of which two follicular B cell clusters are characterized by the high expression of CD20, CXCR4, and HLA-DRs. 
By contrast, two plasma B-cell clusters express immunoglobulin gamma and the remaining two mucosa-associated lymphoid tissue-derived B-cell clusters have immunoglobulins A and M and JCHAIN as signature molecules. Subtypes of macrophages were also depicted by mass cytometry [53]. In particular, T cells, which specifically recognize tumor neoantigens and kill cancer cells in a targeted way, have been in the spotlight of single cell interrogation of several cancer types [49,55,57]. Tissue-resident T-cell subsets are found in liver, lung, and breast tumors, with lower T-cell exhaustion levels associated with better prognosis [49,55,57]. Immunotherapies that reinvigorate cytotoxic T cells via immune checkpoint blockade or adoptively transfer neoantigen-specific T cells are therapeutically effective in multiple cancer types [177]. Specific T-cell clusters with suppressive functions in treatment-naïve tumors and T-cell clusters that respond to immunotherapies have been identified [47,49,178,179]. Signature genes of these T-cell clusters, e.g., LAYN identified in exhausted CD8 + T cells and regulatory T cells of liver cancer, can provide attractive biomarkers to predict patient responses to cancer immunotherapies and potentially serve as new candidate targets for further investigation. Nevertheless, accompanying these great achievements, single-cell studies of tumor microenvironment are limited in their depictions of spatial, temporal, and interactive characteristics among cancer and immune cells. Besides the immune cells themselves, cancer-associated fibroblasts (CAFs) also play crucial roles in cancer immune evasion and metastasis. Heterogeneity of CAFs in various cancer types via scRNA-seq has been shown in several studies [47,48,50,59]. In lung cancer studies by 10X Genomics [59], five distinct types of tumor-resident fibroblasts were identified that expressed unique repertoires of collagens and other extracellular matrix molecules. In colorectal cancers profiled by SMART-seq2 [50], two distinct subtypes of CAFs were identified, one of which was enriched for epithelial-mesenchymal transition (EMT)-related genes, which is consistent with results from the lung cancer study [59]. The heterogeneity of CAFs of these cancer types was consistent with results from earlier studies in metastatic melanoma and head and neck cancer, in which the potential functions of CAF subclusters were indicated [47,48]. Interestingly, a specific subcluster of CAFs that exclusively expressed multiple complement factors, including C1S, C1R, C3, C4A, CFB, and C1NH (SERPING1), correlated with T-cell infiltration based on data analysis from the Cancer Genome Atlas project [47]. Although the correlation cannot imply causality, the cellular and molecular mechanisms of T-cell recruitment by CAFs should be studied. Furthermore, certain CAFs observed in a head and neck cancer single-cell study were found to co-localize with malignant cells highly expressing a p-EMT (partial EMT) gene program that is correlated with metastasis [48]. The co-localization was supported by numerous ligand-receptor interactions between CAFs and the corresponding malignant cells, thus providing new clues for the underlying mechanisms of tumor invasion. The dynamic nature of CAF gene expression certainly deserves further exploration. Outlook of single-cell sequencing in cancer research Single-cell epigenomic technologies are maturing and steadily making their way to cancer research [15,68,72,[180][181][182][183][184][185][186][187][188][189][190] (Fig. 
3). These technologies provide various means to explore DNA methylation status, chromosome accessibility, protein binding, and high-order chromosome conformations. As single-cell epigenomic technologies depict the molecular layers connecting the genome and its functional outputs, the adaptation of single-cell epigenomic technologies to cancer research would greatly advance the understanding of regulatory mechanisms of cancer cell phenotypes and provide new therapeutic targets to combat cancers [191]. New insights may also include mechanisms of cancer cell mutagenesis as epigenomics plays key roles in chromosome stability and dynamics [192]. Single-cell epigenomic technologies may also help investigate the regulatory mechanisms that shape tumor-infiltrating cells, and thus help in advancing the development of therapies that target the tumor microenvironment. Despite its exciting prospects, single-cell sequencing still faces notable technical challenges that limit the release of its full power in cancer research and clinical applications. For example, the single layer-omics technology generally only gives a snapshot of the state of tested cells. Thorough understanding of the functions of individual cells often requires comprehensive molecular information that covers all layers from the nucleus to extracellular matrix, and includes genomes, epigenomes, chromosome confirmation, transcriptomes, proteomes, metabolomes, and interactomes (Fig. 3). Comprehensive information is important for cancer studies because of the great genomic and phonemic heterogeneity of cancer cells. Single-cell multi-omics technologies [32, 76-79, 124, 187, 193] have proved feasible but these methods are still in the infant phase of development, limited by low coverage, throughput, and automation levels. Wide application of such technologies in cancer research and clinics requires more effort to conquer the aforementioned challenges. CITE-seq has been used to simultaneously profile mRNA levels and the abundance of a set of selected proteins of cancer samples [80]. Furthermore, SUPeR-seq allows simultaneous measuring of linear and circular RNA levels within the same single cancer cell and associated cells [124], and G&T-seq provides both genomic and transcriptomic information of a given cell [76]. scTrio-seq has been used to obtain epigenomic, genomic, and transcriptomic information of the same cancer cell [32]. Future challenges will include circumventing the loss of spatial information of tested single cells during the dissociation step. Tumor ecosystems are highly organized and dynamic; therefore, the spatial positions of various cancer cells and the tumor microenvironment cells and their interactions may play pivotal roles during cancer progression, metastasis, immune evasion, and the development of therapeutic resistance (Fig. 3). Integration of imaging techniques with single-cell sequencing have made meaningful progress in this area. By recording the spatial information of single cells or important 'anchor genes' via FISH, smFISH, immunohistochemistry, laser capture microdissection, laser scanning microscopy, or in situ sequencing, the spatial structure of single cells can be experimentally recorded or computationally reconstructed [21, 87-91, 132, 138, 143, 149], thereby shedding light on the spatial heterogeneity of tumor ecosystems. 
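One way to picture the computational reconstruction step mentioned above: if a handful of "anchor genes" have been measured in situ across spatial bins, each dissociated cell can be assigned to the bin whose anchor-gene profile it matches best. The sketch below is purely illustrative, with random data and hypothetical names, and it ignores the probabilistic weighting that published methods use.

```python
# Toy illustration of anchor-gene-based spatial assignment of dissociated cells.
import numpy as np

rng = np.random.default_rng(2)
n_bins, n_anchors, n_cells = 40, 12, 500

# In-situ reference: expression of anchor genes in each spatial bin (e.g. from smFISH)
reference = rng.gamma(shape=2.0, scale=1.0, size=(n_bins, n_anchors))

# Dissociated cells: each cell truly comes from some bin; we observe a noisy copy
true_bin = rng.integers(0, n_bins, size=n_cells)
cells = reference[true_bin] * rng.lognormal(0.0, 0.3, size=(n_cells, n_anchors))

def assign_bins(cell_profiles, ref):
    # z-score each profile, then pick the bin with the highest Pearson correlation
    def z(m):
        return (m - m.mean(axis=1, keepdims=True)) / (m.std(axis=1, keepdims=True) + 1e-12)
    corr = z(cell_profiles) @ z(ref).T / ref.shape[1]
    return corr.argmax(axis=1)

assigned = assign_bins(cells, reference)
print("fraction of cells mapped back to their true bin:",
      (assigned == true_bin).mean().round(2))
```

The accuracy of such a mapping hinges entirely on how discriminative and how reliably measured the anchor genes are, which is exactly the marker-gene limitation raised earlier.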
The recently developed NICHE-seq technology [89] allows isolation of immune cells in a specifically prescribed niche of model animals for single-cell sequencing, which provides a powerful tool to explore tumor immunology in animal models. However, the wider application of NICHE-seq to clinical samples will take time, because two-photon laser scanning microscopy requires the targeted cells to be optically labeled, which at present is only possible in model animals. ProximID maps the cellular interaction network of tissues and could be used for spatial position mapping to cellular physical networks [194] to show how cancer cells interplay with the tumor microenvironment (Fig. 3; its legend outlines integration of single-cell sequencing with spatial information of cells to analyze the spatial architecture of tumors, single-cell multi-omics, cellular interaction mapping, and single-cell epigenetics). ProximID dissects tissues into doublets or triplets to capture the physical interactions among cells and determines cellular identities via scRNA-seq, without prior knowledge of the component cell types [194]. ProximID provides great promise for cellular interaction and spatial position mapping, as shown by the recently proposed paired-cell sequencing method that adopts a similar strategy [195]; however, cellular throughput is still modest at present. A newer version of ProximID parallels the microdissection of doublets and triplets with single-cell identity determination, and improves the throughput at the expense of accuracy of cell identity assignment. Overall, creative technological advances in the basic research field have recently emerged in quick succession. Despite obvious pros and cons, they provide exciting new tools to interrogate human cancers at the single-cell level. Furthermore, the development of new computational and analytical tools often lags behind the corresponding experimental methods. The new single-cell sequencing data, with added dimensions or features, often violate the analytical assumptions of bulk sequencing studies, which makes existing analytical tools obsolete or underpowered. For example, the data structure of single-cell sequencing of cancers requires the application of tensors to depict the gene-by-cell-by-sample relationships, whereas bulk sequencing data can be sufficiently encapsulated by gene-by-sample matrices. Analytical tools currently available are generally designed for matrix-based data structures. Reduction of dimensionality from tensors to matrices is currently needed to use the available bioinformatics tools to analyze either gene-by-cell, gene-by-sample, or cell-by-sample relationships.
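To make this matricization step concrete, the short sketch below (illustrative shapes and variable names only, not tied to any particular tool) builds a small gene-by-cell-by-sample array and produces the three matrix views mentioned in the text.

```python
# Toy gene-by-cell-by-sample tensor and its three matrix "unfoldings".
import numpy as np

rng = np.random.default_rng(3)
n_genes, n_cells, n_samples = 50, 30, 4
tensor = rng.poisson(2.0, size=(n_genes, n_cells, n_samples))

# gene-by-(cell,sample): the view most existing scRNA-seq tools expect
gene_by_cellsample = tensor.reshape(n_genes, n_cells * n_samples)

# cell-by-(gene,sample): e.g. for clustering cells jointly across samples
cell_by_genesample = tensor.transpose(1, 0, 2).reshape(n_cells, n_genes * n_samples)

# sample-by-(gene,cell): e.g. for phenotype-level comparisons between samples
sample_by_genecell = tensor.transpose(2, 0, 1).reshape(n_samples, n_genes * n_cells)

for name, m in [("gene x (cell,sample)", gene_by_cellsample),
                ("cell x (gene,sample)", cell_by_genesample),
                ("sample x (gene,cell)", sample_by_genecell)]:
    print(name, m.shape)
```

Each unfolding discards the joint structure that the other two retain, which is exactly why tools operating on the full tensor are called for.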
Tools for simultaneous analysis of gene-cell-sample relationships are urgently needed. The ever-increasing data size of single-cell sequencing studies also requires more robust computational powers. Down-sampling is often applied to reduce data size so that the dataset can be analyzed. Computational algorithms that can handle large single-cell sequencing datasets while simultaneously maintaining similar analytical performance are needed. The spatial single-cell RNA sequencing technique also generates unprecedented data type, for which two new algorithms have been proposed recently [196,197], allowing analysis of the spatial variance of cancers. Computational development specifically for single-cell data will likely be the field to watch in the next few years, because there are many unresolved yet important issues. It is hoped that bioinformatics of single-cell analysis will catch up with the rapid technology development and the ever-expanding appetite for new data in the cancer research field. Potential applications of single-cell sequencing in the clinic Single-cell technologies use limited input materials to resolve tumor heterogeneity and so have great potential in the cancer clinic for diagnosis, prognosis, early detection, risk assessment, progress monitoring, and therapy response prediction. Single cancerous cells can be isolated from blood samples in early stages of cancer genesis [161,170,172], which enables early detection and assessment of cancers [198,199]. If a set of known driver mutations are observed independently in multiple single cancer cells, clonal expansion of cancerous cells is inferred. Additional diagnostic tests are then combined to validate the inference, and further monitoring or treatments may be needed. For diagnosed cancer patients, single-cell sequencing can reveal clonal and subclonal information of their tumor lesions with respect to their genomic and transcriptomic characteristics, upon which clinicians can determine the most suitable therapies [200]. With longitudinal sampling of CTCs or DTCs (disseminated tumor cells), single-cell sequencing also allows the monitoring of patient responses to the prescribed therapies [31,171,201]. The resulting genomic and transcriptomic information can be used to examine the selective pressure of drugs to various cancer clones and alert the emergence or expansion of drug-resistance cancer clones [20]. The non-invasive nature of CTC or DTC isolation also greatly reduces the inherent risks of core biopsy directly at the tumor site. Single-cell sequencing data potentially provide metrics beyond conventional genomic mutation data or gene expression data for prognosis analysis. For example, various indices for tumor heterogeneity could be designed to predict responses to therapies, probability of metastasis, disease-free periods, and overall survival [147,[202][203][204][205]. Conclusions Since its inception, single-cell sequencing has revolutionized cancer research. The pioneering studies have covered the development and applications of single-cell DNA and RNA sequencing to address a wide range of topics such as intra-tumor heterogeneity of primary tumors, roles of CTCs and DTCs during metastasis, evolution of therapy resistance, and the characteristics of tumor microenvironments. Many novel biological insights have been obtained, and the revolution is just starting. 
Improvement of existing single-cell sequencing technologies, emergence of new techniques, and the integration of single-cell sequencing with other experimental protocols have provided powerful toolsets to understand many of the remaining mysteries of cancers. Single-cell epigenomics, multi-omics, and spatial single-cell sequencing technologies are some of the major directions of single-cell sequencing technologies that will bring the second wave of revolutions of cancer research.
7,432.6
2018-12-01T00:00:00.000
[ "Biology" ]
Meta-Analysis on Randomized Controlled Trials of Vaccines with QS-21 or ISCOMATRIX Adjuvant: Safety and Tolerability Background and Objectives QS-21 shows in vitro hemolytic effect and causes side effects in vivo. New saponin adjuvant formulations with better toxicity profiles are needed. This study aims to evaluate the safety and tolerability of QS-21 and the improved saponin adjuvants (ISCOM, ISCOMATRIX and Matrix-M™) from vaccine trials. Methods A systematic literature search was conducted from MEDLINE, EMBASE, Cochrane library and Clinicaltrials.gov. We selected for the meta-analysis randomized controlled trials (RCTs) of vaccines adjuvanted with QS-21, ISCOM, ISCOMATRIX or Matrix-M™, which included a placebo control group and reported safety outcomes. Pooled risk ratios (RRs) and their 95% confidence intervals (CIs) were calculated using a random-effects model. Jadad scale was used to assess the study quality. Results Nine RCTs were eligible for the meta-analysis: six trials on QS-21-adjuvanted vaccines and three trials on ISCOMATRIX-adjuvanted, with 907 patients in total. There were no studies on ISCOM or Matrix-M™ adjuvanted vaccines matching the inclusion criteria. Meta-analysis identified an increased risk for diarrhea in patients receiving QS21-adjuvanted vaccines (RR 2.55, 95% CI 1.04–6.24). No increase in the incidence of the reported systemic AEs was observed for ISCOMATRIX-adjuvanted vaccines. QS-21- and ISCOMATRIX-adjuvanted vaccines caused a significantly higher incidence of injection site pain (RR 4.11, 95% CI 1.10–15.35 and RR 2.55, 95% CI 1.41–4.59, respectively). ISCOMATRIX-adjuvanted vaccines also increased the incidence of injection site swelling (RR 3.43, 95% CI 1.08–10.97). Conclusions Our findings suggest that vaccines adjuvanted with either QS-21 or ISCOMATRIX posed no specific safety concern. Furthermore, our results indicate that the use of ISCOMATRIX enables a better systemic tolerability profile when compared to the use of QS-21. However, no better local tolerance was observed for ISCOMATRIX-adjuvanted vaccines in immunized non-healthy subjects. This meta-analysis is limited by the relatively small number of individuals recruited in the included trials, especially in the control groups. Introduction Adjuvants are substances that do not confer immunity on their own [1]. However, when added to immunogens, they facilitate, improve and maintain the immune responses against the immunogens [2,3]. Therefore, adjuvants provide a rational strategy to improve the efficacy of vaccines, especially in the case of weak immunogens and/or vaccines intended for individuals with weakened immune systems (e.g. newborns, the elderly or immune-compromised persons). Furthermore, adjuvants allow for dose sparing of vaccine antigen and helps in reducing the cost of vaccination programs [4,5]. A group of immunoenhancers of great interest is saponins, whose strong adjuvant activity was first described in 1930s [6]. Saponins are natural triterpenoid or steroid glycosides, which can be extracted from the bark of a South American tree Quillaja saponaria (Soapbark) [7,8]. QS-21 is one of the most potent and the most extensively studied saponin adjuvants in numerous studies including prophylactic and therapeutic vaccines for both animals and humans [9,10]. QS-21 has been shown to be an effective immunological adjuvant for human vaccines with a wide variety of antigens and to have a relatively low toxicity in preclinical studies in mice [11,12]. 
It stimulates both antibody and cellular immune responses composed of both Th1 and Th2 immunity. The cellular immune stimulating capacity of QS-21 is especially important for developing vaccines against cancers and intracellular pathogens [13]. A number of vaccine trials have been performed using QS-21 as adjuvant, initially for cancer vaccines (i.e. melanoma, breast and prostate cancer) and, subsequently, for vaccines against Alzheimer's disease and infectious diseases, including human immunodeficiency virus (HIV)-1, influenza, herpes simplex virus (HSV), malaria and hepatitis B diseases [9,12]. However, the natural saponin QS-21 has inherent disadvantages such as chemical instability, limited supply, difficult and low-yielding purification, and dose-limiting toxicity, which prevent it from wider use [14]. Importantly, QS-21 could cause hemolysis in vitro and its use in vivo has been observed with side effects [7]. Saponins have been shown to interact with cholesterol, and might form pores in the lipid bilayer of cell membranes. When such interaction happens to erythrocytes, hemolysis could occur [15]. In order to reduce saponin-related toxicity, the formulation of immunostimulating complex (ISCOM) was developed by Morein et al. in 1984 [16]. ISCOMs are open cage-like 40-nm particulate structures, which are formed spontaneously if cholesterol, phospholipids, saponin and viral envelope proteins are mixed together [17,18]. The formulation retains the adjuvant activity of the saponin with an increased stability, when compared to QS-21. Furthermore, the concern of hemolysis is solved by eliminating the possibility of the saponin to interact with cell membranes [19,20]. Nevertheless, the types of antigens that can be incorporated into ISCOM are restricted, and the incorporation process is difficult to control [4]. Due to these technical problems and the fact that antigen incorporation is not necessary to achieve a potent immune stimulation, matrix formulations such as ISCOMATRIX and Matrix-M™ were developed. These matrix formulations contain the same components and have the same structure as the ISCOM but without the incorporated antigen [19,21]. ISCOMATRIX usually contains Quil A or more purified forms of saponins, including QS-21, ISCOPREP™ 703 and, more recently, ISCOPREP [22]. Matrix-M™ is a combination of two individually formed matrix particles from different purified fractions of Quillaia saponins, namely Matrix-A™ and Matrix-C™ [23]. The former fraction has a higher adjuvant activity while the latter fraction has a lower adjuvant activity but a high tolerance [19]. Both ISCOMATRIX and Matrix-M™ adjuvanted vaccines have been tested in animal models and more recently in human clinical trials [19,22,23]. Vaccines adjuvanted with either ISCO-MATRIX or Matrix-M™ have been shown to induce strong antibody and T-cell responses and to be well tolerated in both pre-clinical and clinical studies [24,25]. ISCOMATRIX is currently under evaluation in candidate vaccines against hepatitis C virus (HCV) [26], influenza [27] and cancer [28][29][30]. Matrix-M™ is currently being investigated in vaccines for influenza, HSV type 2 and malaria [23,31]. Current applications of ISCOMs include the development of influenza vaccines for humans [32]. The benefits from adjuvant incorporation into any vaccine formulation have to be balanced with the risk of adverse events (AEs) [2]. 
The purpose of this meta-analysis is to evaluate the safety and tolerability of QS-21 and the improved saponin-based adjuvants such as ISCOMA-TRIX. Here we focus on single adjuvant formulations (QS-21, ISCOM, ISCOMATRIX and Matrix-M™) rather than combinations of adjuvants. Therefore, studies that used adjuvant systems, such as AS01, AS02 and AS04 developed by GlaxoSmithKline, were not included in the analysis. Methods We conducted a meta-analysis according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [33]. The protocol for the study was not published online. Literature search strategy A systematic search of literature was performed using the electronic databases of MEDLINE (Ovid), EMBASE and Cochrane Central register of Controlled Trials (CENTRAL). The clinical trial register (clinicaltrials.gov) was searched for unpublished trials. To define the studies of interest in these databases, the following keywords were used: "QS-21", "ISCOMs", "ISCOMA-TRIX", "Matrix-M", "randomized controlled trial" and "clinical trial". Details of the search strategy are provided in the supporting information (S1 Appendix). References were imported to RefWorks where duplicate entries were removed. Furthermore, the literature search was complemented by manual search of the reference lists of all identified studies and reviews for additional studies. Study selection References were evaluated using the pre-defined inclusion criteria: (1) randomized controlled trials (RCTs) on vaccines with saponin adjuvants (QS-21, ISCOM, ISCOMATRIX or Matrix-M™); (2) which included a control group (i.e., individuals immunized with saline buffer, adjuvant alone, antigen alone or adjuvanted with a licensed adjuvant); and (3) reporting information regarding safety and/or tolerability. The inclusion of only RCT studies was considered necessary to avoid the possible selection and reporting biases that may arise from observational studies. Two independent reviewers (EB, ED) performed primary evaluation of the retrieved articles for relevance based on the title and abstract. Disagreements were discussed with a third investigator (HL) until consensus was achieved. Potentially eligible publications were reviewed as full text. The acronym PICOS (patients, interventions, comparator (control) group, outcomes and study design) was used to assess if the references fully complied with the inclusion criteria. In order to lower the between-study heterogeneity and due to the fact that there were not enough eligible RCTs in healthy volunteers to be included for the meta-analysis, we limited our study selection to RCTs that recruited adult (18 years and older) non-healthy subjects. References for which full-text could not be acquired electronically or were reported not in English language were excluded. Data extraction Two independent reviewers (EB and ED) identified potentially relevant articles and collected the following data: the first author's last name, the year of publication, clinicaltrials.gov identifier (if applicable), study design, total number of participants, age range, gender, disease background, study arms with number of vaccinated participants in each arm, doses of adjuvants used for the preparation of vaccines, immunization route and number of injections. The following safety outcomes were identified from the included studies and considered for the meta-analysis: serious, systemic and local AEs. 
The commonly reported systemic AEs across the selected studies included headache, fatigue, insomnia, pyrexia, myalgia, nausea, diarrhea, dizziness, anxiety and back pain. The local AEs included injection site pain, redness, erythema and swelling. Evaluation of study quality Following the Cochrane guidelines for systematic reviews of interventions [34], two independent reviewers (EB, ED) assessed the quality of individual studies included in the meta-analysis. The Jadad scale for reporting RCTs, which summarizes the methodological quality of a study in an overall score, was employed. In brief, the Jadad scale evaluates three items: randomization (up to two points are given), double blinding (up to two points are given) and report of withdrawals and dropouts (up to one point is given). An overall score between zero and five is assigned. A score of three and above is commonly regarded as the reference point for adequate trial quality [35]. Studies were not to be excluded on the basis of this assessment, but their quality scores were taken into account when describing results. Data analysis To evaluate the safety and tolerability of saponin-adjuvanted vaccines, the dichotomous data on the number of subjects experiencing a systemic or local AE in the saponin-adjuvanted study vaccine group and the placebo group were extracted from each study, with subsequent determination of the risk ratios (RR) and their 95% confidence intervals (CI). Of note, within each study we pooled all subjects that received adjuvanted vaccine, regardless of the concentration of the adjuvant and antigen in the vaccine formulation. We combined data statistically using a random-effects (Mantel-Haenszel) model because of differences among the studies in, e.g., vaccine formulation, adjuvant dose and the disease background of subjects. Chi² and I² statistics were used to assess the heterogeneity among the included studies. Values of I² can be interpreted as low (25-50%), moderate (50-75%), and high (75% and greater) levels of heterogeneity [36]. Meta-analyses were performed using Review Manager (RevMan 5.3, Cochrane Collaboration). Results were considered to be statistically significant at a p value of <0.05. In addition to the meta-analysis, descriptive reports on serious adverse events (SAEs) and treatment discontinuations were given. Dealing with missing data Our analysis relies solely on the existing data. Assessment of reporting biases Due to the limited number of studies available for meta-analysis, assessment of publication bias was not applicable; the review is therefore subject to publication bias. Search results A total of 813 references were identified from the electronic databases in the search performed on 03-04.03.2016 (Fig 1). An additional 7 references were identified by manual search. After removing duplicate entries (151), 669 references were evaluated for inclusion based on the title and/or abstract. As a result, 112 potentially relevant articles were included in the next stage for full-text evaluation. Of the 112 articles, full text was unavailable for 13 studies, 3 references were reviews, and 1 reference was a completed clinical trial with no study results reported. Characteristics of the study population, interventions, control groups, the evaluated outcomes and/or design of the study (PICOS) did not meet the inclusion criteria in 81 publications. Most of these studies did not include a control group, i.e. all enrolled subjects received the saponin-adjuvanted vaccines.
One meta-analysis [37] and four pooled analyses [22,38-40] were excluded since the original safety data of the reported studies were not retrievable. Ultimately, a total of nine RCTs fulfilled all inclusion criteria and were selected for the meta-analysis [41-49]. Study characteristics The main characteristics of the selected RCTs are summarized in Table 1. Among the nine studies, six used QS-21 and three used ISCOMATRIX as vaccine adjuvant. No RCTs on ISCOM or Matrix-M™ adjuvanted vaccines matched the inclusion criteria. Briefly, the trials on ISCOM (one trial) and Matrix-M™ (one trial [50], two reports [51,52]) adjuvanted vaccines were performed in healthy volunteers and/or did not report safety data. The six trials on QS-21-adjuvanted vaccines enrolled a total of 755 individuals, of whom 510 were in the treatment group (i.e. subjects received antigen with adjuvant) and 245 in the control groups (i.e. subjects received placebo, antigen alone or adjuvant alone). A total of 152 non-healthy subjects were recruited for the three RCTs on ISCOMATRIX-adjuvanted vaccines, with 98 and 54 in the treatment and control groups, respectively. The selected trials for both adjuvants recruited adult non-healthy subjects. Studies that involved healthy volunteers could not be included in the meta-analysis due to the limited number of identified studies that fulfilled the pre-defined inclusion criteria. The age of the enrolled subjects in the nine studies varied from 19 to 85 years, and 44.9% of subjects were male. All studies reported the number of subjects experiencing a specific AE. The studies of Anderson et al. [47] and Frazer et al. [48] used a seven-day diary card to record specific local and systemic AEs. In the former study, unsolicited AEs could also be reported on a separate 30-day diary card. In the study of Frazer et al. [48], an additional home visit was conducted with each study subject at the end of the follow-up period. The studies described by Gilman et al. [41] and Wald et al. [42] did not mention the method of AE reporting; however, they used physical examinations and evaluations of clinical and laboratory parameters after each vaccination. In the Sharp&Corp study [49], the safety data were collected up to 4 years after the first dose of vaccine by systematic assessment (not further specified). During the Pfizer studies NCT00479557 [43] and NCT00498602 [44], AEs were reported throughout 110 weeks, including a 6-week screening period, 52 weeks of dosing and 54 weeks of follow-up after the last dose. The studies NCT00752232 [45] and NCT01227564 [46] recorded AEs from day 1 throughout the trial (24 months and 104 weeks, respectively). All Pfizer studies used non-systematic assessment of AEs. Furthermore, the trials NCT00479557, NCT00498602, NCT01227564 and Sharp&Corp (but not NCT00752232) used a 5% frequency threshold for reporting AEs (not including SAEs). A threshold of 5% indicates that only AEs with a frequency greater than 5% within at least one arm were reported. Study quality The methodological quality of the included RCTs was satisfactory (Table 2), except for the study from Wald et al. [42]. According to the Jadad scale, eight out of the nine studies (88.8%) had a score of 3 or 4. The study published by Wald et al. [42] had a score of 2 because this RCT was single-blinded and there was insufficient information on the randomization method. (Footnotes to the Jadad scoring in Table 2: a study receives a score of 1 for "yes" and 0 for "no"; for the description of the randomization or blinding method, a study receives a score of 0 if no description is given, 1 if the method is described and appropriate, and -1 if the method is described but inappropriate; the word "double-blind" was not used by the authors of one study, but based on the described blinding of the investigator, investigational site staff, and participants, one point was given for "described as double-blind".) Systemic adverse events.
The most frequent systemic AEs observed across the six studies included headache, fatigue, insomnia, pyrexia, nausea, diarrhea, dizziness, anxiety and back pain. The results of the meta-analysis demonstrated that, out of the nine systemic AEs selected for the analysis, only cases of diarrhea were significantly more frequent in non-healthy subjects receiving QS-21-adjuvanted vaccines than in those receiving placebo (pooled RR 2.55, 95% CI 1.04-6.24, p = 0.04) (Fig 2). Furthermore, although the pooled RRs did not reach statistical significance, a trend towards a higher incidence of headache was observed in the QS-21-adjuvanted vaccine group (pooled RR 1.66, 95% CI 0.93-2.97, p = 0.09). Aside from the systemic AEs included in the meta-analysis, other commonly reported systemic AEs (≥ 5% of participants) were vomiting, myalgia, asthenia, upper respiratory tract infection, urinary tract infection, constipation, contusion and nasopharyngitis. Local adverse events. Regarding the local AEs, we were not able to retrieve dichotomous data from the RCT reported by Gilman et al. [41]; however, the authors mentioned that injection site reactions were observed. The meta-analysis showed that QS-21-adjuvanted vaccines caused significantly more cases of injection site pain (pooled RR 4.11, 95% CI 1.10-15.35, p = 0.04) than placebo (Fig 3). No statistically significant increase in the risk for injection site redness/erythema or injection site swelling was observed with immunization with QS-21-adjuvanted vaccines. Discontinuations due to AEs. Together with SAEs and systemic and local AEs, we were interested in the cases of discontinuation due to AEs. Wald et al. [42] reported that two subjects (5.7%, 95% CI 1.6-18.6%) dropped out from the study due to AEs that occurred in temporal relation to the QS-21-adjuvanted vaccine. One subject developed severe arthralgia, and another developed mild neck pain and neck vein distention. The number of subjects who discontinued treatment due to AEs could not be retrieved from the study of Gilman et al. [41]. However, the authors stated that the AEs leading to treatment discontinuations were more frequent among the participants receiving QS-21-adjuvanted vaccine than among those receiving placebo. The two Pfizer 2014 trials [43,44] resulted in 4 dropouts in the QS-21-adjuvanted vaccine group (2.9%, 95% CI 1.2-7.3%) due to AEs related to the study vaccine, with no cases of discontinuation in the placebo group (0%, 95% CI 0-18.4%). Furthermore, another 4 subjects discontinued the treatment in the QS-21-adjuvanted vaccine group due to AEs unrelated to the study vaccine. The AEs that caused discontinuation were not specified. No withdrawals due to AEs were reported from the Pfizer 2014(a) trial [45]. In general, these three trials showed a higher incidence of discontinuations in the QS-21-adjuvanted vaccine group compared to the placebo group. The Pfizer 2015 trial [46] resulted in one case of discontinuation due to AEs in the QS-21-adjuvanted vaccine group and no cases in the placebo group.
However, the total incidence of dropouts in Pfizer 2015 trial was twice higher in placebo group (28.6%, 95% CI 13.8-49.9%) than in QS-21-adjuvanted vaccine group (14.3%, 95% CI 6.7-27.9%). Systemic adverse events. The most commonly reported systemic AEs across the selected studies include headache, fatigue, pyrexia, nausea, myalgia and insomnia. In general, the observed systemic AEs were mild to moderate in severity and lasted for 2-3 days. The proportion of subjects reporting systemic AEs was greater in the ISCOMATRIX-adjuvanted vaccine group than in the placebo group. The meta-analysis comparing the incidence of systemic AEs between the ISCOMATRIX adjuvanted vaccine group and placebo group showed no statistically significant differences in any selected for the analysis systemic AE (Fig 4). Furthermore, pooled RRs were generally lower for ISCOMATRIX-adjuvanted vaccines than for vaccines containing QS-21. Local adverse events. The reported local AEs were usually associated with injection site pain, redness/erythema and swelling. Individual studies reported also other local AEs such as warmth, bruising, injection site hematoma and injection site pruritus. Trials of Anderson et al. [47] and Frazer et al. [48] specified that the proportion of subjects experiencing an injection site reaction (including pain, swelling, warmth, redness and bruising) or injection site pain was greater in the groups receiving ISCOMATRIX-adjuvanted vaccines than in the group of subjects receiving placebo. In general, the observed local AEs were of mild to moderate intensity. The meta-analysis showed that there was no association between the exposure to the ISCOMA-TRIX-adjuvanted vaccines and the incidence of local redness/erythema (pooled RR 1.87, 95% CI 0.76-4.61). However, the ISCOMATRIX-adjuvanted vaccines significantly increased the likelihood of experiencing the injection site pain (pooled RR 2.55, 95% CI 1.41-4.59, p = 0.002) and swelling (pooled RR 3.43, 95% CI 1.08-10.97, p = 0.04) than placebo (Fig 5). Of note, the risk for injection site pain is approximately 1,6 times less for subjects receiving ISCOMATRIXadjuvanted vaccines compared to subjects receiving QS-21-adjuvanted vaccines (pooled RR 2.55 vs. pooled RR 4.11). In contrast, the risk for injection site swelling increases two-fold in subjects receiving ISCOMATRIX-adjuvanted vaccines (pooled RR 3.43 vs. pooled RR 1.75). Saponin-adjuvanted vaccine versus placebo In order to study the general effect of saponin adjuvantation on the safety and tolerability of tested vaccines, we performed meta-analysis on the reported AEs from all nine eligible studies. In the case of frequently reported systemic AEs, we were able to combine the dichotomous data on the number of non-healthy subjects experiencing headache, fatigue, insomnia, pyrexia, myalgia, nausea, diarrhea, dizziness, anxiety and back pain after immunization with QS-21-or ISCOMATRIX-adjuvanted vaccines (Fig 6). The meta-analysis showed that there was a trend towards a higher risk of systemic AEs in patients receiving saponin-adjuvanted vaccine compared to those receiving placebo, although the difference was not statistically significant (e.g. headache: pooled RR 1.36, 95% CI 0.95-1.93, p = 0.09; diarrhea: pooled RR 2.32, 95% CI 0.99-5.45, p = 0.05). 
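The pooled RRs and CIs reported throughout come from combining per-study 2x2 tables. As a rough illustration of that calculation, the sketch below uses entirely made-up event counts and a DerSimonian-Laird inverse-variance random-effects combination of log risk ratios; the published analysis used RevMan 5.3 with a Mantel-Haenszel random-effects model, so neither the numbers nor the exact weighting scheme here match the reported results.

```python
# Illustrative random-effects pooling of risk ratios (DerSimonian-Laird), with
# hypothetical per-study counts: (events_vaccine, n_vaccine, events_placebo, n_placebo).
import numpy as np

studies = [(12, 80, 4, 40), (9, 120, 5, 60), (15, 100, 6, 50)]  # made-up data

log_rr, var = [], []
for ev, nv, ep, np_ in studies:
    # 0.5 continuity correction guards against zero cells
    a, b = ev + 0.5, nv - ev + 0.5
    c, d = ep + 0.5, np_ - ep + 0.5
    log_rr.append(np.log((a / (a + b)) / (c / (c + d))))
    var.append(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
log_rr, var = np.array(log_rr), np.array(var)

# fixed-effect weights, Cochran's Q and between-study variance tau^2
w = 1 / var
q = np.sum(w * (log_rr - np.sum(w * log_rr) / w.sum()) ** 2)
df = len(studies) - 1
tau2 = max(0.0, (q - df) / (w.sum() - np.sum(w ** 2) / w.sum()))
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# random-effects pooled estimate and 95% CI on the log scale, then back-transformed
w_re = 1 / (var + tau2)
pooled = np.sum(w_re * log_rr) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
print(f"pooled RR {np.exp(pooled):.2f}, 95% CI {lo:.2f}-{hi:.2f}, I^2 {i2:.0f}%")
```

Note that the Mantel-Haenszel weighting used by RevMan differs somewhat from this inverse-variance approach, especially with sparse data, so the sketch is meant to convey the structure of the calculation rather than to reproduce the reported values.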
When combining the data on the frequently reported local AEs from all selected trials, the performed meta-analysis confirmed that immunization of non-healthy subjects with saponinadjuvanted vaccines increased the risk for injection site pain (pooled RR 2.76, 95% CI 1.61-4.73, p = 0.0002) and injection site swelling (pooled RR 2.62, 95% CI 1.07-6.45, p = 0.04), when compared to placebo (Fig 7). Furthermore, the results showed a trend towards an increased risk for injection site redness/erythema in patients receiving saponin-adjuvanted vaccine compared to those receiving placebo, although it was not statistically significant (pooled RR 1.44, 95% CI 0.95-2.17, p = 0.08). Discussion The objective of this meta-analysis was to assess the safety and tolerability of vaccines containing QS-21 and new saponin adjuvant formulations such as ISCOM and ISCOMATRIX. References were included if they reported on a RCT of vaccines with saponin adjuvants (QS-21, ISCOM, ISCOMATRIX, or Matrix-M™) including a placebo control group and reporting information regarding safety and/or tolerability. The effect of saponin-adjuvantation was evaluated by the RR with 95% CI for systemic and local AEs. Reports on SAEs and the number of subjects who discontinued treatment due to the AEs were discussed descriptively. We identified three studies reported on ISCOMATRIX-adjuvanted vaccines and six studies on vaccines adjuvanted with QS-21. All nine studies included adult ( 18 years) non-healthy subjects. Overall, we included in the meta-analysis 98 subjects receiving ISCOMATRIX-adjuvanted vaccines, 510 SAEs were reported by five studies [41,[43][44][45][46] on QS-21-adjuvanted vaccines and one study on vaccines adjuvanted with ISCOMATRIX (Sharp&Corp [49]). However, none of the observed SAEs were considered to be related to the use of the saponin adjuvants. The majority of SAE cases described by Gilman et al. [41] were associated with encephalitis. The authors linked the addition of polysorbate-80 to the vaccine formulation with the occurrence of the SAEs. Polysorbate-80 is an emulsifier that helps to improve the product stability, and was previously shown to be involved in the development of the inflammatory reaction. According to Gilman et al. [41], the addition of polysorbate-80 may have caused a greater exposure of antigenic epitopes, which might lead to an inflammatory T-cell response. All reported cases of encephalitis occurred mainly in the antibody non-responders and are reviewed in detail by Orgogozo et al [53]. Of note, none of four Pfizer trials in subjects with Alzheimer's disease observed any cases of encephalitis. The SAEs reported by Sharp&Corp [49] were unlikely associated with the use of ISCOMATRIX as an adjuvant due to the fact that SAEs were not more frequent in the recipients of ISCOMATRIX-adjuvanted vaccine compared to those received placebo or the antigen alone. Based on the performed meta-analysis, none of the reported systemic AEs were significantly increased upon the administration of ISCOMATRIX-adjuvanted vaccines. In the case of QS-21-adjuvanted vaccines, patients experienced significantly more cases of diarrhea compared to placebo. Most of the systemic AEs observed across the included studies were of mild to moderate intensity and of short duration. 
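The pooled risk ratios reported above can be made concrete with a small numerical sketch: a log risk ratio and its standard error are computed from each trial's dichotomous counts and then combined, here with a DerSimonian-Laird random-effects model (the general approach used for heterogeneous trial settings). The counts below are hypothetical and the function names are ours; this is an illustration of the mechanics, not the authors' analysis code or data.

```python
import numpy as np
from scipy.stats import norm

def risk_ratio(events_t, n_t, events_c, n_c):
    """Log risk ratio and its standard error from a 2x2 table."""
    log_rr = np.log((events_t / n_t) / (events_c / n_c))
    se = np.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    return log_rr, se

def pool_dersimonian_laird(log_rrs, ses):
    """DerSimonian-Laird random-effects pooling of log risk ratios."""
    log_rrs, ses = np.asarray(log_rrs), np.asarray(ses)
    w = 1 / ses**2                                    # fixed-effect weights
    y_fixed = np.sum(w * log_rrs) / np.sum(w)
    q = np.sum(w * (log_rrs - y_fixed) ** 2)          # Cochran's Q
    k = len(log_rrs)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (ses**2 + tau2)                      # random-effects weights
    pooled = np.sum(w_star * log_rrs) / np.sum(w_star)
    se_pooled = np.sqrt(1 / np.sum(w_star))
    ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
    p = 2 * (1 - norm.cdf(abs(pooled / se_pooled)))
    return np.exp(pooled), ci, p

# hypothetical counts (events, N) for three trials, vaccine vs. placebo
trials = [(6, 70, 2, 35), (4, 60, 1, 30), (9, 140, 3, 70)]
stats = [risk_ratio(*t) for t in trials]
rr, ci, p = pool_dersimonian_laird([s[0] for s in stats], [s[1] for s in stats])
print(f"pooled RR {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, p = {p:.2f}")
```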
When we combined the systemic AEs from all the selected studies to evaluate the general effect of saponin adjuvantation on the safety and tolerability of the tested vaccines, we observed that none of the frequently reported systemic AEs were significantly increased upon the use of saponin-adjuvanted vaccines. In general, the relative risks of the reported systemic AEs from the pooled saponin studies are higher than those observed from the ISCOMATRIX-specific studies, but lower than those observed from the QS-21-specific studies. The performed meta-analysis further showed that both QS-21-and ISCOMATRIX-adjuvanted vaccines are associated with a higher risk for injection site pain, although the estimated risk is twice lower for ISCOMATRIX-adjuvanted vaccines. On the other hand, the risk for injection site swelling is only increased upon the use of the vaccines containing ISCOMA-TRIX-adjuvanted vaccines. Although the use of QS-21-adjuvanted vaccines also resulted in a trend towards a higher risk for injection site swelling, the risk was not statistically significant and was 1.6 times less when compared to ISCOMATRIX-adjuvanted vaccines. When we pooled the data on reported local AEs from all selected trials, the meta-analysis confirmed that the use of saponin-adjuvanted vaccines significantly increased the likelihood of experiencing both injection site pain and swelling. This might provide a possible reason for the observation that generally more treatment discontinuations due to the AEs were reported from the saponin-adjuvanted vaccine recipients than control group. The safety and tolerability profile of QS-21-and ISCOMATRIX-adjuvanted vaccines revealed by the performed meta-analysis in non-healthy subjects is similar to that observed in healthy volunteers. In healthy subjects, no vaccine related SAEs were observed receiving QS-21-or ISCOMATRIX-adjuvanted vaccines [26,39,54,55]. However, these trials conducted in healthy subjects often reported higher incidence of systemic AEs in saponin-adjuvanted vaccine study group than in placebo or an active control group. The systemic AEs reported in healthy subjects were mild to moderate in intensity, self-limiting and of short duration. Local AEs observed in healthy volunteers included local pain, redness and induration. The injection site pain was often moderate to severe and was more frequently reported by QS-21-adjuvanted vaccine recipients than those receiving placebo [54,55]. The study of Waite et al. [40] further confirmed that the presence of QS-21 in the injected formulation is associated with the injection site pain. Interestingly, McKenzie et al. [39] stated that the incidence of local and systemic AEs using ISCOMATRIX vaccines is similar to that published for other adjuvanted vaccines (e.g., AS04, aluminum containing adjuvant). The selected RCTs aimed to assess not only the safety and tolerability of the study vaccines, but also the immunogenicity. To be able to determine the added immunogenicity value of an adjuvant, we need to compare the immune responses elicited by the adjuvanted study vaccines with non-adjuvanted vaccines (i.e. antigen-adjuvant vs. antigen alone). Only one out of three trials on ISCOMATRIX-adjuvanted vaccines [49] and four out of six trials on QS-21-adjuvanted vaccines [42][43][44][45] included appropriate study groups. Moreover, the high heterogeneity in immune parameters reported by different trials, such as the immune factors analyzed (i.e. antibody responses or cellular immune responses), the assays performed (i.e. 
enzyme-linked immunosorbent assay (ELISA) or enzyme-linked immunospot (ELISPOT)), the units used (i.e. geometric mean titer or mean fold-increase of a specific antibody response) and the timeframe of the analysis restrains us from performing meta-analysis on the immunological benefit of saponin adjuvants. However, the immune boosting effect of saponin adjuvants can be confirmed by the data reported from five of the selected trials [42][43][44][45]49]. One should consider that the immunological benefit of the saponin adjuvants should be weighed against the potential risk of adverse events. The significant increase in the incidence of injection site pain and swelling upon the immunization with saponin-adjuvanted vaccines might prevent the use of such adjuvants in routine immunizations, especially in the case of prophylactic vaccines. The results of the meta-analysis should be interpreted with caution due to the several limitations. To have direct information about the safety and tolerability of adjuvants, the adjuvanted test vaccines should be compared to an active control group (immunization with the antigen alone or antigen with licensed adjuvants). Although there were five eligible RCTs [42][43][44][45]56,57] on QS-21-adjuvanted vaccines with an active control group (antigen alone), we identified only one eligible RCT on ISCOMATRIX-adjuvanted vaccine [49]. Due to the limitation, AEs reported from the saponin-adjuvant vaccine group were compared with those from the placebo group in our meta-analysis, which made the actual causes of AEs associated with the use of saponin-adjuvanted vaccines unidentifiable. In addition, the number of the included clinical trials meeting the inclusion criteria was limited. Furthermore, the number of subjects recruited to each of these trials was generally small, especially for the control groups. These contribute to the wide confidence intervals, and decrease the statistical power to detect statistically significant differences between the treatment groups. Due to the different settings of the trials included in the meta-analysis, we chose random-effects model for the meta-analysis, which further widens the confidence intervals. Last but not least, differences in the reporting method of observed AEs and the classification of AEs limit the possibility for including respectively more trials for the meta-analysis or perform meta-analysis on other reported AEs among the included studies. Conclusions No major safety concern was identified for both ISCOMATRIX-adjuvanted vaccines and vaccines containing QS-21 based on the reported SAEs. Most AEs reported by non-healthy subjects in the nine selected trials were generally mild to moderate, self-limiting and of short duration. The performed meta-analysis showed that the use of QS-21-adjuvanted vaccines resulted in a statistically significant increase in the incidence of diarrhea when compared to placebo, while no systemic AEs were found to be associated with the use of ISCOMATRIXadjuvanted vaccines. Both QS-21-and ISCOMATRIX-adjuvanted vaccines were associated with a higher incidence of injection site pain. The observed elevated risk for local pain was lower for the vaccines containing ISCOMATRIX. On the other hand, an increased incidence of injection site swelling was only observed from the use of ISCOMATRIX-adjuvanted vaccines. 
Furthermore, the pooled analysis of ISCOMATRIX- and QS-21-adjuvanted vaccines confirmed that subjects receiving a saponin-adjuvanted vaccine experienced significantly more injection site pain and swelling when compared to placebo. In addition, for both adjuvants the number of subjects who discontinued treatment was higher in the group receiving the adjuvanted vaccine than in the placebo group. Our results indicate that the use of ISCOMATRIX results in a better systemic tolerability profile when compared to the use of QS-21. However, no better local tolerance was observed for ISCOMATRIX-adjuvanted vaccines in immunized non-healthy subjects. The relatively small number of published studies, however, limited our ability to calculate robust estimates for other AEs and to draw strong conclusions on the effects of QS-21- and ISCOMATRIX-adjuvanted vaccines. Therefore, further studies are needed, particularly with properly defined and reported safety outcomes and including an active control group, to better evaluate the risks of saponin-adjuvanted vaccines.
7,256.2
2016-05-05T00:00:00.000
[ "Medicine", "Biology" ]
NEWS-BASED SOFT INFORMATION AS A CORPORATE COMPETITIVE ADVANTAGE . This study establishes a decision-making conceptual architecture that evaluates decision making units (DMUs) from numerous aspects. The architecture combines financial indicators together with a variety of data envelopment analysis (DEA) specifications to encapsulate more information to give a complete picture of a corporate’s operation. To make outcomes more accessible to non-specialists, multidimensional scaling (MDS) was performed to visualize the data. Most previous studies on forecasting model construction have relied heavily on hard information, with quite a few works taking into consideration soft information, which contains much denser and more diverse messages than hard information. To overcome this challenge, we consider two different types of soft information: supply chain influential indicator (SCI) and sentimental indicator (STI). SCI is computed by joint utilization of text mining (TM) and social network analysis (SNA), with TM identifying the corporate’s SC relationships from news articles and SNA to determining their impact on the network. STI is extracted from an accounting narrative so as to comprehensively illustrate the relationships between pervious and future performances. The analyzed outcomes are then fed into an artificial intelligence (AI)-based technique to construct the forecasting model. The introduced model, examined by real cases, is a promising alternative for performance forecasting. Introduction Domestic corporates in most industries face extreme risk and operate in a highly volatile business world due to globalization and international trade. Compared to multinationals and global enterprises, domestic corporates usually lack sufficient human capital, financial resources, and risk-absorbing capability, as well as are unable to quickly react to customer requirements, which could lead to higher risk exposure and tremendous uncertainty. Supply chain (SC) collaboration, which is an architecture for arranging interdependencies among operations, process/product designs, and sales forecasting/planning in order to come up with consensus strategic decisions among SC partners, has been widely deemed as an efficient and useful method for addressing the challenge of globalization and upgrading a nation's industrial level (Tan, Lyman, & Wisner, 2002;Wu & Chiu, 2018;Hu, Jianguo, & Tzeng, 2018). The rationale is that corporates highly involved in SC collaboration can facilitate knowledge sharing, create customer loyalty and value, shorten product waiting time, and increase profitability by utilizing the intertwined and interconnected network for the spread or exchange of dissimilar types of information among SC partners, including operational and tactical strategies, scarce resources, and opportunities (Narasimhan & Nair, 2005;Kim, 2014;Bhattacharjee & Cruz, 2015). sive decision may come from traditional performance measures focused on accounting ratio analysis (such as, ROA: return on assets and ROE: return on equity) that are too simplify, which cannot represent the whole picture of a corporate's conditions in today's highly volatile and aggressive business atmosphere. To make an overarching and robust judgment, this study prefers to approach it from the data envelopment analysis (DEA) perspective (Cinca & Molinero, 2004;Chen & Zhu, 2011;. 
The merits of DEA can be briefly summarized as follows: (1) it can handle the evaluation task by considering multiple inputs and outputs in the same time without a priori assumption (i.e., profit maximization or cost minimization); and (2) it can provide a complete and intuitive performance score for decision makers to make a judgment. However, the performance score calculated by DEA is influenced by the inclusion or exclusion of an input or an output (Parkin & Hollingsworth, 1997;Sagarra, Mar-Molinero, & Agasisti, 2017;Scalzer, Rodrigues, Macedo, & Wanke, 2018) − that is, the utilization of different inputs or outputs leads to different performance scores. Rather than employ a single DEA specification, this study chooses to go beyond a single performance score and extends to multiple DEA specifications (i.e., this study considers all the existing combination strategies). To make the outcome more realizable to non-specialists, multidimensional scaling (MDS) (one type of dimensionality reduction techniques) is conducted to visualize the essential elements of the data. Through joint utilization of multiple DEA specifications and MDS, the decision makers can appropriately discriminate the difference between superior and inferior operating performances. Petersen (2004) stated that information can typically be divided into two different types: hard information and soft information. The former refers to numerical information, including stock prices and trading volumes, while the latter refers to textual information, such as sentiment/opinions and ideas. Most previous research studies related to financial risk forecasting rely heavily on hard information (i.e., numerical ratios), especially in financial crisis prediction and credit risk prediction. However, Beattie, Mcinnes, and Fearnley (2004) indicated that numerical information comprises only 20% of all information. Merely utilizing numerical information to reach a final conclusion is not trustful and robust. Hajek, Olej, and Myskova (2014) also noted that forecasting models with only numerical ratios are not yet fully capable of explaining the relationships between previous and future financial performances. One possible reason may be from omitting essential soft information (i.e., textual information) that can be substantially extracted from text documents (Gajzler, 2010). Grounded on the work done by Huang, Zang, and Zheng (2014), they indicated that the textual information can yield incremental messages beyond a quantitative ratio by providing well-organized descriptions to clarify company's numerous aspects. Loughran and Mc-Donald (2011) also indicated that the sentiment from documents significantly correlates to a corporate's profit potential, share turnover, and earnings surprise. However, far too little works focus on establishing performance forecasting model by utilizing sentimental indicators. Improper performance forecasting could lead to an inability to recognize an unhealthy company before a financial crisis is triggered. To fill this vacancy in the literature, we extract the sentiment indicators from the most important section of annual reports, called management discussion and analysis (MD&A), and use it to construct our operating performance forecasting model. 
The reasons for choos-ing MD&A are as follows: it (1) provides a narrative disclosure of a corporate's financial statement that enhances interpretability by market participants; it (2) covers the full financial disclosure report and determines the context within which information should be further inspected; and and it (3) gives information about the variability of a corporate's earnings and cash flow (Li, 2010;Tajvidi, & Karami, 2017). Through joint utilization of hard information and soft information, we are able to achieve a much more precise and unbiased forecasted outcome. West, Dellana, and Qian (2005) also indicated that even a little improvement in forecasting performance can result in saving a considerable amount of money to corporates and market participants. The objectives of this study are summarized as follows. -The study proposes a novel decision-making architecture to forecast operating performance, which has been widely deemed as the prior stage before financial troubles. -To describe the whole picture of a corporate's operations, this study goes beyond the traditional one-input and one-output accounting ratios and further extends to a multiple-input and multiple-output assessment measure (i.e., DEA). -To yield more comprehensive information for decision makers to form their own judgments, this study extends the singular DEA specification to multiple DEA specifications (i.e., it considers all the combination strategies). -To make the outcome more assessable to decision makers, we conduct MDS (one type of dimensionality reduction techniques) to visualize and represent the main characteristics of the data. -To identify a corporate's SC relationships, we perform TM with a domain-specific word list to analyze the news articles and then establish its SC network. -To evaluate the strength of a corporate's involvement in a SC network, we conduct SNA. -To improve the model's forecasting quality, we take soft information (i.e., sentimental) into consideration. The remaining of this study is structured as follows. Section 1 presents the methodologies used. Section 2 describes the experimental results. The final section provides and discusses experimental results. Charnes, Cooper, and Rhodes (1978) introduced data envelopment analysis (DEA) in 1978 which is a popular mathematical programming method based on the frontier theory for evaluating the relative efficiencies of multiple inputs and outputs of decision-making units (DMUs). If the efficiency score is less than 1, a DMU is considered relatively inefficient (Lu, Liu, Kweh, & Wang, 2016;Lu, Kweh, Nourani, & Huang, 2016;Çalik, Yapici Pehlivan, & Kahraman, 2018;Radojicic, Savic, & Jeremic, 2018;Joulaei, Mirbolouki, & Bagherzadeh-Valami, 2019). By executing DEA, we are able to understand how efficient a unit is relative to its competition and see the relative shortages of inefficient units (Ross & Droge, 2002). Thus, this study applies DEA to determine corporate operating performance and briefly illustrates it as follows (Kritikos, 2017;Zheng, Wang, Chen, & Zhang, 2019;Nosrat, Sanei, Payan, Hosseinzadeh, & Razavyan, 2019). Data envelopment analysis: DEA Assume that there is a group of b DMUs to be measured, where each DMU has h inputs and k outputs. We present utilizing the conventional denotations in DEA by x ij = (i = 1, ..., h) and y rj = (r = 1, ..., s), which are the values of inputs and outputs of DMU j ( j = 1, ..., n), and which are known and positive. 
The mathematical formulation of the well-known DEA-CCR model is represented in Eq. (1) and can be used to assess the relative efficiency of each DMU:

$$\vartheta_o^{*} = \max_{\mu,\nu}\ \frac{\sum_{r=1}^{s}\mu_r y_{ro}}{\sum_{i=1}^{h}\nu_i x_{io}} \quad \text{subject to} \quad \frac{\sum_{r=1}^{s}\mu_r y_{rj}}{\sum_{i=1}^{h}\nu_i x_{ij}} \le 1\ (j = 1,\ldots,n), \qquad \mu_r,\nu_i \ge 0. \tag{1}$$

Here, DMU_o denotes the DMU under measurement, and $\mu_r$ and $\nu_i$ are the weights assigned to the outputs and inputs, respectively. If there is a set of positive weights for which $\vartheta_o^{*}$ reaches 1, then DMU_o is relatively efficient (that is, DMU_o lies on the efficient frontier); otherwise, it is relatively inefficient (that is, DMU_o does not lie on the efficient frontier). By implementing the mathematical transformation strategy of Charnes and Cooper (1962), the above fractional program can be converted into a linear programming task. The DEA-CCR model provides an efficiency score for each DMU, making it possible to tell the difference between efficient and inefficient DMUs. By assigning weights to each DMU's inputs and outputs and maximizing the ratio of the weighted sum of outputs to the weighted sum of inputs, we obtain the DMU's relative efficiency score. DEA-CCR models consist of two different types: input-oriented and output-oriented. An output-oriented model is selected here, because a company generally tries to use the resources at hand to fulfil its goals.
Random vector functional link networks: RVFL
With its advantage of universal approximation capability, the artificial neural network (ANN) with a back-propagation (BP) supervised learning mechanism is one of the most popular machine learning algorithms, but it has some weaknesses, such as slow convergence, extreme sensitivity to the choice of learning rate, and difficulty in escaping from local minima (Schmidt, Kraaijveld, & Duin, 1992; Forero, Cano, & Giannakis, 2010; Scardapane, Comminiello, Scarpiniti, & Uncini, 2016). To handle these challenges, a randomization-based NN, called the random vector functional-link (RVFL) network, was introduced; it assigns weights randomly and connects the input and output layers by a functional link (Pao & Takefuji, 1992; Friedman, Hastie, & Tibshirani, 2009; Georgopoulos & Hasler, 2014). Igelnik and Pao (1995) and Georgopoulos and Hasler (2014) indicated that randomly generating the weights from the input layer to the hidden layer can enhance the model's forecasting performance. Assume that the approximation of g(x) is represented as g*(x) and maps the input data $x = [x_1, x_2, \ldots, x_n]$ to a target value $Y = [y_1, y_2, \ldots, y_n]$. A mathematical representation of RVFL is

$$g^{*}(x) = a^{\top} e(x), \qquad e(x) = \big[\,x;\ \phi(F_1^{\top}x + g_1), \ldots, \phi(F_J^{\top}x + g_J)\,\big],$$

where $F_j$ is the vector of weights connecting the input to the $j$-th enhancement node, $g_j$ is the (random) bias or error term of that node, and $a$ collects the weights connected to the output. Since the weights from the input layer to the enhancement nodes and the error terms can be randomly determined within an appropriate range and kept constant during the learning procedure, the only computational task is to determine the output weights $a$, which can be realized by handling the following task:

$$\min_{a}\ \sum_{k=1}^{N} \big(a^{\top} e_k - y_k\big)^2, \qquad k = 1, 2, \ldots, N,$$

where $N$ denotes the total number of research targets, and $e_k$ expresses the vector that concatenates the initial and random features of sample $k$. In order to avoid the problem of over-fitting, we conduct the Moore-Penrose pseudoinverse. Zhang and Suganthan (2016) further indicated that ridge regression can perform a satisfactory job in handling this task. Thus, we apply ridge regression and handle the following task:

$$\hat{a} = \arg\min_{a}\ \lVert E a - Y \rVert^{2} + b\,\lVert a \rVert^{2} = \big(E^{\top}E + bI\big)^{-1} E^{\top} Y,$$

where $\hat{a}$ denotes the solution of the aforementioned task, $b$ depicts the regularization value, and the input and output matrices are $E$ and $Y$, respectively. For a more detailed illustration of RVFL, one may refer to Zhang and Suganthan (2016), Scardapane, Wang, Panella, and Uncini (2015), and Katuwal, Suganthan, and Zhang (2018).
The research target and dependent variable
Taiwan, a small, resource-scarce, densely populated island nation, has gained considerable attention due to its great influence on the global supply chain, especially in high-technology electronics products. Over 50% of the world's personal computers (PCs) are either made in Taiwan or contain essential electronic components provided by a Taiwanese company (Wu & Chiu, 2018). Moreover, the electronics manufacturing industry in Taiwan has received numerous government grants and beneficial financing incentives, turning it into a mainstay of the local economy as well as an important destination for funding from worldwide market participants. However, because manufacturing firms have shorter product lifecycles, greater revenue volatility, and higher customer turnover rates, there is an imperative need to establish a sophisticated mechanism for decision makers to understand the current status of corporate operations. Thus, we choose the top 1000 manufacturing companies in Taiwan as our sample. All company-level data were collected from the Taiwan Economic Journal (TEJ) database for the period 2015-2017. How to appropriately describe the whole picture of a corporate's operating situation is an important task. This study goes beyond a single DEA score and extends to multiple DEA specifications. By implementing a bundle of performance scores (i.e., multiple DEA specifications), we can reach a more comprehensive and robust outcome. The variables used (i.e., input and output measures) must be decided before constructing the multiple DEA specifications. Total liability (TL) and total equity (TE) are designated as input variables, and profit ratio (PR), ROA, and ROE are designated as output variables (Wang, Lu, Kweh, & Cheng, 2014; Hsu, 2019a, 2019b; Lin, Chang, & Hsu, 2019). We conduct a Pearson correlation analysis to test the representativeness of the chosen variables. The result in Table 1 shows that all the chosen variables are significantly and positively correlated; that is, no selected variable should be omitted.
Note: *** denotes p < 0.01; ** denotes p < 0.05; * denotes p < 0.1.
In order to reach a more comprehensive outcome, we measure multiple DEA specifications (i.e., two input variables and three output variables generate 21 dissimilar combinations) (see Table 2) together with financial indicators (i.e., sales growth rate: SGR; earnings per share: EPS; earnings before interest and tax: EBIT; inventory turnover rate: ITR). We perform a visualization algorithm, namely multidimensional scaling (MDS), which highlights the essential characteristics of the information hidden in the data, to make the results much more accessible to non-specialists. The basic concept of this method is to measure the proximity between pairs of samples: samples located close to each other have high proximity (Sagarra et al., 2017). By doing so, we can divide the objects into two groups: a superior and an inferior group (see Figure 1).
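The "multiple DEA specifications plus MDS" step can be sketched as follows: every non-empty input/output combination from the two inputs (TL, TE) and three outputs (PR, ROA, ROE) is enumerated, an output-oriented CCR model is solved for each specification, and the resulting per-firm score vectors are projected with MDS. The data are simulated and the helper names are ours; this is a schematic reading of the procedure under those assumptions, not the study's code.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog
from sklearn.manifold import MDS

def ccr_output_oriented(X, Y):
    """Output-oriented CCR efficiency (1 = on the frontier) for every DMU."""
    n, h = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        c = np.zeros(1 + n)
        c[0] = -1.0                                     # maximise phi
        A, b = [], []
        for i in range(h):                              # sum_j lam_j x_ij <= x_io
            A.append(np.r_[0.0, X[:, i]])
            b.append(X[o, i])
        for r in range(s):                              # phi * y_ro <= sum_j lam_j y_rj
            A.append(np.r_[Y[o, r], -Y[:, r]])
            b.append(0.0)
        res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                      bounds=[(0, None)] * (1 + n), method="highs")
        scores[o] = 1.0 / res.x[0]
    return scores

rng = np.random.default_rng(7)
inputs = {"TL": rng.uniform(1, 10, 60), "TE": rng.uniform(1, 10, 60)}
outputs = {"PR": rng.uniform(1, 10, 60), "ROA": rng.uniform(1, 10, 60),
           "ROE": rng.uniform(1, 10, 60)}

# every non-empty subset of inputs crossed with every non-empty subset of outputs: 3 * 7 = 21 specs
spec_scores = []
for ni in range(1, 3):
    for no in range(1, 4):
        for ins in combinations(inputs, ni):
            for outs in combinations(outputs, no):
                X = np.column_stack([inputs[k] for k in ins])
                Y = np.column_stack([outputs[k] for k in outs])
                spec_scores.append(ccr_output_oriented(X, Y))
scores = np.column_stack(spec_scores)                   # firms x 21 specifications

coords = MDS(n_components=2, random_state=0).fit_transform(scores)
print(scores.shape, coords.shape)                       # (60, 21) (60, 2)
```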
The independent variable
Almost 99% of financial distress cases have resulted from bad operating performance (Kamei, 1997). In other words, bad operating performance can not only be viewed as the stage preceding the outbreak of financial distress, but can also be deemed the main trigger of financial distress (Chang, Hsu, & Lin, 2018). Thus, the variables used in financial distress prediction can be designated as predictors in this study. Table 3 lists the selected variables. The resource-based view (RBV) theory supports the assumption that whether a corporate acquires a competitive edge relies heavily on the application of its bundle of physical productive resources (Wernerfelt, 1984; Barney, 2001). Grounded in this theory, corporate resources (i.e., capital, land, and human labor) are valuable, rare, distinctive, and inimitable and can be recognized as important sources of competitive edge and superior operating performance (Peteraf, 1993). However, with the development of information technology, the core element of the value-creating process has shifted from tangible assets to intangible assets. Utilizing cyberspace and web technology has become common practice for all businesses in today's modern economy (Zhang, Shi, Wang, & Fang, 2018). Corporates are empowered to contact their SC partners, customers, and clients anytime and from anywhere via the Internet (Chen & Chiang, 2011; Liu & Cruz, 2012; Huang, Ho, & Chiu, 2014). Walter, Auer, and Ritter (2006) stated that operating performance can be improved by developing strong relationships with market participants, thus facilitating knowledge sharing and enhancing customers' purchasing intention. Gensler, Volckner, Liu-Thompkins, and Wiertz (2013) also indicated that corporates are highly embedded in their SC collaboration networks, which contribute greatly to their operating performance. Unfortunately, due to the opaque nature of a corporate's business relationships and the difficulty of gathering such data, few research works in the literature depict real-life SC collaboration networks (Kim et al., 2011). To fill this vacancy in the literature, this work is grounded on prior research by Bao et al. (2008), who noted that corporates appearing in the same piece of news may well have some relationship among them. To make our SC network more concrete, we apply a domain-specific word list (i.e., cooperate, collaborate, team, supply, transport, partner, ally, procure, purchase, joint, and coordinate). Furthermore, by making use of Yahoo!Kimo's specific organizing style, we extract each firm's stock ticker information by TM and then utilize it to construct the SC collaboration networks (see Figure 2). We gathered around 10,000 news articles in total from social media (i.e., Yahoo!Kimo) during the period from 2015 to 2017. We take the degree centrality, which counts how many neighbors a node has, to construct the forecasting model.
Figure 2. The SC collaboration networks.
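The news-based supply-chain indicator described above can be sketched roughly as follows: whenever two firm tickers co-occur in an article that also contains at least one collaboration keyword, an edge is added between them, and each firm's degree (number of neighbors) is read off as its SC influential indicator. The tickers, article snippets, and function names below are invented for illustration; the real pipeline runs on the Yahoo!Kimo news corpus mentioned in the text.

```python
import itertools
import networkx as nx

KEYWORDS = {"cooperate", "collaborate", "team", "supply", "transport",
            "partner", "ally", "procure", "purchase", "joint", "coordinate"}

# each article is reduced to the tickers it mentions plus its (lower-cased) text
articles = [
    {"tickers": {"2330", "2317"}, "text": "firms agree to jointly supply new components"},
    {"tickers": {"2317", "2454"}, "text": "companies announce a purchase and partner deal"},
    {"tickers": {"2330", "2412"}, "text": "quarterly earnings reported, no deal mentioned"},
]

def build_sc_network(articles):
    """Add an edge between two tickers whenever they co-occur in an
    article that also contains at least one collaboration keyword."""
    g = nx.Graph()
    for art in articles:
        if not KEYWORDS.intersection(art["text"].split()):
            continue
        for a, b in itertools.combinations(sorted(art["tickers"]), 2):
            w = g.get_edge_data(a, b, {"weight": 0})["weight"]
            g.add_edge(a, b, weight=w + 1)
    return g

g = build_sc_network(articles)
sci = dict(g.degree())      # neighbor count used as the SC influential indicator
print(sci)                  # e.g. {'2317': 2, '2330': 1, '2454': 1}
```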
The financial ratios calculated from financial statements could hide some details about financial troubles by means of selective accounting principles and different estimation methods. Generally speaking, a changing signal of corporate operating performance is more likely to appear in a textual format (i.e., an accounting narrative or a news article) before any subtle modification shows up in financial ratios. Disclosure in the Management's Discussion and Analysis (MD&A) section has been widely considered a vital conduit of narrative information to market participants. We describe three principal objectives of the MD&A as follows (Li, 2010; Loughran & McDonald, 2011): (1) to solidify financial transparency and yield the context under which financial information should be evaluated; (2) to provide a narrative explanation of a corporate's operating status that enables market participants to better grasp the real status of its operations; and (3) to yield useful messages about the quality and variability of future cash flow or earnings. Epstein and Palepu (1999) stated that investors should rely more heavily on the MD&A than on the audited financial statements. Based on this concept, we believe that the sentiment in the MD&A section should contain essential information. Loughran and McDonald (2011) proposed a specific dictionary for the financial domain that consists of six sentiment categories. By matching the content of the MD&A with this sentiment dictionary, we can obtain the sentiment indicator and use it to construct the forecasting model.
The measurement criteria
Many measurement criteria have been introduced for forecasting tasks, with the most widely utilized criterion being overall accuracy. However, utilizing only one measurement criterion to judge the model's performance and reach a final conclusion is not a reliable and trustworthy approach (Hsu, Yeh, & Lin, 2018). Thus, two other measurement criteria are applied: the "type I error", meaning that a corporate with superior operating performance has been misclassified as an inferior one, and the "type II error", denoting that a corporate with inferior operating performance has been misclassified as a superior one. The former may result in additional investigation, while the latter may ruin a corporate's value and lead to fatal economic losses. Table 4 shows the confusion matrix.
The forecasting results
Most financial variables derived from financial statements contain some degree of error. Thus, feature selection (FS) is a necessary and inevitable pre-processing step. The purpose of FS is to find a core subset of the problem domain that appropriately represents the structure of the original features without deteriorating the model's classification accuracy. Pawlak (1982) introduced rough set theory (RST), which not only can deal with data containing imprecision, uncertainty, and vagueness, but also can identify the dependencies in the data and reduce the number of used features without requiring extra information (Bai & Sarkis, 2018). Generating the minimal reduct (i.e., the best feature subset) for RST has been proven to be an NP-hard problem (Chen, Zhu, & Xu, 2015). This study employs a new swarm intelligence approach, the fish swarm algorithm (FSA) (Li, Shao, & Qian, 2002), which has the merits of a strong capability to escape from local minima, a faster convergence rate, and an intuitive mathematical formulation, to determine the best feature subset for RST (see Figure 3). To examine the effectiveness of FS, we consider two scenarios: (1) with FS and (2) without FS. To reach a more robust outcome, we take the introduced model as a benchmark and compare it with four other models used for predicting financial troubles: the Bayesian belief network (BBN) (Kirkos, Spathis, & Manolopoulos, 2007), the adaptive neuro-fuzzy inference system (ANFIS) (Pan, 2009), the support vector machine (SVM) (Shie, Chen, & Liu, 2012), and the extreme learning machine (ELM) (Lin, 2017). To avoid results that arise merely by chance, we perform a statistical examination. The results appear in Table 5.
We can see that the model with FS can increase forecasting quality as well as lower the number of misclassification errors. The outcome is in accordance with the work done by Uysal and Gunal (2012) who indicated that FS can boost the performance of the classification model as well as eliminate the impact of dimensionality. One of the interesting finding is that a corporate's SC influential indicator (SCI) (i.e., it represents the level of a corporate's involvement in a SC collaboration relationship network) and sentimental indicator (STI) are included in the minimal reduct determined by RST. It implies that both indicators are essential and have a considerable influence on the model's classification accuracy. To test the influence of each indicator, we set up four scenarios: (1) without SCI and STI, (2) with SCI, (3) with STI, and (4) with SCI and STI. One of the nonparametric statistics, called the Friedman test, poses the advantage of easy-to-use, depicts the synthesized outcome of models in the ranking format instead of dubious averages, and is taken as a measurement criterion under several of our comparisons (Demsar, 2006). Figures 4−6 present the results. We can see that the model with SCI outperforms the model without SCI. This outcome is in line with Dyer and Nobeoka (2000), who empirically proved that a higher degree of interconnection in a SC network lowers the cost of transaction (such as monitoring, and negotiating) considerably. Gnyawali and Madhaven (2001) further indicated that a corporate's position located on SC network can be viewed as its competitive advantage and bring informational and reputational priorities. The forecasting model with STI still outperforms the model without STI. This outcome is the same with Magnusson et al. (2005), who indicated that when a corporate is expected to perform well, the sentiment of the accounting narrative document (i.e., MD&A) tends to be positive. To our knowledge, none of the current research studies utilize SCI together with STI to construct the performance forecasting model. To fill this gap, two different kinds of soft information are taken. The result shows that this model with two different kinds of soft information performs better than the model without two different kinds of soft information. West et al. (2005) also indicated that even a little improvement in preciseness can bring huge amount of future profits to market participants. Thus, the two different kinds of soft information should be taken simultaneously by decision makers to form a final judgment. Robustness test Reaching an ultimate conclusion only by relying on one pre-decided dataset is not appropriate and reliable in today's highly volatile environment. To robust our research findings, we consider another two different datasets: (1) performance rank determined by the original DEA, and (2) performance rank determined by ROA. Table 6 depicts the difference between the original DEA and multiple DEA specifications in discriminant capability. We can see that multiple DEA specifications perform better than the original DEA in discriminant capability. In other words, the performance scores determined by multiple DEA specifications have lower mean and higher variance values. This outcome is in response to Sagarra et al. (2017) who indicated that the utilization of a different set of efficiency scores (i.e., multiple DEA specifications) provides a broader set of information for discriminating and grouping the observed units as well as reaches a more robust result. 
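Earlier in this section the Friedman test is used to rank the competing scenarios; the short sketch below shows the mechanics of that ranking-based comparison with scipy, using made-up per-fold accuracy values purely for illustration.

```python
from scipy.stats import friedmanchisquare

# hypothetical accuracies of four scenarios over six evaluation folds
without_both = [0.78, 0.80, 0.77, 0.79, 0.81, 0.78]
with_sci     = [0.82, 0.83, 0.81, 0.82, 0.84, 0.83]
with_sti     = [0.81, 0.82, 0.80, 0.83, 0.82, 0.82]
with_both    = [0.86, 0.87, 0.85, 0.86, 0.88, 0.86]

stat, p = friedmanchisquare(without_both, with_sci, with_sti, with_both)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```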
Moreover, to ensure the selected performance measures are fairly representative, we take each corporate's credit rating status into consideration. A corporate's credit rating status is assessed by professional and independent agencies that aim to find out how it might be unable to meet its financial obligation and specifically rests on a complete and overarching analysis of all the risk elements of the measured objects. The utilization of credit status is widely being taken as an assessment of corporate's risk and creditworthiness. The rating status can be divided into 10 ranks, ranging from best to worst (ranks 1 to 4 express low risk; 5−6 express middle risk; 7 to 10 express high risk). Figure 7 expresses the results under two different performance measures (original DEA vs. multiple DEA specifications). We can see that a corporate with superior performance determined by multiple DEA specifications usually has a better credit rating status. In contrast, most corporates with better performance derived from original DEA still have a bad credit rating status − that is, multiple DEA specifications' discriminant ability is better than the original DEA. Tables 7−8 show the forecasting performance under two different datasets. We can see that the introduced model still reaches optimal performance under the whole assessment criteria for the two dissimilar datasets. (3) p-value P = 0.000*** P = 0.000*** P = 0.000*** Note: *denotes p < 0.1; **denotes p < 0.05; ***denotes p < 0.01. (2) p-value P = 0.000*** P = 0.000** P = 0.032** Note: *denotes p < 0.1; **denotes p < 0.05; ***denotes p < 0.01. Description of decision logics The NN-based mechanism poses outstanding generalization capability, but it encounters a severe weakness from its lack of understandability − that is, the inherent decision-making concept embedded in the NN-based mechanism are implicit and opaque. If the model's forecasted outcome cannot be realized and re-checked by users, then they would have a higher intention to not use the model as well as impede its practical applications. How to extract the inherent decision-making concepts from the NN-based mechanism and represent the concepts in an intuitive and human readable format turns out to be a crucial requirement for the acceptance of black-box models. This study treats RVFL, one type of a NN-based mechanism, as a black-box and extracts the decision rules that depict the relationship between the model's inputs and outputs. The fundamental idea is to construct an artificially labelled case where the target class label of the training dataset is substituted by RVFL's forecasted outcome. Sequentially, we feed the artificial dataset into another model with explanation capability that can learn what RVFL has learned. Decision tree (DT) has superior forecasting quality with a relatively small computational burden and can discover useful patterns as well as yield intuitive decision rules for decision makers. Thus, we utilize DT to handle the task of knowledge extraction. Figure 8 depicts the decision rules. We can realize that a corporate with superior performance normally presents a higher profit margin, suitable capital structure, efficient asset utilization, and greater involvement in the SC collaboration relationship network (Ramanathan, Ramanathan, & Bentley, 2018). From the results, the government can consider potential implications to announce future beneficial policies. 
For example, if the government wants to upgrade the nation's industrial level, then it can allocate much more resources on some specific corporates that are highly involved in their SC collaboration relationship network. The reason is that such high involvement in a SC network can facilitate better resource and knowledge sharing, decrease risk exposure and uncertainty, increase profit margin, and strengthen the firms' competitive edge. Top-level managers can take the proposed model as a roadmap to allocate valuable resources to more suitable places, modify their firm's financial leverage to minimize the cost of the capital, and react quicker to customers' requirements. Market participants also can view the extracted knowledge (i.e., decision rules) as a navigating principle to adjust their investment portfolio and financing strategies in order to fulfill the goal of profit maximization under an endurable risk level. Conclusions and future works Bad corporate operating performance is responsible for 99% of financial distressed cases, and therefore it can be viewed as the primary trigger for financial troubles. However, works on performance prediction are quite scarce compared to the well-established literature on financial distress prediction and credit risk prediction. To fill this vacancy, the study introduces a sophisticated framework that integrates multiple DEA specifications, MDS, TM, SNA, sentimental analysis, and AI technique for operating performance evaluation and forecasting. Through joint utilization of multiple DEA specifications and financial indicators, we are able to obtain a more overarching description on a corporate's operations. To make the analyzed outcome more accessible to non-specialists, we employ MDS to visualize the main characteristics of the data. By doing so, users can appropriately see which corporates belong to the superior group and which ones belong to the inferior group. Compared to multinationals and global enterprises, domestic corporates usually have limited human capital and scarce financial resources and are unlikely to exist for a long term. Greater involvement in a corporate's SC collaboration relationship network has been widely deemed as an effective manner to response to the shocks of globalization, but it is very complicated to determine a corporate's SC relationships due to their implicit and opaque nature. Through a mixture of TM and SNA, we can realize a corporate's involvement in its SC network and further examine the network's influence on its operating performance. Most previous works on financial forecasting model construction have relied heavily on hard information (i.e., financial ratios) that is collected from financial statements. However, these ratios may be contaminated to some degree by errors due to manager's discretion ability, such as dissimilar estimation approaches, and selective accounting principles. Broadly speaking, the changing signal of a corporate's financial status is more likely to appear in soft information (i.e., sentiment, opinion, news, and accounting narratives) before any subtle modifications show up in hard information. Thus, we believe that soft information (which denotes sentiment in this study) usually holds some message about future corporate operating performance, and thus we conduct sentimental analysis. We sequentially feed the analyzed result into RVFL to construct the forecasting model. 
To our knowledge, no current research has taken both SCI and STI into consideration, and hence even a fraction of improvement in the model's forecasting accuracy can transmit considerable future savings to market participants and corporates. To reach a more robust outcome, we therefore simultaneously take two indicators (i.e., SCI and STI) to establish the model for performance forecasting. The result indicates that the model with two indicators reaches optimal forecasting quality, which is in accordance with Kim (2014) and Bhattacharjee and Cruz (2015), who indicated that a corporate with a higher involvement in its SC network can assist other corporates in reacting to changes in the market, shorten product lead times, increase customer loyalty and values, and enlarge profit margins by means of utilizing the network for transmitting essential knowledge, information, opportunities, and resources. Furthermore, the result also corresponds to Loughran and McDonald (2011), who stated that the context-sensitive knowledge (i.e., sentiment/tone/opinion) provided in annual reports significantly correlates with profitability, trading volume, and performance. A lack of comprehensibility is one of the severe obstacles of a NN-based mechanism (i.e., RVFL). To overcome this obstacle, we employ DT grounded on a pedagogical structure to extract the inherent judgments from RVFL and represent them in an easy-to-use and intuitive manner in order to enhance its real-life utilization. The introduced framework, tested by real cases, is a promising alternative for performance forecasting task. Future works can consider two potential research directions. First, because RVFL is an efficient and effective learning algorithm with the property of fast learning, it has attracted considerable attentions in many research fields. In reality, the selection of network parameters for RVFL significantly influences forecasting quality. To cope with the aforementioned task, parameter selection can be converted into an optimization task and one can use a metaheuristic algorithm to solve it. In addition, the introduced model herein belongs to the class of singular models that contain only one classifier. However, no specific model can achieve the optimal forecasting outcome under all scenarios. Inspired by the idea of classifier ensemble, which aims to complement the error made by a single mechanism, the proposed model can be extended to an ensemble structure so as to increase forecasting performance. Second, feature research can use more advanced feature selection techniques to determine the essential feature subset. This is because the original RST only accepts discrete data. Data going through a discretization procedure will incur the problem of information loss. To overcome this task, fuzzy RST can be used in the future.
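The pedagogical rule-extraction step described above (relabel the training data with the black-box model's own predictions, then fit an interpretable surrogate) can be outlined as follows. A small random-feature model with a ridge readout stands in for RVFL, and a decision tree serves as the surrogate; feature names and data are synthetic, so this is a sketch of the idea rather than the study's code.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)

# synthetic "financial" features standing in for the real predictors
feature_names = ["profit_margin", "leverage", "asset_turnover", "sc_degree", "mdna_sentiment"]
X = rng.normal(size=(400, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.3 * X[:, 4] + 0.2 * rng.normal(size=400) > 0).astype(int)

# black box: random enhancement features + ridge readout (an RVFL-style model)
W = rng.uniform(-1, 1, size=(X.shape[1], 40))
b = rng.uniform(-1, 1, size=40)
E = np.hstack([X, np.tanh(X @ W + b)])
a = np.linalg.solve(E.T @ E + 0.1 * np.eye(E.shape[1]), E.T @ y)
black_box_labels = (E @ a > 0.5).astype(int)     # artificial labels = black-box predictions

# pedagogical step: the surrogate tree learns what the black box has learned
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box_labels)
print("fidelity to black box:", (surrogate.predict(X) == black_box_labels).mean().round(3))
print(export_text(surrogate, feature_names=feature_names))
```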
7,849.8
2019-11-21T00:00:00.000
[ "Business", "Computer Science", "Economics" ]
Gamma Power Is Phase-Locked to Posterior Alpha Activity Neuronal oscillations in various frequency bands have been reported in numerous studies in both humans and animals. While it is obvious that these oscillations play an important role in cognitive processing, it remains unclear how oscillations in various frequency bands interact. In this study we have investigated phase to power locking in MEG activity of healthy human subjects at rest with their eyes closed. To examine cross-frequency coupling, we have computed coherence between the time course of the power in a given frequency band and the signal itself within every channel. The time-course of the power was calculated using a sliding tapered time window followed by a Fourier transform. Our findings show that high-frequency gamma power (30–70 Hz) is phase-locked to alpha oscillations (8–13 Hz) in the ongoing MEG signals. The topography of the coupling was similar to the topography of the alpha power and was strongest over occipital areas. Interestingly, gamma activity per se was not evident in the power spectra and only became detectable when studied in relation to the alpha phase. Intracranial data from an epileptic subject confirmed these findings albeit there was slowing in both the alpha and gamma band. A tentative explanation for this phenomenon is that the visual system is inhibited during most of the alpha cycle whereas a burst of gamma activity at a specific alpha phase (e.g. at troughs) reflects a window of excitability. Introduction Neuronal oscillations in different frequency bands have been reported in multiple studies in both humans and animals. These oscillations are produced by large ensembles of neurons oscillating in synchrony and are considered to be important for neuronal computation responsible for e.g. perception, memory, and attention [1][2][3][4]. Much less is known about interactions between various frequency bands during specific cognitive tasks or simply at rest. This interaction can be carried out in several ways [5]: by amplitude correlations [6][7][8], phase synchronization (n:m coupling) [8], or phase to power locking [7,[9][10][11][12]. Also bicoherence, that has been applied to animal [13] and human data [14], revealed cross-frequency interactions between theta and gamma activity. Phase to power coupling is particularly interesting since it could be generated by a slower rhythm (e.g. theta or alpha) modulating the excitability of a network producing high frequency oscillations. In animals, phase to power interactions have been reported in the theta (5)(6)(7)(8) and gamma bands. This phenomenon has been identified in the hippocampus of anesthetized and awake rats [15,16]. In humans, similar coupling has been reported with intracranial recordings in the medial temporal lobe during successful vs. unsuccessful memory performance [9,12]. The theta phase to gamma power modulation was task dependent in various cognitive paradigms. Yet, a study applying intracranial electrodes in human neocortical areas reported that delta oscillations (0-3.5 Hz) oscillations modulated gamma power in distributed brain areas [7]. Spontaneous brain activity recorded from humans during rest is dominated by strong oscillations in the alpha band (8)(9)(10)(11)(12)(13) [17]. These oscillations are produced in posterior brain regions in which gamma activity also has been reported [18][19][20]. 
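The cross-frequency metric used here (coherence between a signal and the time course of its power in a higher band) can be sketched roughly as below: gamma-band power is estimated in a sliding tapered window and its time course is then compared against the raw signal at the alpha frequency. This is a simplified, single-channel illustration on simulated data (the actual analysis ran on MEG sensors with multitaper spectral estimation), so the parameter choices and variable names should be read as assumptions.

```python
import numpy as np
from scipy.signal import coherence, get_window

fs = 600                                   # Hz, sampling rate of the simulated channel
t = np.arange(0, 120, 1 / fs)              # two minutes of data
alpha = np.sin(2 * np.pi * 10 * t)
# gamma bursts whose amplitude depends on the alpha phase (strongest at the troughs)
gamma = 0.3 * (alpha < -0.5) * np.sin(2 * np.pi * 50 * t)
signal = alpha + gamma + 0.5 * np.random.default_rng(1).normal(size=t.size)

# time course of gamma power: sliding Hann-tapered FFT, power summed over 30-70 Hz
win_len = int(0.2 * fs)                    # 200 ms window
step = int(0.01 * fs)                      # 10 ms step -> 100 Hz "sampling rate" of the power
taper = get_window("hann", win_len)
freqs = np.fft.rfftfreq(win_len, 1 / fs)
gamma_mask = (freqs >= 30) & (freqs <= 70)

starts = np.arange(0, t.size - win_len, step)
power = np.empty(starts.size)
raw = np.empty(starts.size)
for k, s in enumerate(starts):
    seg = signal[s:s + win_len]
    power[k] = np.sum(np.abs(np.fft.rfft(seg * taper))[gamma_mask] ** 2)
    raw[k] = signal[s + win_len // 2]      # raw signal sampled at the window centres

# coherence between the gamma-power time course and the signal itself
f, coh = coherence(raw, power, fs=fs / step, nperseg=256)
print("coherence near 10 Hz:", coh[np.argmin(np.abs(f - 10))].round(2))
```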
Results
The cross-frequency measure was applied to the sensor data in each subject to investigate coherence between a low-frequency signal and the time course of the power at higher frequencies. An interaction was observed between the alpha and gamma bands. The sensors with the strongest alpha-gamma coupling were identified subject by subject. The human gamma-band activity reported in different studies varies from 30 to 150 Hz (for a review, see [2]), possibly depending on the cognitive task, the imaging method used, and/or intersubject differences. Therefore, the boundaries of the gamma band of a single subject may differ somewhat from those of the grand average. Cross-frequency coupling was significant in six subjects (four subjects: p < 0.01; two subjects: p < 0.05). Data from these subjects were subjected to further analysis. The cross-frequency representations were averaged, showing coupling between the phase of alpha (8–12 Hz) and the power of gamma activity (Fig. 1A). Note that the 10 to 20 Hz coupling is likely to be explained by the first harmonic of the alpha activity. The power spectra for those sensors were also averaged over subjects. Interestingly, while the alpha activity resulted in a strong 10 Hz peak, the gamma activity was not apparent as a peak in the spectrum (Fig. 1B). We then extracted mean cross-frequency values for all sensors for an 8–12 Hz by 30–70 Hz tile (white rectangle in Fig. 1A). The corresponding topography averaged over subjects is shown in Fig. 1C. The posterior distribution resembled the topography of the ~10 Hz alpha power (Fig. 1D). Next, we tested whether the strength of the alpha power correlated with the degree of cross-frequency coupling in all 14 subjects (Fig. 1F). This resulted in a significant correlation (Spearman r = 0.68, p < 0.01), suggesting that strong alpha power is a prerequisite for the cross-frequency coupling. Although the differences in alpha-gamma coupling and alpha power between subjects cannot be attributed to the amount of data used in the analysis, it remains hard to dissociate whether inter-individual differences in the observed effect are due to differences in signal-to-noise ratio or reflect an underlying physiological phenomenon. It is important to note, however, that the amount of data used in the present study resembles the amount of data used in a typical cognitive paradigm (100 trials of 1 s). Thus there is a realistic chance to detect cross-frequency coupling even in the absence of an extended recording. To investigate the phase relationship between gamma power and the alpha signal we calculated time-frequency representations of power with respect to epochs aligned to the alpha phase (data from one subject shown in Fig. 2). Confirming the cross-frequency estimates, this revealed a strong modulation of gamma power with respect to the alpha phase. Depending on whether left or right hemisphere sensors were considered, gamma power was strongest at alpha peaks or troughs, respectively (Fig. 2). This shows that the gamma power modulation is in phase or in anti-phase with the alpha signal in the left and the right hemisphere, respectively.
Further, based on the average of the four subjects whose data revealed this dipolar topography (data not shown), it seems more likely that gamma power coincides with peaks and troughs of alpha oscillation, rather than with its rising phase. However, we do not have the sufficient amount of data to make stronger claims about the phase lag. In the other two subjects the topography did not have a clear dipolar pattern. Figure 1. Coupling between alpha and gamma activity. A. Cross-frequency interaction in ongoing human MEG signals during eyes closed. The highlighted area indicates increased coherence between alpha activity (along the x-axis) and the power of the gamma activity (along the y-axis). B. Grand-average (black line) and standard deviation (red line) of log-transformed power. While a strong peak could be observed in the alpha band there was no detectable peak in the gamma band. C. Topography of cross-frequency coupling (the highlighted area in A). D. Topography of the logtransformed alpha power (9-11 Hz). E. Topography of the log-transformed gamma power . A to E are calculated from an average of 6 subjects. F. Correlation over 14 subjects between alpha power and the cross-frequency coupling in the gamma band. Mainly subjects with higher alpha power had significant cross-frequency interactions (shown in red). doi:10.1371/journal.pone.0003990.g001 Biophysically, the observed dipolar topography is best explained by alpha activity produced by a single source in the midline. This source will produce a dipolar field distribution (i.e. being in antiphase in the axial gradiometers with respect to left and right hemisphere). Consistent with the notion that it is one single alpha source modulating the gamma power, the gamma power will be in phase in the left and right hemisphere. The relationship between gamma power and alpha phase was also estimated from the angle of the coherency. Figure 2E shows the topography of the cosine of the coherency (i.e. the real value of coherency). This yielded a bipolar pattern over the occipital region. To support the notion that the bipolar topography is a consequence of a single neuronal source whose gamma activity is modulated by the alpha rhythm we performed a simple simulation. We used a forward model (G) to calculate the fields (F) in the sensors given a dipolar source (q) at location r: F = Gq r (t). For the forward model we used a spherical head model with a realistic position in the sensor array of CTF151 system. The source was located at the midline in 'visual cortex' (r = [25,0,1] with respect to head coordinates) and pointing in the posterior direction. This forward model was applied to the signal representing a gamma signal at f 2 = 75 Hz modulated by the phase of an alpha signal f 1 = 11 Hz (Fig. 3A). Notably, the resulting signal was asymmetric, i.e. alpha peaks were modulated differently than alpha troughs (see further). As for the measured data, the cross-frequency coupling was subsequently calculated for the sensor data. The topography of the cross-frequency coupling revealed a posterior distribution. The topography of the cosine of the coherency yielded a clear bipolar distribution (Fig 3F) which resembled that of the actual MEG data ( Fig 2E). Further, similarly to the actual MEG data, gamma power on one side of the alpha source was modulated by the peaks and on the other side of the source by the troughs of the alpha oscillation ( Fig. 3B,C,D,E). 
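A toy, signal-level version of the simulation described above (without the spherical forward model) can be written as follows: an 11 Hz alpha oscillation whose phase gates a 75 Hz gamma signal, with the gating applied asymmetrically so that peaks and troughs are modulated differently. The frequencies f1 = 11 Hz and f2 = 75 Hz are taken from the text; the particular gating function and all names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 1200
t = np.arange(0, 10, 1 / fs)
f1, f2 = 11.0, 75.0                       # alpha and gamma frequencies from the text

alpha = np.sin(2 * np.pi * f1 * t)
# asymmetric gating: the gamma amplitude follows the alpha signal but is clipped at zero,
# so gamma bursts ride on the alpha peaks while the troughs carry almost no gamma
gate = np.clip(alpha, 0, None)
gamma = 0.4 * gate * np.sin(2 * np.pi * f2 * t)
source = alpha + gamma                    # the signal fed through the forward model in the paper

# sanity check: the gamma envelope should peak near the alpha peaks (phase ~ 0)
b, a = butter(4, [60, 90], btype="band", fs=fs)
env = np.abs(hilbert(filtfilt(b, a, source)))
alpha_phase = np.angle(hilbert(alpha))
high = env > np.percentile(env, 90)
mean_phase = np.angle(np.mean(np.exp(1j * alpha_phase[high])))
print("mean alpha phase of strong gamma samples:", round(float(mean_phase), 2))
```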
Even though the bipolar topography can be explained by a single dipole in the midline, it is an oversimplification since alpha activity emerges as a thalamocortical interplay [e.g., 21]. Nevertheless, MEG is mainly sensitive to neocortical and not thalamic sources. Furthermore, the midline alpha source is likely to be a consequence of summed activity from central sources in the right and left hemisphere. To support the MEG data we obtained a dataset from an epileptic patient with electrodes implanted intracranially. These electrodes covered a larger part of occipital and posterior parietal cortex (Fig. 4A). To identify the equivalent of the alpha activity in the intracranial data we calculated the power spectra during eyes closed. We observed a strong peak at ~7 Hz in many of the electrodes. We then selected the electrodes in which the 5-10 Hz power increased more than 10% when comparing eyes closed to open (electrodes marked in Fig. 4A). The power spectra averaged over these electrodes are shown in Fig. 4B. We consider this activity the equivalent of the alpha activity, even though the frequency is lower due to the drugs and/or the pathology (see Discussion). We subjected the data from these electrodes to the cross-frequency analysis (Fig. 4C). The phase of the ~7 Hz rhythm clearly modulated power at 20-40 Hz; this modulation was stronger for eyes closed compared to eyes open. Further, to check the robustness of our method, we investigated whether the settings used in the analysis may favor or hinder the detection of the alpha-gamma coupling in the MEG signal. Fig. S1 shows that cross-frequency coupling can be detected irrespective of the number of cycles used to extract the envelope of the fast signal and the number of data points used to compute coherence.

Figure 3 (caption, panels E-I). To the right of the source, the gamma power is the strongest at the alpha troughs (E). Gamma power has therefore the same phase relationship to the left and the right of the dipole whereas the polarity of the alpha signal is flipped. F. The topography of the phase difference (cosine of coherency) between alpha cycles and the time course of the gamma power originating from the source that generates the signal (A). The phase difference has a dipolar pattern similar to that observed in the measured MEG data (comparable to Fig. 2E). G. The first hypothesized mechanism for how alpha activity modulates gamma power in the case when alpha peaks are modulated more strongly than troughs. Gamma activity only occurs when the alpha signal is sufficiently low, e.g., at the troughs. The stronger the alpha, the shorter the windows of gamma bursts. H. The second hypothesized mechanism for how alpha activity modulates gamma power when peaks and troughs of an alpha oscillation are modulated similarly. Gamma bursts occur at troughs of the alpha oscillation only when the alpha oscillation reaches a particular amplitude. I. The third hypothesized mechanism for how alpha activity modulates gamma power when peaks and troughs of an alpha oscillation are modulated similarly. Gamma bursts occur at peaks of the alpha oscillation only when the alpha oscillation reaches a particular amplitude. doi:10.1371/journal.pone.0003990.g003

Discussion

Our findings show that gamma power in humans is phase-locked to ongoing alpha oscillations. By investigating the topography of the phase relationship of gamma power to alpha phase, our findings can be explained by a single source in the posterior area.
This source is producing a gamma signal which is modulated by the phase of the alpha rhythm. To further support the findings we subjected intracranial data from an epileptic patient to the same analysis. Strong activity was seen around 7 Hz. We still consider this to be alpha activity due to the posterior distribution and the increase in power with eyes closed. Slowing of the alpha rhythm can be explained by the antiepileptic drugs and the pathology. For instance, the drug carbamazepine, which was administered to the subject, is known to reduce the alpha frequency [22,23]. There are numerous cases in which slowing of the alpha frequency is reported in epileptic patients [e.g., 24,25,26]. Based on these considerations, we will consider the 7 Hz activity in the patient the equivalent of the posterior alpha activity observed in healthy subjects. The cross-frequency analysis revealed a strong coupling between 20-40 Hz power and the phase of the 7 Hz activity. This 4-5-fold ratio corresponds to the ratio between 30-70 Hz gamma power and 8-12 Hz alpha phase observed in healthy subjects. We conclude that strong alpha-to-gamma coupling can be observed in intracranial data, albeit with slowing in both the alpha and the gamma band. Alpha oscillations have been proposed to represent functional inhibition of the human visual system [reviewed in 27]. Note that functional inhibition reflects a reduction in neuronal processing which is not necessarily the same as GABAergic inhibition. Occipital gamma activity, in turn, although present in various tasks, manifests itself particularly strongly during visual processing [2,28-30]. Although, e.g., Chorlian et al. [31] have reported modulations in gamma power phase-locked to alpha oscillations during visual stimulation, our results demonstrate that these rhythms interact even in the absence of visual input. However, unlike in MEG studies on sustained visual attention [e.g., 18], no peak in the gamma band was observed in the power spectra: the gamma activity only became detectable when studied in relation to the alpha phase. What is the functional role of alpha modulating gamma power? As has been proposed, e.g., by Klimesch et al. [27], alpha activity might provide phasic inhibition of posterior areas. There are three possibilities (Fig. 3G,H,I): (1) Gamma bursts responsible for visual processing can only occur when the alpha signal is low enough, e.g., at the troughs (Fig. 3G). Thus, the periods of gamma activity become briefer with stronger alpha. When alpha is sufficiently weak, gamma can occur during the whole cycle. This, however, requires that the alpha signal is asymmetric (Fig. 3A), i.e., that peaks are modulated more strongly than the troughs (or vice versa) [32,33]. This notion is consistent with alpha activity reflecting functional inhibition. The bouts of gamma in each alpha cycle allow for some neuronal processing in posterior areas. (2) The alpha signal fluctuates symmetrically around zero (peaks and troughs are modulated similarly) and gamma bursts occur at troughs only when the alpha oscillation reaches a particular amplitude (Fig. 3H). (3) The alpha signal fluctuates symmetrically around zero and gamma bursts occur at peaks only when the alpha oscillation reaches a particular amplitude (Fig. 3I). It is physiologically plausible that spontaneous fast oscillations are produced by a network of neurons whose excitability is modulated by a slow rhythm.
Such a mechanism is consistent with a computational model of White et al. [34] in which concurrent fast and slow oscillations in the hippocampus are generated by neuronal circuits containing GABAergic interneurons with both fast and slow synaptic kinetics. Such network dynamics have been described in a number of brain structures. It is plausible that a related mechanism can be used to account for our findings. In conclusion, our findings show that phase-to-power cross-frequency couplings can be observed in non-invasive as well as invasive recordings in humans. Further work investigating the interaction between alpha and gamma oscillations in cognitive tasks would help to elucidate the functional role of this phenomenon.

MEG data

Ongoing brain activity was recorded (sampling rate: 1200 Hz, low-pass filter: 300 Hz) using a whole-head MEG system with 151 axial gradiometers (VSM/CTF Systems, Port Coquitlam, Canada) from 14 young healthy volunteers (mean age 26.8 ± 2.4 years, 8 females). In addition to the MEG, the electrooculogram (EOG) was recorded from the supra- and infra-orbital ridge of the left eye for the subsequent artifact rejection. The study was approved by the local ethics committee (Commissie Mensgebonden Onderzoek - Regio Arnhem Nijmegen, CMO 2001/095), and written informed consent was obtained from the subjects according to the Declaration of Helsinki. Subjects were presented with tones (500 ms duration, 333 Hz). One tone indicated that a subject had to close his/her eyes; two tones indicated that the eyes should be opened. Only data from periods of eyes closed have been used in the subsequent analysis. Partial artifact rejection was performed by rejecting segments of the trials containing eye-blinks, muscle and SQUID artifacts. By this procedure smaller segments, rather than a whole trial, can be rejected. In order to ensure that segments were sufficiently long for the subsequent analysis, segments shorter than 1 s were discarded as well. On average, 95.4 ± 15 s of data underwent subsequent analysis. Independent component analysis (ICA) was used to remove heart artifacts and eye movements remaining after the artifact rejection routines [35].

Intracranial data

We obtained a data set from one subject (36 years old; female) who had been surgically implanted with subdural electrodes on the cortical surface. The patient had received the following drugs on the day of recording: Tegretol (carbamazepine) 1200 mg, Difantoine (phenytoin) 200 mg and Kefzol (antibiotic) 3000 mg. The electrodes were placed to best localize epileptogenic regions (see Fig. 4A). The iEEG signal was recorded with a 128-channel Micromed system with platinum electrodes (2.3 mm diameter) with an inter-electrode spacing of 1 cm. The signal was sampled at 512 Hz and bandpass filtered between 0.15 and 134.4 Hz. The subject was asked to open and close the eyes in alternating periods of 30 s; this was repeated 5 times. Epochs with major epileptic artifacts were removed from the data and the electrodes over the subsequently resected areas were excluded from the analysis.

Data analysis

In order to investigate cross-frequency interactions we have developed a tool calculating coherence between a low-frequency signal and the time-course of the power at higher frequencies. Let the signal {X_t} be represented by the time series X_1, X_2, …, X_N.
First, the time-course of power S_1(f_2), S_2(f_2), …, S_N(f_2) was estimated for frequency f_2 by applying a sliding tapered time-window followed by a Fourier transformation:

S_t(f_2) = | Σ_{k=1}^{K} h_k · X_{t+k} · exp(−i·2π·f_2·k·Δt) |²

Here Δt = 1/F_s, where F_s is the sampling frequency. The function h_k is a Hanning taper, K data points long, equaling the length of the sliding time window. The length of the time window decreased with frequency: K = F_s·M/f_2, where M denotes the number of cycles per time window. We chose to use M = 7 cycles (for the intracranial data we used M = 6 cycles). Next, the coherency Coh(f_1,f_2) was estimated between the signal {X_t} and the estimate of the time-course of power {S_t(f_2)} for a given frequency f_1. The coherency was calculated with respect to the two time series divided into segments of L = 2048 data points (for the intracranial data we used L = 1024 due to the lower sampling frequency):

Coh(f_1,f_2) = Σ_l X̂_l(f_1) · Ŝ_l(f_1)* / sqrt( Σ_l |X̂_l(f_1)|² · Σ_l |Ŝ_l(f_1)|² )

where X̂_l(f_1) and Ŝ_l(f_1) are the Fourier coefficients at f_1 of the l-th Hanning-tapered L-point segment of {X_t} and {S_t(f_2)}, respectively, and * denotes the complex conjugate. The coherence was the absolute value of the coherency, |Coh(f_1,f_2)|. The phase difference between the signal at f_1 and the power at f_2 is given by the angle of the coherency, arg(Coh(f_1,f_2)). This allowed us to characterize the phase-to-power cross-frequency interaction with respect to f_1 and f_2 sensor by sensor. To assess the reliability of the estimated coherence, a statistical analysis was performed by randomly shifting the time course of the signal {X_t} (by at least 3001 points) with respect to the estimated power {S_t(f_2)} and recalculating the coherence for the channel with the most pronounced effect (based on visual inspection). Repeating this 200 times yielded a distribution of coherence values. Further, a maximum coherence value between the alpha band and any of the frequencies between 2 and 100 Hz was identified in every randomization. The proportion of the randomization coherence values above the coherence value to be tested corresponded to the p-value. In order to assess the phase relationship between alpha and gamma power, we bandpass-filtered the MEG data ±4 Hz around the alpha peak frequency determined from the power spectra in each subject with an acausal FFT filter. Next, 0.6-s epochs of unfiltered data phase-aligned to the alpha cycles were extracted. This was done by identifying the peaks of the alpha cycles in the bandpass-filtered data. Time-frequency representations (TFRs) of power were calculated for each segment using Fourier transforms calculated for short sliding time windows. Power estimates were averaged across trials. We applied a Hanning taper to an adaptive time window of 6 cycles for each frequency between 2-150 Hz (ΔT = 6/f). This analysis was done using the FieldTrip toolbox (http://www.ru.nl/neuroimaging/fieldtrip).

Figure S1. Cross-frequency plots for data from one representative subject, calculated with different combinations of the number of cycles (4-7) used to extract the envelope of the fast signal and the number of data points (512-2048) used to compute coherence between the envelope of the signal and the signal itself. Cross-frequency coupling can be detected reliably for a wide range of parameter settings. Found at: doi:10.1371/journal.pone.0003990.s001 (4.47 MB TIF)
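For illustration, the phase-to-power estimation pipeline described in the Data analysis section above can be sketched in a few lines of Python. This is not the authors' implementation (the TFR analysis used the FieldTrip toolbox); the use of scipy, the convolution-based power estimate, the function names, and the simplified permutation step are assumptions.

```python
import numpy as np
from scipy.signal import csd, welch

def power_timecourse(x, fs, f2, n_cycles=7):
    """Time course of power at frequency f2, from a sliding Hanning-tapered
    window of K = fs * n_cycles / f2 samples (M = 7 cycles for MEG,
    6 for the intracranial data)."""
    K = int(round(fs * n_cycles / f2))
    k = np.arange(K)
    kernel = np.hanning(K) * np.exp(-2j * np.pi * f2 * k / fs)
    # Convolving with a tapered complex exponential gives the sliding-window
    # Fourier coefficient at f2; its squared magnitude is the power.
    coeff = np.convolve(x, kernel, mode="same")
    return np.abs(coeff) ** 2

def coherency_at(x, s, fs, f1, nperseg=2048):
    """Coherency between signal x and power time course s, evaluated at the
    Fourier bin closest to f1 (Welch segments of L = nperseg samples,
    Hanning windows)."""
    f, Pxs = csd(x, s, fs=fs, window="hann", nperseg=nperseg)
    _, Pxx = welch(x, fs=fs, window="hann", nperseg=nperseg)
    _, Pss = welch(s, fs=fs, window="hann", nperseg=nperseg)
    coh = Pxs / np.sqrt(Pxx * Pss)
    i = np.argmin(np.abs(f - f1))
    return np.abs(coh[i]), np.angle(coh[i])

def shift_permutation_p(x, s, fs, f1, coh_obs, n_perm=200, min_shift=3001, seed=0):
    """p-value from circularly shifting x (by at least min_shift samples)
    relative to the fixed power time course s, as in the 200-permutation test.
    (The paper additionally took, in each randomization, the maximum coherence
    over power frequencies between 2 and 100 Hz; that step is omitted here.)"""
    rng = np.random.default_rng(seed)
    null = np.empty(n_perm)
    for p in range(n_perm):
        shift = int(rng.integers(min_shift, x.size - min_shift))
        null[p], _ = coherency_at(np.roll(x, shift), s, fs, f1)
    return np.mean(null >= coh_obs)

# Example usage (x = one MEG sensor as a 1-D array, fs = 1200.0):
# s = power_timecourse(x, fs, f2=40.0)                  # gamma power envelope
# coh, phase = coherency_at(x, s, fs, f1=10.0)          # alpha-to-gamma coupling
# p = shift_permutation_p(x, s, fs, f1=10.0, coh_obs=coh)
```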
5,318.6
2008-12-22T00:00:00.000
[ "Physics" ]
The relationship between gait and automated recordings of individual broiler activity levels

Gait, or walking ability, is an often-measured trait in broilers. Individual gait scores are generally determined manually, which can be time-consuming and subjective. Automated methods of scoring gait are available, but are often implemented at the group level. However, there is an interest in automated methods of scoring gait at the individual level. We hypothesized that locomotor activity could serve as a proxy for gait of individual broilers. Locomotor activity of 137 group-housed broilers from four crosses was recorded from approximately 16 to 32 days old, using an ultra-wideband tracking system. These birds were divided over four trials. Individual gait scores were determined at the end of the tracking period, on a scale from 0 to 5, with higher scores representing worse gait. Given the limited number of birds, birds were subsequently categorized as having a good gait (GG; scores 0–2) or a suboptimal gait (SG; scores 3–5). Relationships between activity and gait classification were studied to determine whether individual activity has the potential to serve as a proxy for gait. When comparing GG and SG birds using robust linear regression, SG birds showed a lower 1) activity around the start of tracking (estimate = −1.33 ± 0.56, P = 0.019), 2) activity near the end of tracking (estimate = −1.63 ± 0.38, P < 0.001), and 3) average activity (estimate = −1.12 ± 0.41, P = 0.007). When taking day of tracking, trial, cross and body weight category (heavy versus light at approximately 2 wk old) into account, a tendency was still observed for SG birds having lower activity levels within lightweight birds, but not within heavyweight birds. This study provides indications for activity differences between gait classifications. However, given that there was considerable overlap in activity levels between the gait classifications, future research implementing additional activity-related variables is required to allow a more complete distinction between birds with different gait classifications.

INTRODUCTION

Broiler chickens are often kept in large groups, of several thousands of birds (de Jong et al., 2015). With these large numbers of animals, it can be very complex to observe individual behavior, health, and welfare states. Therefore, there is an increasing interest in easy-to-measure traits that are related to health or welfare states, or to specific behaviors of individual broilers.
An often-measured trait in broilers is their walking ability, or gait, in order to examine leg weakness. Leg weakness is a general term to describe multiple pathological states resulting in impaired walking ability in broilers (Butterworth, 1999). The gait of birds is often classified according to a scoring system developed by Kestin et al. (1992), consisting of 6 categories, ranging from a score of zero that represents a normal gait with no detectable abnormalities to a score of five that represents birds that are incapable of sustained walking on their feet. Side effects of genetic selection, growth rate, body conformation, exercise, stocking density, and other factors have been suggested to be involved in causing leg weakness (reviewed in Bradshaw et al., 2002). Leg weakness has had a considerable prevalence in the conventional broiler industry. In a UK survey by Knowles et al. (2008) it was reported that 27.6% of the birds represented in the survey had a gait score of 3 (i.e., obvious gait defect which affects the ability to move about (Kestin et al., 1992)) or higher at an average age of 40 d, although there was considerable variation between flocks. Leg weakness may negatively affect the birds' welfare, as there are indications that leg weakness might be painful for the affected birds and, in severe cases, birds may have difficulties competing with others for resources and may be limited in performing specific behaviors (Kestin et al., 1992), such as dustbathing or preening while standing (Vestergaard and Sanotra, 1999;Weeks et al., 2000). Furthermore, lameness can have economic consequences for farmers. For example, in some studies, associations between gait score and footpad dermatitis have been observed in broilers, for example, with high odds of footpad dermatitis becoming more severe as the gait score increases, that is, with a worse gait (Opengart et al., 2018). In the Netherlands, if a threshold of a percentage of birds showing footpad dermatitis is crossed, farmers have to temporarily reduce their stocking densities (Afsprakenkader Implementatie Vleeskuikenrichtlijn, 2009), hereby affecting the farm's economics. It has been suggested that increased locomotor activity can contribute to a lower prevalence of leg weakness (Reiter and Bessei, 2009). Furthermore, different leg health traits have been shown to be heritable (Kapell et al., 2012). For example, tibial dyschondroplasia has been estimated to have a heritability of 0.10 to 0.27 (Kapell et al., 2012). Therefore, information on gait at the level of the individual can be of great value for breeding programs. Currently, gait scores of individual birds are often determined manually and require an experienced scorer to observe individual birds and grade their walking ability, for example according to the earlier-mentioned 6-scale scoring system from Kestin et al. (1992). However, this manual scoring can be time-consuming and subjective. Therefore, automated ways of estimating, or even predicting, gait scores are desired. Several studies have tested automated ways of scoring gait or expected correlated traits, for example using image technology (e.g., Aydin et al., 2010;Dawkins et al., 2012;Aydin, 2017;Silvera et al., 2017;N€ a€ as et al., 2018;Van Hertem et al., 2018) or inertial measurement units (IMUs; e.g., in turkeys; Bouwman et al., 2020). However, the main focus appears to have been on measurements at a group-level. For example, Aydin et al. 
(2010) implemented an automatic image monitoring system to study activity levels of small groups of birds, clustered based on their manually determined gait score, and observed a relationship between the activity level and the manually determined gait score. They observed that broilers with gait scores 4 and 5 showed the lowest activity levels, although they note that more experiments are needed to assess the repeatability of these findings. Van Hertem et al. (2018) implemented a camera-based automatic animal behavior monitoring tool, to assess, among other things, bird activity levels of flocks, and observed a negative correlation between gait score and flock activity. On the other hand, some automated measurements of individual locomotion have been performed, for example using IMUs (Bouwman et al., 2020). However, although steps could be detected in turkeys with this approach, the relationship with gait score was not studied (Bouwman et al., 2020). Another approach was implemented by Aydin (2017), who manually placed single birds in a test setup with a 3D vision camera system to record the number of lying events and the latency to lie down. Although this has potential to make gait scoring more objective, it was only tested on single birds and in the current setup likely remains a time-consuming and labor-intensive method, as it still requires handling of individual birds for each observation. Therefore, there is a need for a proxy trait that can be used as an indicator for gait score that can be recorded on multiple birds while they are housed in their normal environment. The relationship between gait and the level of locomotor activity of broilers that was reported in some studies (e.g., Aydin et al., 2010;Van Hertem et al., 2018) indicates that the level of locomotor activity at group-or flock-level is correlated with gait and may even have potential as a proxy for gait scores. However, to study the relationship between gait and activity of broilers at the individual level in more detail, individual recordings of gait score and activity are required. Previous work has shown that the measurement of activity, recorded as distances moved, in broilers can be automated at the individual level (van der Sluis et al., 2019). By tracking activity of individual birds automatically, one can potentially obtain insight into the relationship between activity and gait score of individual birds while they are in a more normal, group-housed situation. If a strong relationship between activity and gait score at the individual level would be observed, activity could potentially be used as a proxy for gait, thereby making scoring of individual birds' gait more time-efficient and objective. Furthermore, information on activity levels might at the same time be informative for other reasons. For example, activity levels could serve as an indicator of illness, as ill animals often spend more time resting (Gregory, 1998). This renders the collection of activity data at the individual level a potentially fruitful investment. In this research, data on activity levels, recorded as distances moved, of individual broilers were collected using an ultra-wideband (UWB) tracking system and were studied to determine the relationship between individual locomotor activity and gait. Different aspects of individual activity were studied in relation to gait: 1) the activity level at different time points, 2) the overall average activity level, and 3) the slope of activity over time. 
Furthermore, it was studied whether gait and activity over time were related while accounting for other potentially influential factors, including for example genetic background and body weight of the birds. Ethical Statement Data were collected under control of Cobb Europe. Cobb Europe complies with the Dutch law on animal well-being. This study is not considered to be an animal experiment under the Law on Animal Experiments, as confirmed by the local Animal Welfare Body (June 20, 2018, Lelystad, the Netherlands). Location and Housing All data were collected on a broiler farm in the Netherlands. The broilers were group-housed, with feed and water provided ad libitum and wood shavings as bedding. No perches or other additional enrichments were provided. Commercial lighting and temperature schedules were used, and all birds were vaccinated according to common practice (Cobb, 2018). Ultra-Wideband Tracking System A Ubisense UWB system with Series 7000 sensors and compact tags (Ubisense Limited, Cambridge, UK) was used, in combination with TrackLab software (version 1.4, Noldus Information Technology, Wageningen, the Netherlands), to collect data on activity of broilers. The system is described in more detail in van der Sluis et al. (2019). All broilers were fitted with battery-powered UWB tags on their backs, with a size of approximately 3.8 by 3.9 cm and a weight of approximately 25 g, using elastic bands around their wing base. This system recorded the locations of the birds over time, with a frequency of one sample per bird approximately every 6.9 s, and the resulting calculated distances moved of the broilers were used as a measure of individual activity. Activity Data Collection Four consecutive UWB tracking trials (T1−T4) were performed, that is using four production rounds, and activity data were collected on a total of 150 commercial male broiler chickens from 4 different crosses. Not all crosses were present in each trial, as each trial included birds from only 2 crosses, and not all crosses were equally represented in the study (Table 1). At approximately 2wk-old, the focal birds were selected from a larger group, based on their body weight. This was done to obtain approximately equal samples of lightweight and heavyweight birds within the respective cross and trial. The birds were tracked in a pen with a size of approximately 6 m 2 in T1, T2, and T4, and in a pen with a size of approximately 8 m 2 in T3. In all trials, the pen was divided into 2 equal-sized compartments, each housing a single cross. Additional birds from the same line without UWB tags were added before the tracking started in T3 and T4 to increase the housing density to approximately 12 birds/m 2 , compared to a density of approximately 6 birds/m 2 in T1 and T2. UWB recordings were made from 00:00 to 23:30 each day and the data from 16 to 32 days old (n = 17 d) were used in this study. For T4 there were no data available before 18 d old (n = 15 days of data included for this trial) and in T3 there was a technical issue resulting in no data for 26 and 27 days old (n = 15 days of data included for this trial). Due to too much missing data (see van der Sluis et al. (2019) for details on the data filtering), death of birds and mistakes in sexing, a total sample size of 137 birds was available for analysis. Table 1 shows the weights of the birds in the different weight categories and trials. 
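To make the distance-based activity measure used below concrete, a short sketch of how distance moved per hour could be derived from the UWB position samples is given here. The actual processing and filtering followed van der Sluis et al. (2019); the pandas-based approach and all column names are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def hourly_distance(df):
    """Average distance moved (m) per hour for one bird on one day.

    df: DataFrame with columns 'time' (datetime), 'x', 'y' (positions in m),
    sampled roughly every 6.9 s by the UWB system; column names are assumed.
    """
    df = df.sort_values("time")
    dx = df["x"].diff()
    dy = df["y"].diff()
    step = np.sqrt(dx ** 2 + dy ** 2)          # distance between consecutive fixes
    hours = (df["time"].iloc[-1] - df["time"].iloc[0]).total_seconds() / 3600.0
    return step.sum() / hours                  # NaN from the first diff is skipped

# Per-bird, per-day activity table, e.g. for later regression on gait class:
# activity = (positions.groupby(["bird_id", "day"])
#                      .apply(hourly_distance)
#                      .rename("distance_per_hour")
#                      .reset_index())
```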
For these 137 birds, the average distance moved in meters per hour was calculated per day and animal, and was used as the measure of locomotor activity.

Gait Scoring

For the gait scores, the data on gait that are routinely collected on this farm were used. Individual gait was determined at 33, 34 or 35 days old, depending on the trial. The gait was determined at 34 days old in T1, at 33 days old in T2, and at 35 days old in T3 and T4. The birds were observed while walking and given a gait score by an experienced human observer. For the different trials, this was not always the same observer, as two observers scored gait during this study. However, scoring within a trial was performed by a single observer. No data on inter-observer reliability were available, but both observers were trained in the same manner, that is, by scoring gait together with an experienced observer until sufficient experience and confidence were developed to start scoring individually. The gait scoring was performed in the pen, but was combined with individual weighing of the birds. Therefore, all birds were handled immediately before gait was scored. Upon placing the birds back in the pen after weighing, their gait was assessed. It must be noted that, as the birds were handled immediately before their gait was scored, stress from the handling may have impacted their gait. However, given that all birds were handled, this potential influence on gait is assumed to be similar for all birds. For the gait scoring, the scoring system shown in Table 2 was used, which is the commonly used system at the farm where the study was conducted. Although this scoring system is not exactly the same as the commonly implemented scoring system from Kestin et al. (1992), the overall idea is similar and for comparison purposes the gait score categories from both scoring systems are assumed to represent similar gaits. The distribution of gait scores is shown in Table 3, where it is also indicated into which weight category the birds were categorized. Given the small sample sizes for some of the gait score categories, a further classification into a 'good gait' (GG) vs. a 'suboptimal gait' (SG) was made that was used in the subsequent analyses. The gait score categories 0 to 2 were classified as GG, whereas 3 and higher were classified as SG. This cut-off value was based on the general assumption that with gait score 3 and higher the welfare of the birds is potentially impaired (Kestin et al., 1992). As can be seen from Table 3, this resulted in 79 GG birds and 58 SG birds.

Table 3. Gait score distribution, shown for the two weight categories (see Table 1).

Statistics

For all statistics, R version 4.0.2 was used (R Core Team, 2020). The hourly average activity data were not normally distributed and untransformed data were used for the analyses. The slope of individual activity was calculated by means of linear regression, using the hourly average activity per day over the trial per individual. Linear regression models with sum-to-zero contrasts were implemented to study the relationship between gait classification as GG or SG and the following activity measures: 1) Activity at 18 to 20 days old, representing early activity with all trials having data available; average activity over the three days per animal and only including individuals with all three days available (i.e., no days with too many missing samples for an animal, the threshold was set at 90% of samples present within each tracking session, see van der Sluis et al. (2019) for details on data filtering), n = 131.
2) Activity at 30 to 32 days old, representing late activity; average activity over the 3 d per animal and only including individuals with all 3 d available, n = 134. 3) Overall average activity level; only including individuals with all days of the respective trial available, n = 120. 4) The slope of activity over time; all animals included regardless of some missing data, n = 137. Here, each of the activity measures was separately modeled as a linear function of the gait classification only. This was done to gain insight into whether gait classification alone can be linked to activity levels, regardless of differences in genetic background of the birds, their body weight, or the trial in which the birds were recorded. Given that there appeared to be some outliers in the data, robust linear regression models from the robustbase package (Maechler et al., 2020) were used, which are less sensitive to outliers than common linear regression models. To study how gait classification was related to activity levels while accounting for other potentially influential factors, a linear mixed-effects model with sum-to-zero contrasts was implemented, using the lme4 (Bates et al., 2015) and lmerTest (Kuznetsova et al., 2017) packages. For this analysis, a total of 2,160 observations for 137 animals were used. The fixed effects tested were day of tracking, trial, cross, gait classification, start weight category and weight change. The distribution of crosses and start weight categories across trials is indicated in Table 1. Correlated random intercepts and slopes for individual animals were included in the model as random effects. To test the fixed effects, a backward stepwise approach without interactions was used that included all these effects. The resulting terms that were left were all included in 2-way interactions, except for the interaction between cross and trial, as not all crosses were present in multiple trials. Backward selection was then again performed, and both significant effects (P < 0.05) and effects showing a tendency (P < 0.1) were kept in the model. The resulting final model was

Y_ijklmn = m + b(DT)_i + C_j + T_k + SW_l + GSC_m + (b(DT) × C)_ij + (b(DT) × T)_ik + (b(DT) × SW)_il + (SW × GSC)_lm + (1 + b(DT)_i | ID_n) + e_ijklmn

where Y_ijklmn is the average distance moved per hour, m is the overall mean, b(DT)_i is the i-th day of tracking (i = 1 to 17), C_j is the j-th cross (j = 1 to 4), T_k is the k-th trial (k = 1 to 4), SW_l is the l-th start weight category (l = light or heavy), GSC_m is the classification of gait (m = GG or SG), (b(DT) × C)_ij is the interaction between day of tracking and cross, (b(DT) × T)_ik is the interaction between day of tracking and trial, (b(DT) × SW)_il is the interaction between day of tracking and start weight category, (SW × GSC)_lm is the interaction between start weight category and the classification of gait, (1 + b(DT)_i | ID_n) is the random effect of the n-th animal's intercept and correlated slope, and e_ijklmn is the residual term. Given that the 2 crosses within a trial were housed in 2 separate compartments in the tracking pen, there was a possible influence of side of the pen. However, including side of the pen as a fixed effect did not lead to different conclusions regarding the relationship between activity and gait and side of the pen was therefore not included as a fixed effect. No obvious deviations from normality or homoscedasticity were observed upon visual inspection of the residuals of the model.
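A compact Python analogue of the two analyses just described is sketched below. The study itself used R (robustbase::lmrob for the robust fits and lme4/lmerTest for the mixed model); the statsmodels calls, the Huber M-estimation setting, the reduced fixed-effect structure, and all column names are illustrative assumptions rather than a re-implementation of the exact estimators.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df: one row per bird, with columns 'start_activity' (m/h at 18-20 d) and
# 'gait_class' ("GG"/"SG"); long_df: one row per bird x day with 'distance',
# 'day', 'trial', 'cross', 'weight_cat', 'gait_class', 'bird_id'.
# All column names are assumptions for illustration.

# 1) Robust linear regression of an activity measure on gait classification,
#    with sum-to-zero contrasts (C(..., Sum)). Note: statsmodels RLM uses
#    M-estimation (Huber), whereas robustbase::lmrob uses MM-estimation.
rob = smf.rlm("start_activity ~ C(gait_class, Sum)",
              data=df, M=sm.robust.norms.HuberT()).fit()
print(rob.summary())

# 2) Linear mixed-effects model with correlated random intercepts and slopes
#    for day of tracking within bird (cf. the (1 + b(DT)_i | ID_n) term).
#    Only a subset of the fixed-effect structure of the final model is shown.
mixed = smf.mixedlm(
    "distance ~ day * C(trial, Sum) + day * C(cross, Sum)"
    " + day * C(weight_cat, Sum) + C(weight_cat, Sum) * C(gait_class, Sum)",
    data=long_df,
    groups=long_df["bird_id"],
    re_formula="~day",
).fit()
print(mixed.summary())
```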
Reported P-values for the model estimates were obtained using the lmerTest package (Kuznetsova et al., 2017). The MuMIn package (Barton, 2020) was used to determine the R² values for the model. The ggplot2 (Wickham, 2016) and sjPlot (Lüdecke, 2020) packages were used to make the visualizations. The level of statistical significance was set at 0.05 and results that are reported in the text are rounded to two decimals.

Relationship Between Gait Classification and Activity

The start activity, as measured at 18 to 20 days old, differed between GG and SG birds, with a higher activity level for GG birds (estimate = 1.33 ± 0.56, P = 0.019; Table 4; Figure 1A). This means that, on average, GG birds moved 1.33 meters per hour more than the overall average distance recorded at 18 to 20 days old in the study, and thus 2.66 m more than SG birds. The end activity, as measured at 30 to 32 days old, also differed between GG and SG birds, again with a higher activity for GG birds (estimate = 1.63 ± 0.38, P < 0.001; Table 4; Figure 1A). The average activity was also higher for GG birds (estimate = 1.12 ± 0.41, P = 0.007; Table 4; Figure 1A). No relationship between slope of activity and gait classification was observed (Table 4; Figure 1B).

Relationship Between Gait Classification and Activity in the Presence of Other Influential Factors

To study the effect of GG versus SG on activity levels while taking other possibly influential factors into account, a linear mixed-effects model was implemented (Table 5). This model explained 57.80% of the variance when only fixed effects were included and explained 85.58% of the variance when random effects were included as well. The model showed a tendency for an interaction between gait classification and weight category (Table 5). Within lightweight birds, SG birds appeared to have a lower level of activity than GG birds (Figure 2). This difference between SG and GG birds was not observed within the heavyweight category (Figure 2). Furthermore, a decrease in activity over time was observed, as well as an effect of trial. The degree of the decrease in activity over time differed between trials, crosses, and weight categories.

DISCUSSION

In this research, it was studied whether individual levels of activity were related to gait. To this end, the relationships between the individuals' gait classification and different measures of activity levels were analyzed. Indications for relationships between gait classification and different measures of activity were observed, but gait explained little of the variation in these activity measurements, as R² ranged from 0.04 to 0.12. When taking other possibly influential factors, like day, trial, cross and weight category, into account, a larger part of the variance in activity was explained and a tendency for an interaction between gait classification and weight category was observed. In this interaction, a difference in level of activity was observed between GG and SG in lightweight birds, but not in heavyweight birds.

Relationship Between Gait Classification and Activity

In this research, a difference between GG and SG birds was observed for several activity measurements. The relationships between gait classification and start activity (18−20 days old), end activity (30−32 days old) and average activity, respectively, indicated that birds with a suboptimal gait showed a lower locomotor activity compared to birds with a good gait.

Table 4. Results of the robust linear regression models for the relationship between gait classification and 1) start activity, 2) end activity, 3) average activity, and 4) slope of activity.
This decrease in activity levels for SG birds, that is, birds with gait score 3 or higher, matches reports in the literature in which lower activity levels for birds with higher gait scores were reported (e.g., Weeks et al., 2000; Van Hertem et al., 2018). This often-reported negative relationship between activity level and gait score can have different underlying causes that are difficult to separate from each other. First, it could be the case that the gait itself resulted in the birds limiting their locomotor activity. Several studies have indicated that gait problems might be painful for birds (e.g., McGeown et al., 1999; Danbury et al., 2000) and therefore lame birds might reduce their locomotor activity. On the other hand, it has been suggested that increased locomotor activity may contribute to preventing the development of gait problems (e.g., Reiter and Bessei, 1998; Reiter and Bessei, 2009). For example, Reiter and Bessei (2009) compared 2 distances between feeders and drinkers, that is, 2 and 12 meters, in broilers. It was observed that the groups of birds with the larger distance had fewer cases of leg weakness and that the locomotor activity in this treatment was higher. Vasdal et al. (2019) performed a pilot study on broiler activity in enriched environments, including peat, bales of lucerne hay, and elevated platforms, and control environments. They observed that the birds in an enriched environment showed higher levels of several activities, for example, ground scratching and ground pecking while standing, and a tendency for a lower gait score than birds in a control environment. Kaukonen et al. (2017) also studied environmental enrichment, as a way to increase activity and thereby improve gait in broilers. They implemented perches and elevated platforms and observed, among other things, lower mean gait scores for the birds in the platform-equipped houses. It was hypothesized that the platform access increased walking to reach the platforms and enabled more versatile movement, which could have positively impacted gait (Kaukonen et al., 2017). Adding elevated platforms or perches, or potentially other types of environmental enrichment, seems a practical approach to improve activity and gait. However, it must be noted that several other studies did not observe a positive effect of perches or platforms on gait (e.g., Bailie and O'Connell, 2015; Baxter et al., 2020). Altogether, birds with higher activity levels might be less prone to gait problems. Moreover, increased activity early in the growing period has been suggested to reduce leg disorders (reviewed in Bradshaw et al., 2002). Unfortunately, in the current study it was not possible to study activity early in life, due to the weight of the tags being a limitation for smaller birds. For future work it would be interesting to look into activity in the first few days of life as well, to gain more insight into the causal relationship between activity level and gait score. No association between the slope of activity over time and gait classification was observed, suggesting that the difference in activity level between the gait classifications remained relatively constant over time, that is, from 16 to 32 days old.
In other words, based on the current data, the activity does not appear to decrease faster over time for SG birds compared to GG birds. It is important to note, however, that the slope values were approached using linear regression, which may have masked some of the nuances in the patterns over time. In a study by Weeks et al. (2000), birds with gaits ranging from gait score 0 to 3 were observed on 6 d between 39 and 49 days old. Exactly which 6 d these were was not specified further. Although not discussed in detail in their study, when comparing gait score 0 and 1 birds to gait score 2 and 3 birds, it appeared that on the first observation day, the absolute difference in the percentage of time allocated to walking was smaller compared to d 2 to 5 of observation. However, on d 6 of observation, the absolute difference again appeared to be relatively small. During these 6 recording days, the gait score 2 and 3 birds initially showed a steep decline in the percentage of time that was allocated to walking, but seemed to stabilize over the remainder of the observation period. The gait score 0 and 1 birds showed a more constant decline over these 6 recording days. This suggests that there might be a difference in the activity pattern over time, at least in the period ranging from 39 to 49 days old, which was outside the range of our study period. More research is required to clarify this relationship, preferably over the full life span of broilers and with gait recordings at different time points.

Relationship Between Gait Classification and Activity in the Presence of Other Influential Factors

In the abovementioned discussion of the relationship between gait classification and activity, other possibly influential factors were not accounted for. Research has indicated that there are relationships between activity and age of the birds, weight of the birds (Tickle et al., 2018) and possibly genetics of the birds (Bizeray et al., 2000), respectively. Therefore, in the analysis implementing a linear mixed-effects model, other factors besides gait classification were taken into account. These included time (i.e., age), trial, cross and weight category effects, as well as the interactions between them. Only the main findings related to gait will be discussed here. Results for the other factors have been reported earlier (van der Sluis et al., 2019). Overall, taking the other factors into account still resulted in a tendency for an effect of gait classification being observed, in interaction with weight category. A difference in activity between GG and SG birds appeared to not be present in heavyweight birds, only in lightweight birds (Figure 2). Earlier studies have indicated that birds with higher body weights often walk shorter distances compared to lighter birds, for example in an operant runway test (Bokkers et al., 2007). Also voluntarily, that is, when not necessarily walking for a reward, lightweight birds have been observed to walk longer distances. This was studied, for example, using weight load reduction, where the weight load on birds' legs was reduced by partially lifting the birds' weight using a suspension device (Rutten et al., 2002).

Figure 2. Linear mixed-effects model estimated average hourly distance for good gait and suboptimal gait birds, in interaction with weight category. Bars represent 95% CIs.

A possible explanation for this finding is that as body weight increases, the energetic cost of standing becomes larger than for sitting (Tickle et al., 2018).
If heavy birds already limit their activity to the minimally required level to obtain sufficient water and feed, it could be that a suboptimal gait does not decrease the activity level further. Lightweight birds, however, might show activity levels that are higher than required solely for obtaining water and feed. If lightweight birds show a suboptimal gait, this may reduce their activity levels to the level required for solely obtaining water and feed, resulting in an overall decrease in activity. The effect of gait on feeder visits was studied by Weeks et al. (2000). They compared gait score 0 to gait score 3 birds, and observed that gait score 3 broilers visited the feeder less often per day, but increased the visit duration accordingly, resulting in an overall time spent feeding that was similar to that of gait score 0 birds. However, by reducing the number of feeder visits, the distance walked would decrease as well, which could explain the finding in the current study that lightweight birds with a suboptimal gait showed lower distances moved compared to lightweight birds with a good gait. Gait Score and Consequences for Welfare In this research, gait scores of birds were assigned using a 6-scale scoring system. However, given the relatively small sample size, the different gait score categories were later on combined into GG (gait scores 0 to 2) and SG (gait scores 3 to 5) classes for analysis. In this classification of GG versus SG, the cut-off point was positioned between gait scores 2 and 3. This was based on the general assumption that the welfare of birds is potentially impaired at gait score 3 and higher (Kestin et al., 1992). However, it is debatable whether this indeed is a very clear cut-off point. Skinner-Noble and Teeter (2009) compared gait score 2 and gait score 3 birds, and observed among other things that gait score 3 birds stood less and rested more, compared to gait score 2 birds. However, they also studied for example heterophil:lymphocyte ratios as a measure of long-term physiological stress and observed no difference between the 2 groups. Overall, they conclude that there are no indications in their study that the 2 gait score groups differ in their welfare (Skinner-Noble and Teeter, 2009). These findings make it difficult to state where a potential cut-off value may truly lie in terms of welfare. Therefore, if additional research indicates a different cut-off value, it would be advisable for future research to study the relationship between activity levels and the classification GG versus SG based on this new cut-off value. Furthermore, the different gait scores that comprise each gait classification may differ from each other. For example, gait score 0 is generally described as "[..] walked normally with no detectable abnormality; it was dexterous and agile. [..]", whereas gait score 2 is generally described as "[..] had a definite and identifiable defect in its gait but the lesion did not hinder it from moving or competing for resources [..]" (Kestin et al., 1992). These 2 gait scores are both classified as GG in this study, but the birds' behavior and well-being may differ as a consequence of their gait. In our research, it was not possible to study differences between the six gait score categories, due to the limited sample size, but future research with sufficient data on animals from all gait score categories could look into whether it is possible to distinguish each of the 6 gait scores individually, based on activity recordings. 
This would allow us to assess individual birds' gait and well-being at a more detailed level. Predicting Individual Broiler Gait Using Activity Levels In this research, we studied the relationship between individual activity and gait classification. Insight into this relationship could, for example, aid in assessing gait of individual birds based on their individual level of activity, which can be recorded in an automated manner. One example of a benefit of this approach for assessing gait is that the possibly confounding effect of stress induced by handling birds, to assess their gait, could be removed. Individual data on broilers' gait could be informative for many purposes, including for broiler management and for research into the development of gait problems. Furthermore, it has been suggested that some gait problems can be alleviated by selective breeding (reviewed in Bradshaw et al., 2002), which requires data on individual broilers' gait. It has been reported that out of 3 major broiler breeding companies, at least one implements walking ability, that is, gait, as a trait subject to genetic selection and all select for leg strength (Hiemstra and ten Napel, 2013). A fast way of obtaining gait scores would therefore be beneficial. Moreover, given that it is not feasible for breeding companies to have a single-observer score for all birds, automated gait scoring using activity levels could aid in making gait scoring more objective. However, this study shows that it is difficult to predict the gait score of individual broilers based solely on the here-present activity information, as individual broilers within a gait classification were observed to show quite different activity levels. Furthermore, the observed activity levels within one gait classification showed quite a large overlap with that of the other gait classification, making it difficult to distinguish between gait classifications, and in these models the proportion of the variance in activity that was explained by gait classification was very small. When taking other influential factors into account, a tendency for an interaction between gait classification and weight category was observed. This interaction suggests that activity recordings have the potential to aid in predicting gait of individual birds, when taking other influences on activity levels into account, but that this is only feasible for lightweight birds, as heavyweight birds might already have relatively low activity levels. Overall, it remains difficult to distinguish individual birds' gait based on distances moved during the period from 16 to 32 days old. Future research could focus on a longer period of time, preferably throughout the entire production period with manual gait recordings periodically implemented, to further study the development of gait problems and the relationship with (early life) activity. Furthermore, additional variables could be studied that are potentially related to gait problems, including, for example, feeder visits (based on findings in Weeks et al., 2000), acceleration and speed of movement (Kestin et al., 1992) and use of the available area. With these additions, automated scoring of individual gait may be feasible, but this remains to be investigated. In the current setup, the birds were housed in a small pen compared to common broiler housing systems. 
This potentially resulted in relatively low recorded distances, as activity levels can, for example, be influenced by the distance between feed and water (Reiter and Bessei, 2009), which is likely to be larger in common broiler housing systems. However, in the current study, the focus was on relative activity levels and the differences in activity between GG and SG birds. Therefore, the exact distances moved were not directly of interest. However, Baxter and O'Connell (2020) implemented a UWB system for broiler tracking under commercial conditions and concluded that this was an accurate method for tracking indoor locations of broilers and that, even though absolute distances were generally overestimated, the system can be used to study differences between groups. This suggests that the approach implemented in the current study also has potential for recording activity in larger areas.

CONCLUSIONS

In this research, it was studied whether individual levels of activity were related to gait of broilers. Indications for relationships between gait classification and different measures of activity were observed, with lower activity levels for birds with a suboptimal gait, but gait explained little of the variation in activity. When taking other possibly influential factors, including day, trial, cross, and weight category into account, a larger part of the variation in activity was explained and a tendency for an interaction between gait classification and weight category was observed. In this interaction, a difference in level of activity was observed between gait classifications in lightweight birds, but not in heavyweight birds. It has to be further investigated if this is a consequence of higher body weight already limiting activity levels. Overall, the differences in activity levels of birds with different gait classifications were not very clear and therefore it remains difficult to distinguish gait classifications based on distances moved during the period from 16 to 32 days old. It is recommended for future studies to look into the relationship between gait and multiple activity-related variables in more detail, throughout the life of broilers, to assess whether automated measures of activity have potential to serve as a proxy for gait at the individual level.
9,184
2021-05-29T00:00:00.000
[ "Biology" ]
Nanotubes from the Misfit Layered Compound (SmS)1.19TaS2: Atomic Structure, Charge Transfer, and Electrical Properties

Misfit layered compounds (MLCs) MX-TX2, where M, T = metal atoms and X = S, Se, or Te, and their nanotubes are of significant interest due to their rich chemistry and unique quasi-1D structure. In particular, LnX-TX2 (Ln = rare-earth atom) constitute a relatively large family of MLCs, from which nanotubes have been synthesized. The properties of MLCs can be tuned by the chemical and structural interplay between LnX and TX2 sublayers and alloying of each of the Ln, T, and X elements. In order to engineer them to gain desirable performance, a detailed understanding of their complex structure is indispensable. MLC nanotubes are a relative newcomer and offer new opportunities. In particular, like WS2 nanotubes before, the confinement of the free carriers in these quasi-1D nanostructures and their chiral nature offer intriguing physical behavior. High-resolution transmission electron microscopy in conjunction with a focused ion beam are engaged to study SmS-TaS2 nanotubes and their cross-sections at the atomic scale. The atomic resolution images distinctly reveal that Ta is in trigonal prismatic coordination with S atoms in a hexagonal structure. Furthermore, the position of the sulfur atoms in both the SmS and the TaS2 sublattices is revealed. X-ray photoelectron spectroscopy, electron energy loss spectroscopy, and X-ray absorption spectroscopy are carried out. These analyses conclude that charge transfer from the Sm to the Ta atoms leads to filling of the Ta 5d_z² level, which is confirmed by density functional theory (DFT) calculations. Transport measurements show that the nanotubes are semimetallic with resistivities in the range of 10⁻⁴ Ω·cm at room temperature, and magnetic susceptibility measurements show a superconducting transition at 4 K.

SI text: Characterization details

Scanning electron microscopy (SEM)

SEM imaging of the nanotubes was done with a Zeiss Sigma 500 model. A minute quantity of native powder sample was picked up by a capillary tube and spread over carbon tape for the SEM analysis. Energy-dispersive X-ray spectroscopy analysis (EDS) was performed with the Bruker XFlash/60mm retractable detector installed in the Zeiss Sigma SEM. The quantification of the chemical elements is based on standard-less and self-calibrating spectrum analysis, using the ZAF matrix correction formulas. The relative abundance (yield) of the NTs was estimated by analyzing many SEM images of the product. ImageJ software [1] has been used for the analysis of the nanotubes' abundance and their size distribution. The determined abundances were based on counting the number of nanotubes and flakes in each image and dividing the number of nanotubes by the total number of nanotubes and flakes. Similarly, the diameter of the nanotubes (> 100 tubes in each case) was measured using ImageJ software by calibrating the scale in the image. The abundance of nanotubes (number of nanotubes with a given diameter in the total number of nanotubes analyzed) was plotted as a function of the diameter. While being only semiquantitative in nature, the overall yield did not vary appreciably from one batch to the other.

X-ray powder diffraction

X-ray powder diffraction (XRD) measurements were performed using a TTRAX III (Rigaku, Tokyo, Japan) theta-theta diffractometer. The set-up was equipped with a rotating copper anode X-ray tube operating at 50 kV/200 mA.
The powders were spread on a zero-background Si holder and pressed with glass to flatten the surface. A bent graphite monochromator and a scintillation detector were aligned to the diffracted X-ray beam. They were scanned in specular diffraction (θ/2θ scans) from 3-80° (2θ) with a step size of 0.02° and a scan rate of 0.5° per min in Bragg-Brentano mode with variable slits. The XRD data were analyzed using JADE Pro software and the PDF-4+ 2020 database (ICDD). Transmission electron microscopy, STEM-EDS and EEL Spectroscopy Transmission electron microscopy (TEM) and selected area electron diffraction (SAED) pattern analyses were performed using a JEOL JEM2100 microscope operated at 200 kV. The analysis of the TEM images, including intensity profiles along the c-axis, and of the SAED patterns was performed with Digital Micrograph 3.1.0 (Gatan) software. A double aberration-corrected Titan Themis Z microscope (Thermo Fisher Scientific (TFS) Electron Microscopy Solutions, Hillsboro, USA) equipped with a high-brightness field emission gun and a Wien-type monochromator was employed for the atomic resolution HR-STEM imaging and monochromated EEL spectroscopy at an accelerating voltage of 200 kV. HAADF-STEM images were recorded with a Fischione Model 3000 detector with a semi-convergence angle of 21.4 mrad, a probe current of 40 pA, and an inner collection angle of 70 mrad. Large angle bright-field images were taken with a TFS BF STEM detector with an outer collection angle of 18 mrad. EDS hyperspectral maps were collected with a SuperX G2 four-segment SDD detector with a probe semi-convergence angle of 21 mrad, a beam current of approximately 200 pA, a pixel dwell time of 10-20 μs and a total recording time of typically 10 minutes. Quantitative maps were analyzed with the TFS Velox software, through background subtraction and spectrum deconvolution. A correction of frame-to-frame beam/specimen drift was employed where required, using custom software, in order to refine net intensity profiles. Monochromated EEL spectra were recorded at a system energy resolution of 80 meV in Dual-EEL spectrum mode on a Gatan Quantum GIF 966ERS energy loss spectrometer (Gatan Inc., Pleasanton, USA) with an Ultrascan1000 CCD camera. The EEL spectra were recorded with a STEM probe with a semi-convergence angle of 24 mrad and a beam current of 200 pA by summing multiple 2 ms spectrum acquisitions from a spectrum image map taken over a larger field of view to distribute the electron exposure. The outer semi-collection angle of the spectrometer was 50 mrad. DigitalMicrograph (Gatan Inc., Pleasanton, USA) was used for the quantification of the chemical shift of the La M core loss from the monochromated EEL spectra. In all cases of atomic-resolution HR-STEM analyses, prior specimen cleaning steps, e.g., plasma cleaning, were avoided in order to preserve the surface structure of the nanostructures. Preparation of nanotube cross-section lamella with focused ion beam (FIB) A cross-section lamella of a (SmS)1.19TaS2 nanotube was prepared using a FIB-SEM Helios 660 Dual Beam microscope (Thermo Fisher Scientific (TFS) Electron Microscopy, Hillsboro, USA) equipped with a Ga ion source. (SmS)1.19TaS2 was dispersed in ethanol and drop-cast on a Si(100) substrate, and the sample was dried under ambient conditions. The lamella was prepared by a standard lift-out process using carbon EBID (50 nm) followed by carbon IBID (2 µm), deposited sequentially with stage tilts of ±20° to minimize gaps between the protective layer and the nanotube.
Final polishing was done at 2 kV ion beam energy. The as-prepared lamellae of nanotube cross-sections were further inspected using Talos FX 200 and Titan Themis Z double-corrected transmission electron microscopes. X-ray photoelectron spectroscopy (XPS) The powder samples were prepared in a glove box under N2 atmosphere and placed on carbon tape so as to obtain a dense yet very thin layer of grains, ideally a monolayer of grains. Sample transfer to the vacuum chamber then involved exposure to ambient air for only a fraction of a minute. A base pressure below 1·10⁻⁹ Torr was kept in the analysis chamber. The XPS measurements were performed on a Kratos AXIS Ultra DLD spectrometer, using a monochromatic Al Kα source at 15-75 W and detection pass energies of 20-80 eV. In-situ work-function measurements on the 'as received' samples were conducted under extremely low power of the X-ray source, 0.2 W. In order to eliminate charging effects, measurements under both positive and negative charging conditions were compared, yielding no observable line shifts of the major lines (yet, some of the oxidized components underwent small shifts, in the range of 70-200 meV). Consequently, for both types of samples, all line-shape analyses relevant to the discussion in this report were practically free of charging-related artefacts. Complementarily, repeated scans on given spots were conducted to identify potential beam-induced sample damage during extended exposures to the X-ray irradiation. No observable damage signatures were found. Curve fitting of the leading signals in the misfit nanotubes yields two components of 'perfect' constituents, SmS and TaS2, with stoichiometries very close to the theoretical ones, together with an additional component of partially oxidized TaS2-xOx. The latter component is associated with surface imperfections, to which XPS presents enhanced sensitivity. Representative atomic concentration ratios are given in Table S1. Note the slight deviation of S/(2Ta+Sm) from the ideal value of unity. Also, note that Sm/Ta = 1.12 is slightly lower than the ideal 1.19 value. Both of the latter deviations from perfectly stoichiometric ratios are associated with small amounts of oxidized Ta (Ta 4 ) that could be resolved within the Ta 4f spectral window. Once this is taken into account, the related concentration ratios become very close to the ideal values, as given in brackets in Table S1. X-ray absorption spectroscopy (XAS): Sample preparation and measurements Samples were mixed with cellulose binder (mixing ratio 1:3) and pressed with a hydraulic press (pressure: 2 ton) to produce 10 mm pellets suitable for analysis in transmission geometry. Mixing and handling of the powders were performed inside an Ar-filled glove box (H2O and O2 concentration < 1 ppm) to avoid possible oxidation of the samples. Before analysis, the sample pellets were placed between two Kapton foils (thickness: 50 micron) inside a special sample holder forming a closed volume to protect them from atmospheric air. XAS experiments in transmission mode were performed at the DESY P23 "In-situ and X-ray imaging" beamline. The experimental set-up consisted of entrance slits, the first X-ray detector, a rotating sample stage, and a second X-ray detector. A liquid-N2-cooled double-crystal monochromator (Si 111) was used for selecting the required energy, and X-ray mirrors with B4C or Rh coating were used for harmonic rejection, depending on the edge. Silicon avalanche photodiodes (APD) were used for measuring the intensity of the incoming and transmitted X-ray beams.
The sample pellet was mounted on the OWIS DRTM 40 rotary stage and rotated at a speed of 180°/s during the analysis. In accordance with the proposed setup, the X-ray absorption of the sample was measured at different X-ray energies in the vicinity of the Ta L3 and Sm L3 X-ray absorption edges. The X-ray absorption of the sample at a certain energy point was measured for 10 s. The energy of the incoming X-rays was also corrected by measuring the absorption spectra of pure Ta (99.99%, L3 edge) and Mn (99.99%, K edge) foils. APD detector data were corrected for dead time [2]. EXAFS spectra processing and analysis were done with the Larch [3], Athena, and Artemis [4] software. Fig. S1. (a) SEM micrographs of (SmS)1.19TaS2
2,448.6
2022-02-10T00:00:00.000
[ "Materials Science", "Chemistry", "Physics" ]
Changes in the Dielectric Properties of ex-vivo Ovine Kidney Before and After Microwave Thermal Ablation In this work, a comparison in terms of dielectric properties (i.e. relative permittivity and conductivity) between ablated and non-ablated kidney samples (N = 3) was conducted. Measurements before and after ablation (N = 54) were performed on three different tissues (cortex, outer medulla, inner medulla) of ex-vivo ovine kidneys across the frequency range of 500 MHz – 8.5 GHz. Results show that both relative permittivity and conductivity decrease after ablation because of the decrease of the water content in the tissue. In particular, the highest difference between pre-ablation and post-ablation was observed in the tissue characterised by the highest water content. Introduction Electromagnetic-based thermal ablation techniques, such as Microwave Thermal Ablation (MTA), are widely used in interventional oncology to locally remove cancerous tissue by inducing a cytotoxic temperature increase. MTA is safely adopted in the clinical treatment of solid tumours in different organs; clinical MTA procedures have been widely adopted in the treatment of solid tumours such as hepatic tumours and renal cell carcinoma in non-surgical candidates [1,2]. The increasing clinical acceptance of MTA procedures leads to the need for a thorough characterisation of the tissue changes occurring during the ablative process. In MTA, an increase in temperature (up to 120 °C in the treated tissue) is induced by the electromagnetic energy absorbed by the tissue. Complete necrosis of the tissue is achieved almost instantaneously when temperatures exceed 60 °C [1][2][3]. The rapid decrease of water content and the protein denaturation occurring in ablated tissues correspond to distinctive changes in the electromagnetic, thermal, and mechanical properties [3,4]. The interaction between the deployed electromagnetic energy and the tissue is primarily determined by the specific dielectric properties of the tissue. Accordingly, accurate information about the changes in the dielectric properties of the tissue with the temperature increase is needed to optimize the treatment outcomes and better predict the induced thermal ablation zone [5,6]. This information also helps to support the development of novel non-ionizing monitoring techniques [7]. An accurate broadband characterization of the dielectric changes occurring during thermal ablation and in the ablated tissues is not available in the literature. A number of studies have been conducted to characterize liver tissue at physiological and hyperthermic temperatures [8][9][10], whereas the dielectric properties of kidney have been extensively investigated mainly at body temperature [11], and only limited data are available at hyperthermic temperatures and only for selected frequencies [4,12]. In this contribution, we propose a preliminary investigation of the changes in dielectric properties occurring in kidney tissues due to an MTA procedure. The dielectric properties of ablated and non-ablated ex-vivo ovine kidney tissues are measured with the open-ended coaxial probe technique and compared across the frequency range 500 MHz – 8.5 GHz. This frequency range covers the frequencies of the MTA applicators used in clinical practice: 915 MHz and 2.45 GHz. It also covers recently investigated operating frequencies, such as 5.8 GHz, which allow miniaturization of the MW applicator design, enabling smaller and more focused ablation zones [13].
Materials and Methods Ovine kidneys (N = 3) were excised from sheep and were obtained the same day from a local abattoir. The samples were ablated with a fully cooled triaxial-based monopole antenna optimized to operate at 2.45 GHz [14]. The applicator was connected to a peristaltic dispensing pump (DP2000, Thermo-Fisher Scientific Inc, Waltham, Massachusetts, US) operating at 19.4 ± 1.2 °C and at a flow rate of 40 ml/min. A microwave generator (Sairem, SAS, France) was connected through a low-loss coaxial cable (50 Ω characteristic impedance) to the SMA connector of the applicator. Three ablation procedures (one ablation for each sample) were performed powering the applicator at 30 W for 1 min. The baseline temperature of the samples was 22.2 ± 0.8 °C. During the ablation procedure, the temperature was monitored via two fiber optic sensors (Neoptix Inc., Québec, CA) placed on the transversal plane of the applicator at a distance of 4 ± 0.4 mm from the antenna feed. The fiber optic sensors were used to ensure that a temperature of at least 60 °C was reached [1]. As shown in Fig. 1, three different tissues can be distinguished inside the kidney: cortex, outer medulla and inner medulla [16]. As these tissues are different in terms of histological composition and water content, relative permittivity (εr) and electrical conductivity (σ) were measured on each of the three types of tissues in the sample. Fig. 2 shows one of the samples after the MTA. We can clearly see the ablation zone and we can verify that all three types of tissue are ablated. This was confirmed for all three samples. The measurements of the dielectric properties (εr and σ) were performed using an open-ended slim-form coaxial probe (Keysight 85070E) connected directly to the vector network analyser (VNA Keysight 5063A) [17]. The measurement system uses a one-port calibration method requiring three different loads: open circuit, short circuit and deionised water. The temperature of the deionised water, measured before the calibration, was 23.9 °C. After the calibration, a 0.1 mol/L sodium chloride (NaCl) solution was used for the validation procedure; the dielectric properties acquired during the validation were then compared to the reference values reported in [19], and the percentage error is shown in Table 1. The validation procedure was repeated approximately every 30 minutes. All of the measurements were performed acquiring 101 linearly spaced frequency points within the 500 MHz – 8.5 GHz frequency range. Three consecutive dielectric properties measurements were performed on each tissue (N = 3) for each sample (N = 3) before and after the ablation procedure; in total, 54 measurements were performed. The measurements were performed following the measurement protocol for characterization of dielectric properties of tissues [20]. All experimental data and associated metadata were collected in line with the Minimum Information for Dielectric Measurements of Biological Tissues (MINDER) guidelines for reporting of dielectric data of biological tissues [21].
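As an illustration of this validation step, the following minimal Python sketch (with hypothetical variable names and placeholder reference values; the actual reference data are those reported in [19]) computes the percentage error between measured and reference dielectric properties of the 0.1 mol/L NaCl solution at each frequency point:

import numpy as np

# Frequencies and measured/reference values (placeholders for illustration only).
freq_ghz = np.linspace(0.5, 8.5, 101)                          # 101 linearly spaced points
eps_meas = np.random.default_rng(1).normal(75, 1, 101)         # measured relative permittivity
eps_ref = np.full(101, 75.0)                                   # reference permittivity (placeholder)
sigma_meas = np.random.default_rng(2).normal(1.2, 0.05, 101)   # measured conductivity, S/m
sigma_ref = np.full(101, 1.2)                                  # reference conductivity (placeholder)

# Percentage error of each quantity relative to the reference values.
err_eps = 100 * np.abs(eps_meas - eps_ref) / eps_ref
err_sigma = 100 * np.abs(sigma_meas - sigma_ref) / sigma_ref

print(f"max permittivity error: {err_eps.max():.2f} %")
print(f"max conductivity error: {err_sigma.max():.2f} %")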
The results of the measurements before the ablation are plotted with solid lines. The results of the measurements performed after the ablation are plotted with dashed lines. The results of the measurements on cortex tissue are in the first row, and the results of the measurements on inner and outer medulla tissue are plotted in the second row. All subplots are plotted with the same values on both axes for comparison. The values of the relative permittivity after the ablation are lower than before the ablation for every sample and for every tissue type (average difference in relative permittivity 5.55). The values of the conductivity of the cortex and of the outer medulla of all three samples mostly stay the same (average difference in conductivity 0.21 for the cortex and 0.43 for the outer medulla), while the conductivity of the inner medulla decreases after the ablation (average difference in conductivity 0.83). The largest decrease in both relative permittivity and conductivity after the ablation is observed in the inner medulla. The inner medulla is the site where the urine drains into the renal pelvis, i.e. the initial part of the ureter [22]. Therefore, before the ablation, relative permittivity and conductivity values higher than those of the surrounding tissues are observed in the inner medulla because of its higher water content. We can also see that the difference between tissue types in relative permittivity and conductivity is minimised after the ablation. The reduced range of properties is due to the fact that the dielectric properties of the tissues are mostly determined by the water content of the different tissue types. After the ablation, all the tissues are dehydrated and therefore the differences in tissue water content are minimised. Conclusion Dielectric properties measurements were conducted on ovine kidney tissue, distinguishing between cortex, outer medulla and inner medulla. The results suggest that both relative permittivity and conductivity decrease after the MTA treatment. We conclude that this change is mainly due to the decrease of water content in the tissues during the MTA. Indeed, the most significant change in relative permittivity and conductivity is observed for the tissue characterised by the highest water content, i.e. the inner medulla. We also observed that after the ablation all three tissue types have similar dielectric properties; the decrease in water content and the denaturation of proteins occurring during the ablation minimise the difference in the dielectric properties of the different tissue types.
2,070.6
2020-08-01T00:00:00.000
[ "Physics" ]
Robust Optimum Design of Thrust Hydrodynamic Bearings for Hard Disk Drives This paper describes a robust optimum design which combines the geometrical optimization method proposed by Hashimoto with a statistical method. Recently, 2.5″ hard disk drives (HDDs) are widely used for mobile devices such as laptops, video cameras and car navigation systems. In mobile applications, high durability against external vibrations and shocks is essential for the bearings of the HDD spindle motor. In addition, the bearing characteristics are influenced by manufacturing error because of the small size of HDD bearings. In this paper, the geometrical optimization is carried out to maximize the bearing stiffness using sequential quadratic programming, in order to improve the vibration characteristics. Additionally, the bearing stiffness is analyzed considering the dimensional tolerance of the bearing using a statistical method. The dimensional tolerance is assumed to be distributed according to a Gaussian distribution, and the bearing stiffness is then estimated by combining the expectation and standard deviation. As a result, in the robust optimum design, a new bearing groove geometry can be obtained in which the bearing stiffness is four times higher than the stiffness of the conventional spiral groove bearing. Moreover, the bearing has lower variability compared with the result of the optimum design neglecting dimensional tolerance. Introduction Recently, the demand for hard disk drives (HDDs) has continued to expand because of the development of information technology in industry. In particular, 2.5″ HDDs are widely used in many information devices such as laptops, digital video cameras and car navigation systems. Consequently, the convenience and performance of information devices have been improving year by year. Meanwhile, the environments in which HDDs are used have become more severe than before, with devices increasingly operated under conditions in which vibrations occur. Therefore, the improvement of the vibration characteristics of HDDs has been strongly demanded. Hydrodynamic bearings, which are used in the spindle motors of HDDs, are one of the key machine elements for improving the vibration characteristics of HDDs. Hydrodynamic bearings with grooves of the so-called spiral or herringbone type are traditionally used for HDD spindle motors. Several researchers [1][2][3][4][5][6][7] have investigated the improvement of the vibration characteristics of hydrodynamic bearings. However, these efforts are still insufficient for a significant improvement of the bearing characteristics. One reason is that the groove geometry is fixed to the spiral or herringbone grooves. On the other hand, Hashimoto and Ochiai [8] dealt with a geometrical optimization aimed at discovering optimum groove geometries that had not been found before and at dramatically improving the bearing stiffness of thrust air bearings. In addition, the higher performance of the bearing having the optimum groove geometry was experimentally verified. Moreover, Ibrahim et al. [9] applied the same geometrical optimization method to the thrust air bearings of an HDD spindle motor and theoretically discussed its effectiveness.
In the process of manufacturing thrust hydrodynamic bearings for 2.5″ HDDs, it is essential to maintain high dimensional accuracy because manufacturing errors strongly influence the bearing characteristics. Therefore, the dimensional tolerances of the design variables of the bearings must be considered carefully at the design stage. In that case, a new design method that treats the tolerances numerically in advance is preferable to the conventional method, which determines the tolerances by trial and error. In the conventional design of the bearings, a deterministic method neglecting the dimensional tolerance of the design variables is mainly used. For that reason, several researchers [10][11][12][13][14] have investigated the influence of manufacturing errors on the bearing characteristics through sensitivity analysis. However, as far as the authors know, no study has carried out the optimum design of the hydrodynamic bearings of HDD spindle motors with consideration of manufacturing errors. In this paper, the influence of dimensional tolerance on the bearing characteristics is analyzed using a statistical method. Then, the robust optimum design based on the geometrical optimization combined with the statistical method [15] is applied to the thrust hydrodynamic bearings of a 2.5″ HDD spindle motor. The results obtained are compared with the results of the conventional optimum design neglecting tolerance to verify the effectiveness of the proposed method. Geometrical Optimization In this paper, the geometrical optimization proposed by Hashimoto and Ochiai [8] is applied to the thrust hydrodynamic bearings of a 2.5″ HDD spindle motor to drastically improve the bearing stiffness. In the process of optimizing the groove geometry, an initial geometry is established, and a groove shape that is likely to provide a maximal bearing stiffness is determined by the method of successive evolution from the initial geometry, as shown in Figure 1. Spiral groove bearings are used as the initial geometry to raise the calculation efficiency, because spiral groove bearings have relatively high bearing stiffness compared with other types of bearings. The initial spiral groove geometries are flexibly modified using the cubic spline interpolation function, as shown in Appendix I. The whole groove is partitioned into n parts in the r direction, and each nodal point Pi(ri, θi) is placed at the intersection with the cubic spline interpolation function. When the groove geometry is gradually evolved from the original (spiral-grooved) geometry to the (k+1)th-step geometry, the groove geometry is revised using the new coordinate values obtained in this way, and it is evolved step by step until the value of the bearing stiffness becomes maximum. Analysis of Bearing Characteristics In this paper, the calculation method of the bearing characteristics is derived by applying a boundary-fitted coordinate system, to suit the geometrical optimization method. Moreover, in the process of analyzing the static and dynamic characteristics of the bearings, the perturbation method is applied to the Reynolds equivalent equation. The Reynolds equivalent equation can be solved by using the Newton-Raphson iteration method, and the static component p0 and dynamic component pt of the pressure are obtained. The detailed calculation method is shown in Appendix II.
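As a rough illustration of this spline-based groove-boundary parametrization (not the authors' code: the nodal radii, initial angles and angle changes below are hypothetical placeholders), the groove boundary θ(r) can be represented by a cubic spline through the nodal points Pi(ri, θi), and the optimizer then perturbs the nodal angles:

import numpy as np
from scipy.interpolate import CubicSpline

# Nodal radii between the inner and outer bearing radii (values are illustrative).
r_in, r_out = 1.25e-3, 2.55e-3          # m, from the fixed bearing dimensions
r_nodes = np.linspace(r_in, r_out, 5)   # 4 partitions in the r direction -> 5 nodes

# Initial spiral-like nodal angles plus extents of angle change (design variables).
theta_init = np.deg2rad(np.array([0.0, 12.0, 24.0, 36.0, 48.0]))  # hypothetical spiral curvature
delta_theta = np.deg2rad(np.array([0.0, -3.0, 5.0, -2.0, 4.0]))   # hypothetical angle changes

# Cubic spline through the perturbed nodal points defines the groove boundary theta(r).
boundary = CubicSpline(r_nodes, theta_init + delta_theta)

# Evaluate the boundary on a fine radial grid, e.g. for building the film-thickness field.
r_fine = np.linspace(r_in, r_out, 50)
theta_fine = boundary(r_fine)
print(np.rad2deg(theta_fine[:5]))

Each candidate set of nodal angle changes defines a new boundary, for which the bearing stiffness would then be re-evaluated.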
The load-carrying capacity W is obtained by integrating the oil film pressure, relative to the atmospheric pressure pa, over the bearing surface. The minimum oil lubricating film thickness hrmin is simultaneously determined from the equilibrium condition between the axial load acting on the thrust bearing and the bearing load-carrying capacity; that is, hrmin is obtained by solving the corresponding force balance equation. The spring coefficient k and damping coefficient c can be obtained, respectively, by integrating the real and imaginary parts of the dynamic pressure component pt. Finally, the bearing stiffness K is obtained by combining the spring and damping coefficients. Estimation of Variability Using Statistical Method The design of bearings for an HDD spindle motor requires high bearing performance and quality. Therefore, in this case, optimum design is applied to the bearings to obtain high bearing performance. On the other hand, the bearing performance is strongly influenced by manufacturing error because of the small size of HDD bearings. However, the conventional design of the bearings has been conducted using a deterministic method, which neglects the dimensional tolerance. Hence, considering the influence of dimensional tolerance during product design is very important in terms of the quality and productivity of HDD spindle motors. Figure 2 shows a conceptual diagram of the relationship between design variables and the variability of bearing performance. The vertical axis shows a typical bearing performance and the horizontal axis a typical design variable, for example, groove depth, groove width ratio and so on. As can be seen in the figure, the variability of bearing performance is non-uniform for the same range of dimensional tolerance. In the conventional optimum design neglecting tolerance, solution A is obtained as the optimized solution. Solution A is the maximum value of bearing performance, but the variability becomes large when the design variable is changed by manufacturing error. On the other hand, the variability of bearing performance of solution B is lower than that of solution A, although the bearing performance of solution A is better than that of solution B. This means that solution B has larger robustness of bearing performance compared with solution A. Therefore, it is necessary to find design variables, like solution B, that give low variability together with high bearing performance. Figure 3 shows the relationship between bearing performance and variability. The figure shows the trade-off correlation between an increase of the variability and the bearing performance. In addition, it is possible that the spatial distribution of bearing performance is multimodal because the hydrodynamic bearings have relatively many dimensions. In this case, it is important to treat the problem as a multi-objective one to obtain both high bearing performance and robustness. Moreover, the optimum design needs to consider a statistical factor for the design of commercial HDDs. In this paper, a robust optimum design with high robustness is newly introduced for thrust hydrodynamic bearings. The evaluation method for the variability of the bearing performance using the statistical method is described as follows. The variability of the design variables is expressed by a probability density function, and the robustness is then estimated based on the expectation and standard deviation.
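The integral and force-balance expressions referred to above were not recovered in this text. For orientation only, a plausible set of formulas consistent with the quantities defined here (standard forms in thrust-bearing analysis, given as an assumption of this edit rather than the paper's exact equations; F_axial denotes the axial load and ω_f the squeeze frequency) is:

W = \int_{0}^{2\pi}\!\int_{r_2}^{r_1} \left( p_0 - p_a \right) r \, dr \, d\theta

W(h_{r\min}) = F_{\mathrm{axial}}

k = \int_{0}^{2\pi}\!\int_{r_2}^{r_1} \mathrm{Re}(p_t)\, r \, dr \, d\theta, \qquad c = \frac{1}{\omega_f}\int_{0}^{2\pi}\!\int_{r_2}^{r_1} \mathrm{Im}(p_t)\, r \, dr \, d\theta

K = \sqrt{k^2 + \left( c\,\omega_f \right)^2}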
In the robust optimum design of thrust hydrodynamic bearings, the design variables, such as groove depth and groove width ratio, are collected in a random variable vector s. Figure 4 shows a conceptual diagram for two random variables s1 and s2. In the figure, a target value μi (i = 1 ~ n, where n represents the number of random variables) is the central value of the distribution of the design variable, and the design variable is assumed to be distributed according to a Gaussian distribution within the range of ±Δsi from the central value μi. Consequently, the marginal probability density function for the component si of the random variable vector s is a Gaussian density. In this paper, the tolerance of each bearing dimension is taken as ±3σi. When the marginal probability density functions of s are independent of each other, the joint probability density function is given by their product. As can be seen in Figure 4, when variability is given to the design variables by the Gaussian distribution, the bearing performance will be distributed at the same time. Therefore, in estimating the bearing performance, it is necessary to use the expectation and standard deviation. The expectation and standard deviation of the bearing performance are obtained, respectively, from the joint density, where q(si) indicates the bearing characteristic of interest. Robust Optimum Design of HDDs Considering Dimensional Tolerance In this paper, the optimum problem is defined as maximizing the bearing stiffness of the thrust hydrodynamic bearings to improve the vibration characteristics of HDDs. Figure 5 shows a schematic diagram of a 2.5″ HDD spindle motor with hydrodynamic bearings. A rotor, which consists of a shaft, a hub, and two disks, is supported by two thrust hydrodynamic bearings in the axial direction. In designing the bearings, we fixed the following values: the outside radius of the bearing r1 is 2.55 mm, the inside radius of the bearing r2 is 1.25 mm, the rotational speed of the shaft ns is 4200 rpm, and the seal ratio Rs is also fixed; furthermore, the inflow angle β is 15 deg., the rotor mass m is 18.6 g and the oil lubricant viscosity μ is 1.308 × 10⁻² Pa·s. These values are defined by referring to the specifications of an actual 2.5″ HDD. In the present study, realizing an optimal design is first done by examining the magnitude of the bearing stiffness while changing the number of partitions of the cubic spline interpolation function in the r direction from two to six, in order to obtain the optimal number of partitions. As a result, because the maximum value of the bearing stiffness is found when there are four partitions, the number of partitions in the r direction is fixed to four. In addition, the groove depth hg, the groove width ratio α and the number of grooves N are given as design variables. Consequently, the design variable vector X consisting of the bearing dimensions is defined as X = {hg, α, N, Δθ1, Δθ2, Δθ3, Δθ4}, where Δθ1 − Δθ4 are the extents of angle change from the initial geometry.
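To make the expectation and standard-deviation evaluation concrete, the following minimal Python sketch (an illustration under the stated Gaussian assumption, not the authors' implementation; the surrogate performance function and the numerical values are hypothetical) samples design variables with tolerance ±Δsi = ±3σi and estimates E(F(X)) and σ(F(X)) by Monte Carlo instead of evaluating the integrals of Equations (9) and (10):

import numpy as np

def q(h_g_um, alpha):
    """Hypothetical bearing-performance surrogate (stands in for the stiffness solver)."""
    return 40.0 - 2.0 * (h_g_um - 10.0) ** 2 - 15.0 * (alpha - 0.6) ** 2

# Nominal (target) values mu_i and tolerances +/-Delta s_i; the tolerance is taken as +/-3 sigma_i.
mu = np.array([10.0, 0.6])        # groove depth (um) and groove width ratio (illustrative)
delta = np.array([1.0, 0.05])     # Delta h_g = +/-1.0 um, Delta alpha = +/-0.05
sigma = delta / 3.0

rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=(100_000, 2))

perf = q(samples[:, 0], samples[:, 1])
expectation = perf.mean()         # estimate of E(F(X))
std_dev = perf.std(ddof=0)        # estimate of sigma(F(X))

print(f"E = {expectation:.3f}, sigma = {std_dev:.3f}, 3*sigma/E = {3 * std_dev / expectation:.2%}")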
In the robust optimum design, the dimensional tolerance of the design variables is considered. In this case, it would ideally be necessary to consider the tolerance of all design variables. However, the computational time could then become enormous because of the relatively large number of design variables. Therefore, the bearing dimensions with high sensitivity to the bearing performance are identified through a sensitivity analysis. As a result of the sensitivity analysis, the groove depth hg and groove width ratio α have high sensitivity to the bearing characteristics, namely the bearing stiffness K and the oil lubricating film thickness hr. Consequently, the dimensional tolerances of the groove depth hg and groove width ratio α should be considered. In this paper, we determined experimentally the tolerance ranges of the groove depth, Δhg = ±0.5, ±1.0, ±1.5, ±2.0 μm, and of the groove width ratio, Δα = ±0.03 and ±0.05. On the other hand, as for the constraint functions, the upper and lower limits of the design variables in Equation (11) are considered. In addition, the allowable oil lubricating film thickness ha and non-negative damping coefficients c within the variability are also considered as constraints to guarantee the safe operation of the bearing. Moreover, the minimum groove width Lmin of the grooved part, as shown in Figure 6, is considered because in some cases the groove geometry can take irregular shapes. The minimum groove width Lmin should be larger than the diameter of the industrial tool da. In this paper, the diameter of the industrial tool, da = 0.10 mm, is determined with reference to the tool diameter used for an actual 2.5″ HDD spindle motor. The values of the constraint conditions are shown in Table 1. In the robust optimum design, it is necessary to reduce the variability of the bearing characteristic value. Therefore, in this paper, three times the standard deviation of the bearing characteristic obtained by Equation (10), 3σ(F(X)), has to be less than 20% of the expectation value obtained by Equation (9); the constraint equation is thus 3σ(F(X)) ≤ 0.2 E(F(X)) (Equation (12)). All of the constraint conditions are summarized as inequality constraints gi(X) ≤ 0 in Equation (13). The objective function is the expectation of the bearing stiffness, and the optimum design problem of the thrust hydrodynamic bearings is formulated as maximizing this objective function subject to the above constraints. This optimum design problem is a typical nonlinear optimum design problem because the objective function and constraint functions are nonlinear and involve seven design variables. Therefore, the objective function has a complicated multidimensional distribution. Consequently, at first several solutions are investigated through a parametric study to find local optimum solutions. Then, the global optimum solution is calculated using sequential quadratic programming (SQP) [8]. To clarify the validity of the present robust optimum design, its results are compared with the results neglecting tolerance. Figures 7 and 8 show the flowcharts of the present robust optimum design and of the conventional optimum design neglecting tolerance, respectively. As shown in these figures, the process of obtaining the values of the expectation and standard deviation when neglecting tolerance is different from that of the robust optimum design. The calculation method of these values is described as follows.
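A minimal Python sketch of such a constrained maximization, using SciPy's SLSQP implementation of sequential quadratic programming (the performance surrogate, bounds and numbers are hypothetical stand-ins for the full bearing solver and its constraint set), could look like:

import numpy as np
from scipy.optimize import minimize

def expectation_and_std(x):
    """Monte Carlo estimate of E and sigma of a hypothetical stiffness surrogate."""
    h_g, alpha = x
    rng = np.random.default_rng(0)  # fixed seed keeps the objective deterministic
    s = rng.normal([h_g, alpha], [1.0 / 3.0, 0.05 / 3.0], size=(20_000, 2))
    perf = 40.0 - 2.0 * (s[:, 0] - 10.0) ** 2 - 15.0 * (s[:, 1] - 0.6) ** 2
    return perf.mean(), perf.std()

def objective(x):
    e, _ = expectation_and_std(x)
    return -e                       # maximize the expected stiffness

def variability_constraint(x):
    e, sd = expectation_and_std(x)
    return 0.2 * e - 3.0 * sd       # 3*sigma must stay below 20% of the expectation

result = minimize(
    objective,
    x0=np.array([9.0, 0.55]),
    method="SLSQP",
    bounds=[(5.0, 15.0), (0.3, 0.8)],                          # hypothetical variable limits
    constraints=[{"type": "ineq", "fun": variability_constraint}],
)
print(result.x, -result.fun)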
In the optimum design neglecting tolerance, the same design variables and prescribed values used for the robust optimum design are given. However, the constraint condition on the variability in Equation (12) and the minimum groove width Lmin are excluded. Moreover, the objective function neglecting tolerance is different from that of the robust optimum design because the dimensional tolerance is neglected: the objective is defined directly on the deterministic bearing stiffness. The flow of the optimum design neglecting tolerance is shown in the region enclosed by the single dotted line in Figure 8. The values of the expectation and standard deviation are calculated using the same tolerances of groove depth Δhg and groove width ratio Δα by Equations (9) and (10). Example of Optimized Results and Discussions Figure 9 shows the results of the groove geometries and static pressure distributions. In the figures, groups (1) and (2) show, respectively, the results obtained by the robust optimum design (tolerance range Δhg = ±1.0 μm, Δα = ±0.05) and by the optimum design neglecting tolerance, and group (3) shows the result of the spiral groove bearing. The optimized groove bearings obtained by the robust optimum design and by the optimum design neglecting tolerance have new groove geometries with a geometry similar to the spiral groove bearing in the inner vicinity and an opposite spiral geometry in the outer vicinity. In this paper, such bearings, which have bends in the outer vicinity, are called modified spiral groove bearings. Table 2 shows an example of the optimized results. As can be seen in Table 2, the values of the bearing stiffness of the modified spiral groove bearings are more than four times the value of the spiral groove bearing. The bearing stiffness increases with decreasing oil lubricating film thickness because the oil film thickness and the bearing stiffness are in a trade-off relationship. The pressure at the outer bearing boundary is decreased by the inverse step effect, as shown in Figures 9(1) and (2), as the oil lubricating film thickness decreases. However, the minimum oil lubricating film thicknesses hrmin are equal to the allowable film thickness (ha = 5.0 μm). Consequently, there is a low risk of contact between the bearing and housing. On the other hand, the value of the minimum groove width Lmin obtained by the robust optimum design exceeds the constraint value of 0.10 mm, while the value of the optimum design neglecting tolerance is 0.034 mm. As can be seen in Table 2, the number of grooves N obtained by the robust optimum design is reduced compared with that of the optimum design neglecting tolerance. Additionally, the extents of angle change are different from those of the optimum design neglecting tolerance. As a result, optimized values of the number of grooves N and of the extents of angle change Δθ1 − Δθ4 are newly found that secure the minimum groove width. Therefore, in the robust optimum design, it is possible to determine the diameter of the industrial tool at the product-design stage by providing the constraint condition on the groove width, and to reduce manufacturing costs because the bearing grooves become easier to manufacture. Figure 10 shows a comparison of the results of the robust optimum design and of the optimum design neglecting tolerance. In the figures, (a), (b) and (c) show the minimum oil lubricating film thickness within the variability, the variability of the bearing stiffness and the expectation of the bearing stiffness, respectively.
As can be seen in Figure 10(a), the values of the minimum film thickness obtained by the optimum design neglecting tolerance are less than the allowable film thickness. On the other hand, the values of the robust optimum design exceed the allowable film thickness for all tolerances. This means that there is a low risk of contact between the bearing and housing when the bearings are manufactured within the set tolerance ranges. In Figure 10(b), the vertical axis shows the ratio of the standard deviation to the expectation value of the bearing stiffness. This means that when the ratio becomes smaller, the expectation becomes larger and the standard deviation becomes smaller. The values of the ratio obtained by the robust optimum design are less than the ratio obtained by the optimum design neglecting tolerance. In addition, the ratio remains flat and does not increase over the whole range of groove-depth tolerances. As a result, it is confirmed that the robust optimum design is effective in suppressing the variability of the bearing stiffness. As can be seen in Figure 10(c), the expectation of the bearing stiffness by the robust optimum design is equivalent to the value neglecting tolerance when the tolerance of the groove depth is set within Δhg = ±1.5 μm. On the other hand, the value decreases from Δhg = ±1.5 μm to Δhg = ±2.0 μm. Because the oil film thickness and the bearing stiffness are in a trade-off relationship, the expectation of the bearing stiffness decreases as the minimum oil film thickness increases over this tolerance range, as shown in Figure 10(a). The reason for this result is that the variability of the bearing stiffness is suppressed by the constraint of Equation (12). Consequently, a groove-depth tolerance of Δhg = ±1.5 μm is recommended to suppress the variability while maintaining high bearing stiffness. Conclusions This paper described the methodology and an example of robust optimum design considering dimensional tolerance, based on the statistical method combined with the geometrical optimization, for the thrust hydrodynamic bearings of a 2.5″ HDD spindle motor. The conclusions are briefly summarized as follows.
1) The optimized groove geometries obtained by the robust optimum design and by the conventional optimum design neglecting tolerance are modified spiral grooves with bends in the vicinity of the outer circumference of the bearing. The bearing stiffness of the modified spiral groove bearing becomes more than four times that of the spiral groove bearing.
2) It is possible to determine the diameter of the industrial tool at the product-design stage by providing the constraint condition on the groove width.
3) The expectation of the bearing stiffness by the robust optimum design is equivalent to the value neglecting tolerance, and the standard deviation can be suppressed compared with the standard deviation neglecting tolerance, when the bearings are manufactured within the groove-depth tolerance of Δhg = ±1.5 μm.
An arbitrary groove geometry can be represented by finding the spline function in all sections of the target groove geometry using Equations (I-1) and (I-2). Therefore, an analysis of the bearing stiffness is performed first by using a boundary-fitted coordinate system, as shown in Figure 11 [16,17], transforming a complex groove geometry into a simple fan-like geometry; a boundary transformation function is used for this transformation. The Reynolds equivalent equation is obtained from the equilibrium between the mass flow rates of oil flowing into and out of the control volume due to the shaft rotation and the squeezing motion; its mass-flow-rate terms are the flows across the boundaries of ξ = const. and η = const. and the flow due to the squeezing motion. In Equation (II-3), subscripts 1 and 2 and I-IV indicate the domains in the control volume. Here, ε indicates the amplitude of small variations of the oil lubricating film thickness, and p0 and pt express a static component and a dynamic component of the pressure, respectively. Substituting Equation (II-6) into Equation (II-3) and neglecting terms of second and higher order in ε yields two equations, (II-7a) and (II-7b), for the terms of order 0 and 1 in ε. In these equations, subscript 0 indicates a static component of the mass flow rate determined from Equation (II-4), and subscript t similarly indicates a dynamic component. Solving Equations (II-7a) and (II-7b) in turn by the Newton-Raphson iteration method, the static and dynamic components, p0 and pt, are obtained.
Figure 1. Method of changing groove boundary geometry.
Figure 2. Variability of design variable and bearing performance.
Figure 3. Trade-off correlation between bearing performance and variability of bearing performance.
Figure 5. Schematic diagram of spindle motor of 2.5″ HDD and hydrodynamic thrust bearing: (1) Overall view of spindle motor; (2) Spiral grooved thrust bearing; (a) Position of thrust bearings and (b) Groove geometry.
Figure 6. Relationship between groove width and diameter of industrial tool.
Figure 7. Flowchart of present robust optimum design.
Figure 8. Flowchart of calculation of objective function by conventional optimum design neglecting tolerance.
Figure 10. Comparison of results of robust optimum design considering tolerance and optimum design neglecting tolerance: (a) Minimum oil lubricating film thickness; (b) Variability of bearing stiffness; and (c) Expectation of bearing stiffness.
Nomenclature
b1: Width of groove (m)
b2: Width of land (m)
c: Damping coefficient of oil lubricating film (N·s/m)
da: Diameter of industrial tool (m)
E(F(X)): Expectation of bearing characteristics
f(X): Objective function (N/m)
fp(X): Probability density function
g: Acceleration of gravity (m/s2)
gi(X) (i = 1~n): Constraint function
hg: Depth of groove (m)
hr: Oil lubricating film thickness (m)
k: Spring coefficient of oil lubricating film (N/m)
K: Bearing stiffness (N/m)
N: Number of grooves
ns: Rotational speed of shaft (rpm)
Pi (i = 1~n): Nodal points partitioning the cubic spline interpolation function in the r direction
p0: Static component of oil lubricating film pressure (absolute pressure) (Pa)
pa: Atmospheric pressure (Pa)
pt: Dynamic component of oil lubricating film pressure (Pa/m)
r: Coordinate of radial direction (m)
r1: Outside radius of bearing (m)
r2: Inside radius of bearing (m)
rs: Seal diameter (m)
Rs: Seal diameter ratio (= rs/r1)
Tr: Friction torque of bearing surface (N·m)
W: Load-carrying capacity of bearing (N)
X: Vector of variables used in calculations
α: Groove width ratio (= b1/(b1 + b2))
β: Inflow angle (deg.)
Δr: Equipartition space of r (m)
θ: Coordinate of circumferential direction (rad)
θi: Angle of initial geometry (spiral curvature) at the ith nodal point (rad)
σ(F(X)): Standard deviation of the bearing characteristics
Δθi: Extent of angle change from initial geometry (spiral curvature) at the ith nodal point (rad)
δi: Extent of angle change during evolution at the ith nodal point (rad)
μ: Viscosity of oil lubricating film (Pa·s)
ρ: Density of oil lubricating film (kg/m3)
ξ: Coordinate of change based on boundary-fitted coordinate system (m)
η: Coordinate of change based on boundary-fitted coordinate system (rad)
ωf: Squeeze frequency of the rotating shaft (rad/s)
Subscripts
Max: Maximum value of state variables
Min: Minimum value of state variables
Appendix I The cubic spline function is a cubic polynomial in each section between neighbouring nodal points; the condition required for the cubic spline function is continuity of the second-order derivative of the function at each nodal point. The cubic spline interpolation function is expressed by Equations (I-1) and (I-2). The mass flow rates are those across the boundary of ξ = const. and across the boundary of η = const., as shown in Figure 12; the remaining term indicates the mass flow rate due to squeezing motion inside the control volume.
Figure 11. Bearing geometry transformation based on the boundary-fitted coordinate system.
Figure 12. Definition of control volume.
Assuming that variations of the bearing clearance are microscopic, the oil lubricating film thickness h and pressure p can be expressed as h = h0 + ε e^(jωf t) and p = p0 + ε pt e^(jωf t) (II-6).
6,354.6
2012-10-31T00:00:00.000
[ "Engineering" ]
Data Framework for Road-Based Mass Transit Systems Data Mining Project † The current paradigm of intelligent transport systems (ITS) is based on the continuous observation of what is happening in the transport network and the continuous processing of data coming from these observations. This implies the handling and processing of a massive amount of data, and for this reason, data mining and big data are fields increasingly used in transportation engineering. A framework to facilitate the phases of data preparation and knowledge modeling in the context of data mining projects for road-based mass transit systems is presented in this paper. To illustrate the utility of the framework, its utilization in the analysis of travel time in a road-based mass transit system is presented as a use case. Introduction The current paradigm of intelligent transport systems (ITS) is based on the continuous observation of what is happening in the transport network and the continuous processing of data coming from these observations. This facilitates the fulfilment of the objectives of transport systems, which are the improvement of safety, environmental sustainability and the fulfilment of mobility needs [1]. To this end, ITS must provide the resources to analyze what is happening in the transport network, which implies the handling and processing of a massive amount of data. The purpose of this data management and processing is to extract non-trivial and useful information that is implicit in these data. For this reason, data mining and big data are fields increasingly used in transportation engineering [2]. According to the classical methodology CRISP-DM [3], for the development of data mining projects, the related processes are grouped into six main phases that are presented in Figure 1, where the continuous lines between phases make the project move forward and the discontinuous ones make it move backwards and replan strategies as a consequence of the results obtained. In their adaptation to ITS, the tasks contemplated in each of them can be summarized as follows:
• Business understanding. Determination of the objectives of the data mining project, which may be of various kinds: prediction of demand or evaluation of the quality of service, among others.
• Data understanding. Exploration of the available data, fundamentally related to the definition of the transport network, the planning of services, and the data registered by the systems installed on board the vehicles (such as positioning and payment systems), identifying quality problems and selecting those data that allow the proposed objective to be reached.
• Data preparation. Construction of the data set to be modeled, cleaning, merging and selecting data, and defining subsets when necessary.
• Modeling. Application of techniques to obtain new knowledge from the data set created in the previous phase, to describe the data in the form of intelligible patterns (such as behavior patterns of travel times) or to predict the behavior of any of the factors analyzed (such as demand).
• Evaluation. Analysis of the new knowledge found and verification that it corresponds to the initially set objective.
• Deployment. Application of the new knowledge found through the adjustment of service planning policies or transportation system resources.
In any data mining project, the costliest phase, in terms of time required for its execution, is the data preparation phase [4].
This contribution presents a framework whose objective is to facilitate the phases of data preparation and knowledge modeling in the context of data mining projects for road-based mass transit systems. The main contribution of this work is that, starting from realistic data requirements and based on standard specifications of transport data models, a common framework is proposed in which to develop machine learning or statistical processes whose aim is the continuous improvement of public transport. Figure 1 shows the classical methodology of a data mining project and contextualizes the proposed framework in the data preparation phase. In addition to this introductory section, this article is organized into five more sections. The next section is dedicated to related works. The third section presents the formal model used for the proposed framework design. The framework is presented in the fourth section, focusing this description on the data handled. The fifth section is dedicated to presenting a use case. The main conclusions are presented in the sixth section. Related Works The review presented in this section is focused on those works in which data mining techniques are proposed to face relevant challenges of mass transit systems. According to the main source of data used, these works can be divided into two groups. The first group is made up of those that use payment systems as the main data source. The works that use the data provided by the positioning systems constitute the second group. Below are works from the first group. In [5] data from contactless smart card-based payment systems are modeled to obtain the profiles and habits of travellers. Key to this proposal are the data that identify travellers and those that specify the details of travellers' trips, e.g., origin and destination stops, and the date and time at which the journeys are made. In [6] data from payment systems are used to measure the use of transport infrastructure, based on data describing the basic aspects of trips made on the transport network: origin, destination, date and time. In [7] data are handled in order to provide personalised information services. There is another group of works devoted to demand prediction. These works use time series obtained from the data provided by the payment systems to carry out the predictions. In [8] statistical techniques are used to predict short-term demand, and in [9] neural networks. These works use data describing the trips: origin, destination, date and time of the trip. In [10] the spatial and temporal behaviour of travellers is studied using clustering techniques. Positioning data are processed using data mining techniques in order to reach different goals. In [11], through clustering techniques developed by the authors, the influence of demand and traffic conditions on operations performance is analysed. Also using clustering techniques, in [12] a methodology based on vehicle GPS data is proposed for improving the design of the transport network, covering detection and classification of stops, generation of the routes and estimation of the times of passage through the stops. In [13] a new metric is proposed to assess bus punctuality using vehicle positioning data. In [14] the causes of irregularities in service planning are analysed. In [15], clustering techniques and ad-hoc metrics are used to process vehicle positioning and passenger movement data in order to evaluate the quality of service.
Finally, there is a body of literature describing different travel time prediction techniques. In [16] neural networks are used, in [17] a support vector machine, in [18] classification techniques, in [19] clustering techniques, in [20] time series statistical techniques and in [21] state models. In all these works, the basic data used by the different methods are the position of the vehicle and the instant in which this position was acquired. Subsequently, these data are related to the transport operations carried out by the vehicles. Formal Conceptualization A line of a road-based mass transit system is the first entity to be formalized. For the purposes of the framework, a Line is defined as a systematic, scheduled route travelled by public transport vehicles. Systematic means that the bus always follows the same path and pre-established stops that do not vary. Scheduled means that there is a schedule that establishes when the buses must run the route. In the formalization, L represents a generic line, and a specific line is specified by means of the notation Li, where the subscript i is a value, normally an integer, that uniquely identifies the line. The operation of a line by a public transport vehicle shall be termed a Line Service. The set of Line Services of Li is specified by the notation Ei, where i is the identifier of the line. In the model, time is specified by the notations T and t. T is used to represent a time interval and t to represent a time instant. All Line Services of line Li made over a period of time T are denoted by the notation Ei,T. Similarly, a Line Service that began at a time instant t is specified by the notation ei,t. Stops on the route of Li are specified by the notation Si. The section of the route that runs from one stop to the next is called a Route Segment or Arc (see Figure 2). In the context of road-based mass transit systems, the Travel Time (TT) of a Line Service of Li is the result of the sum of two times: the Dwell Time, DT, and the Nonstop Running Time, RT. DT represents the time that the vehicle is stationary at stops for passengers to board or alight from the vehicle. RT represents the time taken by the vehicle to go from one stop on the route to the next. If a route has N stops, the total TT of a line service is the sum of the dwell times at its stops and the nonstop running times over its route segments. Finally, the Arrival Time at a stop is the time at which the vehicle arrives at that stop. Framework Model The proposed framework for the above-mentioned data preparation phase is described below. This is shown in Figure 3, where, on the one hand, the essential data sets of this phase appear in yellow (i.e., the initial data set extracted from the TDB, and the resulting data set, which will intervene in the next phase of knowledge extraction) and, on the other hand, in blue are the elements that constitute this proposal, formed by a new dynamic data structure and three new processes: data loading, data validation and generation of the set to be modeled. All of them are detailed below, distinguishing between the data module and the process module. Data Module This module consists of two components. The first is made up of data obtained from the transport database (TDB) provided by the transport operator. The second component is made up of dynamic data tables generated by the framework processes accessing the TDB. This module thus assumes the typical functionality of data warehouse systems and is therefore called the transport data warehouse (TDWH).
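The travel-time expression referred to above is not reproduced in this text. A plausible form, assuming a linear route with N stops and therefore N − 1 route segments (this counting is an assumption of this sketch rather than a statement of the paper), is:

TT = \sum_{i=1}^{N} DT_i + \sum_{i=1}^{N-1} RT_i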
TDB Data The data entities constituting this element are those related to transport network design, service scheduling and production. The entities belonging to the transport network design are spatial entities. They represent the deployment of the transport network in a geographical area. Basically, these are: • Bus stops and relevant points of the transport network. The framework assumes GPS as the positioning model; therefore, each entity of this type is located geographically by latitude, longitude and altitude. • Arcs of the transport network. An arc is a relationship between two points in the transport network, indicating that there is a road that connects them without passing through another node. The specification of each arc entity is composed of an identifier, the point of origin of the road, the point of destination, the type of road used (street, road, dual carriageway, etc.) and the length. • Line route, which is represented as an ordered sequence of arcs, determining the path to be systematically followed by the public transport vehicles. Each entity of the line route type is represented by a unique code associated with a line path and an n-tuple of arcs of the transport network. The scheduling entities specify the transport operations that are carried out, when and where they must be carried out, and the resources that must be used (vehicle and driver). The entities of this type used by the framework are the following: • Line service. This refers to an expedition that travels a planned route, starting at an instant of time. The specification of a line service consists of the scheduled start time, the estimated duration and the code of the route to be travelled. • Service. This entity is composed of a set of operations ordered in time. The specification of each service entity consists of a code that uniquely identifies the service, the vehicle that must perform the service, the driver and the instant at which it should begin. • Service scheduling. This entity is composed of all the services that are scheduled during a period T. Its specification consists of a code that identifies the scheduling, the first calendar day of T and the last calendar day of this period. Finally, the framework uses three entities related to production: • The realization of a line service. Its representation is composed of the following attributes: the instant when the line service started, the instant of completion, the vehicle that performed the route, the driver, and the line route and line service identifiers. • The AVL register. The framework assumes that public transport vehicles have an automatic localization system (AVL); this assumption is not very restrictive, because nowadays AVL systems are commonly used. The structure of the AVL register is shown in Table 1. The use of the dynamic data tables of this component allows two objectives to be reached: first, to be able to reproduce relevant events in any node or arc of the transport network and, second, from this reproduction, to obtain useful and dynamic data used by the knowledge modelling process. The relevant events contemplated in the current implementation of the framework are: • Travellers who at a given time instant start a trip at a stop on the route of a line service, performed by a given vehicle. • Travellers who at a given time instant end a trip at a stop on a line service route, performed by a given vehicle. • For each line service route performed, the RT and DT times.
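Table 1 specifies the actual structure of the AVL register and is not reproduced here; purely as an illustration, a record of that kind might be represented as follows, where every field name is an assumption rather than the framework's real schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AVLRecord:
    """Hypothetical AVL register entry; the authoritative structure is Table 1 of the paper."""
    vehicle_id: str
    timestamp: datetime      # instant at which the position was acquired
    latitude: float
    longitude: float
    altitude: float
    gps_quality: int         # quality indicator later used by the validation processes
```

Sequences of such records, ordered by timestamp for each vehicle, are what the framework processes replay in order to reconstruct the boarding, alighting and RT/DT events listed above.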
The framework processes access the TDB data previously described, obtain the data representing the previous events and, from these data, generate the records of the different TDWH tables. Figure 4 shows the schema of the dynamic tables and Table 2 shows the data fields of the records of these tables. The line service entity is the entity from which these dynamic tables are generated. The different planned routes are represented in ROUT; each record in this table represents a planned route in the transport network (Route Code field), and each route has a line field associated with it. As the path of a route can change, the Version field is used to record the different versions of a route. Each line service performed is represented by an LSRT record that contains the Route Code field, the Starting Instant field and the Termination Instant field. ROUT is generated from the service planning registered in the TDB. Tables LSRT, TRPT, RT and DT are obtained from the production records stored in the TDB. Each TRPT record represents a trip made by a public transport user, recording the payment medium identifier (Card Code field), the route (Route Code field), the vehicle (Vehicle field), the Departure Stop Code field, the Arrival Stop Code field and the time instants at which the check-in (Check-In Instant field) and the check-out (Check-Out Instant field) were made. These data are provided by the payment systems used. The data of this table are especially useful for data mining processes that aim to characterize the behaviour of demand, the habits of travellers and the use of resources (vehicles, stops and stations) by users. Each RT record reflects the time taken by a vehicle (Vehicle field) to cover each segment (Segment field) of the route (Route Code field) of the line service that started at a given time instant (Line Service Starting Instant field), which is the nonstop running time between each pair of stops (Nonstop Running Time field). Each DT record represents the dwell time of a vehicle (Vehicle field) at each stop (Stop field) of the route (Route Code field) in a line service that began at a given time instant (Line Service Starting Instant field). The data stored in the RT and DT tables are especially relevant for data mining projects related to the TT, such as TT forecasting or the reliability evaluation of service scheduling. The Processes Framework procedures are classified into three types. The first type is responsible for generating the records of the dynamic tables; these are scripts that consult the TDB by means of Structured Query Language (SQL) sentences. The second type has the objective of guaranteeing the integrity of the data sets used by the knowledge modelling techniques that are implemented. Finally, the third type of procedure is responsible for generating the data sets used by the knowledge modelling techniques. In a data mining project, ensuring the integrity of the data sets used is a requirement. In the proposed framework, data integrity control is performed in two phases. In the first phase, each data record is analysed individually in order to eliminate those with erroneous or incoherent data. The causes of these errors are of two types: malfunction of the devices that produce the data, or errors in the manual procedures in which these data are generated. Given the importance of factors of a spatial and temporal nature in the knowledge to be modeled, the control of errors in data related to vehicle positions and to the time instants at which events are recorded is a key factor.
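As a hedged illustration of the first type of process (the SQL scripts that populate the dynamic tables from the TDB), the sketch below derives RT-style records from reconstructed stop arrivals; the table and column names in the embedded SQL are hypothetical, since the paper does not publish the TDB schema.

```python
import sqlite3  # stand-in for whatever RDBMS hosts the TDB

def load_rt_records(tdb_path: str, route_code: str) -> list[tuple]:
    """Derive Nonstop Running Time (RT) records for one route from consecutive
    stop-arrival instants reconstructed from AVL data (hypothetical schema)."""
    con = sqlite3.connect(tdb_path)
    rows = con.execute(
        """
        SELECT vehicle_id, line_service_start, segment_id, arrival_instant
        FROM stop_arrivals            -- hypothetical table of reconstructed arrivals
        WHERE route_code = ?
        ORDER BY vehicle_id, line_service_start, segment_id
        """,
        (route_code,),
    ).fetchall()
    con.close()

    rt_records = []
    for prev, curr in zip(rows, rows[1:]):
        same_service = prev[0] == curr[0] and prev[1] == curr[1]
        if same_service:
            # Arrival instants stored as epoch seconds for simplicity; the real
            # process separates dwell time (DT) from running time (RT).
            running_time = curr[3] - prev[3]
            rt_records.append((curr[0], curr[1], route_code, curr[2], running_time))
    return rt_records
```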
The detection of erroneous or low-quality positioning measurements is performed through the field that indicates the quality of the GPS measurement; however, there are other positioning errors that are related to external factors, for example environmental factors, which produce errors in GPS measurements that are more difficult to detect. The second phase of data set validation is more complex, since it consists of an analysis of completeness and consistency in the data related to transport entities. For example, in an analysis of the TT of the services of a line, it is necessary to guarantee that the routes actually travelled by the vehicles coincide with the planned route and, therefore, to discard from the analysis those routes which, for different reasons (for example roadworks, accidents or special events), do not coincide with the planned route. In this case, there is a validation process for the line service routes that verifies that the reproduced routes coincide with those planned. Use Case The proposed framework has been used to analyse interurban transport on the island of Gran Canaria. This analysis has made it possible to obtain knowledge about different important variables such as demand, punctuality and the TT of the different routes and corridors of the transport network of the island. The following describes how the framework has been used to obtain data that allowed the analysis of TT on two important routes of the transport network, identified as L1 and L303. The routes start from the most populated city on the island, Las Palmas de Gran Canaria, and run through, in the first case, the most populated, tourist, commercial and industrial centres of the island and, in the other, residential and regional areas. They are therefore routes with a large number of passenger movements, implying a large number of records. The objective was to understand the behaviour of the TT and to develop a model for predicting this travel time. The details and results of this study were presented in [22], where TT behaviour was analysed throughout 2015. We now present the data obtained by the proposed framework that made this study possible. The number of GPS readings used for the TT analysis of each analysed line is shown in Table 3. Of a total of 51,499,404 GPS records generated across the vehicle fleet in 2015, 2,038,668 from line L1 and 615,813 from line L303 were used, after being validated by the integrity control processes described in the previous section. For the two lines analysed, 11,847 line services of line L1 and 9,887 line services of line L303 were planned. These data were obtained by consulting the ROUT and LSRT tables, whose records were generated by the framework script processes querying the TDB. From the data records representing the vehicle, start time and end time of each line service, all the trips made in the line services of lines L1 and L303 throughout 2015 were reproduced, based on the GPS readings acquired in those line services. The data integrity verification processes described in the previous section analysed each of the reproduced routes to check whether the route was complete and matched the planned path. As a result of this verification, 8,419 routes on line L1 and 7,862 on line L303 were selected as valid.
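The route-completeness check just described can be sketched as a simple subsequence test between the stop sequence detected from the GPS readings and the planned stop sequence; this is only an illustration under that assumption, not the framework's actual validation procedure.

```python
def route_is_valid(detected_stops: list[str], planned_stops: list[str]) -> bool:
    """Accept a reproduced route only if the planned stop sequence appears,
    in order and in full, within the sequence of stops detected from GPS data."""
    it = iter(detected_stops)
    return all(stop in it for stop in planned_stops)
```

Line services failing such a check (for example, because of diversions caused by roadworks or accidents) are discarded before the TT analysis, which is consistent with the counts of valid routes reported above.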
For each of these valid routes, by means of an analysis of the sequence of GPS readings, the instants of arrival at each bus stop were obtained, resulting in 58,933 arrival time records for line L1 and 39,310 for line L303. These arrival time data were the data sets used by the knowledge modelling techniques to analyse the TT. Specifically, the TT profiles of the line services of the analysed routes were generated. For each reconstructed route of the line services, using the RT table of the framework, an n-tuple containing the TT at each of the selected stops of the route was generated. Each of these n-tuples represented the TT profile of the associated trip. Different clustering techniques were applied to classify the whole set of TT n-tuples obtained, with the K-medoids technique using the Manhattan distance as similarity metric producing the best results. From these results, a TT prediction method based on TT similarity patterns (PSM) was developed [21]. The method assumes that, in a line service that is under way, the TT from the last stop passed to the next one is equal to the TT indicated by the pattern (medoid) whose behaviour is most similar to the profile of the TT observed since the beginning of the expedition. Data were generated from the RT table to measure the accuracy of the prediction method, which was compared with two alternative methods widely used as baselines in short-term TT prediction work: prediction based on the mean TT (AVG) and prediction based on a multilayer perceptron neural network (ANN). Using the data provided by the RT table, the mean TT of the AVG technique was calculated and, for the ANN method, learning and validation datasets were created. Table 4 shows the results obtained with the three methods for line L1 and Table 5 those for line L303. Conclusions In data mining projects, the costliest phase, in terms of the time required for completion, is the one that prepares the data that will be used by the knowledge modelling techniques. When the project requires the management of a large volume of data, as in the context of mass transit systems, the time required for this phase grows rapidly. In this paper, a framework for facilitating this data preparation phase in data mining projects for road-based mass transit systems has been presented. This framework has been used for the analysis of very important aspects of this type of transit system: the analysis of demand, the analysis of the TT (specifically its prediction) and the evaluation of the punctuality of the services. The use case presented in this article corresponds to the development of the TT prediction model, and served to compare the developed model, PSM, with two alternative techniques: AVG and ANN. Considering the different nature and amount of data required by each of these techniques, which were obtained using the proposed framework, it is concluded that the framework achieves the objective of providing a data environment that facilitates the execution of knowledge modelling techniques in the context of data mining projects for road-based mass transit systems, constituting an environment independent of the modelling techniques used. Finally, a promising future line of research is the possibility of automating the data mining processes so that they run continuously for the ongoing improvement of service quality.
Author Contributions: conceptualization, T.C. and C.R.G.; analysis and software development, all the authors of this article; integration and configuration of the database, G.P. and F.A.; validation, T.C. and A.Q.-A.; research supervision, C.R.G.; all the authors participated in the writing of this manuscript. Funding: This research received no external funding.
5,647.6
2019-11-20T00:00:00.000
[ "Engineering", "Computer Science", "Environmental Science" ]
A 4H-SiC CMOS Oscillator-Based Temperature Sensor Operating from 298 K up to 573 K In this paper, we propose a temperature sensor based on a 4H-SiC CMOS oscillator circuit that is able to operate in the temperature range between 298 K and 573 K. The circuit is developed on Fraunhofer IISB's 2 μm 4H-SiC CMOS technology and is designed for a bias voltage of 20 V and an oscillation frequency of 90 kHz at room temperature. The possibility of relating the absolute temperature to the oscillation frequency is due to the temperature dependency of the threshold voltage and of the channel mobility of the transistors. An analytical model of the frequency-temperature dependency has been developed and is used as a starting point for the design of the circuit. Once the circuit has been designed, numerical simulations are performed with the Verilog-A BSIM4SiC model, appropriately tuned to Fraunhofer IISB's 2 μm 4H-SiC CMOS technology, and their results show almost linear frequency-temperature characteristics with a coefficient of determination higher than 0.9681 for all of the bias conditions, with a maximum of 0.9992 at VDD = 12.5 V. Moreover, we considered the effects of the fabrication process through a Monte Carlo analysis, in which we varied the threshold voltage and the channel mobility with different values of the Gaussian distribution variance. For example, at VDD = 20 V, a deviation of 17.4% from the nominal characteristic is obtained for a Gaussian distribution variance of 20%. Finally, we applied the one-point calibration procedure, and temperature errors of +8.8 K and −5.8 K were observed at VDD = 15 V. Introduction The 4H-polytype silicon carbide semiconductor material is widely used for high-temperature applications [1][2][3], and the possibility of fabricating integrated circuits (ICs) extends its application fields. Up to now, both unipolar and bipolar 4H-SiC IC technologies have been developed: in bipolar technology, a Bipolar Junction Transistor based on multi-epitaxial stacks is used in analog [4][5][6] and digital [7] ICs, and its performance has been demonstrated up to 873 K. The 4H-SiC Complementary Metal-Oxide-Semiconductor (CMOS) technology proposed by Raytheon has also been developed, and analog and digital building blocks have been fabricated [8], such as, for example, a Proportional-To-Absolute-Temperature (PTAT) circuit operating in the range between 298 K and 573 K with a maximum deviation from the ideal linear curve of 33% [9]. Recently, Fraunhofer IISB provided a 4H-SiC 2 µm CMOS technology [10] and several ICs have been proposed, such as a CMOS Complementary-To-Absolute-Temperature (CTAT) sensor operating in the range between 298 K and 438 K with a sensitivity of 7.5 mV/K [11], or a temperature sensor based on a p-n diode operating from 297 K to 873 K, with an R² = 0.9998 for the voltage-temperature characteristic. Moreover, such technology is also compatible with other device structures that are useful for sensing temperature and ultraviolet radiation [12].
All the proposed 4H-SiC-based temperature sensors transduce temperature into electrical quantities, either through the difference of gate-source voltages (ΔV_GS) between two MOSFETs [11], or through the difference of diode forward voltages [2,3,9,13,14]. However, 4H-SiC diodes with good performance have vertical structures and are incompatible with VLSI circuits, whereas, although MOSFETs can be used, they need an integrated circuit in order to read out the voltage-temperature signal. Hence, the study of 4H-SiC CMOS circuits is a relevant topic, both for investigating their potential as read-out circuits for sensors and for proposing new types of sensors. Among the latter, temperature sensors based on time conversion can be a valid, fully compatible 4H-SiC CMOS alternative, but they have been reported only in Si CMOS technology [15,16]. It is worth noting that the fabrication process quality is still poor because the 4H-SiC CMOS technology is at an early stage, so analysing the effect of fabrication process parameter variations on circuit performance is mandatory. In this paper, a first all-CMOS temperature sensor based on an oscillator circuit is proposed. It converts the operating temperature into the oscillation frequency of triangular and square waveform voltages. The circuit is fully compatible with the 4H-SiC 2 µm CMOS technology and its performance is extracted from numerical simulations up to an operating temperature of 573 K. Considering that the oscillation frequency is related to the charge and discharge of an integrated capacitance through the device currents, a temperature-frequency conversion is obtained because those currents have a temperature dependency. The paper is organized as follows: in Section 2, the topology, the operating principle, and the design of the sensor are reported; in Section 3, numerical simulation results are shown, focusing both on the differences from the analytical design due to second-order device phenomena and on the effects of the fabrication process variations; Sections 4 and 5 present, respectively, the layout and the conclusions. The Topology of the Temperature-Sensing Oscillator The circuit of Figure 1 is the proposed sensor and is based on an oscillator whose frequency is uniquely related to the temperature. In the following, we describe the individual sub-circuits and the design of the circuit. The circuit is an astable multivibrator composed of a CMOS Schmitt trigger, three CMOS inverters, an n-type MOS capacitor, and a common-drain NMOSFET amplifier. The circuit generates a square waveform voltage, OUT3, thanks to the CMOS inverter INV3, which is appropriately sized to drive the load capacitance C2, and a triangular waveform voltage at OUT1. The waveforms of OUT1 and OUT3 are related to each other through the integrator composed of INV2 and the NMOS capacitor. The sensing circuit is biased with a positive voltage, V_DD, and a negative voltage, V_S, which is needed to bias the Class-A amplifier.
CMOS Voltage Schmitt Trigger The 4H-SiC CMOS voltage Schmitt trigger schematic and transcharacteristic are reported, respectively, in Figure 2a,b [17,18]. It has three pairs of CMOS transistors, M_N1-3 and M_P1-3, and the input and output signals are, respectively, IN and OUT0. Until V_IN is lower than V_TH,N1, the threshold voltage of the NMOSFET M_N1, the stacked PMOS transistors M_P1 and M_P2 are in the linear operation region and the source follower M_N3 is in the saturation operation mode, whereas M_P3, M_N1, and M_N2 are turned off. When V_IN exceeds V_TH,N1, M_N1 turns on and is biased in the saturation region. When V_IN becomes higher than the source potential of M_N2 plus V_TH,N2, the transistor M_N2 turns on, so that the positive feedback loop composed of M_N2 and M_N3 quickly drops the OUT0 voltage to ground [19]. Meanwhile, the source follower M_P3 turns on and brings the source potential of M_P2 to a low value, so that the latter turns off. The high threshold voltage, V_T+, can be calculated as in (1) [17], where K_N = µ_N C_OX and K_P = µ_P C_OX are, respectively, the transconductance coefficients of the NMOSFET and PMOSFET, µ_N(P) is the carrier channel mobility, C_OX is the gate oxide capacitance, and W_i and L_i are, respectively, the width and the length of the i-th transistor. Moreover, at V_IN = V_T+ and before the positive feedback loop intervenes, it is reasonable to assume that V_OUT0 = V_DD and that relation (2) holds [19]. In the reverse direction, i.e., when V_IN decreases from V_DD to ground, the symmetrical process takes place. At V_IN = V_DD, the transistors M_N1, M_N2, and M_P3 conduct, while M_N3, M_P1, and M_P2 are turned off. Then, at V_IN ≤ V_DD − |V_TH,P|, the transistor M_P1 conducts, and when V_IN becomes lower than V_DD − V_SD,P1 − |V_TH,P2|, the transistor M_P2 turns on and the positive feedback loop, composed of M_P2 and M_P3, closes and brings the output voltage to V_DD. Simultaneously, the source follower M_N3 turns on and increases the source potential of M_N2, inducing its turn-off. The low threshold voltage, V_T−, can be calculated with Equation (3) [17]. Moreover, at V_IN = V_T− and before the positive feedback loop intervenes, it is reasonable to assume that V_OUT0 = 0 and that relation (4) holds [19]. By defining the values of V_DD, V_T+, and V_T−, one can design the channel sizes of M_N1 and M_N3 from (2), and of M_P1 and M_P3 from (4); M_N2 and M_P2 are obtained, respectively, from (1) and (3). In the proposed circuit of Figure 1, OUT0 is followed by a CMOS inverter, INV1, composed of M_N4-M_P4, in order to invert the transcharacteristic and make it compatible with the second part of the circuit. Moreover, INV1 drives the loads connected to the Schmitt trigger so that the dynamic behavior is preserved, as shown in Sections 2.2-2.4. Integrator and Output Stages The NMOS capacitor, M_N,CAP, of Figure 1 is an NMOS-based capacitor; it has been preferred to a MIM capacitor because of the lack of a Verilog-A model of the latter in our current 4H-SiC CMOS technology. Its gate is controlled by the output of the Schmitt trigger and is driven by the CMOS inverter INV2, composed of M_N5-M_P5, with a constant current, so that a triangular waveform appears at the input of the Schmitt trigger, i.e., V_OUT1.
The ramp time is defined by the charging and discharging of M_N,CAP: when V_G,NCAP increases toward V_DD, starting from V_T−, the time constant is τ_P = R_P C_L and the charging stops at V_G,NCAP = V_T+, because the Schmitt trigger changes state; when V_G,NCAP decreases toward ground, starting from V_T+, the time constant is τ_N = R_N C_L and the discharging stops at V_G,NCAP = V_T−. The capacitance C_L is the total load capacitance at the OUT1 terminal, and the equivalent resistances of the time constants are evaluated as in [20]. Moreover, to avoid frequency and waveform distortions, we added two output stages needed to charge the probe capacitances C_1 and C_2, which are of the order of tens of pF. Observing the circuit of Figure 1, one is a Class-A amplifier based on the NMOS M_N7 for the triangular waveform output, V_OUT2, and the other is a CMOS inverter, INV3, composed of M_N6-M_P6, for the square waveform output, V_OUT3. V_OUT2 is the same waveform as V_OUT1 with a level-shift voltage of V_GS,N7; hence, the maximum and minimum voltages of V_OUT2 are V_T+ − V+_GS7 and V_T− − V−_GS7, respectively, where V+_GS7 and V−_GS7 are the values of V_GS,N7 when V_G7 is equal, respectively, to V_T+ and V_T−. Moreover, as shown in Figure 1, the Class-A amplifier is biased by a resistor, R_S, and a bias voltage, V_S, which are externally applied to the OUT2 terminal. Evaluation of the Oscillation Frequency The oscillation frequency, f_OSC, can be calculated considering the charge and discharge time of M_N,CAP, t_CAP, together with the propagation delays of the Schmitt trigger and of INV1; one thus obtains relation (6), where t_pHL(LH),trigger and t_pHL(LH),INV1 are the high-to-low (low-to-high) propagation delays of the Schmitt trigger and of INV1, respectively, and t_CAP is the mean of the charge and discharge times of M_N,CAP. The latter is constrained by the design relation that makes the charge and discharge times of M_N,CAP equal, which yields a triangular waveform for V_OUT1 as well as for V_OUT2 and a duty cycle of 50% for the square waveform of V_OUT3. To calculate f_OSC, the single terms of (6) are evaluated following [20]; the relevant capacitances are expressed in terms of C_G, C_GD, and C_DB, which are, respectively, the gate, the gate-drain, and the drain-body capacitance. The design can be simplified under the following conditions: • for C_N,CAP much higher than the other capacitances, one has C_L ≈ C_N,CAP; • for t_CAP much higher than the propagation delays, one obtains f_OSC ≈ 0.5 t_CAP⁻¹. Regarding the OUT3 signal, INV3 loads the probe capacitance C_2 and its propagation delay is evaluated by means of Equation (12). Therefore, assuming that all the channel lengths are equal to the minimum one, and that the propagation delays are negligible, i.e., t_pHL(LH) = 10⁻³ t_CAP, the design is completed as follows: • fixing t_CAP, one obtains W_P5 from (8a) and W_N5 from (8b); • fixing t_p,INV3 and C_2, the channel widths of INV3 are calculated from (12); • fixing t_p,INV1, the channel widths of INV1 are calculated from (10); • fixing t_p,trigger, (9) completes the set of equations used, together with (1)-(4), to design the Schmitt trigger. Finally, because the M_N7 Class-A output stage loads the probe capacitance C_1, in order for the triangular waveform to be undistorted its current has to respect relation (13) (see Figure 1). Also, considering that M_N7 is in the saturation operation mode and for a selected value of V_S, it is possible to write relation (14). Hence, the values of W_N7, L_N7, and R_S can be found from (13)-(14).
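A minimal sketch of the charge-and-discharge picture above, assuming a simple RC model of the load capacitance swinging between the two trigger thresholds and neglecting the propagation delays, is given below; it is not a reproduction of the paper's design equations, and all numerical values are placeholders rather than the designed ones.

```python
import math

def oscillation_frequency(r_p: float, r_n: float, c_l: float,
                          v_dd: float, v_t_plus: float, v_t_minus: float) -> float:
    """Rough estimate of f_OSC for an astable loop in which a capacitance C_L is
    charged toward V_DD through R_P until V_T+ and discharged toward ground
    through R_N until V_T-, neglecting trigger and inverter delays
    (f_OSC ~ 0.5 / t_CAP when t_CAP dominates)."""
    t_charge = r_p * c_l * math.log((v_dd - v_t_minus) / (v_dd - v_t_plus))
    t_discharge = r_n * c_l * math.log(v_t_plus / v_t_minus)
    return 1.0 / (t_charge + t_discharge)

# Placeholder values only; the actual design values are those of Tables 1-3.
print(oscillation_frequency(r_p=50e3, r_n=50e3, c_l=100e-12,
                            v_dd=20.0, v_t_plus=10.0, v_t_minus=5.0))
```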
Design of the Circuit Equations (1) to (14) are used to design our circuit with the project specifications of Table 1. The values come from a trade-off between the expected performance of the 4H-SiC CMOS technology, the compatibility of the sensor with existing electronics, and the occupied wafer area. The device physical parameters used in (1)-(14) are the threshold voltages of the MOSFETs, V_TH,N = 5.8 V and V_TH,P = −8 V, the channel mobilities, µ_N = 17.14 cm² V⁻¹ s⁻¹ and µ_P = 3.52 cm² V⁻¹ s⁻¹, and the oxide capacitance, C_OX = 62.78 nF cm⁻². The device physical parameters, characteristics, and extraction procedure are reported in Appendix A. By using the results of the previous subsections, the design of the proposed circuit is reported in Table 2. The proposed procedure is based on an analytical approach and provides a first-order design of the circuit, which can be used as a starting point for the numerical simulations. Indeed, a tuning procedure is required to achieve the project specifications, as shown in the following section, due to second-order effects of the transistors, such as 4H-SiC/SiO2 interface defects or fabrication process non-uniformity. Numerical Simulation Results and Process Variability Once the design has been completed, numerical simulations have been performed in the Cadence Virtuoso environment [21] using a Verilog-A BSIM MOSFET model [22] whose parameters are appropriately tuned to fit the experimental curves of the 4H-SiC MOSFETs in the temperature range from 298 K to 573 K. The model has been developed by Fraunhofer IISB, and a more detailed description of the 4H-SiC MOSFETs is reported in Appendix A. The numerical results show some inconsistencies with the project specifications; for example, the oscillation frequency is 85.94 kHz instead of 90 kHz, and the trigger threshold voltages are V_T+ = 8.14 V and V_T− = 4.4 V compared to 10 V and 5 V, respectively, as reported in Table 1. Keeping the same L_N and L_P of Table 2 but varying W_N and W_P, we obtained the design reported in Table 3, and the circuit results show f_OSC = 93.8 kHz, V_T+ = 10.66 V, and V_T− = 5.37 V.
Indeed, in Figure 3, the comparison between the two designs shows how the asymmetry of the trans-characteristic obtained with the analytical design approach disappears in the one resulting from the tuning of the transistor sizes. Such differences can be explained by second-order effects that are neglected in the simplified current model used to derive (1)-(14): for example, the saturation of the carrier velocity, the channel length modulation, the body bias effect, and the effects of defects at the 4H-SiC/SiO2 interface on the MOSFET electrical behavior [23]. For example, observing Figure A3b, the high density of interface defects modifies the dependency of the NMOSFET channel mobility on V_GS compared to the typical shape in Silicon technology: in Silicon technology we expect a step-like curve at V_GS ≈ V_TH,N, followed by a slight decay of µ_N for V_GS > V_TH,N [24]; in 4H-SiC technology, instead, the mobility increases continuously with V_GS and remains constant for V_GS > 15 V. As our temperature sensor is an oscillator, the total harmonic distortions (THDs) of V_OUT2 and V_OUT3 have been evaluated for the design of Table 3. Indeed, our sensor can be followed by a read-out circuit, such as a frequency counter or a microcontroller, whose aim is to measure the frequency, and such a measurement is more accurate the less distorted the waveform is. Figure 4 shows the triangular and square waveforms at T = 298 K, with THDs of 9.89% and 33.18%, respectively, which are slightly lower than those of the pure symmetric waveforms, i.e., THD = 12.1% for the triangular and THD = 48.3% for the square waveform [25]. Moreover, an estimation of the total average power dissipation gives a value of 2.45 mW at 298 K, and the bias-frequency sensitivity is 9.28 kHz/V, as shown in Figure 5. In the following, the effects of the temperature and of the fabrication process variations on the oscillation frequency are analyzed for the design reported in Table 3. Oscillation Frequency Dependency on the Temperature The dependency of the oscillation frequency on the temperature, f_OSC-T, can be derived beginning with relation (15), in which it is assumed that t_CAP >> (t_p,INV1 + t_p,trigger) in (6). The temperature dependence of the MOSFETs is described through the channel mobilities and the threshold voltages, which are explicitly reported in Appendix A.
However, for a first analysis of (15), only the channel mobility is considered, because V_TH,N and V_TH,P (i) appear in V_T− and V_T+, which are divided by V_DD and are also the arguments of a logarithm function, and (ii) appear in R_P5 and R_N5, which are divided by V_DD. Hence, one obtains f_OSC(T) ≈ A (T/T_0)^α (16), where T_0 is a reference temperature and A includes all the parameters related to the fabrication process and is, to a first approximation, independent of the temperature. In Figure 6a, the resulting f_OSC-T curve at V_DD = 20 V is reported, and the best fitting of (16) gives A = 98.66 kHz and α = 1.22, with an R² = 0.9656. We also investigated the effect of V_DD on the linearity of the f_OSC-T curve, varying it from 12.5 V to 20 V, and an improvement is obtained at V_DD = 12.5 V, with an R² = 0.9992, as shown in Figure 6b. The f_OSC-T curve at V_DD = 12.5 V is reported in Figure 6a and compared with the model having A = 23.88 kHz and α = 1.883, where a better fit is evident; however, as expected from (16), there is a reduction of f_OSC from 93.8 kHz to 23.88 kHz, evaluated at T = 298 K. For completeness, in Figure 6a we also report the f_OSC-T characteristic for a bias voltage of 15 V, obtaining R² = 0.9928, A = 45.92 kHz, and α = 1.64. Effects of Process Parameter Variations Fabrication process variations are expected and, for example, in 4H-SiC CMOS technology, they can be related to variations in the activation process of the aluminum p-type doping atoms [26], in the uniformity of the doping concentration and of the oxide thickness, or in the quality of the contact resistance of the doped regions [27]. All these fabrication process variations can be modeled, in a first analysis, as variations in the channel mobility and in the threshold voltage, and then used for a Monte Carlo analysis to assess the sensitivity of the circuit. In particular, the analysis is a 1000-point process Monte Carlo and consists of evaluating the f_OSC-T curves considering a Gaussian distribution for µ_N(P) and V_TH,N(P), defined by a standard deviation (σ) and a mean value (µ) through the distribution p(x) = exp(−(x − µ)²/(2σ²))/(σ√(2π)) (17). During the analysis, we varied the supply voltage from 12.5 V to 20 V and the ratio σ/µ of the Gaussian distribution from ±10% to ±20%, either for both parameters or for each one singularly. To compare the cases, we use the oscillation frequency variation, f_OSC,var, defined as the relative deviation of f_OSC from its nominal value, f_OSC,var = |f_OSC − f_OSC,nom|/f_OSC,nom (18). In Figure 7a-c, the f_OSC-T curves for σ/µ = ±0.1 and at different values of V_DD are reported. Although the curve of Figure 7a for the case V_DD = 12.5 V shows a better linearity, a higher f_OSC,var appears: indeed, it is almost 23%, and in Figure 7d the f_OSC,var-T curve is shown. Instead, observing Figure 7c for V_DD = 20 V, a maximum f_OSC,var of 8% is achieved, but a worse linearity is obtained, as shown in Figure 6. Hence, a supply voltage of 15 V allows a good trade-off between the process parameter variation and the linearity, as is clearly shown in Figure 7b,d. To understand the effect of each single device parameter on the performance of the circuit, we performed a Monte Carlo analysis by singularly varying either V_TH,N(P) or µ_N(P). In Figure 8, a σ/µ of ±10% for V_TH,N(P), with µ_N(P) held constant at the nominal value reported in Table 1, shows a stronger variation in the oscillation frequency as a function of temperature than the case of a σ/µ of ±10% for µ_N(P) with V_TH,N(P) held constant, as reported in Figure 9.
Indeed, observing Figure 8d, the maximum f_OSC,var values related to the variations in V_TH,N(P) are 22.64%, 11.76%, and 7.24% for V_DD equal to 12.5 V, 15 V, and 20 V, respectively, whereas for the variations in µ_N(P) they are around 5.8% for all bias conditions (see Figure 9d). This is also confirmed by the fact that the maximum variation for the cases of Figure 7 is very similar to that of Figure 8; the values are reported in Table 4 for ease of reading. However, the best trade-off in terms of linearity and maximum variation is still at V_DD = 15 V for both cases. To stress the effect of process variations on the circuit performance, we increased σ/µ to ±15% and ±20% for V_DD = 20 V, this being the bias condition defined in the design specification (see Table 1). In Figure 10, the results for a σ/µ applied to both parameters indicate the expected increase in the divergence from the nominal value and, observing Figure 10c, the percentage variation of f_OSC from the nominal value decreases with temperature, but it has a peak at around T = 400 K and a further increase above 500 K. To understand this behavior, we separately analyzed the σ/µ of the parameters, and the results are reported in Figure 11 and Figure 12, respectively, for µ_N(P) and V_TH,N(P). It is clear that the variation reduces with increasing temperature for the µ_N(P) case, whereas the V_TH,N(P) case is almost constant, except for a maximum at T = 400 K, which is related to that of Figure 10c. Moreover, both cases show an increase for temperatures higher than 500 K. In Table 5, we summarize the maximum variation of f_OSC, and the greater effect of the process variations for the V_TH,N(P) case compared to the µ_N(P) case is clear. Considering that the best trade-off between linearity and circuit specification is at V_DD = 15 V, we also performed a Monte Carlo analysis for this case by increasing σ/µ, and the results are reported in Figure 13. Observing the curves, the 15 V case shows an increased variation compared with V_DD = 20 V, which is more evident for T > 400 K. This can be summarized through the f_OSC,var-T curve of Figure 13c, with a maximum variation of 27.11% at 425 K for σ/µ = 0.2; the deviation instead reduces in the high-temperature range, contrary to the 20 V case. Moreover, the separation of the effects of the process variation of µ_N(P) and V_TH,N(P) has been analyzed and is shown, respectively, in Figure 14 and Figure 15: the dominant effect of V_TH,N(P) with respect to µ_N(P) is evident, being at least twice as large, and it defines the behavior of Figure 13c. It is interesting to note that the increase of f_OSC,var at T > 500 K observed for V_DD = 20 V disappears in the case of V_DD = 15 V. In Table 5, we also report the maximum values of f_OSC,var at V_DD = 15 V; higher values than in the 20 V case are found, so that, although the 15 V case has a higher linearity, it is more strongly affected by process variations.
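A Monte Carlo study of this kind can be sketched as follows, assuming the power-law frequency-temperature model introduced above and treating the effect of (µ, V_TH) variations on frequency as a simple multiplicative perturbation; this is only an illustration of the procedure, not the BSIM4SiC-based simulation actually used in the paper, and the sensitivity coefficient is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_osc_model(temp_k: np.ndarray, a_khz: float = 98.66, alpha: float = 1.22,
                t_ref: float = 298.0) -> np.ndarray:
    """Assumed power-law model f_OSC ~ A (T / T_ref)^alpha (illustrative parameters)."""
    return a_khz * (temp_k / t_ref) ** alpha

temps = np.linspace(298.0, 573.0, 12)
f_nominal = f_osc_model(temps)

n_samples, sigma_over_mu = 1000, 0.10
# Toy sensitivity assumption: frequency scales with the mobility draw and is
# reduced by a fractional threshold-voltage increase (0.5 is a placeholder).
mu_draws = rng.normal(1.0, sigma_over_mu, n_samples)
vth_draws = rng.normal(1.0, sigma_over_mu, n_samples)
f_samples = f_nominal[None, :] * mu_draws[:, None] * (1.0 - 0.5 * (vth_draws[:, None] - 1.0))

f_var = np.abs(f_samples - f_nominal) / f_nominal        # Eq. (18)-style relative deviation
print("max f_OSC,var over all samples and temperatures: %.1f %%" % (100 * f_var.max()))
```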
Results after Sensor Calibrations If, on the one hand, the bias voltage of V_DD = 20 V reduces the dependency of the oscillation frequency on the fabrication process, on the other hand, the f_OSC-T characteristic has poor linearity. To overcome this, (16) suggests that it is possible to apply the one-point calibration procedure [16], in which the f_OSC-T curve is normalized by a value f_OSC,1point selected at a fixed temperature, T_1point, i.e., f_OSC,norm(T) = f_OSC(T)/f_OSC(T_1point) (19). In this way, the negative effect of the process variation can be partially eliminated. Indeed, we applied it to the case of σ/µ = ±0.1, both for V_TH,N(P) and for µ_N(P), and in Figure 16 the results are shown for different bias conditions and values of T_1point. For V_DD = 12.5 V (Figure 16a), a better correction is obtained for T_1point = 473 K compared to the room-temperature case, as well as for V_DD = 15 V (Figure 16b), where T_1point = 423 K; instead, for V_DD = 20 V, T_1point = 298 K can be used (see Figure 16c). This last result makes the bias supply of V_DD = 20 V advantageous, because an easier calibration procedure is applicable. Indeed, in terms of calibration easiness, the normalization at T_1point = 298 K is of great advantage, but in the case of V_DD = 12.5 V it shows a maximum variation of almost 8.65% from the nominal f_OSC,norm; the same case evaluated at T_1point = 423 K has a maximum variation of 5.93%. Finally, once the calibration has been done, one can extract the temperature from (16) using the measured frequency as the input variable. Hence, for the case of V_DD = 15 V, one can use the parameters A = 1.02 and α = 1.64 and the calibration at T_1point = 298 K reported in Figure 16b. In these conditions, the error between the extracted and the effective temperature, T_ERR, is shown in Figure 17a, and its maximum absolute value is 8.8 K across the whole temperature range between 298 K and 573 K. Furthermore, process variations of σ/µ = ±0.1, applied singularly either to V_TH,N(P) or to µ_N(P), give errors of 8.8 K and 6.58 K, respectively. For the 20 V case of Figure 17c, a maximum T_ERR of −11.25 K has been found, both for the nominal case and for a process variation of σ/µ = ±0.1. The error T_ERR can be reduced if a third-order curve is used as the model [16]; after a similar analysis, we obtained the results reported in Figure 17b for V_DD = 15 V and in Figure 17d for V_DD = 20 V, where T_ERR reduces, respectively, to −5 K/+4.78 K and −8.15 K/+4.49 K. Comparisons with the State-of-the-Art An exhaustive comparison between the state-of-the-art and our proposal is difficult due to the limited availability of 4H-SiC temperature-frequency converter sensors. However, in Table 6 we report sensors based on similar operating principles and on other technologies in order to understand how our proposal improves the state-of-the-art. The circuit of [16] is a Silicon CMOS circuit based on delay-locked loops and it shows an error between +4 K and −4 K, which is slightly lower than ours, but in a narrower temperature range, i.e., [273.15, 373.15] K.
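Returning to the one-point calibration described in the previous subsection: under the power-law model assumed above, the calibration and the subsequent temperature read-out can be sketched as follows. The exponent value is the one quoted for V_DD = 15 V, while the measured frequencies are placeholders; this is a sketch under those assumptions, not the authors' procedure.

```python
def calibrate_one_point(f_measured_at_t1: float, t1_kelvin: float) -> dict:
    """Store the single calibration point (T_1point, f_OSC(T_1point))."""
    return {"f1": f_measured_at_t1, "t1": t1_kelvin}

def temperature_from_frequency(f_measured: float, cal: dict, alpha: float = 1.64) -> float:
    """Invert f_norm = (T / T_1point)^alpha, i.e. T = T_1point * (f / f1)^(1/alpha).
    alpha = 1.64 is the exponent reported for V_DD = 15 V; other biases differ."""
    return cal["t1"] * (f_measured / cal["f1"]) ** (1.0 / alpha)

cal = calibrate_one_point(f_measured_at_t1=45.9e3, t1_kelvin=298.0)   # placeholder frequency
print(temperature_from_frequency(f_measured=2.0 * 45.9e3, cal=cal))   # ~455 K under this model
```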
As an example, from a comparison with [28], our proposal has a greater temperature error, i.e., around 10% greater, but it can operate over a temperature range that is wider by as much as 200 K, thanks to the higher performance of 4H-SiC technology compared to the Silicon one. The proposals of [29,30], based on a 65 nm Si CMOS technology, use a smaller area than ours; in particular, [29] has 83% lower power dissipation and 66% lower temperature error, whereas [30] has, respectively, 44% and 83% lower values compared to ours. The lower dynamic power can be justified by the reduced channel length; the error is lower both because the temperature variations of the Si transistor parameters are reduced, thanks to the better Si/SiO2 interface quality, and because the temperature range is only 100 K, compared to 275 K for our sensor. To stress the comparison, we restricted the temperature range to between 298 K and 443 K, extracting a T_ERR of −3.15 K/+4.58 K, which reduces to −3 K/+3.84 K if a third-order model is used. In Table 6, we also report time-based temperature sensors fabricated in Silicon-On-Insulator technology. Ref. [31] is a Fully-Depleted SOI 28 nm CMOS sensor with very low T_ERR as well as power consumption and, similarly, the SOI 32 nm CMOS temperature sensor of [32] has T_ERR = ±1.95 K. However, in both cases, although SOI-CMOS is a much more mature technology than 4H-SiC CMOS, the temperature range extends only up to 385 K, which is 188 K lower than in our proposal. Layout The design results of Table 3 are used to draw the final layout of the circuit, reported in Figure 18, with the Cadence Virtuoso 6.1.8 Layout software. The 4H-SiC 2 µm CMOS process has 14 masks and two metal layers, and the active area of the sensor of Figure 1 is 0.163 mm². Conclusions In this paper, the design of a 4H-SiC CMOS temperature sensor based on an analog oscillator has been presented, together with an analysis of its performance in terms of fabrication process variations and bias voltage. Unlike in Silicon technology, our circuit shows a reduction in the propagation delay with increasing temperature, because the channel mobility improves at high temperature and, consequently, the frequency increases. The relation between the oscillation frequency and the temperature is almost linear, with R² = 0.9992 at V_DD = 12.5 V, while the smallest influence of the fabrication process variation, i.e., 17.4% for σ/µ = ±20%, is obtained at V_DD = 20 V, with R² = 0.9681. Performing a one-point calibration, we obtain temperature errors of +8.8 K and −5.8 K at V_DD = 15 V. The use of a simple model for the MOSFET current is useful for a first-order approximation of the circuit design, but the effects of the high defect density at the SiO2/4H-SiC interface should be considered for a better description of the circuit. This means that a more accurate model has to be developed, with particular attention to the temperature dependency of µ_N(P). Appendix A The curves of Figure A1 at |V_DS| = 100 mV are used to extract both the threshold voltage, V_TH, and the channel mobility, µ_N(P). By using the second derivative method [33], we extracted the threshold voltages reported in Table A1, whereas the dependency of the channel mobility on V_GS is obtained from the corresponding extraction equation; the resulting curves are reported in Figure A3a,b for the PMOSFET and NMOSFET, respectively. Their mean value is the channel mobility reported in Table A1, which is used to design the proposed circuit in Section 2.
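The second-derivative threshold-voltage extraction mentioned above can be illustrated with a short sketch: V_TH is taken as the gate voltage at which the second derivative of the low-V_DS transfer characteristic I_D(V_GS) peaks. This is a generic illustration of the method in [33], not the authors' extraction code, and the data below are synthetic.

```python
import numpy as np

def vth_second_derivative(v_gs: np.ndarray, i_d: np.ndarray) -> float:
    """Threshold voltage as the V_GS at which d^2(I_D)/d(V_GS)^2 is maximal
    (transfer curve measured at low |V_DS|, e.g. 100 mV)."""
    d2 = np.gradient(np.gradient(i_d, v_gs), v_gs)
    return float(v_gs[np.argmax(d2)])

# Synthetic NMOS-like transfer curve for demonstration only.
v = np.linspace(0.0, 20.0, 400)
i = np.where(v > 5.8, 1e-6 * (v - 5.8) ** 2, 0.0)   # crude square-law with V_TH = 5.8 V
print(vth_second_derivative(v, i))                   # prints a value close to 5.8 V
```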
In Figure A3b, the unusual behavior of the NMOSFET channel mobility as a function of V_GS is shown; it has been described in Section 3 in order to justify the inconsistency between the expected specifications and the numerical simulation results of the design in Table 2. Indeed, only the PMOSFET channel mobility (see Figure A3a) shows the typical step-like behavior and a decrease with increasing V_GS; µ_N, instead, increases continuously until it reaches a maximum at around V_GS = 15 V. The temperature effect on the transistor currents is reported in Figure A4 for |V_DS| = 20 V, and it is clear that the current increases with temperature for both the PMOSFET and the NMOSFET. Such behavior is due to a combination of phenomena: on one hand, the channel mobility increases up to 400 K and then decreases, as reported in [34] for a 4H-SiC lateral NMOS with a bulk doping of N_A = 5 × 10^17 cm⁻³; on the other hand, the threshold voltage continuously decreases with temperature [35]. In order to take such behavior into account, the temperature dependency of V_TH can be modeled with the linear function V_TH(T) = V_TH(T_0) + β (T − T_0) [24], where β is a negative constant that we neglect in our analysis to a first approximation, because it is of the order of mV/K; on the other side, the temperature dependency of the channel mobility is modeled as µ(T) = µ(T_0) (T/T_0)^α [24,34], where α is a positive fitting parameter higher than 1. It is worth noting that the high density of defects at the 4H-SiC/SiO2 interface is the reason for the improvement of the MOSFETs' performance with increasing temperature [36], whereas 4H-SiC bipolar devices show worse behavior under high-temperature operation; in particular, a thermal runaway of the current is observed [37,38], limiting their applications under harsh environmental conditions. The numerical simulations performed in Cadence Virtuoso are based on the compact BSIM4SiC model [22], appropriately developed by Fraunhofer IISB for IISB's 2 µm 4H-SiC CMOS technology. It is an extension of the widely used BSIM4 compact model, in which the typical physical phenomena of 4H-SiC lateral n-channel and p-channel MOSFETs are considered. Indeed, the lower quality of the 4H-SiC/SiO2 interface induces a high density of interface trapped charges, so that the transfer and output characteristics differ significantly from those of Silicon MOSFETs.
The effects include mobility degradation, a lower subthreshold slope, soft saturation, and an increase in the flat-band and threshold voltages [22]. For an accurate description of IISB's 2 µm 4H-SiC CMOS technology, the parameters of the BSIM4SiC model have been determined through a parameter extraction procedure based on measurement data of n- and p-type transistors. The developed compact model fits the experimental curves quite well, as shown in Figures A1 and A2, with a higher accuracy for the NMOSFET case than for the PMOSFET case, especially for the output characteristics displayed in Figure A2a. The modeling of the 4H-SiC PMOSFET is generally more challenging, due to the more limited knowledge about the defects at the 4H-SiC/SiO2 interface in the n-type body region with respect to the 4H-SiC NMOSFET, which has received much more attention thanks to power electronics applications. The model also includes channel geometry scaling, which makes the parameter extraction and the achievement of satisfactory results even more complicated across all of the considered geometries. Moreover, Figure A4 shows its validity over a temperature range from 298 K to 573 K.
Figure 1. The topology of the proposed temperature sensor, based on an oscillator circuit.
Figure 3. Numerically simulated trans-characteristics of the Schmitt trigger from the analytical and tuned designs at T = 298 K.
Figure 4. Waveform signals of (a) V_OUT2 and (b) V_OUT3 at T = 298 K.
Figure 5. Oscillation frequency, f_OSC, dependency on the power supply voltage, V_DD, at T = 298 K, showing a sensitivity of 9.28 kHz/V.
Figure 6. (a) Comparison of the f_OSC-T curves between numerical simulation results obtained from the Verilog-A BSIM MOSFET model and the analytical model of (16) at V_DD = 12.5 V and 20 V. (b) R²-V_DD curve obtained from the f_OSC-T curves.
Figure 7. Results of a 1000-point process Monte Carlo analysis for (a) V_DD = 12.5 V, (b) V_DD = 15 V, and (c) V_DD = 20 V, with σ/µ = ±0.1 for V_TH,N(P) and µ_N(P); (d) f_OSC,var as a function of temperature for the bias values of (a-c).
Figure 8. As Figure 7, with σ/µ = ±0.1 for V_TH,N(P) only, µ_N(P) held at the nominal value of Table 1.
Figure 9. As Figure 7, with σ/µ = ±0.1 for µ_N(P) only, V_TH,N(P) held at the nominal value of Table 1.
Figure 10. Results of a 1000-point process Monte Carlo analysis for (a) σ/µ = ±0.15 and (b) σ/µ = ±0.2 for µ_N(P) and V_TH,N(P) at V_DD = 20 V; (c) f_OSC,var as a function of temperature for the different values of σ.
Figure 11. As Figure 10, for µ_N(P) only, with V_TH,N(P) at its nominal value (V_DD = 20 V).
Figure 12. As Figure 10, for V_TH,N(P) only, with µ_N(P) at its nominal value (V_DD = 20 V).
Figure 13. As Figure 10, for µ_N(P) and V_TH,N(P) at V_DD = 15 V.
Figure 14. As Figure 13, for µ_N(P) only, with V_TH,N(P) at its nominal value.
Figure 15. As Figure 13, for V_TH,N(P) only, with µ_N(P) at its nominal value.
Figure 17. Temperature error as a function of temperature for the f_OSC,norm of Figure 16 with σ/µ = ±0.1 for V_TH,N(P) and µ_N(P); the error is with respect to (16) at V_DD of (a) 15 V and (c) 20 V, and to a third-order curve model at V_DD of (b) 15 V and (d) 20 V.
Figure 18. Layout of the proposed sensor.
Figure A1. Comparison between the experimental and Verilog-A numerical model transcharacteristics of lateral 4H-SiC (a) PMOSFET with W_P/L_P = 100 µm/10 µm and (b) NMOSFET with W_N/L_N = 10 µm/10 µm; all curves at room temperature, |V_DS| from 0.1 V to 20 V, and |V_BS| = 0 V.
Table 2. Results of the analytical design of the proposed circuit using (1)-(14).
Table 3. Results of the design after tuning using numerical simulations; the parameters changed with respect to Table 2 are highlighted in red bold.
Table 5. Maximum value of f_OSC,var for different σ/µ at V_DD = 20 V, for the V_TH,N(P)-only, µ_N(P)-only, and combined V_TH,N(P)-µ_N(P) cases.
Table 6. Comparison between our proposal and the state-of-the-art.
10,028.2
2023-12-01T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Very Low Energy Collision Induced Vibrational Relaxation: An Overview Recent experimental and theoretical studies of very low energy collision induced vibrational relaxation in diatomic and polyatomic molecules are surveyed. Emphasis is placed on the novel features of the very low energy process; these require a full quantum mechanical treatment of the collision to account for the observations. INTRODUCTION Although collision induced vibrational relaxation has been studied for many years, it has only recently been discovered that very low energy collisions can lead to very efficient energy transfer. Conventional theories, such as that due to Schwartz, Slawsky and Herzfeld [2], predict that the vibrational relaxation cross section decreases as the collision energy decreases, and is vanishingly small for collision energies near 1 cm⁻¹ [5,6]. This paper presents an overview of the extant experiments and theoretical studies of very low energy collision induced relaxation phenomena. We shall show that both qualitative and quantitative interpretations of the observed behavior follow from a full quantum mechanical treatment of the atom-molecule collision. The most direct way of studying atom-molecule collision induced vibrational relaxation employs crossed, velocity selected, molecular beams. The conditions employed to generate molecular beams make it difficult to study the very low collision energy regime; to date only one such study has been reported [11]. The overwhelming majority of atom-molecule collision induced vibrational relaxation studies, irrespective of the collision energy range of interest, are carried out under "bulb conditions." In these experiments the average collision energy is varied by changing the temperature, and the lowest temperature that can be achieved is limited by a thermodynamic constraint, namely, condensation of one or both of the species that define the collision. Rice and co-workers have devised a method for bypassing the thermodynamic constraint characteristic of bulb studies, thereby permitting study of the collision energy dependence of vibrational relaxation in the very low collision energy regime [3]. Their method takes advantage of the characteristics of supersonic free jets. FIGURE 1. Number of collisions per molecule per second in a free jet as a function of distance from the nozzle, measured in nozzle diameters. In a supersonic jet, generated by adiabatic expansion through a nozzle, the translational temperature and density of the gas vary with distance from the nozzle. The residual collisions characteristic of the temperature at any given distance from the nozzle can be used to induce vibrational relaxation in an excited seeded molecular species. Thus, by changing the downstream location of the excitation region and dispersing the fluorescence, it is possible to probe the effect of collision energy on vibrational relaxation. Calculations based on the kinetic theory of the jet show that the collision energy range that can be studied is of the order of 100-1 cm⁻¹ for a gas originally at 300 K, and it can be extended upwards by heating the nozzle. Furthermore, the range of collision energies selected is very narrow when the local temperature is low (Figure 1). Finally, by adjusting the carrier gas pressure it can be arranged that there is one collision or fewer per lifetime, or, if so desired, several collisions per lifetime of the excited molecule with carrier gas molecules. A few details concerning the properties of free supersonic jets are pertinent to our discussion. Under reversible adiabatic flow conditions the supersonic expansion is isentropic and, for an ideal gas, T/T_0 = (n/n_0)^(γ−1) (1), where n is the gas density, γ = C_p/C_v is the ratio of heat capacities, and the subscript zero refers to conditions in the source. If the expansion is considered to originate from a source sphere of radius r_s, the temperature of the gas along the jet centerline is, for Mach number M >> 1, T/T_0 = [2/(γ+1)] [(γ−1)/(γ+1)]^((γ−1)/2) (x/r_s)^(−2(γ−1)) (2), where x is the axial distance downstream and r_s depends on the nozzle diameter D [13]. Sufficiently far downstream the gas density decreases to the point that the assumption of hydrodynamic flow, made above, ceases to be valid, and the velocity distribution for motion parallel to the jet is characterized by a different temperature (T∥) than that for motion perpendicular to the jet (T⊥). Cattolica, Robben, Talbot and Willis [13] have shown, from an extensive investigation of the domain of validity of hydrodynamic flow in free jets, that as p_0 D increases the ellipsoidal velocity distribution becomes nearly spherical (p_0 is the source pressure). Rice and coworkers choose source conditions such that the difference between T∥ and T⊥ is small for all x used in the experiment, so that the velocity distribution function at any position x can be characterized by one temperature, T. Cattolica et al. have shown that r_s/D ≈ 0.742 in this p_0 D regime. Consider some point x along the jet axis, at which the local temperature is T(x). The differential number of collisions per seeded molecule per unit time is expressed in terms of the local density and the velocity distributions of the two species. The carrier gas is labelled 1, the seed species is labelled 2, σ is the seeded molecule-carrier gas molecule collision cross section (assumed constant), v is the relative velocity between carrier gas and seeded species molecules, and u is the bulk flow velocity. After transformation to center-of-mass and relative coordinates, and integration, the total number of collisions per seeded molecule per unit time with relative speed between v_a and v_b is found to be Z_ab = n σ (2/π)^(1/2) (µ/k_B T)^(3/2) ∫_{v_a}^{v_b} dv v³ e^(−µv²/2k_B T) (5), where µ is the reduced mass of the colliding pair. Given the functional forms for n(x) and T(x) implicit in Eqs. (1) and (2), Eq. (5) is conveniently rewritten in the form Z_ab = C ξ^(−(γ+1)) ∫_{η_a}^{η_b} dη η³ e^(−η²) (6), where we have assumed the carrier gas is monatomic with γ = 5/3, so that the prefactor scales as ξ^(−8/3). In (6), C is a constant, ξ = x/D, and η² = µv²/(2k_B T(ξ)) (7). Values of Z_ab for representative collision parameters and source conditions are displayed in Table I. The available data base concerning very low energy collision induced vibrational relaxation consists of studies involving I2(3Π0+u), C6H5NH2(1B2), C6H5CH3(1B2), C6H5F(1B2), C6H6(1B2u) and C2H2O2(1Au). We shall cite particular points from the results of each of these studies. In the experiments by Tusa, Sulkes and Rice [3], I2 was seeded in He, Ne or Ar, a particular vibrational level of I2(3Π0+u) was excited, and the vibrational relaxation to other levels was followed as a function of the relative kinetic energy of the collision partners. Typical results for I2 in He are shown in Figures 2-4. FIGURE 2. Relative intensity of levels 27 and 28 of I2(3Π0+u) as a function of distance from the nozzle; the system was initially excited to level 28. The crosses represent the calculated ratio of intensities under the assumption that all collisions with 0 < E < 3 cm⁻¹ are equally effective. Collision partner: He.
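The kinetic-theory expressions above lend themselves to a simple numerical sketch: the code below evaluates the local jet temperature using the source-sphere scaling of Eq. (2) as written above and the collision rate of Eq. (5) by quadrature. The cross section, density and source conditions are illustrative placeholders rather than the entries of Table I, and the reduced-mass value assumes a heavy seed molecule in a helium carrier.

```python
import numpy as np
from scipy.integrate import quad

K_B = 1.380649e-23  # J/K

def local_temperature(x_over_d: float, t0: float = 300.0, gamma: float = 5.0 / 3.0,
                      rs_over_d: float = 0.742) -> float:
    """Centerline temperature of a free jet in the M >> 1 regime (source-sphere model)."""
    prefactor = (2.0 / (gamma + 1.0)) * ((gamma - 1.0) / (gamma + 1.0)) ** ((gamma - 1.0) / 2.0)
    return t0 * prefactor * (x_over_d / rs_over_d) ** (-2.0 * (gamma - 1.0))

def collision_rate(n1: float, sigma: float, mu: float, temp: float,
                   v_a: float, v_b: float) -> float:
    """Eq. (5): collisions per seeded molecule per second with relative speed in [v_a, v_b]."""
    pref = n1 * sigma * np.sqrt(2.0 / np.pi) * (mu / (K_B * temp)) ** 1.5
    integral, _ = quad(lambda v: v**3 * np.exp(-mu * v**2 / (2.0 * K_B * temp)), v_a, v_b)
    return pref * integral

# Illustrative numbers only: He carrier colliding with a much heavier seed molecule.
mu = 4.0 * 1.66054e-27              # reduced mass ~ He mass for a heavy partner (kg)
temp = local_temperature(x_over_d=20.0)
print(temp, collision_rate(n1=1e22, sigma=4e-19, mu=mu, temp=temp, v_a=0.0, v_b=200.0))
```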
A few details concerning the properties of free supersonic jets are pertinent to our discussion. Under reversible adiabatic flow conditions the supersonic expansion is isentropic and, for an ideal gas,

T/T0 = (n/n0)^(γ-1)   (1)

where n is the gas density, γ = Cp/Cv is the ratio of heat capacities, and the subscript zero refers to conditions in the source. If the expansion is considered to originate from a source sphere of radius r_s, the temperature of the gas along the jet centerline is, for Mach number M >> 1,

T/T0 ≈ [2/(γ+1)] (r_s/x)^(2(γ-1))   (2)

where x is the axial distance downstream and r_s depends on the nozzle diameter D [13]. Sufficiently far downstream the gas density decreases to the point that the assumption of hydrodynamic flow, made above, ceases to be valid, and the velocity distribution for motion parallel to the jet is characterized by a different temperature (T∥) than that for motion perpendicular to the jet (T⊥). Cattolica, Robben, Talbot and Willis [13] have shown, from an extensive investigation of the domain of validity of hydrodynamic flow in free jets, that as p0D increases the ellipsoidal velocity distribution becomes nearly spherical (p0 is the source pressure). Rice and coworkers choose source conditions such that the difference between T∥ and T⊥ is small for all x used in the experiment, so that the velocity distribution function at any position x can be characterized by one temperature, T. Cattolica et al. have shown that r_s/D = 0.742 in this p0D regime. Consider some point x along the jet axis, at which the local temperature is T(x). The differential number of collisions per seeded molecule per unit time is written in terms of the following quantities: the carrier gas is labelled 1, the seed species is labelled 2, σ12 is the seeded molecule-carrier gas molecule collision cross section (assumed constant), v is the relative velocity between carrier gas and seeded species molecules, and u is the bulk flow velocity. After transformation to center-of-mass and relative coordinates, and integration, the total number of collisions per seeded molecule per unit time with relative speed between v_a and v_b is found to be

Z_ab = 4π n σ12 (μ/2πk_B T)^(3/2) ∫ from v_a to v_b dv v^3 exp(-μv^2/2k_B T)   (5)

where μ is the reduced mass of the colliding pair. Given the functional forms for n(x) and T(x) implicit in Eqs. (1) and (2), Eq. (5) is conveniently rewritten (Eq. (6)) in terms of a constant C, the reduced downstream distance ξ = x/D, and the reduced speed variable defined in Eq. (7), where we have assumed the carrier gas is monatomic with γ = 5/3. Values of Z_ab for representative collision parameters and source conditions are displayed in Table I. The available data base concerning very low energy collision induced vibrational relaxation consists of studies involving I2(³Π0u+), C6H5NH2(¹B2), C6H5CH3(¹B2), C6H5F(¹B2), C6H6(¹B2u) and C2H2O2(¹Au). We shall cite particular points from the results of each of these studies. In the experiments by Tusa, Sulkes and Rice, I2 was seeded in He, Ne or Ar, a particular vibrational level of I2(³Π0u+) was excited, and the vibrational relaxation to other levels was followed as a function of the relative kinetic energy of the collision partners [3]. Typical results for I2 in He are shown in Figures 2, 3 and 4. [Figure 2: Relative intensity of levels 27 and 28 of I2(³Π0u+) as a function of distance from the nozzle. The system was initially excited to level 28. The crosses represent the calculated ratio of intensities under the assumption that all collisions with 0 < E < 3 cm-1 are equally effective. Collision partner: He.]
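To make the jet kinematics above concrete, the following short numerical sketch evaluates the reconstructed centerline temperature of Eq. (2), the isentropic density of Eq. (1), and the restricted collision integral of Eq. (5). All numerical inputs (source temperature and density, nozzle diameter, hard-sphere cross section) are illustrative assumptions chosen only to show the orders of magnitude, not the source conditions of the experiments.

```python
# Illustrative numerical sketch of the free-jet relations discussed above.
# All source conditions (T0, n0, D, sigma12) are assumed demonstration values,
# not the experimental parameters of the paper.
import numpy as np

kB = 1.380649e-23          # J/K
gamma = 5.0 / 3.0          # monatomic carrier gas (He)
T0, n0 = 300.0, 2.5e25     # source temperature (K) and density (m^-3), assumed
D = 100e-6                 # nozzle diameter (m), assumed
r_s = 0.742 * D            # source-sphere radius (Cattolica et al. value)
m_He, m_I2 = 6.64e-27, 4.21e-25            # kg
mu = m_He * m_I2 / (m_He + m_I2)           # reduced mass of the colliding pair
sigma12 = 5.0e-19          # hard-sphere cross section (m^2), assumed
cm1 = 1.986e-23            # J per cm^-1

def T_of_x(x):
    """Centerline temperature from the reconstructed Eq. (2), M >> 1 limit."""
    return T0 * (2.0 / (gamma + 1.0)) * (r_s / x) ** (2.0 * (gamma - 1.0))

def n_of_x(x):
    """Carrier-gas density from the isentropic relation, Eq. (1)."""
    return n0 * (T_of_x(x) / T0) ** (1.0 / (gamma - 1.0))

def Z_ab(x, E_min, E_max):
    """Collisions per seeded molecule per second with relative kinetic
    energy between E_min and E_max (in J), following the form of Eq. (5)."""
    T = T_of_x(x)
    v = np.linspace(np.sqrt(2 * E_min / mu), np.sqrt(2 * E_max / mu), 2000)
    integrand = (4 * np.pi * (mu / (2 * np.pi * kB * T)) ** 1.5
                 * v**3 * np.exp(-mu * v**2 / (2 * kB * T)))
    return n_of_x(x) * sigma12 * np.trapz(integrand, v)

for xD in (5, 10, 20, 40):                 # downstream distance in nozzle diameters
    x = xD * D
    print(f"x/D={xD:3d}  T={T_of_x(x):6.2f} K  kT={kB * T_of_x(x) / cm1:5.2f} cm^-1  "
          f"Z(0-3 cm^-1)={Z_ab(x, 0.0, 3 * cm1):9.3e} s^-1")
```

The printout shows the qualitative behavior described in the text: the local temperature drops to about 1 K tens of nozzle diameters downstream, and the number of residual very-low-energy collisions per second falls accordingly.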
The conditions used in obtaining the data of Figure 2 correspond to there being one He·I2 collision per excited state lifetime, whereas there were, respectively, two and three collisions per lifetime for the data of Figures 3 and 4. Results for I2 in Ne, shown in Figures 5, 6, and 7, are similar in the dependence of depopulation of the initial level on relative kinetic energy. There are two features of these observations that are striking. First, collisional depopulation persists even when the relative kinetic energy of the collision partners is sensibly zero, as shown by the existence of emission from new levels even under conditions where T ≈ 1 K. Second, the cross section for this process is very large. The validity of this point will be demonstrated below. [Figure 5: Relative intensity of levels 27 and 28 of I2(³Π0u+) as a function of distance from the nozzle, for p0 = 20 psi. The system was initially excited to level 28. The crosses represent the calculated ratio of intensities under the assumption that all collisions with 0 < E < 1.9 cm-1 are equally effective. Collision partner: Ne.] A consistent interpretation of the observations can be constructed as follows. Suppose collisions with E_min ≤ E ≤ E_max are equally effective in producing depopulation of the initial vibrational level, and consider the case that there is only one collision per excited state lifetime. The collision energy distribution per molecule-second obtained from the kinetic theory of the jet can be integrated over this range of E for each position along the jet axis. For the correct range (E_min, E_max) the computed ratio of populations as a function of distance from the nozzle should reproduce the experimental curve. If there is more than one collision per excited state lifetime, similar calculations can be performed. [Figure 6: Same as Figure 5 except for monitoring of level 26.] Tusa, Sulkes and Rice have examined the case of successive collisions, each inducing a one quantum transition, and the case of a single collision inducing two, three, ... quantum transitions [3]. They find that the experimental data for relaxation of I2 by He or Ne are well represented by the successive collision, single quantum transfer relaxation model (see Figures 2-7), whereas for relaxation of I2 by Ar roughly half the population corresponding to Δv = -2 is generated by a collision which induces a two quantum transition and the other half by successive collisions that induce single quantum transitions. The energy ranges over which these collisions are effective, computed from the kinetic theory fits to the population decays as a function of position along the jet axis, and assuming a constant cross section equal to the hard sphere collision cross section for the range E_min < E < E_max and zero for all other E, are displayed in Table II. Since a successful fit of the kinetic theory, hard sphere collision/relaxation cross section model to their data requires that only very low energy collisions are effective, Sulkes, Tusa and Rice suggest that orbiting resonances, or metastable collision complexes, participate in the relaxation process observed. They also note that the particular collision induced downward transitions monitored in the relaxation of I2(³Π0u+) behave similarly when involved in the predissociation of the corresponding van der Waals complex. For example, predissociation of excited HeI2 and NeI2 complexes produces I2(³Π0u+) with Δv = -1, whereas for ArI2 complexes Δv ranges up to -3.
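The kinetic-theory fitting procedure just described can be caricatured in a few lines. The sketch below assumes, purely for illustration, that effective collisions arrive as a Poisson process with mean nbar per excited-state lifetime and that each effective collision removes one vibrational quantum; the paper's fits integrate the actual jet collision-energy distribution position by position, so this is only a toy version of the successive collision, single-quantum transfer model.

```python
# Minimal sketch of the "successive collision / single-quantum transfer" picture:
# if effective collisions arrive independently with mean number nbar per excited-state
# lifetime, and each effective collision removes exactly one vibrational quantum,
# the Delta-v populations follow a Poisson distribution.  Treating the collision
# statistics as Poissonian is a modelling assumption of this sketch, not a statement
# taken from the paper.
from math import exp, factorial

def poisson_populations(nbar, max_dv=3):
    """Fraction of molecules having undergone 0, 1, 2, ... single-quantum steps."""
    return [nbar**k * exp(-nbar) / factorial(k) for k in range(max_dv + 1)]

for nbar in (1.0, 2.0, 3.0):   # roughly the 1-3 collisions per lifetime quoted above
    pops = poisson_populations(nbar)
    ratio = pops[1] / pops[0]  # e.g. I(v=27)/I(v=28) if level 28 was pumped
    print(f"nbar={nbar:.0f}: P(dv=0..-3)={[round(p, 2) for p in pops]}, "
          f"one-quantum/initial ratio={ratio:.2f}")
```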
Clearly, the observed very efficient, very low energy collision induced vibrational depopulation of I2 implies the existence of a relaxation mechanism which is qualitatively different from that dominant at higher collision energy. A simple picture of what that mechanism might be was proposed by Rice and co-workers [3]; a quantitative verification [9] of this picture will be described in Section 4. Briefly, Rice and coworkers observed that at the midpoints of the energy ranges in which the collisions are effective, the de Broglie wavelengths of He, Ne and Ar are 11, 7 and 4 Å, respectively. Therefore, the atom must be treated as having a delocalized wavefunction with spatial extent comparable to or greater than the internuclear spacing of I2. In general, a vibrational relaxation process is efficient if there are Fourier components of the driving force which are close to the frequency of the driven oscillator. In an orbiting resonance, or even an "ordinary collision" under the conditions considered herein, the I2 is effectively embedded in the delocalized wavefunction of the atom, which spreads over the entire molecule. Then, because of the strong repulsion between the I atom and the He atom at small separations, vibration of the I2 generates amplitude oscillations in the delocalized He spatial distribution, and these in turn create a reaction force at the driving frequency; this reaction force is appropriate for promoting vibrational transitions. Sulkes, Tusa and Rice also find that there is little or no angular momentum transfer for Δv = 0 and that for Δv = -1 the angular momentum transfer occurs predominantly in the range -6 < ΔJ < 6 (He·I2(³Π0u+) system). Blazy et al. [14] reached similar conclusions for the rotational state distributions of the Δv = 0 and Δv = -1 components of the products following predissociation of the excited He-I2 van der Waals complex. These observations are consistent with the simple mechanism described above if the scattering resonance involves rotational excitation of the I2 molecule or, in the absence of such excitation, by virtue of the anisotropy of the He-I2 interaction; a more detailed analysis is described in Section 4. Is it generally the case that very low energy atom-molecule collisions lead to very efficient vibrational relaxation? A partial answer to this question can be obtained from a series of studies by Rice and coworkers [4,5,6]. Briefly put, in all cases studied to date except one, it is observed that: 1) very low energy atom-molecule collisions do lead to efficient vibrational relaxation; 2) the energy ranges in which collisions are efficient are different for vibrations with different point group symmetries; 3) there is a similarity between the relaxation pathways accompanying predissociation of a van der Waals molecule and the corresponding very low energy collision. An illustration of observation (1) is displayed in Figures 8 and 9, for the case of collisions of He with ¹B2 aniline. The spectra show clear evidence for collision induced vibrational relaxation under conditions for which the collision energy is very small. Similar observations have been made for collisions of He with ¹B2 toluene, ¹B2 fluorobenzene and ¹Au glyoxal. Different behavior is observed in collisions of He with ¹B2u benzene; this observation will be discussed later in this paper.
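The delocalization estimate quoted earlier in this section (de Broglie wavelengths of 11, 7 and 4 Å for He, Ne and Ar) is easy to check with λ = h/(2mE)^(1/2). The collision energies used below are round illustrative values, not the exact midpoints of the effective energy ranges of Table II.

```python
# Quick check of the delocalization argument: the de Broglie wavelength
# lambda = h / sqrt(2 m E) of the collision partner at collision energies of a
# few cm^-1.  The energies are illustrative; the actual midpoint energies of the
# effective ranges are those listed in Table II of the original paper.
import math

h = 6.62607015e-34        # J s
amu = 1.66053907e-27      # kg
cm1 = 1.986445857e-23     # J per cm^-1
masses = {"He": 4.0026 * amu, "Ne": 20.180 * amu, "Ar": 39.948 * amu}
energies = (0.5, 1.0, 2.0, 3.0)   # collision energies in cm^-1

for name, m in masses.items():
    lams = [h / math.sqrt(2.0 * m * E * cm1) * 1e10 for E in energies]
    print(name, "  ".join(f"{lam:4.1f} A at {E} cm^-1" for lam, E in zip(lams, energies)))
```

For He near 1 cm^-1 the wavelength comes out at roughly 10-13 Å, and correspondingly smaller for Ne and Ar, consistent with the 11, 7 and 4 Å figures quoted in the text.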
An illustration of observation (2) can be obtained by comparing Figures 8 and 9 with Figures 10, 11 and 12. It is easily seen that the distance downstream at which collision induced vibrational relaxation ceases is different for vibrations with different point group symmetries [4]. This effect is unambiguous, and has been observed in the very low energy collision regime both for benzene derivatives and for glyoxal. There is no apparent correlation between the energy of effective collisions, which can be deduced from the distance downstream where relaxation ceases, and the energy of the initially pumped vibration. On the other hand, there is a rough correlation between the energy of effective collisions and the number of nodes of the initially pumped vibrational wave function. An illustration of observation (3) is presented by the entries in Table III. These data show that cold collisions are nearly as selective in generating product states as is van der Waals complex dissociation. It might have been expected (in the sudden approximation to the collision process) that more relaxation pathways would be open in collision induced relaxation than in complex dissociation, corresponding to the many different orientations of collision partners as compared to the well-defined geometry of the complex. It is also noteworthy that three quantum relaxation with a large energy gap (about 1200 cm-1) occurs for collisions while it is absent in the van der Waals complex dissociation. (The bracketed SVLs in Columns B and C of Table III indicate final levels energetically inaccessible in the case of complex dissociation.) A qualitative interpretation of the observations cited, which is consistent with the interpretation of the very low energy collision induced relaxation of I2, can be constructed as follows. We first note that over the entire range of effective collision energies the de Broglie wavelength of the He atom is never less than a typical internuclear spacing of the polyatomic molecule. Indeed, in the case of the lowest energy collisions the He de Broglie wavelength is of the order of the size of the molecule. We now suggest that the scattering resonances associated with different vibrational levels have different energies, ordered with respect to the number of nodes of the vibrational level.
[4] A correlation of this type is consistent with a collision configuration in which the slowly moving He atom "envelopes" a polyatomic collision partner which supports vibrational motion that is very rapid relative to the He atom motion. Then, because of the short range repulsion between the He and the bound atoms of the molecule, we must expect the molecular vibration to force motion in the delocalized He distribution with the same symmetry as the vibration. The existence of this forced motion has two consequences. First, the He delocalized wavefunction will have nodes which correspond to the nodes of the vibrational mode, which leads to an increase in the energy of the scattering resonance when the number of nodes increases. Second, the driven motion of the He atom distribution generates a reaction force with the correct frequency and symmetry to effect rapid depopulation of the driving mode. For example, mode 1 of ¹B2 aniline is a breathing motion of the aromatic ring, so it is capable of supporting a nodeless scattering resonance with He, whereas mode 13 has six nodes in the carbon skeleton motion. Our interpretation suggests that, because of this difference in nodal patterns, the scattering resonance supported by 1¹ will lie lower than that supported by 13¹, with corresponding cessation of effective collision induced depopulation at smaller downstream distance for 13¹ than for 1¹, as is observed. The one case we have found for which very low energy collision induced vibrational relaxation is not extraordinarily efficient is benzene [5]. Given that both benzene and I2 are nonpolar (whereas all other species studied have a nonzero dipole moment), this is a very puzzling inconsistency. Perhaps the inefficiency of the very low energy process for benzene arises from the nature of the repulsive interaction between the scattering atom and the polyatomic molecule. In the case of I2 the overall interaction, except for anisotropy, is not badly represented by a simple Lennard-Jones potential, but in the case of benzene the center of attraction is displaced from the centers of repulsion, so the effective atom-molecule repulsion is a much steeper function of the separation of centers of mass than for a Lennard-Jones potential. The consequence of this difference in repulsive terms is to modify the potential well shape and, plausibly, to destabilize a scattering resonance as one goes from I2 to benzene.
There are a few features of the very low energy collision process, revealed by the He-glyoxal and H2-¹Au glyoxal experiments, which we have not yet mentioned. First, there can be competition between collision induced intersystem crossing and collision induced vibrational relaxation, with the relative importance of the two processes apparently dependent on vibrational level, but not dependent on rotational level. Thus, Jouvet and Soep find that very low energy collisions between He and ¹Au glyoxal induce intersystem crossing with a cross section 20-30 fold larger than the room temperature cross section for the same process. This observation is consistent with the scattering resonance mechanism described above. Second, when there are near degeneracies in the vibrational manifold, single collision multiple quantum transitions appear to make an important contribution to the relaxation pathway. Jouvet, Sulkes and Rice [6] find, for the case of very low energy collisions of He and ¹Au glyoxal, that many quantum relaxation to a variety of final levels involves the transitions 8¹ → 6¹ or 8¹ → 7¹5¹ (see Figure 13). Now, 8¹ and 6¹ are nearly degenerate (ΔE ≈ 17 cm-1), as are 8¹ and 7¹5¹ (ΔE ≈ 7 cm-1). In the isolated molecule the symmetries of modes 8 and 6 are such that both Fermi resonance and Coriolis interaction are impossible. But, given the near degeneracy, it is possible that the removal of symmetry characteristic of the scattering resonance configuration, or of a collision, permits a small amount of mode mixing. If such mixing does occur during a collision, the apparently exceptional multiple quantum transition 8¹ → 7¹ (ΔE ≈ 502 cm-1) can be thought of as the sequence 8¹ → 7¹5¹ via mixing in the scattering resonance, and 7¹5¹ → 7¹ via dissociation of the scattering resonance. Finally, comparison of the energy distributions in the products of He-¹Au glyoxal very low energy collisions and He-¹Au glyoxal van der Waals molecule dissociation reveals that it is primarily the branching ratios to the final states that differ, with the identities of the final states for the two processes being sensibly the same. This observation is consistent with the finding, by Halberstadt and Soep, that for the 8¹ level of the H2-glyoxal complex the branching ratio depends on the collision partner and on the feature of the complex excitation profile that is pumped. [Figure 13: A comparison of the vibrational relaxation pathways characteristic of dissociation of the H2·glyoxal van der Waals complex and of very low energy He·glyoxal collisions. The former data are from ref. 8.] RESULTS OF THEORETICAL STUDIES As already mentioned in Section 1, understanding the observed anomalous enhancement of vibrational de-excitation at low collision energies provides an interesting theoretical challenge. All of the simple theories of collision induced vibrational relaxation, such as that originally proposed by Landau and Teller [17], predict that the probability of vibrational relaxation becomes vanishingly small when the collision energy is very much less than the vibrational spacing.
Theories using these ideas are qualitatively correct in the energy domain for which the de Broglie wavelength of the atom is very small relative to the internuclear spacings in the polyatomic molecule. For example, they adequately account for the falloff in the rate of collision induced vibrational relaxation as the temperature of a gas falls [4,5,6,7]. Furthermore, the rapid decrease in scattering cross section with increasing energy for E > 3-5 cm-1 is not observed for rotational relaxation under the same conditions [8], although the observed cross section is also very large. To go beyond the qualitative analysis described in Section 3, and provide a proper theoretical basis for interpreting these results, Rice and coworkers have used several approaches. We shall describe these as follows. First, a formalism based upon an analogy with the analysis of neutron diffraction by molecules is proposed as a replacement for the exact close-coupling formalism of scattering theory. Since atom-polyatomic molecule scattering is not amenable to a full quantum mechanical treatment, a qualitative means of understanding the experimental results is necessary. Even for smaller systems which might be studied by, say, the close-coupling formalism, a method based upon physically intuitive concepts will provide greater insight into the mechanism of scattering and energy transfer. Second, in the case of the simplest system studied experimentally, He-I2, full three dimensional quantum mechanical scattering calculations have been performed within the close-coupling formalism [9]. These results yield the most accurate description of the scattering event possible. Third, a rotational infinite order sudden model is utilized to examine the He-I2 rotational relaxation cross section. This approximation, which is expected to be valid under the experimental conditions, provides support for the observed trends [8]. It may seem that the theory of neutron diffraction can be immediately applied to describe very low energy atom-polyatomic molecule scattering, since the atomic wavefunction is primarily influenced by the geometry of the molecule and does not modify the internal structure of the molecule. Unfortunately, this is not the case. In neutron scattering, the neutron-nucleus interaction is of such short range that each nucleus is a distinct and isolated scattering center with initial and final state neutron wave functions accurately described by plane waves. In atom-molecule scattering, the potential interaction is always strong enough to distort the wave function of the atom, invalidating the plane wave (Born) description. In order to retain the simplicity and physical content of the correlation function representation which describes neutron scattering, it is necessary to include the effects of the atom-molecule interaction, and the most direct method for doing so is to introduce the Distorted Wave Born Approximation (DWBA).
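For orientation, the content of the DWBA in this setting can be summarized schematically as follows (our own generic scattering-theory notation, not an expression taken verbatim from the paper): the distorted waves χ are generated by the reference potential V_I, and the residual coupling V_II is kept to first order in the transition amplitude.

```latex
% Schematic first-order DWBA amplitude (generic notation): chi^{(+/-)} are
% distorted waves generated by V_I, and phi_alpha are molecular eigenstates.
\begin{aligned}
V &= V_{\mathrm{I}} + V_{\mathrm{II}}, \\
T^{\mathrm{DWBA}}_{\alpha'\mathbf{k}' \leftarrow \alpha\mathbf{k}}
  &\simeq \big\langle \chi^{(-)}_{\mathbf{k}'}\,\phi_{\alpha'} \,\big|\, V_{\mathrm{II}} \,\big|\, \chi^{(+)}_{\mathbf{k}}\,\phi_{\alpha} \big\rangle .
\end{aligned}
```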
Cerjan, Lipkin and Rice have implemented the DWBA-correlation function representation of atom-polyatomic molecule scattering. The collision induced transition rate is, in terms of the T-matrix elements,

R = 2π Σ over α, α' of p_α |<k'α'|T|kα>|² δ(E - E' - ε)   (8)

where α and α' represent the exact initial and final eigenstates of the molecule, k and k' the corresponding momentum states of the projectile, and T is the full transition matrix. (Atomic units are used throughout.) The delta-function conserves energy in a process in which ε is the energy transferred to the atom as the molecular energy changes from E to E'. The probabilities p_α refer to the distribution of initial states. In the case where all the target molecules are prepared in a single state α*, as in a perfect jet experiment, p_α is the Kronecker delta δ(α, α*). Following Micha [19], the rate expression (8) may be rewritten as a time correlation function of the transition operator (Eq. (9)), where use has been made of the Heisenberg representation and the completeness of the final states. To use the DWBA, the potential describing the atom-polyatom interaction is written as the sum of two terms,

V = V_I + V_II   (10)

where the influence of V_II is presumed small relative to that of V_I. It is usual to choose the separation (10) so that one can solve the scattering problem exactly for V_I, generating a form for the scattered wave functions. Then corrections are obtained as a power series in V_II. Further insight into the observed propensity rule for vibrational transitions [3] (see Section 3) can be obtained by writing the general T-matrix element as

T_k'k(R) = Σ over j of exp(iΓ_j(R)) C_j(R)   (15)

where Γ_j and C_j are real and include the possible anisotropic couplings. The rate expression is then given by

R = ∫ dt exp(-iεt) << exp(-iΓ(R(t))) C(R(t)) exp(iΓ(R)) C(R) >>   (16)

Expanding the time dependent terms in the rate expression to first order in the normal mode displacements of the molecule, and retaining only the downward transitions, the rate reduces to a sum of squared one-quantum matrix elements weighted by an energy-conserving delta function (Eq. (17)), where α_vib denotes the mth vibrational state of the molecule and a_α is a constant depending upon the normal mode characteristics. Several general features may be seen in the rate expression (17). First, the distorting effect of the potential, and possible multipolar couplings, are contained in the time independent elements. The magnitudes of these factors will significantly affect the possible scattering outcomes, controlling the amplitudes of the different potential contributions to the rate. Second, if each of the successively higher order processes decreases in magnitude, corresponding to an expected decrease in coupling as the separation of the states increases, then the vibrational transitions to nearest neighbor levels will generally be favored. For the weak interaction potentials assumed here, this assumption is probably valid, as manifested by the observed propensity rule for one vibrational quantum transfers. Third, if special symmetry restrictions constrain the internal motion of the system, then it is to be expected that certain vibrational energy transfer processes, which might otherwise be dominant, are not allowed or are greatly suppressed. It is also important to examine the low energy vibrational relaxation process as completely as possible from first principles. Due to the complexity of the quantum mechanical inelastic scattering problem, only the simplest system, He-I2, can be adequately treated within the full close-coupling framework.
[9] The Schrödinger equation for the triatomic system He-I2 is, for motion on one electronic surface,

[ -(1/2μR²) ∂/∂R (R² ∂/∂R) + l²/(2μR²) - (1/2mr²) ∂/∂r (r² ∂/∂r) + j²/(2mr²) + V(r) + V_He-I2(R, r, θ) ] Ψ(R, r) = E Ψ(R, r)   (18)

where R is the vector between the projectile He and the center of mass of the diatom I2; r is the internal coordinate vector of the diatom; R and r are their associated magnitudes; θ is the angle between the vectors R and r; m is the reduced mass of the diatom I2 and μ is the He-I2 reduced mass; j is the diatomic angular momentum operator and l is the He-I2 orbital angular momentum operator; V(r) is the I2 interaction potential and V_He-I2(R, r, θ) is the He-I2 interaction potential. Introducing a target state expansion and using the helicity body-fixed coordinate frame to simplify the interaction potential matrix element evaluations, the Schrödinger equation (18) may be integrated by a variety of techniques. These equations may be partially decoupled by the use of parity conservation, which separates positive and negative total angular momentum projections, and by the homonuclear symmetry of the I2, which separates even and odd rotational states. After completion of the asymptotic analysis, the resulting T-matrix elements provide cross sections for the energetically allowed processes. The cross sections are given by the standard expression

σ(n'j' ← nj) = [π / ((2j+1) k²_nj)] Σ over J from 0 to ∞ of (2J+1) Σ over l, l' of |T^J(n'j'l' ← njl)|²   (19)

where the T^J(n'j'l' ← njl) are the T-matrix elements for the different (n, j, l) transitions with total angular momentum J, and k²_nj = 2μ(E - ε_nj). The potential chosen to represent the atom-molecule interaction is a pairwise sum of Morse functions, which is believed to be an accurate description for these systems. The I2 potential was determined by fitting spectroscopic data, and the He-I interaction was determined by examining the van der Waals predissociation data for He-I2 [21]. With these choices, the integration of the partially decoupled equations was carried out by using both the log-derivative and VIVS integration schemes. The basis set and integration parameters were varied until convergence was achieved. The variation of the calculated cross section with respect to increasing translational energy is given for the processes (24, j' ← 25, 0), (24, j' ← 25, 2), (0, j' ← 1, 0) and (0, j' ← 1, 2) in Figures 14, 15, 16 and 17, respectively. For the (24, j' ← 25, j) cross sections only zero total angular momentum was included in the total cross section sum, whereas J = 0, 1, 2 are included in the (0, j' ← 1, j) results. These calculations show that the qualitative interpretation of the enhanced low energy vibrational relaxation cross section proposed by Rice and co-workers is correct: the cross section becomes vanishingly small for energies greater than a few cm-1.
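The partial-wave bookkeeping of Eq. (19) is simple enough to state as a few lines of code. In the sketch below the |T|² entries are made-up placeholders; in the actual calculation they come from the converged close-coupling solution, so this illustrates only the summation, not the physics.

```python
# Illustrative evaluation of the partial-wave cross-section sum of Eq. (19).
# The |T|^2 entries are placeholders; real values come from the converged
# close-coupling amplitudes.
import math

def cross_section(k2_nj, j, t_squared):
    """sigma(n'j' <- nj) = pi / ((2j+1) k_nj^2) * sum_J (2J+1) sum_{l,l'} |T|^2.

    t_squared maps total angular momentum J -> list of |T^J(n'j'l' <- njl)|^2
    over the contributing (l, l') pairs; k2_nj is the channel wavenumber squared
    in the same units as length^-2, so sigma comes out in those length^2 units.
    """
    s = sum((2 * J + 1) * sum(vals) for J, vals in t_squared.items())
    return math.pi / ((2 * j + 1) * k2_nj) * s

# toy example: j = 0 initial state, contributions from J = 0, 1, 2
toy_T2 = {0: [1e-6, 4e-7], 1: [6e-7, 2e-7], 2: [1e-7]}
print(f"sigma = {cross_section(k2_nj=1e-3, j=0, t_squared=toy_T2):.3e} (length^2 units)")
```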
Finally, calculations of the rotational relaxation cross section were performed using the same techniques as in the vibrational relaxation study. These calculations, though, are not exact, since a rotational infinite order sudden approximation was used. That is, it was assumed that the basis set expansion for the entire wave function is restricted to one vibrational manifold. This approximation is suggested by a similar analysis of the vibrational predissociation of He-I2. With this restricted basis set expansion, total cross sections (summed over all contributing total angular momentum J) could be obtained for three different sets of parameters for the pairwise atom-atom Morse functions: a) D = 7.0 cm-1, β = 1.24 Å-1, r_e = 4.0 Å; b) D = 18.5 cm-1, β = 1.14 Å-1, r_e = 4.0 Å; c) D = 17.5 cm-1, β = 1.20 Å-1, r_e = 4.6 Å. Figures 18 and 19 display the rotational de-excitation processes (0, 0 → 0, j') for the first two sets, while Figure 20 presents the (25, 0 → 25, j') processes for the third parameter set. For comparison, the uncorrected data of Tusa, Sulkes and Rice are also included. (It should be noted that these data are for the n = 28 state rather than the n = 25 state.) Overall, it is clear that the calculations provide qualitative support for the experimental observations. [Figure 14: Energy variation of the zero total angular momentum cross section for vibrational de-excitation from (n = 25, j = 0) to (n' = 24, j') for j' = 0, 2, 4, 6.] [Figure 15: Energy variation of the zero total angular momentum cross section for vibrational de-excitation from (n = 25, j = 2) to (n' = 24, j') for j' = 0, 2, 4, 6.] CONCLUDING COMMENTS The very low energy collision induced relaxation of polyatomic molecules is rather different from the behavior predicted by models which do not fully incorporate the quantum dynamical features of the scattering process. The research reported in this paper, although showing a rather satisfying agreement between observation and theory, only scratches the surface; we are convinced that many more aspects of the very low energy process remain to be discovered, and that their interpretation will challenge the quality of our theoretical understanding of scattering phenomena. [Figure 19: Energy dependence of the total cross section for two de-excitation processes in the n = 0 vibrational state, σ(0, 0 → 0, 2) and σ(0, 0 → 0, 4), using parameter set (b) of the text.] [Figure 20: Energy dependence of the total cross section for two rotational de-excitation processes in the n = 25 vibrational state, σ(25, 0 → 25, 2) and σ(25, 0 → 25, 4), using parameter set (c) of the text. The Tusa-Sulkes-Rice uncorrected cross section is displayed for comparison.] [Figure 3: Same as Figure 2 except for monitoring of level 26.] [Figure 7: Same as Figure 5 except for monitoring of level 25.] [Table I: Collisions per molecule per second for p0 = 30 psi; He carrier gas.] [Table II: Collisional energy fits for I2*-M.]
7,397.6
1983-01-01T00:00:00.000
[ "Physics", "Chemistry" ]
E-SAP: Efficient-Strong Authentication Protocol for Healthcare Applications Using Wireless Medical Sensor Networks A wireless medical sensor network (WMSN) can sense humans’ physiological signs without sacrificing patient comfort and transmit patient vital signs to health professionals’ hand-held devices. The patient physiological data are highly sensitive and WMSNs are extremely vulnerable to many attacks. Therefore, it must be ensured that patients’ medical signs are not exposed to unauthorized users. Consequently, strong user authentication is the main concern for the success and large scale deployment of WMSNs. In this regard, this paper presents an efficient, strong authentication protocol, named E-SAP, for healthcare applications using WMSNs. The proposed E-SAP includes: (1) a two-factor (i.e., password and smartcard) professional authentication; (2) mutual authentication between the professional and the medical sensor; (3) symmetric encryption/decryption for providing message confidentiality; (4) establishment of a secure session key at the end of authentication; and (5) the ability for professionals to change their password. Further, the proposed protocol requires three message exchanges between the professional, the medical sensor node and the gateway node, and achieves efficiency (i.e., low computation and communication cost). Through formal analysis, security analysis and performance analysis, we demonstrate that E-SAP is more secure against many practical attacks, and allows a tradeoff between security and performance cost for healthcare applications using WMSNs. Introduction During the last few years, we have seen a great emergence of wireless medical sensor networks (WMSNs) in the healthcare industry. Wireless medical sensors are cutting edge components for healthcare applications and provide drastically improved quality of care without sacrificing patient comfort. A wireless medical sensor network is a network that consists of lightweight devices with limited memory, low computation processing, low battery power and low bandwidth [1]. These medical sensors (e.g., ECG electrodes, pulse oximeter, blood pressure, and temperature sensors) are deployed on the patient's body, collect the individual's physiological data and send the collected data via a wireless channel to health professionals' hand-held devices (i.e., PDA, iPhone, laptop, etc.). A physician can use these medical sensor readings to gain a broader assessment of the patient's health status. The patient's physiological data may include heartbeat rate, temperature, blood pressure, blood oxygen level, etc. A typical patient monitoring scenario in a hospital environment is shown in Figure 1. Several research groups and projects are working on health monitoring using wireless sensor networks, for example, CodeBlue [2], LiveNet [3], MobiHealth [4], UbiMon [5], Alarm-Net [6], ReMoteCare [7], SPINE [8], etc. Thus, healthcare systems are the applications that benefit most from using wireless medical sensor technology, which can support patient care within hospitals, clinics and homecare. In addition, E-SAP provides secure session key establishment between the users and the medical sensor nodes, and allows users to change their password. Furthermore, we demonstrate the formal verification of the proposed protocol using the Burrows, Abadi and Needham (BAN) logic model [48], where two main security properties are verified: authenticity and secure session key establishment.
Moreover, the proposed scheme resists many practical attacks (e.g., replay, user and gateway masquerade, smartcard stolen-verifier, gateway secret key guessing, password guessing, and information-leakage attacks). To attain low computational overhead, our scheme uses one-way hash functions along with XOR operations and a symmetric cryptosystem. The rest of the paper is organized as follows: Section 2 discusses the healthcare architecture using wireless medical sensors, the adversary attack model, and the wireless healthcare security requirements. Section 3 briefly reviews the related literature on secure healthcare monitoring using medical sensor networks. Section 4 introduces and describes the novel E-SAP, an efficient-strong authentication protocol for healthcare applications using WMSNs. Section 5 gives a brief introduction to BAN logic and provides the formal verification of E-SAP using the BAN logic model. Section 6 discusses the security analysis and efficiency evaluation in comparison with existing schemes, and finally, Section 7 presents conclusions and future directions. Healthcare Architecture, Adversary Attack Model, and Security Requirements for Healthcare Application in WMSN This section presents the healthcare monitoring architecture for hospital environments, the adversary attack models and the security requirements for healthcare applications using WMSNs. Healthcare Architecture A patient healthcare monitoring architecture is depicted in Figure 2, where continuous patient monitoring is needed after hospitalization (e.g., after cardiac infarction). When a patient is hospitalized, suitable medical sensor devices are deployed strategically on his/her body. These sensors sense the health parameters (e.g., blood pressure, movement, breathing, ECG, etc.) and send the physiological parameters to the professionals' mobile devices (such as a PDA, smart phone or laptop). Later, a professional may store the patient data on a backend server for further processing, which is currently outside the scope of this paper. It is obvious that a professional can access the patient's health parameters directly from the medical sensor, in an ad-hoc manner. As shown in Figure 2, the healthcare architecture has three active entities, namely, the user, the medical sensors and the base-station/gateway. We assume a real-time scenario, and suppose a professional wants to query the patient's medical sensors for physiological information, as follows: (a) the user (U_i) sends a query to the gateway node (GW); (b) upon receiving the professional's request, the gateway node forwards the user's query to the medical sensor; and (c) thereafter, the medical sensor responds to the user. Here, the gateway node plays an important mediating role between the professional and the medical sensor. Based on the above scenario, the next sub-section describes an adversary attack model for healthcare applications using WMSNs. Adversary Attack Model The patient's physiological information is very sensitive and may attract many attackers, such as insurance companies, corrupt media persons, individual enemies, etc. Furthermore, the patient's medical sensors and the professionals' hand-held devices are wireless in nature. These wireless devices are therefore especially attractive to unauthorized users or thieves.
For example, unauthorized users or thieves can roam into the hospital ward and easily eavesdrop on the patients locally, so we have categorized the attack models as follows. Eavesdropping on Wireless Medical Data As the medical sensors sense the patient's body data, they transmit it over the radio communication channel. The wireless transmission range is not confined to hospital wards and these wireless channels are highly susceptible to interception. As a result, an attacker may eavesdrop on the over-the-air messages and disclose the patient's physiological information. Hence, patient privacy is breached. Active Attack In an active attack scenario, the capability of an attacker depends on his/her skill (i.e., the ability to monitor all the communication). An attacker may inject bogus messages into the wireless channel and may alter the wireless medical sensor data during communication. Any spurious message injected into the healthcare network could cause mistreatment. Furthermore, an attacker may replay old messages again and again, which could cause overtreatment (i.e., medicine overdose). Thus, active attacks may pose a life-threatening risk to the patients. Security Requirements for Healthcare Applications Using WMSNs Strong User Authentication The major problem in wireless healthcare environments is the vulnerability of wireless messages to access by unauthorized users, so it is desirable that strong user authentication be provided, where each user must prove his/her authenticity before accessing the patient's physiological information. Furthermore, strong user authentication, also known as two-factor authentication, provides greater security for healthcare applications using wireless medical sensor networks [47]. Mutual Authentication In real-time healthcare applications, the user and the medical sensor must authenticate each other; hence, they can ensure that the communication is established between the authenticated user and the medical sensors. Confidentiality The patient health data are highly sensitive and the medical sensors are wireless in nature; therefore patient physiological data should remain protected from passive attacks such as eavesdropping or traffic analysis. Thus, a patient's health data is only accessed or used by authorized professionals. Session Key Establishment A session key should be established between a user/professional and a medical sensor node, so that subsequent communication can take place securely. Low Communication and Computational Cost Since wireless medical sensors are resource constrained devices, and the healthcare application's functions also need room to execute their tasks, the protocol must be efficient in terms of communication and computational cost. Data Freshness Generally, professionals need patient physiological data at regular intervals, so there must be a guarantee that the patient health data is recent or fresh. Furthermore, data freshness also ensures that an adversary cannot replay old messages. Secure Against Popular Attacks In real-time healthcare environments the protocol should be resistant to various popular attacks, such as the replay attack, impersonation attack, stolen-verifier attack, password guessing attack, and information-leakage attack. As a result, the protocol can be readily applied to real-time wireless healthcare applications. User-Friendliness The healthcare architecture should be easy to deploy as well as user-friendly; for example, a user should be able to update his/her password securely whenever he/she needs to.
Related Work This section discusses the literature reviewed for secure healthcare monitoring using wireless sensor networks and general user authentication protocols for wireless sensor networks that have been proposed recently. Malasri et al. [31] designed and implemented a secure wireless mote-based medical sensor network for healthcare applications. The main components of their scheme are: (i) a two-tier architecture designed for patient data authentication; (ii) a secure key exchange protocol (based on elliptic curve cryptography (ECC)) used to establish secret shared keys between the sensor nodes and the base station; and (iii) a symmetric encryption/decryption algorithm that provides confidentiality and integrity for patient data. Moreover, in their architecture each sensor mote incorporates a fingerprint scanner; by doing so, the patient's identity is verified with the aid of a base station. Although their scheme provides adequate security for patients, it does not address strong professional authentication (i.e., who is accessing the patients' vital signs), whereas user authentication is a prime concern under various laws [29]. Hu et al. [32] designed and proposed a software and hardware based real-time cardiac patient healthcare monitoring system named 'tele-cardiology sensor network' (TSN). TSN is particularly intended for the U.S. healthcare society. It enables real-time healthcare data collection for elderly patients in a large nursing home. In this architecture, a patient's ECG signals are automatically collected and processed by an ECG sensor and transmitted in a timely way through a wireless channel to an ECG server for further analysis. TSN integrates a large number of wireless ECG communication units, each unit being called a mobile platform. A block cipher algorithm (i.e., Skipjack) is used for securing the ECG data transmission and protecting patient privacy. Although their proposal provides privacy in terms of confidentiality and achieves integrity, strong user authentication is not addressed effectively. Huang et al. [18] proposed a secure hierarchical sensor-based healthcare monitoring architecture. The proposed architecture has three network tiers (i.e., sensor network, mobile network, and back-end network), and considers three real-time healthcare application scenarios (i.e., in-hospital, in-home, and nursing-house). The authors used wearable sensor systems (WSS) and wireless sensor motes (WSM) at the sensor network tier. The WSS are Bluetooth enabled and integrated with biomedical sensors; the WSS are strategically placed on the patient's body, whereas the WSMs are deployed within the building and are used to collect environmental parameters and transmit them through the ZigBee wireless network standard. WSS and WSM broadcast data securely to the upper layer. Here, the WSS use an advanced encryption standard (AES)-based authentication and encryption scheme, while the WSMs use a polynomial-based encryption scheme to establish secure point-to-point communication between two WSMs. In the mobile network tier, mobile computing devices (MCDs) such as PDAs are organized as an ad-hoc network and connected to the local station. The MCDs have greater computational capability to analyze the WSS and WSM data. The back-end tier is structured around a fixed station acting as a server, which provides application-level services for the lower tiers and processes the various sensed data from the MCDs. Even though Huang et al.
proposed a secure pervasive hierarchical sensor-based healthcare monitoring architecture, they did not consider the need for strong user authentication, which is an imperative security requirement for healthcare applications according to laws such as HIPAA [29]. Very recently, Le et al. [34] suggested a mutual authentication and access control protocol (MAACE) in which legitimate professionals can access their patients' data. MAACE facilitates mutual authentication and access control, and is based on elliptic curve cryptography (ECC). Furthermore, these authors argue that their scheme is secure against practical attacks, e.g., replay attacks and denial-of-service attacks. Their architecture consists of three layers: (i) a sensor network layer (SN); (ii) a coordination network layer (CN); and (iii) a data access layer (DA). In their architecture, the SN transmits data to the CN (i.e., PDA, laptop or cell phone); later, the data is forwarded to the DA for future record. Although Le et al.'s protocol provides reasonable security against practical attacks, their scheme is susceptible to information-leakage attacks, which could be risky for the patient's privacy. As a result, patient vital signs could be exposed to illegal users (e.g., insurance agents, media persons, etc.), which is not acceptable for real-time healthcare applications. Thus, strong user authentication is required for healthcare applications using sensor networks. In 2009, Das [42] proposed a two-factor user authentication protocol for wireless sensor networks. Das claimed that his protocol is safe against many attacks (i.e., replay attack, password-guessing attack, user impersonation attack, node compromise attack, and stolen-verifier attack). Later, others [44,46] pointed out that Das's protocol is susceptible to the gateway bypass attack, user impersonation attack, insider attack, etc. Furthermore, Das's protocol does not provide message confidentiality or mutual authentication between the sensor and the user. Consequently, this protocol is not applicable to healthcare applications using sensor networks. In [49], Kumar and Lee have shown that some authentication protocols [44,46] have security weaknesses and that the computation costs of these protocols are very high. Thus, the protocols in [44] and [46] are not suitable for such wireless healthcare applications. As we can see from the above literature, strong user authentication for healthcare applications using wireless medical sensor networks has not yet been addressed adequately. Hence, a significant research effort is still required to explore user authentication for WSN healthcare applications. So, the next section proposes an efficient, strong authentication protocol, named E-SAP, for healthcare applications using WMSNs. The Proposed E-SAP Protocol This section presents the proposed efficient-strong authentication protocol (E-SAP), in which only legitimate professionals can access the patient's body data in an authentic manner. The proposed protocol is applicable to hospital, home and clinical environments. The basic idea of E-SAP is quite simple: professionals need to register with the gateway node at the hospital registration center. Upon successful registration, the professional receives a smart card from the registration center. Then, professionals can access the patient's physiological information from the patient's body area sensor network whenever demanded.
In order to prove the professional's legitimacy, a professional sends his/her password and smart card based login request to the gateway node. Upon receiving the professional's request, the gateway node first authenticates him/her, and then forwards the professional's request to the dedicated medical sensor whose data the user is demanding. Thereafter, the medical sensor checks the authenticity of the gateway node, establishes a secure session key between the medical sensor and the professional, and responds to the professional. In order to execute the proposed protocol, we have considered the following assumptions: 1. The hospital registration center is a trusted authority. 2. The gateway node has three long master keys (i.e., J, K and Q, each 256 bits long). 3. Initially, the gateway and the medical sensor nodes share a long-term secret key SK_gs = h(Q||ID_g), established using any key agreement method [50,51]. Table 1 gives a list of notations with descriptions which are used throughout the paper. The proposed E-SAP consists of four phases, namely, the professional registration phase, the patient registration phase, the login and authentication phase, and the password change phase. Professional Registration Phase In this phase, the professional initially needs to register with the gateway node at the registration center, as follows: - The user chooses ID_i and PW_i and submits them to the GW node over a secure channel. - Upon receiving the user's ID_i and PW_i, the GW node computes the card parameters C_ig and N_i. Thereafter, the GW node issues a smart card to the professional containing {h(.), C_ig, N_i, K}. Here, K is a long-term GW node secret, which is securely stored in the smart card. Patient Registration Phase In order to execute the proposed E-SAP, a patient needs to register at the hospital registration center [38], as follows: - The patient passes his/her name to the registration center. - After patient registration, the registration center chooses a suitable sensor kit (i.e., medical sensors and gateway) and designates professionals/users. - Later, the registration center sends the patient ID_pt and the medical sensor kit information (i.e., gateway, sensors, etc.) to the designated professionals/users. Now, the technician strategically deploys the wireless medical sensors on the patient's body area, as shown in Figure 2. Login and Authentication Phase This phase is invoked when a professional roams into the patients' ward and wants to perform a query or to access the patients' physiological information from the body network. This phase is further divided into a login phase and an authentication phase. Login Phase The professional inserts his/her smart card into the terminal and inputs ID_i and PW_i. Upon receiving the login request, the smart card verifies the user locally against the pre-stored values and performs the following operations: - Compute N_i* = h(ID_i PW_i K) and compare N_i* with the stored N_i; if they are equal, go to the next step, otherwise terminate the request. - Assemble the login message CID_i = E_K[h(ID_i)||M||Sn||C_ig||T′] (see Figure 3). Here, M is a random nonce generated by the professional's system, which is used to establish the secure session key. Then the professional's system sends the message <CID_i, T′> to the GW node. Here, T′ is the current time stamp of the professional's system.
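As an illustration of the login-phase computations just described, the following sketch instantiates the local two-factor check and the assembly of <CID_i, T′>. SHA-256 stands in for h(.), XOR is assumed as the combining operation inside h(ID_i PW_i K) (the operator is not legible in this copy), and E_K[.] is replaced by a toy keystream cipher so that the example runs with the standard library alone; none of these concrete choices reproduce the paper's exact encoding.

```python
# Illustrative sketch of the login phase.  SHA-256 stands in for h(.), XOR is an
# assumed combining operator, and E() is a toy stand-in for the symmetric cipher
# E_K[.]; a real deployment would use an authenticated cipher such as AES-GCM.
import hashlib, os, time

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def E(key: bytes, msg: bytes) -> bytes:
    """Toy keystream cipher standing in for E_K[.] (illustration only)."""
    stream, counter = b"", 0
    while len(stream) < len(msg):
        stream += h(key + counter.to_bytes(4, "big"))
        counter += 1
    return xor(msg, stream[: len(msg)])

# values assumed to be on the smart card after registration (illustrative)
K, C_ig = os.urandom(32), os.urandom(32)
ID_i, PW_i = b"doctor-007", b"correct horse"
N_i = h(xor(h(ID_i), xor(h(PW_i), K)))          # stored card credential (assumed form)
card = {"K": K, "C_ig": C_ig, "N_i": N_i}

def login(card, ID, PW, Sn: bytes):
    """Local two-factor verification followed by construction of <CID_i, T'>."""
    N_star = h(xor(h(ID), xor(h(PW), card["K"])))
    if N_star != card["N_i"]:
        raise ValueError("wrong password or forged card: request terminated")
    M = os.urandom(16)                           # nonce used later for the session key
    T1 = str(time.time()).encode()
    CID_i = E(card["K"], b"|".join([h(ID), M, Sn, card["C_ig"], T1]))
    return (CID_i, T1), M

(CID_i, T1), M = login(card, ID_i, PW_i, b"ecg-sensor-3")
print(len(CID_i), "byte login message, timestamp", T1.decode())
```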
Authentication Phase This phase is invoked when the GW node receives a login request from a professional. Upon receiving the login request at time T′′, the GW node checks the freshness of the time stamp, decrypts CID_i with its secret K to recover (h(ID_i)||M||Sn||C_ig||T′), and thereby authenticates him/her. Thereafter, the GW node sends a message <A_i, T′′′> to the medical sensor that the professional wants to access, where A_i = E_SKgs[ID_i||Sn||M||T′′′||T′]. Furthermore, A_i ensures to the medical sensor that the request has come from the legal gateway node. Upon receiving the gateway node's message, the medical sensor node checks the freshness of the time stamp, decrypts A_i with the shared key SK_gs, computes the session key SK = h(ID_i*||Sn||M*||T′), and responds to the user with the message <L, T*>, where L = E_SK[Sn||M*||T*]. Password-Change Phase The password-change phase is invoked whenever U_i wants to change/update the password. The password change procedure is as follows: - The user inserts his/her smart card into the terminal and enters ID_i and PW_i. - The smart card first verifies the user's old identity and password locally (as in the login phase) and, only if this verification succeeds, accepts and stores the new password. Formal Analysis of E-SAP Using BAN Logic Formal analysis ensures that the protocol functions are correctly modeled and verified (i.e., error free) before their real-time implementation [48]. In this regard, this section describes the formal verification of E-SAP using BAN logic, which is popular for the formal verification of authentication protocols. The section is divided into: (A) a brief overview of BAN logic, which was introduced by Burrows, Abadi and Needham [48]; and (B) a demonstration of the formal execution and validity proofs of the proposed E-SAP using the BAN authentication logic model. BAN Logic The BAN logic is a popular model for the analysis of authentication protocols, and it is useful for proving the validity of authentication and key establishment protocols; for more details the reader may refer to [48]. The notations used in BAN logic are defined as follows: - P believes X: The main construct of the logic; the principal P believes X, or P would be entitled to believe X. - P sees X: Someone has sent a message containing X to P, and P can read X (after performing any necessary decryption). - P said X: At some time, the principal P sent a message including X. - P controls X: The principal P has authority over X and should be trusted on this matter (e.g., a server is often assumed to be trusted to generate secret keys properly). - Fresh(X): X has not been sent in a message at any time before the current protocol run; freshness protects against replay attacks. - P <-K-> Q: The principals P and Q may use the shared secret key K for secure communication; K will never be disclosed to any principal other than P and Q. - {X}_K: The formula X is encrypted under the key K. - <X>_Y: The formula X is combined with the secret parameter Y. The logical rules used in the proofs are directly adopted from [49]. Formal Verification of the Proposed E-SAP This sub-section demonstrates the formal verification of our proposed protocol using the BAN logic analysis model [48]. The main principals of E-SAP are the user (U_i), the gateway (GW) and the medical sensor node (Sn). The following symbols are used: (a) the secret keys are J, K, SK_gs and SK; (b) the time stamps are T′, T′′′ and T*. The main goal of the formal verification is to show that a secure session key is established between the user and the medical sensor node. The verification uses the standard BAN logical postulates, and the protocol messages (as shown in Figure 3) are first transformed into the idealized form, as shown in Table 2. Table 2.
E-SAP messages transformed into the idealized form. The formal analysis of E-SAP using BAN logic requires further assumptions, such as: A16) Sn believes (GW controls ID_i). Based on these assumptions and the BAN logic rules, we perform the verification of the proposed E-SAP, as shown in Table 3. Table 3. Formal verification of E-SAP using the BAN logic model. As we can see from the above verification, A7, S13, S19 and S20 establish the secure session key between the user and the medical sensor. Furthermore, A3, A10, S5, S11, S19 and S20 verify mutual authentication between the user and the medical sensor via the gateway. Hence, the goal of E-SAP is achieved (i.e., a secure session key has been established and only authentic users can access an individual's body information from the wireless medical sensor network). E-SAP Evaluation This section discusses the security analysis and functionality analysis of the proposed E-SAP for healthcare applications using medical sensor networks. Further, we present a performance analysis of E-SAP. The following assumptions are considered before evaluating the proposed protocol, which is based on a smart card and a password (i.e., two factors): - The adversary has total control of the wireless communication; he/she may intercept, delete or alter any message in the communication (recall the discussion of the attack model in Section 2). - The attacker either obtains a user's password or extracts the secrets from the smart card through [52,53], but not both (i.e., password and smart card) at the same time [50]. - It is assumed that extracting secrets from a smart card is quite complex, and some smart card manufacturers provide countermeasures against side channel attacks [42,50]. In [54], the authors proposed some software countermeasures against power analysis attacks. - We assume that the symmetric cryptosystems are secure enough to protect patient physiological information from cracking, and that any encrypted text cannot be decrypted without the secret keys, which are known only to the trusted entities (i.e., user, gateway, medical sensor and hospital registration center). Security Analysis This sub-section shows that the proposed protocol is secure against many practical attacks. In addition, the proposed E-SAP provides confidentiality, mutual authentication between the user and the medical sensor, secure session key establishment between the medical sensor node and the professional, and the ability for professionals to change their password securely. Replay attack: The proposed protocol is resistant to replay attacks. Assume that an adversary replays old captured messages to the gateway (i.e., <CID_i, T′>), the medical sensor (i.e., <A_i, T′′′>) or the user (i.e., <L, T*>). The attacker cannot pass off the old messages, because all messages are validated by the fresh time stamps contained in the protocol messages; a message is rejected whenever its time stamp falls outside the allowed transmission window, i.e., if (T′′ - T′) ≥ ∆T, (T′′′′ - T′′′) ≥ ∆T or (T** - T*) ≥ ∆T. Masquerading user attack: An attacker cannot masquerade as the professional (U_i). Suppose an adversary tries to forge a login message <CID_i, T′> and to log into the WMSN with a modified message <CID_i*, T′>. He/she cannot pass the fake message, because the forged CID_i* will not be verified at the gateway node: the gateway node cannot recover the original message contents (i.e., (h(ID_i)||M||Sn||C_ig||T′)) by decrypting the fake CID_i*.
Masquerading gateway attack: An attacker cannot impersonate the gateway, since he/she has no way of obtaining J, K or SK_gs from the protocol messages. Masquerading as the gateway is therefore not applicable to E-SAP. Gateway secret guessing attack: The proposed scheme is secure against gateway-secret guessing. The gateway holds three master keys (J, K and Q), none of which is ever transmitted in plaintext. Hence E-SAP resists gateway-secret guessing. Stolen-verifier attack: In [43], a user table (ID_i and PW_i) is stored on the gateway node, which poses a high risk of breaching the security of the protocol. In contrast, E-SAP stores no identity or password table, so a stolen-verifier attack is not applicable to the proposed protocol. Password guessing attack: An attacker cannot guess the password in our scheme. The password is never transmitted in plaintext; it appears only inside the hashed value N_i = h(ID_i||PW_i||K), so password guessing is not feasible. Mutual authentication: The proposed E-SAP provides mutual authentication between the user and the medical sensor. As shown in Figure 3, the gateway sends the message <A_i, T′′′> to the medical sensor, where A_i = E_SKgs[ID_i||Sn||M||T′′′||T′]; this assures the medical sensor that the message has come from the legitimate gateway node, and the medical sensor therefore believes that the user is legitimate. Conversely, when the user receives the medical sensor's message <L, T*>, he/she verifies whether the medical sensor is genuine. Hence the proposed protocol achieves mutual authentication between the user and the medical sensor. Information-leakage attack: Leakage of protocol information gives attackers an opening and could harm patient privacy. In E-SAP, suppose an adversary eavesdrops on the protocol messages <CID_i, T′>, <A_i, T′′′> and <L, T*>. The sub-message CID_i is encrypted under the shared secret K, the message A_i is encrypted under the shared key SK_gs, and the sub-message L is encrypted under SK. Therefore no protocol information is leaked during communication, and information-leakage attacks are not applicable to our protocol. Secure session key: E-SAP establishes a secure session key between the user and the medical sensor node once the authentication phase has taken place. As shown in Figure 3, the session key SK = h(ID_i*||Sn||M*||T′) is set up between the medical sensor node and the user. The established session key provides confidentiality for subsequent communication, and a fresh session key is generated for every session. Confidentiality: Confidentiality is a paramount requirement for healthcare applications using wireless medical sensor networks. In E-SAP, the session key can be used to secure all subsequent communication (i.e., the user and the medical sensor node may encrypt patient physiological information using SK). In addition, the protocol provides over-the-air confidentiality for its own messages (CID_i = E_K[h(ID_i)||M||Sn||C_ig||T′], A_i = E_SKgs[ID_i||Sn||M||T′′′||T′] and L = E_SK[Sn||M*||T*]). Secure password change: In the password-change phase, the protocol first verifies the user's old password and identity and only then accepts a new password; otherwise the password-change request is rejected. Thus the proposed scheme is secure against unauthorized password changes.
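Because the same quantities (ID_i*, Sn, M*, T′) are recovered independently on both sides during authentication, the session key never travels over the air; each end simply hashes them. The following sketch illustrates that idea with SHA-256 and made-up identifiers; the concrete values and their encoding are assumptions.

    import hashlib

    def h(*parts: str) -> str:
        return hashlib.sha256("||".join(parts).encode()).hexdigest()

    # Values that the user and the sensor each recover during the authentication run
    id_i, sn, m, t_prime = "DrSmith", "Sensor-07", "9f3a21", "1328572800"

    sk_user   = h(id_i, sn, m, t_prime)   # computed by the professional's smart card
    sk_sensor = h(id_i, sn, m, t_prime)   # computed by the medical sensor node
    assert sk_user == sk_sensor           # SK = h(ID_i* || Sn || M* || T') on both sides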
E-SAP Functionality Analysis This subsection examines the functionality of E-SAP and compares it with related schemes (Le et al. [34], Das [42], Vaidya et al. [43] and He et al. [46]). As shown in Table 4, the proposed protocol provides richer functionality: strong user authentication, mutual authentication between the user and the medical sensor node, establishment of a secure session key between the user and the medical sensor node, message confidentiality, and the ability for professionals to change their password. The schemes in [34,42,43,46], by contrast, provide fewer of these security features, which are paramount requirements (recall Section 2-C) for wireless healthcare applications. Table 4 also shows that E-SAP is robust against many common attacks (e.g., replay, masquerade, gateway-secret guessing and information-leakage attacks) compared with the other schemes. It is worth noting that our protocol provides these indispensable security features, whereas the schemes in [34,42,43,46] offer less security functionality for real-time healthcare applications. E-SAP Performance Evaluation This subsection evaluates the performance of the proposed protocol in terms of computation cost and communication cost and compares the results with [34,42,43,46]. The performance-evaluation parameters are: T_pu (public-key computation), T_pr (private-key computation), H (one hash operation), S (one symmetric-cryptosystem operation) and M (one message-authentication-code operation). Computation cost: The devices in a medical sensor network (the gateway node and the sensor nodes) have limited power and computation capability, so computation cost is a prime concern for these resource-constrained devices. The registration computation cost is a one-time task and is not a major concern, whereas the login and authentication costs matter most because of the resource-constrained nature of the gateway node and the medical sensor nodes. Table 5 lists the computation cost of the proposed E-SAP and of the related schemes Das [42], Vaidya et al. [43] and He et al. [46]. As Table 5 shows, in the registration phase E-SAP needs only 1H and 1S at the GW node, whereas [42,43,46] require 3H, 4H and 5H, respectively, which is a higher computation cost at the GW node. Furthermore, the scheme of Le et al. [34] requires modular exponentiation to compute public and private keys, making it computationally expensive and time-consuming, and it also needs to generate and verify digital certificates. In the login and authentication phase, E-SAP requires 6H and 7S while providing stronger security; in contrast, [34,42,43,46] require 4H+4S+6M, 7H, 9H and 7H, respectively, but offer fewer security services. E-SAP thus incurs a somewhat higher computation cost than [42,43,46] in exchange for the security functionality that healthcare applications demand, and this cost remains well suited to healthcare applications using wireless medical sensor networks. Communication cost: Communication cost is an important issue in wireless communication, since more message exchanges consume more power. From Figure 3 it is easy to see that E-SAP requires three message exchanges among the user, the gateway and the medical sensor; the schemes in [42] and [46] likewise require three message exchanges, while [34] and [43] require four.
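The operation counts in Table 5 translate into rough time estimates once per-operation costs are assumed. The sketch below tallies the login-and-authentication cost of each scheme using placeholder timings for a hash (H), a symmetric operation (S) and a MAC (M); the microsecond figures are illustrative assumptions, not measured benchmarks.

    # Placeholder per-operation timings in microseconds (assumed, not measured)
    T_H, T_S, T_M = 0.5, 8.0, 4.0

    def login_auth_cost(n_hash: int, n_sym: int, n_mac: int = 0) -> float:
        return n_hash * T_H + n_sym * T_S + n_mac * T_M

    schemes = {
        "E-SAP":         (6, 7, 0),   # 6H + 7S
        "Le et al.":     (4, 4, 6),   # 4H + 4S + 6M (public-key work not counted here)
        "Das":           (7, 0, 0),   # 7H
        "Vaidya et al.": (9, 0, 0),   # 9H
        "He et al.":     (7, 0, 0),   # 7H
    }
    for name, ops in schemes.items():
        print(f"{name:14s} {login_auth_cost(*ops):7.1f} us")

With any realistic timings the overall ordering changes only if symmetric operations are made much more expensive than hashes, which is the trade-off discussed above.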
Hence the proposed protocol is simple and well suited to enhancing the security of wireless communication for healthcare applications. Considering the functionality, computation cost and communication cost of E-SAP together, it is clear that our protocol is more efficient for healthcare applications using medical sensor networks than the schemes in [34,42,43,46]. Conclusions Wireless medical sensors offer services to professionals, but how do we verify that those professionals are who they claim to be? This raises a key question for researchers: how can medical sensor data be protected from illegitimate users? To address this question, this paper proposed E-SAP, an efficient, strong user-authentication protocol for healthcare applications using wireless medical sensor networks. E-SAP employs two-factor security and provides strong user authentication, confidentiality and session-key establishment for healthcare applications using WMSNs. E-SAP is notably more capable than existing protocols in terms of security services, computation cost and communication cost. Furthermore, through rigorous analysis based on the BAN authentication logic model, we have shown that E-SAP achieves its stated security goals and withstands many common attacks. It is well suited to hospital, home-care and clinic healthcare applications using wireless medical sensors. The future directions for this study are: (1) developing a real-time heterogeneous biomedical sensor network for healthcare monitoring; (2) implementing E-SAP on a real-time test-bed for healthcare applications; and (3) focusing further on access control in patient-mobility scenarios and on strong patient privacy.
7,617.4
2012-02-07T00:00:00.000
[ "Computer Science" ]
Synthesis and Characterization of the Zinc-Oxide: Tin-Oxide Nanoparticle Composite and Assessment of Its Antibacterial Activity: An In Vitro Study Introduction Nanoparticles (NPs) have been widely used for biomedical applications. Various methods of synthesis of NPs have been performed and the sol-gel technique is one of the most common and feasible methods. ZnO and SnO2 NPs are widely used due to their interesting properties and versatile medical applications. The present study aimed to synthesize a composite of ZnO- SnO2 NPs and evaluate its structural, morphological, and antibacterial properties. Materials and methods ZnO-SnO2 NPs were prepared via the sol-gel technique. The morphological study was performed by scanning electron microscopy (SEM) imaging, the structural study was performed by X-ray diffraction (XRD) analysis, and chemical studies were performed by Fourier transform infrared spectroscopy (FT-IR) and energy-dispersive X-ray spectroscopy (EDAX). Antibacterial properties of the NPs were assessed by the agar diffusion test and the area of bacterial growth that was inhibited was measured under high and low concentrations of the NPs. Results The SEM analysis confirmed the irregular shape and elemental composition of the synthesized NPs. The purity of the NPs was confirmed by the EDAX spectrum, which indicates the weight percentages of the elements in the NPs as follows: Sn-53.8%, Zn-12.5%, O-29.1%, and C-4.7%. The chemical bonds between the NPs were confirmed by Fourier transform infrared spectroscopy. XRD analysis confirmed the high degree of crystallinity of the NPs and orthorhombic structure of SnO2 and the hexagonal structure of ZnO. The zone of inhibition against S. aureus, S. mutans, and E. coli for low concentrations of the NPs was 24 mm, 26 mm, and 30 mm and for high concentrations of the NPs it was 26 mm, 28 mm, and 31mm and these values were similar to the control antibiotics. Conclusion ZnO- SnO2 NPs were successfully prepared by the sol-gel method. The presence of NPs was confirmed and successfully characterized. The prepared NPs had a good antimicrobial effect against the tested pathogens. Introduction The use of nanotechnology has unleashed unlimited potential and it has been used in both industrial and biomedical fields.These nanostructures have a high surface-to-volume ratio, thereby increasing the free surface energy and ability to alter their chemical and physical properties, thus increasing their reactivity manifold [1][2][3].The production of nanoparticles (NPs) can be accomplished by chemical and mechanochemical methods and the most commonly used processes for the synthesis of NPs include gas condensation, vacuum deposition and evaporation, chemical vapor deposition and condensation, mechanical attrition, chemical precipitation, sol-gel, and electrodeposition [4]. 
A zinc oxide (ZnO) NP is an odorless white powder widely used for its optical, electrical, photochemical, and catalytic properties [1,5].Previous studies have reported anti-inflammatory, antifungal, antibacterial, antidiabetic, anticancer, wound healing, bioimaging, and drug carrier properties of ZnO NPs [6][7][8][9][10].When coated on orthodontic brackets and wires, they are known to minimize surface roughness, hence decreasing friction and overall treatment time [11,12].When included in resin composites, they have demonstrated excellent physical and mechanical qualities [13][14][15].Tin has been coated on orthodontic stainless arch wires and improved tensile strength, load-bending strength, and reduced frictional resistance have been noted.Tin oxide NPs have biomedical applications and are reported to have photocatalytic, antioxidant, and antimicrobial properties [16]. ZnO/SnO 2 nanocomposites were successfully prepared using the sol-gel method and then characterized by Kumar et al. [17].The sol-gel method of synthesis of NPs involves hydrolysis and polymerization reactions, followed by heating the gel and vaporizing the solvent to obtain the final product.This simple method of NP synthesis provides a homogeneous powder of NPs and is feasible; hence, it is more popular when compared to other methods [18,19].No previous studies have reported on the preparation of ZnO/SnO 2 nanocomposites by the sol-gel method, followed by testing their effect on common oral pathogens.The present study aimed to synthesize a composite of ZnO-SnO 2 NPs and evaluate their morphological, structural properties, and antibacterial properties at two different concentrations. Synthesis of zinc oxide-tin oxide NPs The sol-gel method of synthesizing NPs was employed to produce ZnO-SnO 2 NPs.To create a homogeneous solution, 0.4 M tin chloride pentahydrate (SnCl 2 .5H 2 O) was first dissolved in double-deionized water.Next, 8 M sodium hydroxide (NaOH) was added dropwise at a steady rate.The SnCl 2 .5H 2 O solution was mixed with the ZnO precursor zinc sulfate heptahydrate (ZnSO 4 .5H 2 O) at an optimized concentration of 1 M.The complete solution was continuously stirred until a homogenous solution was obtained.The solution was then agitated for 20 minutes before being microwaved at 320 W for 10 minutes.To get rid of the contaminants and impurities, the resultant product was centrifuged and cleaned using deionized water and ethanol alternately five times.After drying and annealing at 500 °C for 12 hours, a white ZnO-SnO 2 powder was produced. Characterization Following synthesis, characterization of the NPs was carried out at the White Lab-Material Research Centre, Saveetha Dental College and Hospital, Chennai.The morphological study of the NPs was performed by scanning electron microscopy (SEM) imaging.The structural study was performed by X-ray diffraction (XRD) analysis and the chemical studies were performed by energy dispersive X-ray spectroscopy (EDAX) and FT-IR spectroscopy. Morphological study: SEM imaging High-resolution SEM, in conjunction with an EDAX diffractometer (Model: JEOL, JSM IT-800), was employed to analyze the surface characteristics of the NPs.Samples of ZnO-SnO 2 NPs were mounted on carbon-taped aluminum stubs, gold-coated using a sputter coater, and seen under an SEM. 
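As a worked example of the precursor preparation described above, the short calculation below converts the stated molarities into reagent masses for a hypothetical 100 mL batch. The batch volume is an assumption (the text does not state it), and the heptahydrate molar mass is used for the zinc sulfate precursor even though the text writes its formula as ZnSO4.5H2O.

    # Rounded molar masses in g/mol
    M_SNCL2_5H2O = 189.6 + 5 * 18.02     # SnCl2.5H2O as named in the text
    M_NAOH       = 40.0
    M_ZNSO4_7H2O = 161.4 + 7 * 18.02     # zinc sulfate heptahydrate

    def grams_needed(molarity: float, molar_mass: float, volume_l: float) -> float:
        return molarity * molar_mass * volume_l

    VOLUME_L = 0.100                      # hypothetical 100 mL batch
    print(f"SnCl2.5H2O (0.4 M): {grams_needed(0.4, M_SNCL2_5H2O, VOLUME_L):6.2f} g")
    print(f"NaOH       (8 M)  : {grams_needed(8.0, M_NAOH,       VOLUME_L):6.2f} g")
    print(f"ZnSO4.7H2O (1 M)  : {grams_needed(1.0, M_ZNSO4_7H2O, VOLUME_L):6.2f} g")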
Structural study: XRD This technique provides detailed information about the crystallographic structure of the NPs based on the interference of a crystalline sample and monochromatic X-rays.In XRD analysis, the X-rays generated are collimated and directed to a sample of the NPs, and the incident rays interact with the sample to produce a diffracted ray that is detected, processed, and counted.Understanding the crystallinity and crystal structure of the NPs was carried out by the powder XRD analysis in the 2ϴ range of 20 -80° (Model: Bruker D8 Advance, Bruker Corporation, USA) (Figure 1).The plot of the diffraction pattern displays the intensity of scattered diffracted rays at different angles of the material. Chemical studies: EDAX and FT-IR spectroscopy The elemental composition of the ZnO-SnO 2 NPs was determined for EDX analysis.In this technique, the NPs are analyzed by activation using an EDX spectrophotometer.The bonding and functionality of the NPs were studied using Fourier transform infrared spectroscopy.FT-IR spectrometer comprises a test chamber, source, amplifying device, detector, computer, and an analog-to-digital converter.The interferometer allows radiation from the sources to travel to the detector.An analog-to-digital converter and amplifying device form a digital signal that is produced by an amplified and converted signal, followed by the transfer of the signal to the computer where the Fourier transform is carried out.A portion of the infrared radiation, which has a wavelength of roughly 4000-500cm -1 (Bruker-ALPHA 2, Bruker Corporation, USA), is transmitted through the sample while the remaining energy passes through partially (Figure 2).The sample transforms the radiation absorbed into rotational or vibrational energy.The final signal produced at the detector has a spectrum that typically ranges from 4000 to 500 cm -1 , and it represents the molecular signature of the samples.The films' infrared (IR) spectra were taken using a Fourier transform infrared spectrometer. Antibacterial test An agar diffusion test was performed to evaluate the antibacterial properties of the NPs.The antibacterial properties of two concentrations of the NPs, ZnO + SnO 2 (30 μg/ml), ZnO + SnO2 (15 μg/ml), and Control (Antibiotics) were assessed.Antibiotic erythromycin was used against gram-negative and penicillin was used against gram-positive bacteria.All the Petri plates were filled with nutrient agar (25 ml each) and were left to harden.50 μl of S. aureus, S. mutans, and E. coli cell suspension was pipetted onto an agar plate.Three indentations were created on the agar using the gel puncture method with adequate spacing between each well and the plates were placed in the incubator for 16-18 hours at 37°C.The inhibited bacterial zone was assessed encircling all the wells and the diameter was measured and documented in millimeters. Morphological evaluation of the synthesized NPs using SEM Field emission microscopy (SEM) was used to determine the morphology of the synthesized SnO 2 :ZnO NPs. Figure 3 shows that NPs were big, asymmetrical particles.The existence of both SnO 2 and ZnO NPs in the nanocomposites is further supported by the observation of some irregular flower-shaped NPs.The surface structure is characterized by compact nanosized particles.These particles tend to aggregate, posing a challenge to accurately measuring the size of individual nanoparticles.Moreover, some particles fuse, forming clusters with a notably high degree of agglomeration. 
Figure 5 shows the functional groups found in pure SnO2:ZnO nanocomposites as determined by FTIR analysis. In the peaks at 570 cm-1, 601 cm-1, and 630 cm-1 [20], the typical peak of tetragonal SnO2 NPs was seen. Bands of vibration emerging between 1600 and 3473 cm-1 in the sample were the result of the bending and stretching vibrations of -OH molecules [21]. The bands that occur in the peak region between 1400 and 2373 cm-1 were used to identify the absorption of CO2/organic moieties from the atmosphere [22,23]. A new peak corresponding to Zn-O stretching was seen in the nanocomposites at 498 cm-1. This suggests that SnO2 was successfully combined with ZnO and formed the SnO2:ZnO nanocomposites. Although the presence of both ZnO and SnO2 nanoparticles had an impact on the overall functional structure of the ZnO:SnO2 nanocomposite, a notable difference in peak shapes was observed between the two. Antibacterial properties The antibacterial activity of the synthesized NPs against the bacterial cultures of S. aureus, S. mutans, and E. coli at high (30 μg/ml) and low (15 μg/ml) concentrations is shown in Figure 7. Agar plates A, B, and C show the antibacterial activity of the NPs against S. aureus, S. mutans, and E. coli, respectively. A zone of inhibition is noted around the NPs. Table 1 provides the zone-of-inhibition values of the high and low concentrations of NPs against the tested bacteria. It was observed that the chemically synthesized ZnO-SnO2 NPs showed a zone of inhibition similar to the control at both concentrations. Discussion In the present study, ZnO-SnO2 NPs were synthesized, followed by evaluation of their morphological, structural, and antibacterial properties at two different concentrations. The ZnO-SnO2 NPs synthesized by the sol-gel technique displayed an irregular shape in their morphological examination. Chemical analysis confirmed the composition of the elements as follows: Sn 53.8 wt%, Zn 12.5 wt%, and O 29.1 wt%. Additionally, it confirmed the presence of a chemical bond between the ZnO and SnO2 NPs. The structural assessment revealed that the ZnO and SnO2 NPs had hexagonal and orthorhombic structures, respectively. Furthermore, when assessing the zone of inhibition of these NPs at varying concentrations against S. aureus, S. mutans, and E. coli, it was observed that their inhibitory effects were comparable to the control group. The structural study performed by XRD analysis observed a value of 0.2868 nm, found to be in accordance with the (022) plane of SnO2; the interplanar spacing value of 0.3383 nm calculated from the lattice image shown in the XRD pattern matches well with the value observed in XRD (0.3337 nm), which corresponds to the (110) plane of ZnO [24]. The XRD pattern establishes the polycrystalline character of the produced ZnO:SnO2 nanocomposites. These results are in good agreement with a previously published study by Zarei et al., which confirmed the purity and high crystallinity of ZnO-SnO2 thin films [25].
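The interplanar spacings quoted above follow from Bragg's law once a peak position and the X-ray wavelength are known. The sketch below assumes a Cu K-alpha source (0.15406 nm), which the text does not state explicitly, and uses an illustrative peak position; it simply shows how a 2θ value maps to a d-spacing of the same order as those reported.

    import math

    WAVELENGTH_NM = 0.15406   # Cu K-alpha, assumed; the diffractometer source is not stated

    def d_spacing(two_theta_deg: float, wavelength_nm: float = WAVELENGTH_NM, n: int = 1) -> float:
        # Bragg's law: n * lambda = 2 * d * sin(theta)
        theta = math.radians(two_theta_deg / 2.0)
        return n * wavelength_nm / (2.0 * math.sin(theta))

    # An illustrative reflection near 2-theta = 26.6 degrees gives d of about 0.33 nm,
    # the same order as the 0.3337/0.3383 nm spacings discussed above.
    print(f"d = {d_spacing(26.6):.4f} nm")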
It has been noted that combining two or more potentially lethal NPs into nanocomposites increases antibacterial activity. The suggested mechanism behind the antimicrobial activity of NPs is the generation of reactive oxygen species (ROS), which can disrupt protein function, damage the cell membrane, and damage DNA [26]. Numerous investigations have shown that ROS can be formed even in the absence of light, and ZnO-SnO2 NPs are capable of generating large amounts of ROS; ROS generation is therefore regarded as the factor responsible for the increased activity of SnO2-ZnO nanocomposites [27]. Zn2+ interacts with the cell wall and damages the cell membrane, causing intracellular fluids to leak out and the cells to lose viability. Tin interferes with physiological pathways in the cell, disturbing normal processes and ultimately causing the death of the bacterial cells. Pandey et al. found improved antibacterial activity of ZnO-Ag2O/Ag, ZnO-CuO, and ZnO-SnO2 nanocomposites made using the solvochemical approach compared with their single-metal-oxide counterparts [28]; they noted that the antibacterial activity was higher at a lower concentration and increased after 3 hours. Evstropiev et al. studied the zone of inhibition of ZnO-SnO2 nanocomposites and concluded that incorporating SnO2 with ZnO NPs increased the antibacterial activity against both gram-positive and gram-negative bacteria [29]; the addition of SnO2 NPs reduced the crystal size of the ZnO crystals and increased the surface area of the nanocomposites, which enhanced their antibacterial properties. In the present study, the zones of inhibition at both high and low NP concentrations were comparable to the control group. The ZnO-SnO2 NPs can be studied further for cytotoxicity and for surface morphology when coated on orthodontic brackets and archwires to reduce friction, and they can be incorporated into restorative adhesives to enhance bond strength. Limitations Various factors such as pH, temperature, pressure, and light can influence the synthesis process, potentially leading to variations in the properties of the NPs. Higher concentrations of the NPs may raise toxicity concerns that affect their safety in various applications; hence further study is required. The in vitro assessment of antibacterial activity may not cover the full spectrum of bacterial strains. Conclusions ZnO-SnO2 NPs were effectively synthesized by the sol-gel technique, which is a cost-effective method. Figure captions - Figure 3: Scanning electron microscopy image of the prepared ZnO-SnO2 nanoparticles. Figure 4: Graph showing the chemical composition of ZnO-SnO2 nanoparticles using EDAX spectroscopy (X-axis: keV; Y-axis: cps/eV, counts per second per electron volt; Sigma: error in weight percentage). Figure 5: Fourier transform infrared spectroscopy results of the prepared ZnO-SnO2 nanoparticles. Figure 7: Zones of inhibition observed around the high- and low-concentration nanoparticles and the control (antibiotics) against S. aureus, S. mutans, and E. coli (C: control, HC: high concentration, LC: low concentration).
3,357.2
2024-01-01T00:00:00.000
[ "Materials Science", "Chemistry", "Medicine" ]
Performance Analysis of Data-Driven Techniques for Detection and Identification of Low Frequency Oscillations in Multimachine Power System In power systems, identification and damping of low-frequency oscillations(LFO) is very crucial to maintain the small signal stability. Hence the computation of eigenvalues, eigenmode shapes, participation factors, and coherency of generators are essential parameters of critical LFO modes. The existing data-driven approaches explore either one or two aspects of modal parameters from the dynamic pattern of the measurement data. In the present work, two approaches i) Iterative Approach(IA) ii) Non-Iterative Subspace(SS) method of data-driven techniques are used to estimate the state-space model of the system under study from the measurement data in a holistic framework. Based on the estimated system model, the eigenvalues of LFO, eigenmode shapes, participation factor, and coherency of associated generators participating in electromechanical oscillations are computed. Finally, from the estimated participation factors for the Inter-area oscillation mode (IAM), the Static Synchronous Compensator (STATCOM) damping controller is designed and placed at the generator with the highest participation factor for damping of inter-area oscillation. The enhancement of damping ratio of inter area mode with STATCOM damping controller is estimated and verified using IA & SS data driven approaches for the first time. In this work, IA uses prediction-error minimization algorithm (PEM) & Parallel computing techniques and SS method uses Multivariable Output Error State Space (MOESP) algorithm for the estimation of Hankel matrix from the measured data. The effectiveness of data-driven approaches are demonstrated by the simulation of a IEEE 4-machine,10-bus power system using MATLAB/Simulink. IA & SS methods incorporating wavelet based denoising techniques are very effective in identifying the LFO modes even with noisy measurement. The efficacy of the denoising to suppress the effect of noise is demonstrated by comparing with noiseless environment. The results of data-driven approaches indicate their high degree of accuracy and efficiency in being consistent with Eigenvalue analysis (EA) performed on the system. I. INTRODUCTION The increased power transfer over the transmission network infuriates the power system to undergo low-frequency oscillations(LFO). The LFO typically lies between 0.2 to 3 Hz. Generally, these oscillations decay fast and the system remains stable if the system has positive damping at these oscillating frequencies. Depending upon certain operating conditions the LFO may grow/ sustain to reach the extent of causing synchronous machines to lose synchronism. The adverse effect of these oscillations intensifies the fluctuations The associate editor coordinating the review of this manuscript and approving it for publication was Flavia Grassi . in voltage, power flow, torque, and speed for a long duration. The damping of LFO is a function of system operating conditions [1], [2]. Detection of electromechanical oscillations is of utmost importance in the power system to mitigate the blackout scenario. Analysis of measurement data is very much essential in control rooms to have real-time awareness of such oscillations to initiate emergency control in view of maintaining the stability and to ensure secured power system operation. 
Numerous approaches have been introduced for extracting details of low-frequency oscillations from the dynamic patterns of measurement data [3], [6]. The Prony method is a parameter fitting approach, that is used to identify dominant modes from measurement data and was initiated by Haur [7]. The extension of the Prony algorithm by Zhou et al. [8] assuming that the homogeneous responses of the linearized power system model is a linear combination of decaying complex exponential functions and is suited if generators swing in unison. Performance of the Prony method deteriorates in the presence of noises in the measured data. Fladrin et al. [9] introduced estimation algorithm empirical mode decomposition (EMD) which uses pre-processed data using a multi-bank filter to remove the noise and signal offsets. However, due to the frequency mixing phenomenon in multi-bank filter, EMD introduces artificial modes apart from the dominant mode of the system. Eigenvalue realization algorithm introduced by Kamwa et al. [10] suppresses the artificial modes using multi-band modal analysis. The use of the sliding window approach adapted by methods reported is computationally exhaustive and the memory requirement for the storage of data is extensively large. Kalman filter and extended Kalman filter-based approaches were introduced in [11] & [12] uses the dominant modes from preceding estimations and updates the estimation from new samples. Further, Zhou et al. [13] introduce an autoregressive moving average(ARMA) algorithm which estimates dominant modes from the ambient responses of the power system. Identification of mode shapes from the ambient data is proposed by Trudnowski et al. [14] and Dosiek et al. [15] introduce the ARMAX to identify the eigenvalues that correspond to electromechanical oscillations and mode shapes from the multichannel measurement. However, the performance of ARMAX is reduced for the measured signal containing high damping oscillatory modes. The stochastic subspace identification Ni et al. [16] and recursive adaptive stochastic subspace identification Nezam et al. [17] were proposed to overcome the limitations of ARMAX and increase the computational speed of recursive methods during estimation of mode shapes and dominant mode. The participation factors of the generators for dominant modes are computed using EA [1], [2], geometric measures, and modal participation ratio methods proposed in [18], [19] & [20], all these methods require the computation of the state matrix from the linearized power system model at a given operating point. Due to changes in operating points, the estimation of participation factors of the dynamic model of the power system in real-time is seldom guaranteed. Senroy et al. [21] proposed the Hibert-Huang Transform to identify the coherent group of generators from the measured data for the inter-area mode of oscillations. Avdaković [22] also proposed the wavelet-based approach to extract the mode shapes among the generators associated with IAM. To identify the coherency of generators for multiple dominant modes the cluster-based data-driven approaches were proposed such as hierarchical clustering [23], principal component analysis [24], independent component analysis (ICA) [25], Dynamic mode decomposition(DMD) [26], Koopman mode (KMs) [27] and spectral clustering [28]. One of the prerequisites of cluster-based methods is that they necessitate a presumptive knowledge of many coherent groups in the system which is not available always. 
The comprehensive approach which provides information on all the modal parameters such as eigenvalues, mode shapes, participation factors, and generator coherency is proposed by Li et al. [30] in which the Eigenvalue Realization Approach (ERA) is used to extract the information and requires a large set of data to construct a block Hankel matrix. The validation of the ERA method for the system incorporated with the damping controller is not reported. The performance of ERA under noisy signal environment is not been investigated. Xiong et al. [31] explains the advantages, limitations and applications of different approaches for system modelling and stability analysis.The extensive survey is carried out on small signal stability and large signal stability analysis for the power system dominated with VSC. The technical challenges and limitation of the small signal stability analysis using state-space modelling of such systems are addressed. Comprehensive study indicate the need of very effective modelorder reduction considering timescale features, the stability classification and the aggregation of VSCs with similar characteristics. In Impedance based approach the computation process is simplified compared to state space approach but it is prone to measurement noise. This demands very effective signal de-noising approach. In [32], Li et al. proposed robust and effective unified controller for each DG's connected in multi bus microgrid which controls the power flow effectively during grid connected, islanded and operational transition. The robustness of the proposed algorithm is analyzed comparing the performance for the laboratory setup using Realtime digital simulator with experimental setup using MATLAB simulation are verified. Li et al. in [33] introduces a discretized time delay model for power system. In this work, only retarded variable are considered for discretization to reduce the model order and introduces sparse techniques to reduce the CPU time required for the eigenvalue computation for time delay power system. However, the robustness of the algorithm in noisy measurement is not investigated in the paper. In this paper, EA is performed on a linearized dynamic model of the power system and is considered as a base case for the proposed approaches. The data driven techniques IA and SS are utilized to estimate the eigenvalues, mode shapes, participation factors, and generator coherency for multiple electromechanical modes from the measurement data using holistic framework. The performance of IA & SS approaches are evaluated again with denoised signals. Data required to validate the performance of data driven techniques is obtained from MATLAB/SIMULINK model of 4 machine, 10 bus system. In this paper, the data-driven techniques IA and SS methods are used for accurate estimation of the Inter-area mode and Swing modes as compared to the model-based EA approach. PEM algorithm with Levenberg-Marquardt (LM) search algorithm [38] is used to get the maximum fitness function for the measurement data in IA and Multivariable Output Error State Space based algorithm is used to compute the weighting function for singular value decomposition in SS method. Parallel computing with global optimization methods are applied in the present work to speed up the iterative algorithm and the performance of both approaches are compared. 
The performance of proposed data-driven methods in estimating the modal parameters and its accuracy is found to be consistent with the EA approach in extracting dynamic characteristics such as participation factor of the generator, mode shapes, and coherency of the generator for particular eigenmode. It is observed that, IA & SS approaches meticulously acquire dominant modes, participation factors, mode shapes, and generator coherency. The computation speed of the IA approach is increased by incorporating parallel computing using a global optimization search algorithm. Based on the results, it is shown that SS method is faster than IA. During noisy environment the data driven methods IA & SS are not very effective to estimate the state space due to convergence issues in IA and larger orders for estimation in SS. The performance of proposed approaches during noise environment is tested & necessity of noise detection and denoising the signal prior to the estimation is justified. To overcome the difficulty due to the effect of noise, the statistical based noise detection mechanism and wavelet based denoising techniques are incorporated in this paper. The wavelet denoising with threshold techniques are chosen and the efforts are made to select the efficient wavelet with best decomposition level using statistical analysis. STATCOM supplementary modulation controller is designed to damp out the IAM with highest possible damping factors by placing STATCOM at the generator which has the highest participation factor for IAM. The IA & SS approaches are used to detect the eigenvalues of IAM to demonstrate the effectiveness of the STATCOM damping controller for the first time. The work reported in this paper is mainly focused on 1. Implementation of variant of N4SID is carried out for subspace identification and estimation of eigenmodes describing the LFO from the time series data of Wide Area Monitoring System. 2. Detection of presence of noise in the PMU measurement is using Ljung-Box-Q-test & choice of appropriate wavelet by performing adequate statistical analysis for denoising the noisy signal & using box plot. 3. Implementation of IA & SS methods are used to identify the eigenvalues of Inter area mode (IAM) of the system with STATCOM damping controller incorporated for multimachine power system. The organization of the paper as follows. EA for a linearized power system model is reviewed in section II. The state-space identification using a data-driven techniques iterative approach IA and noniterative approach SS are used to extract the dominant modes, mode shapes, participation factors and coherency between the generators from the measurement data is developed in section III. The details of noise detection & wavelet based denoising techniques are discussed in section III.D, III.E & III.F. Section IV presents the results & performance of proposed approaches with model-based eigenvalue analysis as a base case for the simulated system of 4-machine,10 bus system in Section IV.A & Section IV.B. Section IV.C describes the usage of parallel computing techniques to speed up the iterative approaches. Based on the participation factors STATCOM parameters are tuned to damp the IAM in Section IV.D. In Section IV.E & Section IV.F, the performance of proposed approaches are verified during noisy environment & noise detection and wavelet denoising techniques are discussed. The conclusions are drawn in Section V. II. 
EIGENVALUE ESTIMATION USING MODAL ANALYSIS Stability analysis of a power system is performed using a set of differential-algebraic equations (DAE) described as ẋ = f(x, i), y = g(x, i) (1), where x ∈ R^m is the state vector, i ∈ R^k the input vector and y ∈ R^n the output vector. Linearization of equation (1) at an equilibrium point (x_0, i_0) yields the state-space representation ∆ẋ = A ∆x + B ∆i, ∆y = C ∆x + D ∆i (2), where A is the m × m state matrix, B the m × k control matrix, C the n × m output matrix and D the n × k feedforward matrix. To apply the eigenvalue analysis (EA) method to the state matrix A, the eigenvalues and eigenvectors must satisfy A U_j = λ_j U_j and V_j A = λ_j V_j (3), where λ_j is the j-th eigenvalue of A, U_j (of dimension m × 1) is the right eigenvector of A associated with λ_j, and V_j is the corresponding left eigenvector. To express the relationship between the system states and the modes, the participation factor is computed as P_r,j = v_rj · M^EA_jr (4), where P_r,j measures the participation of the r-th state variable in the j-th mode, v_rj is the r-th entry of V_j and M^EA_jr is the r-th entry of U_j. Small-signal oscillations in the power system grow when poorly damped low-frequency oscillation modes are present. Based on their frequency, the low-frequency oscillatory modes are categorized as inter-area oscillatory modes and swing modes. The mode-shape matrix is used to estimate the coherency between the generators associated with the low-frequency oscillatory modes [29], [34], and is formed from the right eigenvectors of those modes as M^EA = [U_1, U_2, …] (5), where M^EA is the mode-shape matrix associated with the multiple low-frequency oscillatory modes. III. DATA-DRIVEN APPROACHES FOR STATE-SPACE ESTIMATION FROM MEASUREMENT DATA The procedure for extracting eigenmodes, mode shapes, participation factors and generator coherencies from the dynamic model of the power system was discussed in Section II. The accuracy of such model-based estimation depends on the accuracy of the system models and parameters, which require non-trivial effort to maintain during real-time operation of the power system. Measurement-based approaches are therefore preferred for monitoring real-time oscillations: they reflect the impact of actual operating conditions on small-signal dynamic stability and estimate the eigenmodes accurately without depending on a specific dynamic model. With the advent of phasor measurement systems, measurement-based methods have become the need of the hour. This section therefore describes the data-driven approaches used to estimate the state-space model of the system from the measured data. A. AN ITERATIVE APPROACH USING A PREDICTION-ERROR MINIMIZATION ALGORITHM In the iterative approach (IA), discrete data collected from measurement units such as PMUs and SCADA are used to obtain a continuous-time state-space model of the system. The parameter estimates are initialized using an iterative rational-function estimation approach and are then refined to minimize the estimation error using the prediction-error minimization (PEM) algorithm, improving the closeness of fit. Consider a model structure M_x, which is a parametrized set of models, and let M_x(θ) denote the particular model in M_x associated with the parameter vector θ.
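Returning briefly to the model-based analysis above, equations (3) and (4) reduce to a few lines of linear algebra once a state matrix is available. The sketch below applies them to a toy 2×2 matrix standing in for the linearized system matrix A (the real A comes from the 4-machine model), printing the modal frequency, damping ratio and state participations.

    import numpy as np

    # Toy state matrix standing in for the linearized system matrix A (illustrative only)
    A = np.array([[-0.5,  5.0],
                  [-5.0, -0.5]])

    lam, U = np.linalg.eig(A)      # right eigenvectors: A U_j = lambda_j U_j
    V = np.linalg.inv(U)           # rows are the corresponding left eigenvectors V_j

    P = U * V.T                    # participation of state r in mode j: U[r, j] * V[j, r]
    freq = np.abs(lam.imag) / (2 * np.pi)
    zeta = -lam.real / np.abs(lam)

    for j in range(len(lam)):
        print(f"mode {j}: f = {freq[j]:.3f} Hz, zeta = {zeta[j]:.3f}, "
              f"|participations| = {np.round(np.abs(P[:, j]), 3)}")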
The prediction error e (t, θ) for any model M x (θ) is given by where y(t) is actual output which is p -dimensional column vector,ŷ(t|θ) is a predicted output which is p -dimensional column & e(t) also be a p-dimensional column vector. For any dataset Z N the errors can be computed for t=1, 2, 3 . . . . . . .N. Thus for a parameter estimation, prediction error is computed (7) based on Z t . Select the θ N at t=N so that prediction error become as small as possible A weighted norm of prediction error is defined as The prediction error sequence e (t, θ) which is represented as e (t) and is defined as the difference between the predicted output and measured output of the model, & it is prefiltered using linear filter h −1 (q) the error is defined as where h is the prefilter used for preprocessing the data with is a scalar and e (t) is a vector. The subscript N denotes that the cost function is a function of the number of data samples and accuracy of cost functions increases for larger values of N. In [37], the iterative approach uses a general linear estimator with Langrage's method to minimize the cost function. In this work, to minimize the cost function, the PEM utilizes Levenberg-Marquardt (LM) search algorithm [38]. PEM is used to select the proper order of system which generates different Hankel singular values for integer values of order, the order with lower Hankel singular values are discarded. The state-space model of the measured data is now extracted after observing the close fitness between the predicted data and measured input by choosing the proper order of the system. [34], [35]. The estimated continuous-time state-space model is represented asẋ where A 1 is a state matrix with m×m dimension, B 1 is control matrix with dimension m × k, C 1 is output matrix n × m, D 1 is feedforward matrix n × k. y(t) and i(t) output and input respectively. Further, by using A 1 , B 1 and C 1 controllable matrix B c 1 and observable matrix C o 1 of (10) are computed as where U IA is the right eigenvector represented as M IA k is the mode shapes of all the measurement channels associated with eigenmode λ k are computed using observable matrix C o 1 , (12) where U IA k is the right eigenvector corresponds to the λ k & M IA k is the kth column of C o 1 . Hence using mode shapes M IA k , the participation factor p IA kq of k th state variable in the q th mode; is computed as To identify the coherency between the generators, the angle of cosine d iq between the measure channels i & q can be determined using (14) with the use of estimated mode shape matrix M IA which is defined by The proposed approach uses numerical algorithms for subspace (SS) state-space system identification [39]- [41]. The additional parametrization is required for initial conditions while estimating the state-space using iterative approach from the measured data with the non-zero initial state, wherein performance of subspace approach do not vary for zero and non-zero initial states. Since it is a non-iterative method, the convergence issues, and parametric sensitivities to initial state estimation do not affect the performance of the algorithm. Hence the non-iterative subspace methods are faster than iterative approaches as described in section V. 
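A stripped-down illustration of the prediction-error minimization used by the iterative approach above is given next: a second-order oscillatory model is fitted to a single synthetic ring-down channel by minimizing the prediction error with a Levenberg-Marquardt search (scipy's least_squares). The model parametrization, the initial guess and the synthetic data are all assumptions made for the sketch; the paper's IA estimates a full multichannel state-space model.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.signal import StateSpace, lsim

    t = np.linspace(0.0, 10.0, 1001)
    y_meas = np.exp(-0.3 * t) * np.sin(2 * np.pi * 0.6 * t)   # synthetic ring-down channel

    def predict(theta):
        sigma, omega, amp = theta
        A = np.array([[sigma, omega], [-omega, sigma]])       # one complex mode sigma +/- j*omega
        sys = StateSpace(A, np.zeros((2, 1)), np.array([[amp, 0.0]]), np.zeros((1, 1)))
        _, y, _ = lsim(sys, U=np.zeros_like(t), T=t, X0=[0.0, 1.0])
        return y

    def prediction_error(theta):                              # e(t) = y(t) - y_hat(t | theta)
        return y_meas - predict(theta)

    fit = least_squares(prediction_error, x0=[-0.2, 3.5, 0.8], method="lm")
    sigma, omega, _ = fit.x
    print(f"estimated mode {sigma:.3f} +/- j{abs(omega):.3f}  (f = {abs(omega)/(2*np.pi):.3f} Hz)")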
The state-space model for the measurement data from the multichannel can be expressed as discrete form as where A 2 ∈ R m×m is a discrete-time state matrix, B 2 ∈ R m×n is discrete-time input matrix, C 2 ∈ R l×m are an output matrix, the feedforward matrix D 2 ∈ R l×n &y(kT ) and i(kT ) output and input vectors respectively. T is the sampling interval. To estimate A 2 , B 2 , C 2 and D 2 matrices the subspace identification algorithm is employed in this work to predict the exact state-space model from the measured data. The procedures for estimation is summarized as follows (1) Construct input block Hankel matrix H ik , output block Hankel matrix and error block Hankel matrix from the measurement data which can be represented as where i k , y k and e k are the measurement data collected at time k; m 0 is the initial order of the system. Construct a matrix Z P which includes past input I P and past output Y P [40] defines as From Z P and future input I f , Y ef i.e. the predicted future matrix of the future output matrix Y f is computed using the least square algorithm. Inspired by the linearity of the system, we combine the past (I P ) and the future inputs (I f ) linearly to predict the future outputs (Y f ). and linear combinations are denoted using linear operators L Z , and L i defines in (19) Consider Y ef is the estimated output matrix The least-square estimate of Using orthogonal projection of Y f row space in the row space of I f and Z P the estimated output Y ef computed as The optimal values of L Z and L i are obtained using Oblique projection [37]. where Z P,0|i−1 is the first row to the ith row of Z P Taking singular value decomposition of L z Hence the order 'n' of the system is non-zero singular values of S 1 . The extended controllability matrix where the subscript i denotes the number of block row and state space metrics arê Therefore the system model is computed as follows The discrete-time state matrices A 2 , B 2 , C 2 &D 2 are calculated using (26)(27), (30) and the state-space model can be represented as (15) with an error vector ρ z ρ i is neglected. The continuous-time state-space model is obtained from the discrete-time model (15). The continuous time transformation of the matrices A 2 , B 2 and C 2 are where T is the sampling interval. Eigenvalues λ ss and eigenvectors U ss are calculated using If λ j is one of the modes of low-frequency oscillation mode of the power system under consideration and its frequency f j and damping ratio ζ j are computed as The controllable and observable matrices are computed as similar to (9) The participation factor p ss kq of the machines for participation factor p IA kq of k th state variable in the q th mode and M ss , the mode shapes matrix of the eigenvalues are calculated using (34) and (35) respectively In this work, variant of N4SID is implemented for subspace identification.N4SID algorithm [37], [42] is used for initialization & MOESP based algorithm [37] is used to compute the weighting function for singular value decomposition. The subspace method used in this work make use of orthogonal projection to remove the input effect and the oblique projection to remove the error. Therefore, the proposed method is suitable for estimation of output from the recorded data such as time series data for the system with both forced excitation and unforced excitation. C. 
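The Hankel-and-SVD machinery described above can be illustrated, in a much simplified form, on a noise-free ring-down: the sketch below builds two shifted Hankel matrices from a synthetic two-mode signal, recovers a discrete state matrix from the SVD (an ERA-style shortcut rather than the full MOESP/N4SID projections used in the paper), converts it to continuous time as in (31), and reports frequency and damping as in (32). The signal, the matrix dimensions and the model order are assumptions.

    import numpy as np

    T = 0.01
    t = np.arange(0.0, 20.0, T)
    y = (np.exp(-0.20 * t) * np.cos(2 * np.pi * 0.55 * t)
         + 0.5 * np.exp(-0.35 * t) * np.cos(2 * np.pi * 1.10 * t))   # synthetic two-mode ring-down

    rows, cols, order = 60, 1200, 4
    H0 = np.array([y[i:i + cols] for i in range(rows)])              # block Hankel matrix
    H1 = np.array([y[i + 1:i + 1 + cols] for i in range(rows)])      # one-step-shifted Hankel

    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    Ur, Sr, Vr = U[:, :order], np.diag(np.sqrt(s[:order])), Vt[:order, :]
    A_d = np.linalg.inv(Sr) @ Ur.T @ H1 @ Vr.T @ np.linalg.inv(Sr)   # discrete-time state matrix

    lam_c = np.log(np.linalg.eigvals(A_d)) / T                       # discrete -> continuous, as in (31)
    freq = np.abs(lam_c.imag) / (2 * np.pi)
    zeta = -lam_c.real / np.abs(lam_c)                               # frequency and damping, as in (32)
    for f, z in sorted(zip(freq, zeta)):
        print(f"f = {f:.3f} Hz, zeta = {z:.3f}")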
COMPUTATION USING PARALLEL COMPUTING To speed up the estimation process of the proposed approaches, parallel computing with the global optimization tool of MATLAB is used. Parallelized numerical algorithms such as non-linear least squares and sequential quadratic programming can be run in parallel without CUDA or MPI programming, thanks to special array types and high-level constructs such as parallel for-loops; the toolbox allows parallel-computing-enabled functions to be used in MATLAB and other toolboxes [43]. In the present work, the computation speed of the iterative method is verified with the non-linear least-squares and sequential quadratic programming algorithms [44] and compared with the original estimation algorithm. However, the proposed subspace algorithm is found to be faster than the iterative algorithm and does not require parallel computation. D. MEASUREMENT NOISE AND SNR In real-time operation, the measurement units in a power system experience noise that is characterized as white noise [45]. Uncorrelated sample values are one of the features of white noise [46]: the i-th sample has no correlation with the (i+1)-th sample. White Gaussian noise of a specified level in dB is added to the signal generated through the time-domain simulations so that it resembles practical measurements. After the noise is added to the signal, the signal-to-noise ratio is computed using (36) and (37): SNR_rms = 20 log10(rms value of signal / rms value of noise) dB (36), where the root-mean-square value of a signal x is X_rms = sqrt((1/L) Σ_{k=1..L} x(k)^2) (37), with L the number of samples of the measured signal x(k). Since the rms value of a signal is the square root of its power, i.e., X_rms = sqrt(X_power) [47], equation (36) can be rewritten as SNR_power = 10 log10(signal power / noise power) dB (38), so that Noise power in dB = Signal power in dB − SNR in dB (39) and Noise power = 10^(Noise power in dB / 10) (40). The power of white Gaussian noise is its variance. The MATLAB statement used for generating the noise is Noise = (standard deviation of white noise) * randn(1, N) (41), where the command randn(1, N) creates a row vector of normally distributed random values with zero mean and unit standard deviation; the length N of the noise array equals the length of the signal array x(k), and the standard deviation of the noise follows from equations (39)-(40). The noisy signal is then obtained as x_noisy(k) = x(k) + Noise(k) (42). The present work continues by considering a data matrix consisting of the noisy signal measured at generator 1 and noiseless signals measured at the remaining generator buses of the two-area, 4-machine system. In this noisy environment the iterative approach fails to obtain the state-space estimate because of convergence issues, and the subspace method demands a much higher fitting order (n > 50) and more time. This drives the need to identify the noise and to apply appropriate denoising techniques before processing the data. In this paper, white Gaussian noise with an SNR of 50 dB is added, since this is the minimum signal strength possessed by ring-down data in a wide-area monitoring unit [47]. Noisy signals lead to misidentification of the state-space model by the proposed approaches; it is therefore necessary to detect the presence of noise and then denoise the signal prior to the system-identification process. White noise is the most common measurement noise in power systems [48]. E.
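Equations (36)-(42) amount to scaling a unit-variance Gaussian sequence so that the mixture has the requested SNR. A minimal sketch, using a synthetic ring-down in place of the simulated bus-angle signal:

    import numpy as np

    def add_white_noise(signal: np.ndarray, snr_db: float):
        # Scale zero-mean, unit-variance Gaussian noise so the result has the requested SNR
        signal_power_db = 10 * np.log10(np.mean(signal ** 2))
        noise_power = 10 ** ((signal_power_db - snr_db) / 10)        # eqs (38)-(40)
        noise = np.sqrt(noise_power) * np.random.randn(len(signal))  # eq. (41)
        return signal + noise, noise                                 # eq. (42)

    t = np.arange(0.0, 25.0, 0.01)
    clean = np.exp(-0.2 * t) * np.sin(2 * np.pi * 0.6 * t)
    noisy, noise = add_white_noise(clean, snr_db=50.0)               # 50 dB, as used in the paper
    realised = 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))
    print(f"realised SNR = {realised:.1f} dB")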
LJUNG-BOX-Q-TEST FOR IDENTIFICATION OF WHITE GAUSSIAN NOISE IN MEASURED SIGNAL Ljung-Box-Q-test is a statistical test based on null hypothesis to support the existence of autocorrelation [49], [50], [52] and defined by where r i is the estimated autocorrelation of the data with 'n' umber of samples and k is the number of lags being tested. where γ 2 1−a,h is the chi-square distribution table with 'h' degree of freedom and significance level α. Hypothesis 'h' and 'pvalue' are estimated for the measured signal. The signal lags are identified as heavily autocorrelated if Q value is greater the critical value of Chi-square and hence rejects the null hypothesis. If, Q< critical value, then supports null hypothesis with poor autocorrelation which indicates the existence of white noise. The autocorrelationr i of the signal is computed for time series using (44), where N is the length of the time series, Y is the measured signal, E is the expectation value [49]. F. WAVELET DENOISING USING THRESHOLDING TECHNIQUES The one dimension noisy signal is represented as where x (t) is the noisy signal, s (t) is measured signal through PMU and e(t) is added White Gaussian Noise subject to zero mean value & unity standard deviation. When a wavelet transform applied on signal x (t), it extract the signal energy into large approximation coefficients and distribute the noise energy over detailed coefficients. Therefore the denoising is viewed as exploitation over detailed co-efficient to obtained smooth signal using non-linear filtering process [52]- [56]. Thresholding is the commonly used denoising techniques proposed by Donoho and Johnstone [57] which removes the detailed coefficients less than certain threshold. The denoising process consists of three major steps as represented in block diagram in Figure 1. Step1: Selection of mother wavelet & decomposition levels and computation of wavelet coefficients. Step2: Selection of threshold function and extraction of estimated wavelet co-efficient Step3: Signal reconstruction using Inverse wavelet transform. Universal thresholding is the most widely used thresholding techniques because of effectiveness and simple approach. The formula for computation of threshold using universal threshold is given by where Nj is length of detailed coefficient and σ j is the mean variance at jth scale. The calculation of σ j is by computed by median estimation given by where W 1,l is the wavelet coefficient in scale 1. In this work, universal threshold method is used for denoising the signal. There are two major nonlinear filters are present in the thresholding of wavelet. First one is, denoising using Hard thresholding shown in Figure 2a is defines aš Second one is denoising using Soft thresholding shown in Figure 2b is defines aš In hard thresholding the coefficients between the threshold range −λ to λ are set to zero if the absolute values of coefficient lower than threshold values prior to reconstruction. Soft thresholding is also termed as wavelet shrinkage method which shrinks all positive & negative coefficients which lies in the threshold range [57] prior to reconstruction. To evaluate the performance of various mother wavelets while denoising the signal with thresholding techniques, the signal to noise ratio (SNR) using equation (37), Mean Absolute Error (MAE), Mean Square Error (MSE), Correlations and Fitness coefficients methods are employed in this work. The computation of all the parameters are defined using following equations. 
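The noise-detection and denoising chain just described can be sketched directly: the Ljung-Box Q statistic is computed from the sample autocorrelations and compared with a chi-square critical value, and the universal threshold is applied to the detail coefficients of a wavelet decomposition. The wavelet family ('db4'), the decomposition level and the test signal are assumptions made for illustration; the paper selects them through statistical analysis.

    import numpy as np
    import pywt
    from scipy.stats import chi2

    def ljung_box_q(x: np.ndarray, k: int = 20) -> float:
        # Ljung-Box statistic: Q = n(n+2) * sum_i r_i^2 / (n - i)
        x = np.asarray(x, float) - np.mean(x)
        n, denom = len(x), np.sum(x ** 2)
        r = np.array([np.sum(x[:-i] * x[i:]) / denom for i in range(1, k + 1)])
        return n * (n + 2) * np.sum(r ** 2 / (n - np.arange(1, k + 1)))

    def wavelet_denoise(x: np.ndarray, wavelet: str = "db4", level: int = 4, mode: str = "soft"):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate from finest scale
        lam = sigma * np.sqrt(2.0 * np.log(len(x)))              # universal threshold
        coeffs[1:] = [pywt.threshold(c, lam, mode=mode) for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(x)]

    t = np.arange(0.0, 25.0, 0.01)
    noisy = np.exp(-0.2 * t) * np.sin(2 * np.pi * 0.6 * t) + 0.05 * np.random.randn(len(t))
    k = 20
    # A pure Gaussian sequence yields Q below the chi-square critical value (white noise accepted)
    is_white = ljung_box_q(np.random.randn(len(t)), k) < chi2.ppf(0.95, k)
    denoised = wavelet_denoise(noisy)
    print("pure noise flagged as white:", bool(is_white))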
Mean squared error is defined by where n=number of data points, X i = Measured noise free signal alues D i = Wavelet denoised signal values Mean Absolute Error (MAE) is defined by where n=number of data points, X i = Measured noise free signal, D i = Predicted values after wavelet denoising, Correlation between the measured and wavelet denoised signal is defined by where X i & D i are the Measured noise free signal and denoised signals respectively,X &D are the mean values of the sample means of X and D. Fit co-efficient is employed to check whether the important information are lost during denoising, the fit co-efficient is given by (53) where X i & D i are the Measured noise free signal and denoised signals respectively. The experiment analysis of the statistical features described equation 50-51 are discussed in the result section IV.A. IV. RESULTS The performance and application of the proposed method are verified with simulated data from 4 machine-10 bus system shown in Figure 3 [1], [2]. The measurement of the angle at generator buses is collected with Generator 1( G1) as a reference at a step size of 0.01 sec for 25 seconds. It provides 2500 frames in 25 seconds with sampling rate of 100 frames per seconds. In real-time systems the PMU's used in Wide area Monitoring System (WAMS )has output frequency varying between 10 to 60 frames per second and in advanced PMU [58], the reporting rate is up to 200 frames per seconds. In this work, the reporting rate for advanced PMU is considered as 100 frames /sec. The model-based approach is used as a baseline with a small signal disturbance condition when the mechanical input to Generators is increased by 10% at 0.5 secs. Figure 4 illustrates the variation in bus angles of generators G1, G2, G3, and G4 for the smallsignal disturbance. The data between 4s to 20s window is used as an input for the proposed approaches to estimate the eigenvalues. A. EIGENVALUE ANALYSIS The model-based approach is applied on the 4-machine, 10 bus systems considering 1.1 model of synchronous machine [1], [2]. The modal parameters of the machines such as eigenvalues, eigenvectors, mode shapes, and participation factors are computed using (3)(4) and (5) when a small disturbance of 10% variation in mechanical input is activated. The computed values of modal parameters are compared with the proposed approaches in the sections to follow. B. AN ITERATIVE APPROACH AND SUBSPACE METHOD Applying an iterative approach (IA) to identify the state-space of a system using bus angle data collected at generator buses at area1 and area 2 of the study system. The parameter estimation minimization algorithm (8) & (9) is applied to find the maximum approximation between the measured input and processed output. The Order of the system is identified by computing Mean squared error of fitness. MSE is the average squared difference between the estimated value & measured value of a signal and is defined by (50). In general, MSE represent the difference between the actual observations and observation values predicted by the model. In this work, the difference between the measured values and predicted values of a signal is calculated using MATLAB function immse() and lesser the MSE more the closeness of fitting. It is observed from Figure 5 that, order n=21 has minimum mean square error. from the chosen data and the matrices A 1 , B 1 , C 1 and D 1 are computed as per equation (10) (11). 
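The evaluation metrics of (50)-(53) are one-liners in NumPy; the helper below groups them, with the fit coefficient written in the usual normalized-error form (an assumption, since the exact expression used for (53) is not reproduced here).

    import numpy as np

    def denoise_metrics(x: np.ndarray, d: np.ndarray):
        # x: measured noise-free reference, d: wavelet-denoised signal
        x, d = np.asarray(x, float), np.asarray(d, float)
        mse = np.mean((x - d) ** 2)                                         # eq. (50)
        mae = np.mean(np.abs(x - d))                                        # eq. (51)
        corr = np.corrcoef(x, d)[0, 1]                                      # eq. (52)
        fit = 1.0 - np.linalg.norm(x - d) / np.linalg.norm(x - np.mean(x))  # eq. (53), normalized-error form
        return {"MSE": mse, "MAE": mae, "correlation": corr, "fit": fit}

    x = np.sin(np.linspace(0.0, 10.0, 500))
    d = x + 0.01 * np.random.randn(500)
    print(denoise_metrics(x, d))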
The algorithm is modified to extract the eigenvalues lying between 0.2 to 3Hz as the frequencies of our interest is to identify the low-frequency oscillation modes. The eigenvalues λ n and the right eigenvector (12). The participation factors of the machines are computed from the measured data using (13). In applying Subspace (SS) approach to the simulated data of the multimachine system bus angles between 4 sec to 20 s window. Initially, the Hankel Matrix associated with the measures input is computed using (16) and the order of the system n=21 is validated against closeness of fitness by solving Singular value decomposition S1(25) which was found to be containing all non-zero singular values. The percentage of the closeness of fitness between measured data and predicted data is shown in Figure 6.2 which is very similar to the IA method. Further, the discrete state-space matrices A 2 , B 2 , C 2 and D 2 are extracted using (30) and computation of continuous-time state-space matrices A t 2 , B t 2 , C t 2 and D t 2 are achieved using (31). Eigenvalues of A t 2 and right eigenvectors U ss are estimated. Controllability and observability matrices B tc 2 and C to 2 are estimated using the right eigenvector of A t 2 via (33). The participation factor of generators is computed using (34). The verification of estimation accuracy of proposed data driven approaches IA & SS for n=21 are evaluated, and compared with critical LFO modes of EA method as the basis for correctness estimation. The dominant modes of the system are computed using the proposed data-driven approaches and EA method are compared as shown in Table 1. It is observed from Table 1 that, the eigenvalues estimated with IA & SS methods closely matches with EA method. Further, the damping ratio (ζ ) and frequency of oscillatory modes are calculated using (32). The %error of damping ratio (ζ ) and frequency of oscillatory VOLUME 9, 2021 modes(f) is found to be significantly small using both the IA and SS method of state-space identification. The %error of ζ and f with SS is slightly high than the IA approach for mode 2 shown in Table 2. However, SS method is faster than the IA approach is discussed in section IV.C. The mode shapes of different dominant modes for the IA approach is shown in Figure 7. For Inter-area mode in Figure 7a, G1, and G2 form coherent group to swing against coherent group formed by G3 and G4, and Generator 1 in area 1 show more dominance for this eigenmode. For mode M1 in Figure 7b, the coherent group constituted by G1 & G3 swing against G2 & G4 and the generator G2 shows more dominance over M1. For mode M2 in Figure.7c, the coherent group constituted by G1 & G4 swing against generators G3 & G2 and G3 shows more dominance over the mode under consideration. The mode shapes of different dominant modes for the SS approach is shown in Figure 8. For Inter-area mode shown in Figure 8a,, G1, and G2 form coherent group to swing against coherent group formed by G3 and G4, and Generator 1 in area 1 show more dominance for this eigenmode. For mode M1 in Figure.8b, the coherent group constituted by G1 & G3 swing against G2 & G4 and the generator G2 shows more dominance over M1. For mode M2 in Figure 8c, the coherent group constituted by G1 & G4 swing in Figure 9. This validation ascertains that the proposed data-driven techniques accurately estimates the mode shapes from the measurement data. 
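Returning to the modal quantities used above, the damping ratio and frequency of each oscillatory mode follow directly from the identified eigenvalues, and the discrete-time eigenvalues produced by the subspace model are first mapped to continuous time. The sketch below is illustrative only; it assumes the usual mapping z = exp(sT_s) and the band limits quoted in the text.

```python
import numpy as np

def discrete_to_continuous(eig_d, Ts):
    """Map discrete-time eigenvalues to continuous time via z = exp(s*Ts)."""
    return np.log(np.asarray(eig_d, complex)) / Ts

def modal_parameters(eigvals, f_band=(0.2, 3.0)):
    """Frequency (Hz) and damping ratio of oscillatory modes, keeping only the
    low-frequency band of interest (assumes continuous-time eigenvalues)."""
    modes = []
    for lam in np.asarray(eigvals, complex):
        if lam.imag <= 0:                    # keep one of each conjugate pair
            continue
        f = lam.imag / (2 * np.pi)           # oscillation frequency in Hz
        zeta = -lam.real / abs(lam)          # damping ratio
        if f_band[0] <= f <= f_band[1]:
            modes.append((f, zeta, lam))
    return modes
```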
Participation factors of the VOLUME 9, 2021 generators for all the oscillatory modes are computed using (13) and plotted in Figure 10 for EA, IA & SS methods. It is interesting to observe from the Figure 10 that, participation factor of Generator1 is maximum for IAM irrespective of EA, IA and SS method. Similarly for swing mode 1 (M1), the G2 contributes the highest participation factor and for swing mode 2 (M2), the G3 Contributes the highest participation factor. It is to be noted that, although the magnitudes of participation factors computed using data-driven approaches are different compared to EA method, the prediction of the generator with the highest participation factor for the particular mode is accurate and is in agreement with EA. It is not surprising to see that, magnitude of participation factor obtained by IA & SS methods are found to be higher than EA method. This is mainly because, EA approach is derived from the mathematical model using state variables wherein the data driven approaches performed using measured data. Hence, for the chosen order the proposed approaches do not replicate all the eigenvalues of the dynamical system. However, it is evident from the results that, the low frequency oscillation components of the system are in agreement with EA approach. C. PARALLEL COMPUTING USING GLOBAL OPTIMIZATION FOR ITERATIVE APPROACH Though the estimation accuracy using the Iterative approach is more accurate as seen in section 3 but the speed of computation is less as compared to the SS approach. In this work, the parallel computing tools are used to verify the increase in the computation speed of IA estimation. The speed of computation using the Nonlinear Least square method is comparatively faster than sequential linear programming. However, the estimation using the subspace method is faster than the IA approach. The computation time required by various methods for the completion of the estimation process is shown in Table 3 using a 16GB RAM and 1.99GHz processor in MATLAB environment. D. DAMPING OF IAM USING STATCOM As per the results obtained in section [IV.B], it is observed that the bus near the G1 is the best place to locate the STATCOM Supplementary modulation controller to damp out the IAM. Figure 11 depicts the block diagram representation of supplementary modulation controller (SMC) with lead/lag network for STATCOM. The control signal considered is the Thevenin voltage and is synthesized by making use of the STATCOM current and magnitude of the voltage at the STATCOM bus. X 1 is the tunable reactance and K r is the reactive current modulation controller gain, I r is the reactive current injected by the STATCOM into the bus. V l is the magnitude of load bus voltage where the controller is connected. Dynamics of the STATCOM is characterized by the first order plant transfer function with time constant T p (0.02sec).T C is time constant of derivative circuit s 1+sT c and it is considered as 10msec. T 1r & T 2r are the time constants corresponds to the compensator and m r denote the number of stages associated with compensator block of STATCOM SMC [59]. 
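Stepping back to the participation factors used above, which also motivate siting the STATCOM damping controller near G1, they can be obtained from the right and left eigenvectors of the identified state matrix. The sketch below uses one common convention; the normalisation may differ from the paper's equation (13), so the magnitudes should be treated as illustrative.

```python
import numpy as np

def participation_factors(A):
    """Participation |phi_ki * psi_ik| of state k in mode i, normalised per mode.
    (A common convention; scaling may differ from the paper's eq. (13).)"""
    eigvals, phi = np.linalg.eig(A)         # right eigenvectors as columns
    psi = np.linalg.inv(phi)                # rows are the left eigenvectors
    P = np.abs(phi * psi.T)                 # element [k, i] = |phi_ki * psi_ik|
    P = P / P.max(axis=0, keepdims=True)    # normalise each mode's column to 1
    return eigvals, P
```

The state (and hence the generator) with the largest entry in a mode's column indicates the machine that participates most in that mode.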
Controller Transfer function is given by K r is the reactive current modulation controller gain m r is the number of stages in compensator circuit and T 1r &T 2r are the time constants for compensator circuit and the compensator design is not considered in the present work, therefore the value of m r = 0 and equation (54) is written as The design of STATCOM SMC for the 4 machines, 10 bus power system is discussed in APPENDIX-I. The tunable parameters for STATCOM SMC are X 1 which is the value of the tunable entity to synthesize the Thevenin's voltage and K r is the reactive current modulation controller gain.Using Sequential Linear Programming as per [59], [60], the value of the parameters are calculated with respect to the required decrement factor σ = −.25 & σ = −.5 are K r = 46.29, X 1 = 0.01029 and K r = 46.29, X 1 = 0.014 respectively to damp the IAM. Figure 12a & 12b shows the response of the rotor angles of all generators and the STATCOM reactive current for σ = −. 25. Figure 13a & 13b shows the response of the rotor angles of all generators and the STATCOM reactive current for σ = −.5. It is evident from the response that, the increasing value decrement factor by properly tuning the parameters of STATCOM results in fast damping of Oscillations. The eigenvalues with improved damping ratio correspond to IAM are computed for the variation in STATCOM tuning parameters using EA. The accuracy of IA and SS are compared with EA by estimating the eigenvalues for IAM when the STATCOM controller with tunable parameters for σ = −.25 & σ = −.5 is located at the bus near to the G1. It is observed from the results shown in Table 4, both of the data-driven techniques IA & SS approximates the eigenmode computed using the EA approach. Time elapsed by IA for the estimation is 6.84sec with parallel computing and SS is 3.345sec. The estimation accuracy is computed as shown in Table 5, which shows that, both IA & SS estimates the damping ratios and frequency of modes effectively and found to be consistent with values estimated using EA. E. SIGNAL WITH MEASUREMENT NOISE In this section, the signal measured at generator bus 1 i.e 'δ 1 ' is treated with White Gaussian noise at SNR of 50dB using (42). The Figure 14 and Figure 15 shows the noiseless and contaminated signal. The existence of autocorrelation is computed for the noiseless signal measured at Generator 1 with its time lagged version as in Figure 16. Wherein for noisy signal with its time lagged version there is no autocorrelation as shown in Figure 17. In this work, significance level α= 0.1 is kept as a reference & the autocorrelations are computed for 20 lags for the samples under test. The chi-square distribution table is constructed by computing Ljung-Box-Q-test for both noiseless and noisy signals with confidence interval level of 90%.The MATLAB command in (48) is used to perform the test. Here, h indicate the null hypothesis, pV is the p-value, Q is the Ljung-Box parameter computed using (43) and cV is Chi-squared critical value [h pV Q cV]=lbqtest(x, 'Lags', number of lags, 'alpha', significance value). Table 6 shows the observations of lbqtest conducted for noise free signal. It is observed that, Q value is greater than c-value for all sample lags & rejects the null hypothesis. Hence the alternate hypothesis of existence of strong correlation hold good. The results of lbqtest conducted for noisy signal of SNR 50dB is shown in Table 7. It is observed that, the Q value is lesser than c-value for all the sample lags. 
Hence it support the null hypothesis. Alternative hypothesis fails, due to poor autocorrelation between the measured noisy signal and the time lags. Therefore, the lbqtest proves the presence of white noise in the signal. The mother wavelet is selected based on the effect of denoising by various wavelets such as Haar, Daubechies, Symlet, Coiflets, Biorthogonal and Reversebior. For denoising studies most commonly used mother wavelets are Symlet 8 (sym8), Daubechies 10 (db10) and Coiflets 5 (coif5) due to their effectiveness in denoising the signal mentioned in [51], [53], [54]. The wavelet thresholding techniques is considered as a suitable tool to process the signal with noise. In this work, the efficacy of various wavelet thresholding methods to suppress the White Gaussian Noise are analyzed. Performance of sym8, db10 & coif5 for contaminated signal 'δ 1 ' is carried out under soft and hard universal thresholding environment with default threshold factor λ = 0.05 at different decomposition level as shown in Figure 18a and Figure 18b. The box plot constructed for denoised signals of various mother wavelets & compared with noisy and noiseless signals. It is observed from the Figure 18a, the box representing noisy signal(box1) has red colored '+'symbol which indicates the outlier in the signal. Among the remaining boxes, the db10-L8s, db10-L8h, db10-L9s, db10-L9h, sym8-L8s, symL8h, sym8-L9s, sym8-L9h and coif5-L7s, coif5-L8s, Further, the efforts are made to select the potential mother wavelet & best decomposition level based on the different statistical measures such as, SNR, mean square error (MSE), percentage fitness between denoised & measured signal, Mean absolute error (MAE) and correlation between denoised and measured noiseless signal. The statistical features for denoised signal using sym8, db10 and coif 5 with different level of decompositions are computed using universal thresholding for soft & hard threshold levels as shown in Table 8 using MATLAB. In comparison with hard thresholding, the statical features obtained through soft thresholding for all the wavelet at majority of the decomposition levels yields better results. Hence, in this paper the soft thresholding method is chosen for the further analysis. It is observed that, db10-level8 soft threshold, symlet8-level8 soft threshold and coif5-level 9 soft threshold have better statistical features compared to other wavelets. Figure 19 a depicts that, db10-level8 soft thresholding has better SNR in dB compared to its counter parts. As shown in Figure 19 b & 19 c, %fit, correlation and MAE of db10-level 8 is better than coif5-level-9, But MAE & MSE of sym8-level8 soft threshold is better than the db10-level8 soft threshold to a small extent which can be neglected. However, since the computed SNR of the denoised signal using db10-level8 soft threshold is approximately 1.65 times of sym8-level8 soft threshold & 1.218 times of coif5-level-9 with considerably better SNR, MSE, MAE & correlation. Therefore, db-10 is chosen as mother wavelet & level 8 is the best decomposition level because of maximal SNR, minimal MAE, minimal MSE maximum %fit and maximum correlation. The denoised signal constructed by db10-level 8 soft threshold is chosen for the further system identification process. Data vector is constructed again using denoised signal and analysis of state space identification in continued for the system under consideration using proposed approaches. Due to convergence issues, IA is not preferred method during noisy environment. 
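A compact end-to-end illustration of the denoising pipeline described in this section (add white Gaussian noise at a target SNR, decompose with a chosen mother wavelet, apply a universal soft threshold, reconstruct) is sketched below using PyWavelets. The level-dependent noise estimate is one common variant; the paper estimates sigma from the scale-1 coefficients, so results will differ slightly, and the test signal is purely illustrative.

```python
import numpy as np
import pywt

def add_awgn(signal, snr_db, rng=None):
    """Add zero-mean white Gaussian noise at a target SNR (in dB)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(signal, float)
    p_noise = np.mean(x ** 2) / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(p_noise), size=x.shape)

def wavelet_denoise(x, wavelet="db10", level=8, mode="soft"):
    """Universal-threshold wavelet denoising with a level-dependent sigma estimate."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    out = [coeffs[0]]                                  # keep the approximation untouched
    for d in coeffs[1:]:
        sigma = np.median(np.abs(d)) / 0.6745          # robust noise-level estimate
        lam = sigma * np.sqrt(2 * np.log(len(d)))      # universal threshold
        out.append(pywt.threshold(d, lam, mode=mode))
    y = pywt.waverec(out, wavelet)
    return y[: len(x)]                                 # trim a possible padding sample

# illustrative test: a damped 0.6 Hz swing; a longer toy record than the paper's
# 25 s is used so that 8 decomposition levels fit without boundary warnings
t = np.arange(0, 50, 0.01)
clean = 10 + 0.5 * np.exp(-0.1 * t) * np.sin(2 * np.pi * 0.6 * t)
noisy = add_awgn(clean, snr_db=50)
den = wavelet_denoise(noisy, wavelet="db10", level=8, mode="soft")
snr_out = 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - den) ** 2))
```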
SS method gives maximum fitness at n=30 using (50) for denoised data vector. Comparison of the damping ratios and oscillation frequency of the dominant eigenvalues obtained from proposed methods with EA is shown in Table 9. It is observed that, the %error of damping and frequency of IAM is negligibly small as compared to EA. However, the swing mode 1 and swing mode 2 has significantly small %error both in frequency and damping. Participation factors for LFO's and mode shapes of the denoised signal is approximately similar to the noiseless environment as shown in Figure 20 and Figure 21. The value of participation factors are equaling with eigenvalues analysis because in realtime scenario the operating points are not fixed whereas EA analysis is carried out for fixed operating point. However the highly participating machines for respective eigenmodes obtained by proposed approach is identical to that EA approach. Hence, it is found that, with presence of noise SS with wavelet denoising is potentially suitable method to identify the dominant oscillatory modes of the system. V. CONCLUSION In this paper, state-space model of the IEEE-4 machine-10 bus system is estimated using data-driven approaches from the measurement data. The performance of i) Iterative approach and ii) Structured subspace is validated by comparing the data driven estimated eigenvalues with the eigenvalues computed using traditional state space approach using MATLAB/Simulink. Data-driven techniques are applied to estimate dynamic characteristics of the studied power system from the measurement data. The proposed work is also focused on noise identification based on statistical analysis such as lbqtest & boxplots for the measured signal. Lbqtest and boxplots collectively enhances the reliability of noise detection by avoiding the chances of false prediction. On detection of noise, the wavelet based thresholding techniques are applied to denoise the noisy measurement signal. The performance of both IA and SS are evaluated further for denoised signals and results obtained are found satisfactory and consistent with eigenvalue analysis method. The performance of the system for proposed approaches are demonstrated by the synthetic data collected from the simulation of 4 machine −10 Bus system. The performance and novelty of the work is summarized as follows: PEM algorithm with LM optimization is used to get the maximum fitness function for the measurement data in IA and MOESP based algorithm is used to compute the weighting function for singular value decomposition in SS method. The IA and SS methods can accurately estimate the Inter-area mode and Swing modes at par with EA approach. IA and SS data-driven approaches are capable of capturing comprehensive electromechanical oscillation from the dynamic patterns of the power system and reveal the inherent features of the Power system. The performance of data-driven methods estimate the dynamic characteristics of system which is in consistent with eigenvalue approach. The performance challenges of data driven methods under noisy signal environment is successfully addressed. Measured signals are successfully tested for the presence of white gaussian noise using Ljung-Box-Qtest and box plot. The wavelet basis db10 with decomposition level −8 is highly effective compared to other wavelet basis in denoising the signal. 
SS data driven approach with wavelet denoising is suggested as a potential method for identifying the dominant oscillatory modes of the power system under dynamic conditions. Data driven techniques are also helpful in identifying the location of STATCOM damping controller based on the estimated participation factor. IA & SS methods are capable of detecting the eigenvalues of IAM of the system with STATCOM damping controller and is incorporated for the first time. The effectiveness of damping controllers can be enhanced by tuning the controller parameters with the help of data driven techniques. Derivation of Control Law for STATCOM SMC: The power flow through ith transmission line is given by Linearizing the equation, we get (57) where ∅ i denotes the phase difference across ith branch, V ji , V pi denote the voltage at the two ends of ith branch. The instantaneous power consumptions at all the branches in the power system are given by The reactive current injected at l th node is given by i Rl = −V l b sh l + n pl kl=1 V l x kl − V kl cos ∅ kl x kl (59) where b sh l is the shunt susceptance at the bus l, ji, pi are the two noes of branch i, n pl indicates the number of branches connected to the bus l. V kl is the voltage at the node of branch kl and ∅ kl phase angle across the branch kl. The non-linear reactive current given in (4) (60) Multiplying V l on both sides of equation (5) It can be interpreted from(12) that the average loss causes by the term n l=1 ∂i Rl ∂V kl V l . Therefore the average electromechanical loss is given by For injection of reactive current at bus l Based on equation (20) the equivalent circuit is drawn as shown in Figure 22. Due to the presence of the Thevenin impedance X th 1 jw m the term the power dissipation in the circuit is limited. To prevail over this, the modification is incorporated in the control law such that, Overall block diagram of the STATCOM supplementary modulation controller is as shown in figure 11.
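To make the simplified modulation path concrete, the sketch below assembles the derivative (washout) stage and the first-order STATCOM plant lag as transfer functions and computes a step response with SciPy. The cascade structure and the use of K_r as a pure gain (m_r = 0, no lead/lag stages) follow the description above; treating the derivative circuit as sT_c/(1 + sT_c) is an assumption, and the numbers are only the example values quoted in the text.

```python
import numpy as np
from scipy import signal

# Illustrative parameters: K_r for sigma = -0.25, T_c = 10 ms, T_p = 0.02 s
Kr, Tc, Tp = 46.29, 0.010, 0.02

washout = signal.TransferFunction([Tc, 0], [Tc, 1])   # s*Tc / (1 + s*Tc), assumed form
plant = signal.TransferFunction([1], [Tp, 1])          # 1 / (1 + s*Tp), STATCOM dynamics

# cascade the two stages and apply the modulation gain
num = Kr * np.polymul(washout.num, plant.num)
den = np.polymul(washout.den, plant.den)
smc = signal.TransferFunction(num, den)

t, y = signal.step(smc)   # step response of the supplementary modulation path
```

When m_r > 0, the lead/lag compensator stages ((1 + sT_1r)/(1 + sT_2r))^m_r could be cascaded in the same way before forming the overall transfer function.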
12,118.4
2021-01-01T00:00:00.000
[ "Engineering" ]
Problem Solving: A Test of Yin-Yang Thinking Yin-Yang thinking and Western views of thinking have been studied, but researchers have not focused on how to demonstrate Yin-Yang thinking compares to Western thinking. The purpose of this study was to test how Yin-Yang thinking and Western thinking could be demonstrated by selection of Chinese and Western students at a Chinese University. The students were asked to demonstrate their understanding of a piece of paradoxical literature in a timed test. We found Chinese Yin-Yang thinking was compatible with Western critical thinking practices but revealed an ability to develop more encompassing creative solutiuons to resolving the given paradox. However, the findings in this research are limited to a small selection of participants and further research is needed to test for general applicability. The use of Yin-Yang thinking enhances the development of insightful solutions to problems. Introduction Research has suggested that there are cultural and personal barriers to critical and creative thinking and thus thinking in the West may differ from thinking in China.Recently, whereas the Chinese view of Yin-Yang thinking has been studied, researchers have not focused on how to demonstrate Yin-Yang thinking in comparison with Western thinking.In this paper, we explored whether Chinese Yin-Yang thinking and Western critical thinking practices are compatible. Literature Review The communication of ideas requires cognition (Gu, 2004) and the creation of mental images (Ignatow, 2004) that are developed subconsciously (Cwik, 2011) during normative cognitive development (Leerkes et al., 2008).Human cognition includes both reality and culturally based expressions that embody metaphor and imagination (Ning, 2007) shaped by culture, education, and experience that differs between Western and Chinese cultures (Heffernan et al., 2010;Jin & Dan, 2004;Mok & Morris, 2012;Pearce & Zeng, 2007).Western thinking relies heavily on Aristotelian logic and linear stability; however, Chinese thinking relies heavily on relationships and change (e.g., the Dao; Gou & Dong, 2011).Sternberg (1998) suggested human intelligence consists of three components: analytical, creative, and practical abilities, but each individual ability is dependent on the method of instruction (e.g., Sternberg & Grigorenko, 2004).Moreover, Sternberg and Grigorenko (2004) noted that there are cultural differences in how an individual solves problems and thus cultural contexts should be taken into account when assessing cognitive processes.and why critical thinking is associated with greater problem solving skills (Tümkaya, Aybek, & Aldağ, 2009).Critical thinking includes judging the validity and reliability of assumptions and of sources of information, making inferences, using envisioning, and using inductive and deductive logic to generate solutions (Baildon & Sim, 2009;Kirschner, 2011).Fisher (2011) added the identification of relevant elements, clarification and interpretation of ideas, the evaluation of diverse arguments, and the production of arguments.In addition, Cosgrove (2009) added the application of logical inquiry, reasoning, and the ability to participate in critical evaluation. 
Critical thinking leads to the production of solutions as well as providing new perspectives on solving problems.However, the presentation of critical thinking as an arranged set of sequential measures fails to recognize how individual elements are interdependent; and that the elements can be complex and ambiguous (Baildon & Sim, 2009;Helsdingen & van Merriënboer, 2011;Saiz & Rivas, 2011).Thus, critical thinking is independent of the context that it occurs, which implies it is impartial, neutral, and apolitical (Baildon & Sim, 2009;Koh, 2002).However, Abubaker (2008) suggested that Chinese education and cultural factors, such as Confucian beliefs on authority and compliance (Tiwari, Avery, & Lai, 2003;Yeh & Chen, 2003) may stifle the development of critical thinking.Therefore, we examined Chinese critical thinking and compared our findings to definitions of critical thinking offered by Dewey (1998), Norris and Ennis (1989), Kirschner (2011), Baildon andSim (2009), andFisher (2011). Yin-Yang and Paradox Chinese thinking differs from Western thinking in that Chinese thinking does not accept the Western view of paradox, or "contradictory yet interrelated elements-elements that seem logical in isolation but absurd and irrational when appearing simultaneously" (Lewis, 2000, p. 760).Western views regard a paradox as a problem, whereas in Yin-Yang (i.e., Chinese) thinking contradictions and paradoxes are considered to be a way of life (Fang, 2012). Recent research has examined Yin-Yang thinking (e.g., Fang, 2012;Jing & Van den Ven, 2014;Li, 2013), and Fang stated that "Yin-Yang captures the Chinese view of paradox as interdependent opposites compared to the Western view of paradox as exclusive opposites" (p.26).Indeed, a Yin-Yang view is often called dialectical thinking, which refers to the predisposition to accept inherent contradictions; this differs from the Western views of contradiction as "either/or" (Aristotelian logic), "both/or" (Bohr's complementary principle), or "either/and" (Hegel's dialectic) logic (Li, 2013).A Yin-Yang view holds a "both/and" framework (Fang, 2012, p. 34).Fang (2012) noted that Western views are biased towards absolutes whereas the Yin-Yang view adopts duality by accepting the existence of opposites. A Test of Yin-Yang Thinking We examined how Yin-Yang thinking may be compared to critical and creative thinking styles in Chinese and Western students.We developed a test that required participants to find a solution to paradoxes that might provide evidence of critical, creative, or Yin-Yang thinking. The test was time limited as to include possible tacit knowledge (Insch et al., 2008).We considered that critical and creative thinking could be demonstrated by finding one or more solutions in a test paradox, whereas Yin-Yang thinking would be demonstrated by finding a solution that encompassed all paradoxes in the test. 
Methodology Chinese and Western postgraduate students attending a Beijing university were selected as participants in this study.The Chinese students were enrolled in English speaking classes and the Western students were enrolled in Chinese studies.The test was administered in 4 groups each of approximately 29 participants.The groups were composed of 66 Chinese post-graduate students and 58 masters-level Western exchange students studying at Beijing University of Technology.The Chinese students came from many parts of China, whereas one group of Western students was from Holland the others came from various Western countries.As graduate students, all participants were considered to have had exposure to critical and creative thinking (Cosgrove, 2009), which is an assumption that may need to be established. Students' analysis of a piece of English literature containing contiguous paradoxes was used as the testing instrument.Participants were asked to determine if the literature conveyed a logical message, or if it was a collection of nonsense words. We used a verse from "The Walrus and the Carpenter" (Carroll, 1872) for the test.Carroll has been the subject of comment by several researchers who examined its use of metaphors, paradoxes, logic, and syntax (e.g., Fry, 1987;Kauffman, 2002;van der Walt, 2010).We considered that the chosen piece of literature met the notion of opposite elements appearing simultaneously (Fang, 2012).The chosen verse reads: "The time has come," the Walrus said, To talk of many things: Of shoes--and ships--and sealing wax--Of cabbages--and kings--And why the sea is boiling hot--And whether pigs have wings. Because the objects in the verse are diverse, we considered that establishing a relationship between any lines in the verse would demonstrate Aristotelian thinking, whereas developing a coherent scenario that included all the objects mentioned in the verse would demonstrate evidence of Yin-Yang thinking.A five-minute discussion took place during which the concepts of critical and creative thinking were explained The test involved presenting the verse on a power point slide.Test participants were told to read the verse and then the test began.The test was timed and the test conductor participated by giving hints at designated times. The following hypotheses were considered: H1: Western and Chinese participants respectively would determine, without hints, the rationality of the piece of literature. H2: Western and Chinese participants respectively would determine, with hints, rationality in parts of the piece of literature. H3: Western and Chinese participants respectively would determine a scenario encompassing the whole piece of literature. Five minutes into the test, the participants were asked to record their opinion of the verse as a whole, specifically, if it was sensical or nonsensical (a yes/no answer).After all participants indicated they had recorded their opinions, the hint "Kings eat cabbages" was given; participants were told how volcanic action could heat the sea; finally, they were asked to consider the possibilities of genetic engineering that hypothetically could produce a pig with wings.Participants were then asked to record if they thought any portion of the verse was sensical or if a comprehensive scenario could be developed ( a yes/no answer).. Participants who indicated a comprehensive scenario were asked to give a verbal description of it. 
Results All students participating in the verbal sessions initially considered the verse nonsensical; therefore, Hypotheses 1 was rejected. Once the hint "Kings eat cabbages" was added to the list of seemingly unrelated objects in lines three and four , 89% Chinese and 97% Western students created further relationships between ships, shoes, sealing wax, cabbages, and kings.Recognition of these relationships supports Hypotheses 2 Seven Chinese and two Western students could not grasp the notion of the metaphor implied by a walrus talking.All students accepted that the sea could be boiling hot near volcanic action.However, all students initially rejected the notion that pigs could have wings until making the connection to the possibility of future genetic engineering.Despite the hint about the discovery of life forms in the hot, toxic waters surrounding fumaroles, no student recognized the implication that life could exist elsewhere (such as on other planets) in hot toxic waters.It may be that students did not project possible implications due to a lack of basic knowledge of natural sciences. The possibility that future science might develop a pig with wings caused amusement but generally no other comments.The inability to consider the implication of pigs having wings may not indicate resolving a paradox, but may reflect the ability to use inferences, envisioning, and deductive logic to arrive at a possible solution.However, two Chinese students in different groups suggested that if the flying pigs could be encouraged to fly into the boiling sea the result would be instant pork or soup (although no cabbages were included in the recipe).This insight provides some corroboration for Wang (2007).However, for line five and six of the verse, Hypotheses 2 was not supported significantly. We found that four (6%) Chinese students created rational and creative scenarios.Three students explained the verse in terms of national discontent, applying metaphors for failure of social support (e.g., government indifference to poverty as evidenced by eating cabbages and kings wearing shoes while travelling on ships); public outrage (e.g., boiling hot sea); and hopelessness (e.g., when pigs can fly).One student applied engineering principles to the boiling hot sea and flying pigs to establish the concept of a steam powered amphibious ornithopter, capable of transporting a king wearing shoes and consuming state approved (by affixation of a seal) cabbages.Since four Chinese students (and no Western students) presented a coherent scenario, Hypothesis 3 was not supported significantly.The results are summarized in Table 1.The results indicate that once critical thinking has been established, both Chinese and Western students applied the principles to establish relationships or scenarios between some, but not all, of the elements in the verse.However, some students displayed evidence of creative thinking. Discussion Westerners consider critical thinking to be an essential element for meeting life's challenges (Cosgrove, 2009) and Chinese individuals rely heavily on relationships and the nature of change (Gou & Dong, 2011).The results of the test suggested that some (6%) of the Chinese individuals could rely on both during problem solving; however, the same was not observed in Western students, as they did not develop scenarios that encompassed all elements of the verse.We did not determine if western students could develop coherent scenarios if the students were given more time to complete the test. 
We could not conclude that critical thinking among Chinese students differed from critical thinking in Western students.We noted that Chinese students appeared to refrain from making inferences, envisioning, and deductive logic to generate solutions; rather, they formed arguments of different kinds.These differences were appeared to be based on relationships such as for example pigs and hot water lead to soup. We considered that the ability to create scenarios with all of the elements demonstrated Yin-Yang thinking because seemingly individual paradoxical subjects were included in the formulation of a coherent whole.We surmize that the iconography of the I Ching helps provide a visual representation of balances in nature and the creation of schemas, which is a factor in the development of Chinese critical thinking (c.f., Ignatow, 2004;Kirschner, 2011).We induce that the ability to establish overall scenarios for the verse is based (probably in part) on Yin-Yang thinking. We also considered how group characteristics might affect creative thinking.We reasoned that if critical thinking prompts and feedback influence critical and creative thinking then the source of the prompts and feedback (i.e., small group characteristics) becomes pertinent.Thus, if coherence can structure group thinking it can also can establish receptiveness to creative thinking but may hinder creative thinking if other group considerations such as conformity to group norms are of sufficient importance (Eliasoph & Lichterman, 2003).However, if coherence and conformity to group norms is weak, the relation to creativity will also be weak but may encourage individual creative thinking.Therefore, when conformity is considered a societal objective (as is the case in China), creative thinking could be hindered (Kan, 2010;Wang, 2007). The test used a piece of Western literature; thus, the responses of the Chinese students may have been influenced by their Chinese cultural experiences.This would appear to have been the case at the beginning of the test.However, the Chinese students extrapolated their understanding once the hints were provided.This result supports those reported by Jones (2005) where Chinese students were exposed to a Western environment and subsequently showed that their critical thinking skills became very similar to their local counterparts (Wang, 2007).Furthermore, Yun (2010) found that most Asian students could adopt effective coping strategies when faced with diffent cultural environments.However, proposing completely new paradigms are eschewed in favor of following other researchers so as not to deviate too radically from the norm (Wang, 2007).Whereas this limited test indicated differences between the Chinese and Western students' abilities to resolve a paradox we believe the belief that Western education encourages expression and that Eastern education stresses adherence to models needs to be revisited (Morris & Leung, 2010). Moreover, recent research has suggested the Chinese education system entails rote learning and is geared towards social control and prescribed moral judgments (Kan, 2010).For instance, Liao and Chen (2009) found that argumentative writing is taught differently in Chinese (Hong Kong) schools whereby Chinese textbooks appeal highly to historical and moral issues, while English textbooks suggest using Toulmin's reasoning system. 
Xiao and Tong (2010) indicated that there has been a recent swing away from the educational emphasis on rationality following the establishment of New China.There is a need to improve students' ability to apply moral value judgments suggesting that in China the adoption of Western critical and creative thinking is not yet complete.However, in this study, Chinese students were able to extrapolate from the hints; thus, critical and creative thinking were demonstrated.This may suggest that either these abilities are included in the education system or these abilities are subsets of Yin-Yang thinking. Future Research The notion of Yin-Yang thinking suggests several avenues of future investigations related to education, training, and research methods.We could not establish the extent to which Chinese traditional curricula and methods have continued.If the existence of New China has reduced the influence of traditional methods, the origin of traditional thinking in Chinese students requires investigation. Chinese students exposed to Western thinking may have managed to adapt to Western rationality; however, whether Western students exposed to Chinese thinking can similarly adapt is unknown.One possible avenue for future research is the relationship between the I Ching and thinking styles to determine how one method of thinking infuemces the other (c.f., critical thinking; Cosgrove, 2009).Chinese educators have long recognized the value of visual presentation, but the use of the I Ching as a model for the presentation of relationships has not yet been adopted in Western education.We did not determine whether Yin-Yang thinking could inhibit the critical thinking process. Limitations The testing used herein falls far short of the rigorous testing methods used by Sternberg and Grigorenko (2004); and further research is required.Furthermore, as noted by Sternberg and Grigorenko (2004), intelligence is culturally dependent and there may be cultural differences between Chinese and Western students' interpretation of a Western piece of literature.Thus, the finding where Chinese students were able to develop a complete scenario while Western students were not able to do so may be due to cultural differences resulting from Yin-Yang thinking. The participants in the test represented a small and educated segment of Chinese society and a small segment of Western students, so conclusions cannot be generalized to entire populations.In addition, the responses of the students may have been influenced by other factors.First, proficiency in English among the Chinese students varied, so there may have been language issues that influenced the Chinese students' understanding and responses.Because English is spoken by the students from the western countries we did not factor in language considerations for these stuednts. Second, the Chinese students may also have been reluctant to record their opinions due to peer pressure to conform to norms (Wang, 2007).Although the Western students represented a specialized group who benefitted from inter university exchange programs, we did not determine whether their status represented an influencing factor. 
Whereas we noted that some students were able to develop coherent scenarios, we were unable to determine if other students might do the same in a different context, such as contexts with no classmates present.Further, we did not examine if Yin-Yang thinking and intelligence might be linked.Sternberg (1998) found that intelligence is in part culturally dependent that is influenced by education and social norms.We also did not examine how education and or cultural norms might influence Yin-Yang thinking. Implications Critical and creative thinking are essential aspects of cognitive research.White (2002) found deficiencies in Chinese research that resulted in a lack of theory development and contribution to conceptual discourse, a finding that was echoed by other researchers (e.g., Cao & Li, 2010;Siu, 2010).Our study indicated that Chinese and Western participants demonstrated similar critical thinking abilities; however, some Chinese participants demonstrated more creative thinking than Western participants. One implication of our findings is that once Chinese participants understood the principles of critical thinking, their creative thinking appeared to be enhanced due to a Yin-Yang approach (Li, 2012).Indeed, Western readers may encounter difficulties in understanding how creative thinking outcomes were derived perhaps due to a lack of Yin-Yang influence.White (2002) and Harzing (2006) noted that while most Chinese and Western studies identified behavioral or attitudinal differences between regions, they did not examine the underlying causes these differences (Sternberg and Grigorenko, 2004).Our own experiences as university educators also have noted such differences.Stening and Zhang (2007) noted that educators should be aware of Chinese methodological issues, including an understanding of Chinese culture (Ryan & Kam, 2007;Silver, 2012). Conclusions In this study, we conducted a limited test to determine if Yin-Yang thinking can enhance the production of creative solutions.We found few differences between Chinese and Western participants' ability to use critical thinking; however, we did find that some Chinese participants demonstrated more encompassing creative thinking.Therefore, we suggest that the understanding of Yin-Yang thinking can enhance Western creative abilities, which will provide opportunities in problem solving. Table 1 . Summary of results
4,481.4
2014-11-27T00:00:00.000
[ "Education", "Philosophy" ]
Composite Laguerre Pseudospectral Method for Fokker-Planck Equations . A composite generalized Laguerre pseudospectral method for the nonlinear Fokker-Planck equations on the whole line is developed. Some composite generalized Laguerre interpolation approximation results are established. As an application, a composite Laguerre pseudospectral scheme is provided for the problems of the relaxation of fermion and boson gases. Convergence and stability of the scheme are proved. Numerical results show the efficiency of this approach and coincide well with theoretical analysis Introduction Fokker-Planck equations describe the evolution of stochastic systems, such as the erratic motions of small particles immersed in fluids, and the velocity distributions of fluid particles in turbulent flows.Several methods have been developed for the linear Fokker-Planck equations, such as combined Hermite spectral-finite difference method, domain decomposition spectral method, a semi-analytical iterative technique and etc, see [1,6,9,13,22].However, the nonlinear Fokker-Planck equations can better reflect nonlinear characteristics of the corresponding to physical problems.Applications could be found in various fields such as astrophysics, the physics of polymer fluids and particle beams, biophysics and population dynamics, see [7].So it is interesting to solve various nonlinear problems, such as the nonlinear Fokker Planck equations, see [2,4,10,11,16,20]. Let R = { v | −∞ < v < ∞} be the velocity of particles.Denote by W (v, t) the probability density.Moreover, W 0 (v) is the initial state.For simplicity, let ∂ z W = ∂W ∂z , etc.The nonlinear Fokker-Planck equations for the relaxation of fermion and boson gases are given as follows (cf.[7,14]): where k = 1 for bosons and k = −1 for fermions.These models have been introduced as a simplification with respect to Boltzmann-based models as in [5,15].The entropy method was applied for quantifying explicitly the exponential decay towards Fermi-Dirac and Bose-Einstein distributions in the one-dimensional case, see [3].Further more, some numerical methods were developed for the problems (1.1).For example, the full-discrete generalized Hermite spectral and pseudo-spectral schemes were proposed in [4,21].However, in the stability and convergence analysis of the fully discrete pseudospectral scheme, the aliasing error brings a certain difficulty.Also, a composite Laguerre-Legendre pseudospectral scheme is presented in [19].Yet it requires more basis functions, which makes the calculation more complex.Recently, Wang [20] considered the composite Laguerre spectral method for the problems (1.1), in which the nonlinear term ∂ v (vW (v, t)(1 + kW (v, t))) exacerbates the difficulty of calculating the quadratures over the whole line.So we prefer composite generalized Laguerre interpolation method that only predicates on estimated values of unknown functions on the interpolation nodes, and handles nonlinear terms easily [18,24,25,27], which preserves the continuity and possesses the global spectral accuracy on the whole line [8,9,12,17,22,26].This paper is focused on developing a semidiscrete composite generalized Laguerre pseudospectral method for problem (1.1).The method needs fewer basis functions and avoids the aliasing errors analysis caused by the second order difference term in [21].To cope with the stationary solution whose value decays exponentially as |v| → ∞ in [3], we take the generalized Laguerre functions as the base functions, which is a complete L 2 
(R)-orthogonal system in [9].Also, we can greatly ameliorate the accuracy of numerical errors by selecting the scaling factor involved in the base functions.Moreover, the numerical analysis is simplified and the algorithm scheme has better stability by using the basis function of weight function χ(v) ≡ 1. This paper is organized as follows.In Section 2, we establish some results on the composite Laguerre interpolation approximation, which are pivotal to the error analysis of pseudospectral methods for various differential equations on the whole line.Then we construct a semidiscrete composite Laguerre pseudospectral scheme for (1.1) and present some numerical results to show the high accuracy of the proposed algorithm in Section 3. We prove its convergence and stability in Section 4. The final section is for some concluding remarks. Preliminaries In this section, we establish some basic results on the composite Laguerre-Gauss-Radau interpolations. In particular, H 0 χ (R + ) = L 2 χ (R + ), with the inner product (u, w) χ,R + and the norm ∥u∥ χ,R + .We omit the subscript χ in the notations when χ(v) ≡ 1. Let The generalized Laguerre polynomial of degree l is defined by We denote by P N (R + ) the set of all polynomials of degree at most N .Let R+ = R + ∪ {0} and ξ N +1 (v), which are arranged in ascending order.Denote by ω We introduce the following discrete inner product and norm (cf.[22]): By the exactness of (2.1), By (2.4) of [22], we have Let R+ be the same as before.For any u ∈ C( R+ ), the generalized Laguerre-Gauss-Radau interpolation I R,N,α,β,R + u ∈ P N ( R+ ) is determined by To design proper pseudospectral method for the Fokker-Planck equation and many other similar problems, we shall use the orthogonal system of generalized Laguerre functions, defined by We now consider the new generalized Laguerre-Gauss-Radau interpolation corresponding to the weight function R,N,R + ,j , 0 ≤ j ≤ N. We introduce the following discrete inner product and norm (cf.[22]): In particular, (cf.(2.9) of [22]) Moreover, for any ϕ ∈ Q N +1,β (R + ), we have (cf.(2.10) of [22]) For pseudospectral method of nonlinear problems with varying coefficient, we need the following result.Lemma 1.For any ϕ ∈ Q N,β (R + ), there holds (2.3) Proof.Following the same line as in the proof of Lemma 2.1 of [19], we can obtain the desired result.⊓ ⊔ For any u ∈ C( R+ ), the generalized Laguerre-Gauss-Radau interpolation ĨR,N,α,β,R Furthermore, by the definitions of I R,N,α,β,R + and ĨR,N,α,β,R + , we have that (2.4) In particular, if |α| < 1, then the above result holds for any integer r ≥ 1. We now consider the interpolation on the subdomain which are arranged in descending order.Denote by ω(α,β) R,N,R − ,j , 0 ≤ j ≤ N, the corresponding Christoffel numbers.For v ∈ R − , we introduce the discrete inner product and norm as follows (cf.[22]): 3), we have that ) ( In particular, if |α| < 1, then the above result holds for any integer r ≥ 1. Composite generalized Laguerre interpolation on the whole We are now in position of studying the composite generalized Laguerre-Gauss-Radau interpolation on the whole line R = R+ ∪ R− .The space L 2 χ (R) is defined as usual, with the inner product (u, w) χ,R and the norm ∥u∥ χ,R .We omit the subscript χ in the notations when χ(v) ≡ 1.Further, let We introduce the discrete inner product and norm as By virtue of (2.2)-(2.3)and (2.5)-(2.6),we have that ) ) ) Math.Model.Anal., 28(4):542-560, 2023. 
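As a side note on how weighted inner products of this kind can be realised numerically, a small sketch is given below. It uses plain Gauss generalized-Laguerre nodes from SciPy rather than the Gauss-Radau nodes defined above, and an assumed (commonly used) definition of the scaled Laguerre functions, so it is a rough analogue of the discrete inner products in this section rather than the paper's quadrature; all names are illustrative.

```python
import numpy as np
from scipy.special import roots_genlaguerre, eval_genlaguerre

def laguerre_function(l, alpha, beta, v):
    """Scaled generalized Laguerre function exp(-beta*v/2) * L_l^(alpha)(beta*v).
    (An assumed, commonly used definition; the paper's normalisation may differ.)"""
    return np.exp(-beta * v / 2) * eval_genlaguerre(l, alpha, beta * v)

def discrete_inner_product(u, w, N, alpha=0.0, beta=1.0):
    """Approximate the inner product of u and w on R+ (weight 1) with Gauss
    generalized-Laguerre quadrature; plain Gauss nodes, not Gauss-Radau."""
    x, wts = roots_genlaguerre(N + 1, alpha)    # nodes/weights for weight x^a * e^{-x}
    v = x / beta                                # rescale to the beta-scaled variable
    # undo the x^alpha * e^{-x} weight; fine for moderate N (e**x can overflow otherwise)
    return np.sum(wts * np.exp(x) * x ** (-alpha) * u(v) * w(v)) / beta

# quick check with the scaling factor beta = 5.85 used in the paper's tests
beta = 5.85
f2 = lambda v: laguerre_function(2, 0.0, beta, v)
f3 = lambda v: laguerre_function(3, 0.0, beta, v)
print(discrete_inner_product(f2, f3, N=20, alpha=0.0, beta=beta))  # ~0 (orthogonality)
print(discrete_inner_product(f2, f2, N=20, alpha=0.0, beta=beta))  # ~1/beta
```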
In numerical analysis of the composite pseudospectral method for the Fokker-Planck equation on the whole line, we need a non-standard projection , which is defined by By (2.11) of [20], we have that if We need the following embedding inequality (cf.[21]). By Lemma 2 and (2.12), we can get the estimate on L ∞ (R)-norm of P 1 N,β,R u. (2.14) In the end of this section, we need some inverse inequality which will be used in the sequel (cf.[20]). Composite pseudospectral method In this section, we propose the composite pseudospectral method for the nonlinear Fokker-Planck equations on the whole line.We also describe the implementation and present some numerical results. Pseudospectral scheme Now, we deduce the pseudospectral scheme of (1.1), whose weak formulation is to seek W which belongs to the space L ∞ (0, T ; We now design the composite generalized Laguerre pseudospectral scheme for (3.1).It is to find Thanks to (2.8), the above problem is equivalent to Next, we describe the implementations for pseudospectral scheme (3.2) with a nonhomogeneous term f (v, t), we use the Crank-Nicolson discretization in time t, with the mesh size τ . For simplicity of statements, we use the notation The fully discrete scheme of (3.2) is as follows: Then, at each time step, we need to solve the following nonlinear equation: Math.Model.Anal., 28(4):542-560, 2023. For notational convenience, let L(β) The functions In actual computation, we expand the numerical solution as Numerical results In this subsection, we present some numerical results confirming the theoretical analysis.We use scheme (3.3) to solve (1.1) with k = 1.The numerical errors are measured by the discrete norm Now, we take the test function which decays exponentially at infinity, In Figure 1, we plot the errors log 10 E N,τ (t) with t = 10 and β = 5.85 vs. √ N .Clearly, the errors decay fast when N increases and τ decreases.The above facts coincide very well with theoretical analysis in Theorem 1 on page 16.In particular, they show the spectral accuracy in the space of scheme (3.3). In Figure 2, we plot log 10 E N (t) at t = 10, τ = 0.01 and different values of parameter β vs. √ N .It seems that the errors with suitably bigger β are smaller than those with smaller β.However, how to choose the best parameter β is still an open problem.Roughly speaking, if the exact solution decays faster as v increases, then it is better to take suitably bigger β. Convergence and stability analysis of pseudospectral scheme In this section, we consider the convergence and stability of scheme (3.2). Convergence analysis We next deal with the convergence of scheme (3.2).Let W N = P 1 N,β,R W .We derive from (3.1) that where Taking W N = w N − W N and subtracting (4.1) from (3.2), we obtain that where 2), we deduce that for 0 < t ≤ T , Therefore, it suffices to estimate the terms |G j (t, W N )|.Firstly, we use the Cauchy inequality and (2.12) to verify that for integers r ≥ 1, Obviously, According to (2.11) with α = 1, it is easy to derive that ), (2.14) with r = 1, (2.11) with α = 2 and (2.12), we have that A combination of the above two estimates gives that By (2.8) and (2.9) with α = 0, we deduce that Using (2.15), (2.16) with q = ∞ and (2.13), we derive that ), For fully big N , and r > 1, we have where d(W (t), β) is a positive constant, depend on ∂ r v (e ), where Integrating (4.6) with respect to t, we deduce that where λ = 1 for w N (0) = I N,0,β,R W 0 , and λ = 0 for w N (0) = P 1 N,β,R W 0 .We shall use the following lemma (cf.Lemma 3.1 of [11]). 
Theorem 1. For 0 ≤ t ≤ T, the stated error estimate holds, provided that the norms appearing in the previous statements are finite. 
Stability analysis We now consider the stability of scheme (3.2), which might be of the generalized stability type described in [10]. Suppose that W_0 carries the error W̃_0. It induces an error in w_N, denoted by w̃_N. Then we obtain from (3.2) that, for all test functions ϕ, (4.9) holds. Taking ϕ = 2w̃_N(t) in (4.9), we derive the corresponding estimate for 0 < t ≤ T. Substituting the above inequality into (4.10), we deduce the stability bound. Let Z(t) be the same as before (see page 16 of the paper). We then apply Lemma 4. 
Conclusions In this paper, we developed the composite generalized Laguerre pseudospectral method for the nonlinear Fokker-Planck equation on the whole line, which is distinguished from the methods mentioned in the references in Section 1. The numerical results demonstrated spectral accuracy in space and confirmed the theoretical analysis well. The main advantages of the proposed approach are as follows. By using different generalized Laguerre interpolation approximations on different subdomains, we can treat non-standard types of PDEs on the whole line properly. This also simplifies actual computations, especially for large mode numbers N. With the aid of composite generalized Laguerre interpolation approximations coupled with domain decomposition, we can exactly match the numerical solutions on the common boundary v = 0 of the adjacent subdomains R+ and R−, as well as accommodate the singularities of the coefficients appearing in the underlying differential equations. Consequently, the nonlinear Fokker-Planck equation can be handled properly on the whole domain.
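As a heavily simplified, stand-alone cross-check of the model equation (1.1) itself, not of the composite Laguerre scheme analysed above, one can evolve the boson case with a crude explicit finite-difference discretisation on a truncated domain. The grid, time step and initial state below are arbitrary illustrative choices; the profile should stay positive, keep its total probability roughly constant, and relax towards a Bose-Einstein-like stationary shape.

```python
import numpy as np

# dW/dt = d/dv [ v*W*(1 + k*W) + dW/dv ], k = +1 (bosons), on a truncated domain
k = 1
L, h, dt, T = 8.0, 0.05, 1e-3, 5.0
v = np.arange(-L, L + h, h)
W = 0.3 * np.exp(-((v - 1.0) ** 2))              # illustrative initial state

for _ in range(int(T / dt)):
    Wv = (W[2:] - W[:-2]) / (2 * h)              # first derivative, interior points
    Wvv = (W[2:] - 2 * W[1:-1] + W[:-2]) / h**2  # second derivative, interior points
    # expanded drift term: d/dv[v W (1+kW)] = W(1+kW) + v (1+2kW) dW/dv
    drift = W[1:-1] * (1 + k * W[1:-1]) + v[1:-1] * (1 + 2 * k * W[1:-1]) * Wv
    W[1:-1] += dt * (drift + Wvv)
    W[0] = W[-1] = 0.0                           # far-field boundary (solution decays)

print("approx. mass:", h * W.sum(), " peak value:", W.max())
```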
2,826.6
2023-10-20T00:00:00.000
[ "Physics" ]
Dietary ecology of Markhor(Capra falconeri cashmiriensis) in winter range of Kazinag National Park, Kashmir, J&K, India Background/Objective: Understanding winter diet composition of wild ungulates in temperate habitats is of paramount importance for devising conservationmeasures. The winter diet composition ofMarkhor (Capra falconeri), one of the least studied ungulate species, was assessed in Kazinag National Park (KNP) of Jammu and Kashmir, India.Methods: Reference slides of 15 available plant species, through micro-histological technique were prepared. Tests like Diet Selection Values (DSV), Ivlev's Electivity Index (IEI) and Chi-square tests were applied to study the selection and preference of dietary items. Findings: 80 fecal samples of markhor were analyzed in winter seasons of 2017 & 2018, and compared with reference slides to evaluate the winter diet. Fifteen (15) plant species belonging to 7 families were identified in the diet. Use of Ivlev's Electivity Index (IEI), revealed that, shrubs were strongly preferred during this season, besides one graminoid species (Poa pratensis). Among the most preferred species are, Poa pratensis (DSV=6.17) followed by Prunus tomentosa (DSV=2.42), Indigofera heterantha (DSV=2.23), Lonicera spp. (DSV=1.66) and Euonymus hamiltonianus (DSV=1.63). Chi-square goodness of fit test showed that markhor did not feed on all plant species uniformly (p< 0.05). Novelty: Our findings infer that, markhor shows feeding flexibility to adapt to change in forage availability. We recommend that plant species which are the major components of diet of markhor during resourcelean winter be conserved and propagated on prior- Introduction To meet the dietary need is the fundamental task for a wild ungulate to survive in harsh environmental conditions. Winter, a season with severe https://www.indjst.org/ climatic conditions, is a tough period for the survival of majority of ungulate species due to little choice of preferred forage and more energetic demands associated with movements through snow covered habitats (1)(2)(3) and is thus crucial period from animal ecologist's viewpoint (4) . Habitats with rugged terrain and snow cover, tend to have strong spatial and seasonal variations in food availability for ungulates (5) . In highly seasonal environment, as in KNP, diet quality and its availability act as strong constraint (6) . The winter snow cover is one of the important abiotic factors affecting the resource selection by ungulates inhabiting such extreme environments and tend to cope up with these conditions by using different strategies like limited movements through snow (5,7) and by altering their rumen physiology and metabolism to adjust to lignin rich and nutrient poor winter diets (8,9) , thus show plasticity to cope up with seasonal changes in nutritional quality and its quantity. Consumption of unusual plant material during such conditions leads to poor health and reproductive performance (10,11) . Information on the diet and its selection, during different seasons is a primary element to know multiple aspects of ungulate ecology (12) and is a determining component for their survival, health, and mobility (13) . Nutritive qualities of winter forage is poor and are least studied, hence, a comprehensive study must be initiated to fill the gap on data on this issue. The behavior of high altitude ungulates is mainly affected by the nature of availability of food in their habitats and the ways in which these are obtained during different seasons. 
The Kashmir markhor, also called the flare-horned markhor (Capra falconeri cashmiriensis), is a true goat distributed from Afghanistan through Pakistan and PoK to Jammu & Kashmir (14). In India, markhor occurs only in the Kashmir valley (15)(16)(17), which is among the primary areas for the Pir Panjal markhor in India. In recent state-wide surveys of markhor, only two viable populations, totaling approximately 250 individuals, were confirmed in Kashmir, besides a few more potential markhor areas identified in the state (17,18). These include the Kazinag National Park and Hirpura Wildlife Sanctuary (18). Winter, a critical season with harsh conditions and scarcity of food, has a detrimental effect on the survival of this threatened goat, and information on diet utilization during this critical season is a prerequisite for the effective and proper management steps needed to maintain viable populations in the wild. Although some work on the distribution, status and habitat of this caprid has been conducted in Kashmir (15)(16)(17)(18)(19)(20)(21)(22), there is a dearth of data on its winter diet composition in Kashmir. With this aim, to understand dietary composition and selection by markhor during the resource-lean winter season, the study was undertaken in Kazinag National Park, and the data procured through this study are expected to be useful to conservation stakeholders for planning apt management measures for the survival of this wild caprid. Study Area The work was conducted in Kazinag National Park (34°10'0''N latitude and 74°2'0''E longitude), with an altitudinal range of 1,800-4,700 m asl, located in the Western Himalaya of India (23) in the valley of Kashmir (Figure 1). The vegetation is of temperate coniferous, alpine and sub-alpine type (24), dominated by Pine (Pinus wallichiana), Deodar (Cedrus deodara) and Fir (Abies pindrow) at mid-lower elevations. At higher elevations, the subalpine forest is dominated by Birch (Betula utilis) and mixed forests, whereas the alpine vegetation is dominated by Juniper (Juniperus squamata) and alpine meadows. The riverine forests are dominated by Horse Chestnut (Aesculus indica) and Viburnum grandiflorum shrubs, whereas temperate grasslands occupy the rolling terrain at lower elevations. Temperature varies from -10 °C in winter to +30 °C in summer. Precipitation is received as snow during winter, rain in spring and occasional showers in summer. The typical seasons in the region are spring (March-May), summer (June-August), autumn (September-November) and winter (December-February). Data collection The field data were collected during the winter seasons (December-February) of 2017 and 2018. Our study centered on the identification of microscopic, undigested plant remnants, chiefly the epidermal features characteristic of each plant species, obtained from fecal pellets (25). For this purpose, reference slides of food plants were prepared and microphotographed to establish a reference library, fecal samples were collected and mounted on slides, and plant fragments in the fecal slides were identified by comparison with the microphotographs of the reference key (26)(27)(28). Preparation of plant reference slides Reference slides of the potential food plants of markhor were prepared, as a key, after collection from the study site and identification by the Center for Plant Taxonomy and Biodiversity, University of Kashmir, Srinagar.
For this, 12 line-transects of 2 km each were laid in all four winter range habitats: coniferous forest, grassland, cliffy areas and riverine areas. In each transect, plots (10 m radius) were laid every 200 m. The plant species that were potential food of markhor were collected after thorough field observations on feeding and confirmed with wildlife officials, field experts and locals. The collected plant samples were dried, shredded and placed in a glass test tube containing 33% nitric acid and water in a 1:3 ratio. The test tube was heated in a water bath for 5 minutes. When the solid material settled, the nitric acid was decanted and fresh nitric acid (33%) was added. The material was again boiled until it became transparent. The transparent material was then washed with water to remove the nitric acid. This was followed by staining the material with safranin for two minutes. The sample was again washed in water and then dehydrated by processing through different grades of alcohol, with dehydration completed in absolute alcohol. Mounting was done in Canada balsam, and microphotographs were captured with a digital binocular microscope (Olympus BX60). Field collection of fecal samples Fecal samples (n=80) were collected from 12 permanent transects. The pellets of one group of faeces were counted as one sample. Markhor pellets were differentiated from those of goral, musk deer, sheep and goat on the basis of morphological characters, viz. dimensions, size and shape (29). Sampling plots were systematically designed, laid parallel and almost equidistant (100 m) from one another. Wherever pellets were collected, a plot of 10 m × 10 m, a size widely used for the study of dietary patterns of wild animals, was laid around the pellets (30)(31)(32)(33). Pellets were aged as fresh, comparatively old, or very old based on texture (34). Slide preparation (fecal samples) Randomly selected, oven-dried pellet groups from each sample were crushed and sieved through two small-mesh sieves of mesh size 5 mm and 3 mm, respectively. The finely sieved material was taken for further analysis, whereas the coarse material was discarded. The fine material was placed in a test tube containing 33% nitric acid and water in a 1:3 ratio. Further processing and slide preparation were done in the same manner as for the reference material. Three slides were prepared for each sample, 240 slides in all (80 samples × 3 slides). While identifying the plant fragments, 4 microscopic viewing fields were examined for each slide, a total of 960 fields of view (FOV). Fragments of the various plant species in the pellets were identified by comparison with the microphotographs of the reference vegetation on the basis of characteristics such as cell walls, cell shapes, trichomes, and stomata (25). Data Analysis The relative proportion of a particular plant species in a given fecal sample, i.e., the sum of remnants identified for that plant species divided by the total count of all fragments, was designated the relative importance value (RIV) and expressed as a percentage (35). The diet selection value (DSV) was calculated as follows (35): where RIVx is the RIV for species x and expresses its relative frequency in the faeces, and PVx is the prominence value (PV) for species x and expresses the relative availability of that plant in the markhor habitat. PV was calculated as follows (36): where Mx is the % cover of species x, and fx is the frequency of occurrence of species x in the sample quadrats.
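The equations themselves were dropped from the text above. A minimal LaTeX reconstruction, assuming the conventional definitions of these indices (DSV as the ratio of importance in the faeces to availability in the habitat, and PV as cover weighted by the square root of frequency) rather than the paper's verbatim formulas, would be:

% Assumed reconstruction of the dropped equations; not verbatim from the source.
\[
  \mathrm{DSV}_x \;=\; \frac{\mathrm{RIV}_x}{\mathrm{PV}_x},
  \qquad
  \mathrm{PV}_x \;=\; M_x \sqrt{f_x}
\]
% RIV_x: relative importance value of species x in the faeces (%);
% PV_x: prominence value of species x in the habitat;
% M_x: % cover of species x; f_x: frequency of occurrence of species x in the quadrats.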
Food preference of markhor was determined by calculating Ivlev's electivity index (IEI) (37) as IEI_i = (r_i - p_i)/(r_i + p_i), where r_i is the share of vegetation type i in the markhor diet, and p_i is the proportion of vegetation type i across all systematically sampled quadrats (i.e., in the habitat). An IEI of 1.0 expresses high preference for a vegetation type, 0 denotes use in proportion to availability, and -1.0 denotes complete avoidance (37). Statistical analysis The data were analyzed with MS-Excel 2007 and MINITAB version 13.2 (Minitab, 2002), with a confidence level of 95% and P<0.05 taken as significant. Results We recorded the availability of 15 plant species belonging to 7 families (Figure 2). Apart from the identified plant fragments, 608 unidentified fragments, a proportion of 21%, were recorded and eliminated from the statistical analysis. Among browse species, shrubs were by far dominant, with an overall occurrence of 81.85%. The dominant shrubs were Indigofera heterantha (RIV=13.98), Prunus tomentosa (RIV=7.23) and Lonicera spp. (RIV=3.51). The dominant tree species in the markhor diet was Pinus wallichiana (RIV=2.68), whereas Cynodon dactylon was the dominant grass species (RIV=13.43). We did not find any single tree species that markhor consumed in a significantly higher proportion than its availability. Plant species which were utilized more than their availability include Indigofera heterantha (PV=6.26) …; … were utilized less than their availability. The abundant plant categories available during winter were trees, followed by shrubs and grasses, but these were utilized in different proportions (Figure 3). The recognition of fragments of the various plant species in the pellets differed significantly at the species level (χ2=1529.731, df=14, p<0.001), at the family level (χ2=2382.947, df=6, p<0.001) and at the growth-form level (χ2=606.972, df=2, p<0.001). We also observed that markhor strongly selected Poa pratensis (DSV=6.17), followed by Prunus tomentosa (DSV=2.42), Indigofera heterantha (DSV=2.23), Lonicera spp. (DSV=1.66) and Euonymus hamiltonianus (DSV=1.63). Ivlev's electivity index (IEI) values revealed that markhor shows a strong preference for shrubs and grasses during the winter season and the least preference for trees (Figure 3). Discussion Food and its availability have a fundamental impact on the physical health and fertility of an animal. Utilization of a nutritious diet helps a faunal species to combat disease and reproduce successfully, which is the basic requisite for a species to cope in the competition for existence and to continue its race. Knowing the feeding strategies of wild ungulates is vital for the sound management of a species, especially in protected areas (38,39). Each species prefers a particular type of food and shows a particular type of foraging behavior. Feeding in markhor occurred early in the morning during hot months, with occasional day feeding (21,40,41). Early-morning and evening foraging with midday rest on hot summer days, as observed in the current study, has also been observed in other wild ungulates (27,42,43). During winter, however, food was short and scarce; hence, feeding occurred intermittently throughout the day. Continuous day feeding during winter could be because of the limited availability of forage during this season (40). The ratio of the different categories of plants in a herbivore's diet represents its dietary diversity and composition (44,45). Consumption of grasses in all four seasons suggests that markhor is primarily a grazer.
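Returning to the selection analyses defined above, the sketch below illustrates how Ivlev's index and the chi-square goodness-of-fit test can be computed in base R. All species names and numbers are hypothetical placeholders for illustration only, not the study's data.

# Illustrative sketch of the selection analyses described above (base R).
# All values below are hypothetical placeholders, not the study's data.
diet_prop    <- c(Poa = 0.20, Indigofera = 0.14, Pinus = 0.03)  # r_i: share in the diet
habitat_prop <- c(Poa = 0.05, Indigofera = 0.10, Pinus = 0.30)  # p_i: share in the habitat

# Ivlev's electivity index: +1 = strong preference, 0 = use in proportion
# to availability, -1 = complete avoidance.
iei <- (diet_prop - habitat_prop) / (diet_prop + habitat_prop)

# Chi-square goodness-of-fit test of whether fragments of all species occur
# uniformly in the pellets (equal expected proportions across species).
fragment_counts <- c(Poa = 310, Indigofera = 220, Pinus = 45)
chisq.test(fragment_counts, p = rep(1 / length(fragment_counts), length(fragment_counts)))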
The same has been observed in a number of wild ungulates. Grasses form an important and major part of the diet of the Himalayan goral (46), which mostly consumed grasses (84%) (47)(48)(49), with a browse-to-graze ratio of 12:88 (50). In Kazinag National Park during winter, shrubs and grasses constitute important components of the markhor diet. The ratio of browse to graze (36.28%:42.23%) indicates that markhor adopts a browsing strategy during winter. The reason for such a changed feeding strategy during winter could be the environmental conditions, as also evidenced in grey goral in Pakistan (51). Similarly, Bighorn sheep in British Columbia mostly browsed during winter, and shrubs contributed the greatest proportion of their diet (52). The present study clearly shows that markhor strongly prefers shrubs during winter (Figure 3), with Indigofera heterantha and Prunus tomentosa alone contributing 21.22% of the whole diet. These results are supported by earlier findings: shrubs were reported to constitute the main component of the grey goral diet during winter, with Berberis vulgaris and Viburnum nervosum the most common dietary shrubs (28). We also found that some grasses were available in the markhor habitat and that their relative percentage in the markhor diet was considerable during winter. Our findings are substantiated by earlier findings for goral in various protected areas in India (50,53,54). This also implies that markhor is primarily a grazer but also browses whenever required; thus its diet changes with the season and with availability (22,55). Moose have been described as modifying their rumen physiology and rate of dietary intake in response to the scarcity of nutritious forage during fall (9). Kazinag National Park experiences heavy snowfall during winter that covers the entire area. Deep layers of snow are probably the reason for the low utilization of grasses in winter, as most grasses remain under snow cover. However, grasses in certain areas with less snow depth and around the cliffs are available to markhor, but not in sufficient quantity because of overgrazing by herders' livestock during the previous autumn season. The negative impact of livestock grazing on forage availability, particularly in winter, has also been suggested by many authors (14,55,56). Some herbs, although available in the markhor habitat during winter, did not appear in the fecal samples. Herbs, being fugacious and appearing only for a short duration, have limited availability and hence low consumption (57). Another reason for the very low representation of herbs in the markhor faeces is perhaps their high digestibility: they have softer tissues and are therefore expected to be more fully digested and less represented as identifiable pieces in fecal samples (57,58). Three tree species (Aesculus indica, Picea smithiana and Pinus wallichiana) were also recorded as lean-winter food of markhor. The conifer species Picea smithiana, Cedrus deodara and Abies pindrow were also recorded in the diet of goral in Pakistan during the winter season (28). Because of the limited dietary choice during winter, herbivores consume food of low nutritive quality such as conifer needles (59)(60)(61), but in other seasons these are avoided as they are low in energy content (62,63). Consumption of the needles of some conifers reflects unfavourable foraging conditions and may affect the health and reproductive condition of herbivores (10,11).
Although trees were abundant in the habitat (32.14%), they contributed only 6.68% of the winter diet (Table 1), reflecting that the utilization of trees by markhor was primarily a matter of availability rather than selection. The utilization of conifers by high-altitude temperate ungulates during winter signifies a dietary compromise when the availability of preferred forage is limited (22), and conifers serve as an emergency winter forage when deep snow makes other forage unavailable (64). Moreover, the rugged geomorphology of the study site, covered by snow, acts as a severe bottleneck during the cold and snowy winter (6). Our findings revealed that the winter nutrition of markhor was dominated by shrubs and grasses, contributing 72.37% of the consumption. These observations are corroborated by other findings: the winter forage of mule deer in North America was reported to be dominated by browse species (74%) followed by graze species (26%) (65). During winter, browse provides the major proportion of nutrients, especially proteins, at critical times of the season when grasses are low in nutritional value and digestibility and high in fiber content (66,67). That markhor utilizes grasses and herbs during other seasons but shifts to a browsing mode in winter for nourishment has also been documented earlier (16). The dietary shift of markhor to a browsing mode may be owing to the decline in the availability of graze species in winter with increasing snow depth. During winter, snow cover limits the access of ungulates to ground forage, and thus they suffer from dietary deficiency (68). Shrub use increases and forb and grass consumption decreases during winter with increasing snowpack (69). Conclusion Although markhor is a mixed feeder with a greater tendency towards grazing, the results confirm that it adopts a browsing strategy during the winter season, and thus shows high adaptability in its feeding habits. Conifers are consumed by markhor during winter as an emergency food rather than by selection, whereas shrubs were critical to the dietary composition and were consumed at relatively high rates. Snowfall during winter acts as a major limiting factor and has a drastic impact on the survival of this animal, as almost all ground forage remains covered and hence unavailable for consumption. The nutritive value of the dietary species of wild herbivores has hardly been evaluated and needs to be studied to ascertain the reasons for their avoidance or preference (49). We also recommend that the plant species consumed by markhor during winter be protected and propagated (55), and that supplementary feed be provided to this wild goat during the resource-lean period of winter. Anthropogenic activities inside the National Park must be strictly curbed to ensure the continued survival of markhor in its distribution range in Kashmir.
4,386.2
2020-06-27T00:00:00.000
[ "Environmental Science", "Biology" ]
The Association of Neonatal Gut Microbiota Community State Types with Birth Weight Background: while most gut microbiota research has focused on term infants, the health outcomes of preterm infants are equally important. Very-low-birth-weight (VLBW) or extremely-low-birth-weight (ELBW) preterm infants have a unique gut microbiota structure, and probiotics have been reported to somewhat accelerate the maturation of the gut microbiota and reduce intestinal inflammation in very-low-birth-weight preterm infants, thereby improving their long-term outcomes. The aim of this study was to investigate the structure of gut microbiota in ELBW neonates to facilitate the early identification of different types of low-birth-weight (LBW) preterm infants. Methods: a total of 98 fecal samples from 39 low-birth-weight preterm infants were included in this study. Three groups were categorized according to different birth weights: ELBW (n = 39), VLBW (n = 39), and LBW (n = 20). The gut microbiota structure of neonates was obtained by 16S rRNA gene sequencing, and microbiome analysis was conducted. The community state type (CST) of the microbiota was predicted, and correlation analysis was conducted with clinical indicators. Differences in the gut microbiota composition among ELBW, VLBW, and LBW were compared. The value of gut microbiota composition in the diagnosis of extremely low birth weight was assessed via a random forest machine learning approach. Results: we briefly analyzed the structure of the gut microbiota of preterm infants with low birth weight and found that the ELBW, VLBW, and LBW groups exhibited gut microbiota with heterogeneous compositions. Low-birth-weight preterm infants showed five CSTs dominated by Enterococcus, Staphylococcus, Klebsiella, Streptococcus, Pseudescherichia, and Acinetobacter. The birth weight and clinical indicators related to prematurity were associated with the CST. We found the composition of the gut microbiota was specific to the different types of low-birth-weight premature infants, namely, ELBW, VLBW, and LBW. The ELBW group exhibited significantly more of the potentially harmful intestinal bacteria Acinetobacter relative to the VLBW and LBW groups, as well as a significantly lower abundance of the intestinal probiotic Bifidobacterium. Based on the gut microbiota’s composition and its correlation with low weight, we constructed random forest model classifiers to distinguish ELBW and VLBW/LBW infants. The area under the curve of the classifiers constructed with Enterococcus, Klebsiella, and Acinetobacter was found to reach 0.836 by machine learning evaluation, suggesting that gut microbiota composition may be a potential biomarker for ELBW preterm infants. Conclusions: the gut bacteria of preterm infants showed a CST with Enterococcus, Klebsiella, and Acinetobacter as the dominant genera. ELBW preterm infants exhibit an increase in the abundance of potentially harmful bacteria in the gut and a decrease in beneficial bacteria. These potentially harmful bacteria may be potential biomarkers for ELBW preterm infants.
Introduction Extremely-low-birth-weight (ELBW) infants are newborns whose birth weight is less than 1000 g. ELBW preterm infants have less well-developed systems than other low-birth-weight (LBW) preterm infants, and their poor immune systems make them more susceptible to infections and other preterm complications, often involving the nervous system, which increase the risk of cerebral palsy, intellectual disability, visual impairment, and deafness [1,2]. The mortality rate of premature infants with ELBW is high; it has been reported that the probability of ELBW premature infants dying from clinical complications (such as necrotizing enterocolitis and sepsis) is as high as 23% [3][4][5]. The gut microbiota begins to colonize the gastrointestinal tract at birth and plays an important role in the growth and development of newborns in the early stages of life and beyond. However, the diversity of the gut microbiota is low in early neonatal life, and its structure is influenced by a variety of factors, including the mode of delivery, gestational age, birth weight, feeding method, and the environment [6][7][8][9][10][11]. The mode of delivery is one of the most important determinants of gut microbiota composition [12]. In vaginally delivered newborns, the abundance of Bacteroidetes is higher, while in cesarean-delivered newborns, Klebsiella and Haemophilus are the dominant species [6]. Studies have shown that gestational age and birth weight are the most important factors influencing differences in intestinal microecology. Preterm infants have a unique gut microbiota in the early postnatal period [13], which is dominated by conditionally pathogenic bacteria such as Staphylococci, Enterococci, and Enterobacteria, and beneficial bacteria such as Bifidobacteria do not exist as dominant species [14]. Most LBW preterm infants are transferred to a neonatal intensive care unit (NICU) after birth to be maintained on respiratory support equipment because of respiratory distress or other reasons. Gut microbiome colonization in LBW preterm infants can also be influenced by the NICU's ambient settings and the usage of the associated equipment. Extended respiratory support in preterm infants can lead to an increase in intestinal aerobic and facultative anaerobic bacteria [15]. Gut microbiota genera in LBW preterm infants in the NICU are dominated by Klebsiella, Enterobacter, and Enterococci, and differences among the gut microbiota decrease with increasing hospitalization time [16]. Another significant element influencing gut microbiota makeup in preterm newborns is the feeding method. Breast-fed and non-breast-fed infants have different gut microbiota [17]. However, breastfeeding can help premature infants' immune systems mature and encourage the colonization of the intestine by Bifidobacterium [18]. The maternal diet can also affect the composition of the infant's gut microbiota [19][20][21][22]. For instance, if the mother consumes plant-based protein or a high-fat diet, it can lead to a significant reduction in the presence of Bacteroides bacteria in the newborn's gut, and the decrease in Bacteroides may affect the early immune and metabolic development of newborns [20,21]. In addition, the use of antibiotics also has a certain impact on the composition of the gut microbiota in premature infants. Antibiotics can reduce the diversity of the gut microbiota and delay the colonization of Bifidobacterium [23]. The community state type (CST) is based on the gut microbiota abundance obtained from sequencing analysis
and classified into different CSTs by clustering [24,25]. There are also variations in the gut microbiota community state types among infants of different age groups. In healthy infants under 6 months old, the gut microbiota CSTs are mainly characterized by a higher abundance of Bifidobacterium, while in infants aged 12 to 36 months typical adult bacterial genera such as Bacteroides and Faecalibacterium predominate [26]. It can be seen that, as the newborn grows and develops, the composition of the gut microbiota also undergoes dynamic changes. Current research has found that preterm infants, because of their prolonged exposure to the NICU environment and the relatively frequent clinical interventions they experience, such as respiratory support and antibiotic use, undergo changes in their gut microbiota composition, making them more susceptible to conditions like necrotizing enterocolitis (NEC) and late-onset sepsis (LOS) [3,27,28]. Supplementing the food of early-stage newborns with probiotics such as Bifidobacterium can promote the colonization of the intestine by beneficial bacteria, thereby preventing or reducing the occurrence of NEC, LOS, and feeding intolerance [29][30][31][32]. Probiotic supplementation improves the gut microbial composition, making it closer to that of full-term infants, which is beneficial for promoting immunity and metabolism [33][34][35][36]. Probiotic-supplemented ELBW preterm newborns had low levels of harmful bacteria and a substantial increase in the gut bacterium Bifidobacterium: the abundance of Bifidobacterium in the intestinal bacteria of ELBW preterm infants who received probiotic supplementation was significantly higher than in those who did not, and the abundance of pathogenic bacteria was lower. Simultaneously, preterm infants who received probiotic supplementation had higher levels of acetate and lactate (end products of human milk oligosaccharide (HMO) metabolism), and the abundance of acetate was positively correlated with the abundance of Bifidobacterium [37]. At the same time, the gut microbiota diversity of ELBW preterm infants who received probiotic Lactobacillus supplementation increased, and the abundance of the supplemented probiotics also rose. Compared with control-group infants, ELBW preterm infants who received probiotic supplementation had reduced abundances of Staphylococcaceae and Enterobacteriaceae in their intestines [38]. It can be seen that probiotic supplementation for preterm infants can facilitate colonization of the intestine by beneficial bacteria and reduce harmful bacteria. Probiotics can also promote the metabolism of HMOs in breast milk, enabling the beneficial HMO metabolites to exert their immune-enhancing effects.
Most studies on the intestinal microbiota have focused on full-term infants; however, the health outcomes of ELBW and VLBW preterm infants are equally important. Because of their immature systemic physiology and immature intestinal microbiota structure, they may be predisposed to long-term outcomes such as neurodevelopmental disorders [39]. Studies have found that there is a correlation between the gut microbiota and brain function. One study established a connection between the gut microbiota, immunology, and neurodevelopment in extremely preterm infants and discovered that excessive growth of the intestinal microbiota can be a strong predictor of brain injury; abnormal development of the gut-microbiota-immune-system-brain axis may drive or exacerbate brain injury in extremely preterm infants [40]. The underlying mechanisms of these effects have not been fully elucidated, and some have not even been considered. Therefore, this study aimed to investigate the gut microbiota structure of preterm infants with LBW using 16S rRNA gene sequencing technology. We analyzed the gut microbiota structure, the corresponding microbiota profiles, and the CSTs of the gut microbiota among preterm infants of different birth weights. Correlation analysis of the CSTs and clinical indicators of preterm infants was conducted, and the clinical value of the intestinal microbiota in diagnosing extremely-low-birth-weight preterm infants was evaluated. Participant Enrollment and Sample Collection This study included a total of 98 fecal samples from 39 preterm infants with LBW. Inclusion criteria: premature infants hospitalized in the NICU of the neonatology department; gestational age at birth of <37 weeks and a birth weight of <2500 g; hospitalization time > 7 days. Exclusion criteria: neonates with a gestational age at birth of ≥37 weeks and a birth weight of ≥2500 g; hospitalization time < 7 days; premature infants with severe congenital heart disease or severe digestive tract malformation requiring surgery; premature infants with Down syndrome, hereditary metabolic diseases or severe asphyxia; stillbirths, induced abortions, or cases combined with severe cardiac and renal dysfunction. We selected the first stool sample of NICU low-birth-weight premature infants who met the inclusion criteria and then planned to collect fecal samples every 2 weeks until discharge or until the 8th week of collection. Finally, the preterm infants were divided into three groups based on their birth weights: ELBW (<1000 g), VLBW (1000-1499 g), and LBW (1500-2499 g). The guardians of the participants collected fecal samples in sterile containers and transported them overnight on ice to the laboratory. The researchers immediately aliquoted the samples into tubes containing 3-5 g each and stored them in a −80 °C freezer. The research protocol was designed in compliance with the Helsinki Declaration and approved by the hospital's medical ethics committee, and each neonate's parents provided written informed consent. Analysis of Ecological Diversity Indices The diversity function from the R package vegan (version 2.6-4) was used to calculate the Shannon and Inverse Simpson indices for the samples. The estimateR function from the R package vegan was used to calculate the richness index for the samples.
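For illustration, the diversity calculations described above can be sketched in R as follows; otu_table here is a hypothetical samples-by-taxa count matrix standing in for the study's data.

# Sketch of the diversity-index calculations described above (R package vegan).
# "otu_table" is a hypothetical samples-by-taxa count matrix, not the study's data.
library(vegan)

shannon  <- diversity(otu_table, index = "shannon")     # Shannon index per sample
invsimp  <- diversity(otu_table, index = "invsimpson")  # Inverse Simpson index per sample
richness <- estimateR(otu_table)                        # observed richness, Chao1 and ACE estimates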
Stacked Bar Chart, Chord Diagram, Venn Plot, Volcano Plot, Manhattan Plot The stacked bar charts, chord diagrams, Venn plots, volcano plots, and Manhattan plots were produced by following the EasyAmplicon protocol [47]. Constrained Principal Coordinates Analysis Constrained Principal Coordinate Analysis (CPCoA) adds grouping information to Principal Coordinate Analysis (PCoA) in order to find a plane that best explains the differences between groups under user-defined grouping conditions. The process was completed by following the EasyAmplicon protocol [47]. Gut-Microbiota Network Analysis The layout and visualization of the gut microbiota network diagram were completed with reference to the articles published by the Zhou Jizhong, Li Ji, and Shen Qirong teams [48][49][50]. Analysis of Microbial Community Structure Gap statistics were used to determine the optimal number of clusters in the microbial community structure. This method identifies the best number of clusters by comparing the distribution of the clustered data with that of a random reference distribution through the calculation of the gap (the "gap statistic") between them. Non-Metric Multidimensional Scaling Non-metric multidimensional scaling (NMDS) was completed with reference to the authors' previously published research [51,52]. First, based on the genus-level data, the metaMDS function in the R package vegan (version 2.6-4) was used to conduct the NMDS ordination analysis and obtain the stress value. Simultaneously, the adonis2 function in the R package vegan was employed to conduct a permutational multivariate analysis of variance (PERMANOVA) based on Bray-Curtis distance, yielding p-values and R2 values. The ordisurf function in the R package vegan was used to passively add environmental variables to the NMDS ordination. Finally, the geom_point function in the R package ggplot2 (version 3.3.2) was employed to visualize the results of the NMDS ordination. Random Forest Analysis Refer to previously published articles [53][54][55] for the detailed methodology of the random forest analysis (detailed in the Supplementary Materials). Other Analyses To evaluate the correlation between the gut microbiota components that differed significantly between groups and the clinical manifestations, a regression model was constructed with the lm function in R (version 4.2.3). The p-value and coefficient of determination (R-squared) of the regression model were obtained through the summary function. The beeswarm function in the R package beeswarm (version 0.4.0) was used to create boxplots, and the wilcox.test function from the R package stats (version 4.2.3) was used for statistical testing to obtain p-values. The visualization of clinical data and other aspects was completed using customized scripts.
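The NMDS ordination and PERMANOVA steps just described can be sketched as below; genus_table (samples by genera) and meta$group (ELBW/VLBW/LBW labels) are hypothetical stand-ins for the study's data.

# Sketch of the NMDS and PERMANOVA workflow described above (R package vegan).
# "genus_table" and "meta$group" are hypothetical inputs, not the study's data.
library(vegan)

nmds <- metaMDS(genus_table, distance = "bray", k = 2)  # NMDS on Bray-Curtis distances
nmds$stress                                             # stress value of the ordination

# PERMANOVA on Bray-Curtis distances, returning R2 and a p-value for the grouping
adonis2(genus_table ~ group, data = meta, method = "bray", permutations = 999)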
Results This study included 98 fecal samples from 39 preterm infants. We conducted a visual analysis of the clinical data on the premature infants, and the results are shown in Figure 1A. To determine the saturation of the 16S rRNA gene sequencing data, that is, whether the amount of sequencing data was sufficient, we performed saturation curve analysis based on species richness, and the results are shown in Figure 1B. The saturation curves for ELBW, LBW, and VLBW all tended towards saturation, indicating that the 16S rRNA gene sequencing data were sufficient. At the same time, the species richness in the LBW group was slightly higher than in the ELBW and VLBW infants. We used ANOSIM (analysis of similarities) to compare the gut microbiota composition data among ELBW, LBW, and VLBW infants. As a non-parametric test, ANOSIM is often used to test for similarities among high-dimensional data. We also compared the magnitude of the differences in gut microbiota composition both between and within the ELBW, LBW, and VLBW groups, and the results are presented in Figure 1C. The R-value of 0.0418 indicated a certain degree of difference both within and between the groups, and the p-value of 0.043 suggested that this difference, although statistically significant, was limited. To further understand the shared and unique gut microbiota profiles among ELBW, LBW, and VLBW infants, and to visually demonstrate the overlaps in gut microbiota among the three groups, we conducted a Venn plot analysis and found that the number of OTUs shared among the three groups was 118, indicating that the majority of the gut microbiota were common to ELBW, LBW, and VLBW infants (Figure 1D). To further understand the gut microbiota composition of the preterm infants, we analyzed the gut microbiota at the genus level (Figure 1E,F). The results showed that the gut microbiota of the ELBW group was dominated by Enterococcus, followed by Staphylococcus, Acinetobacter, and Klebsiella. The gut microbiota of the VLBW group was primarily composed of Klebsiella, followed by Enterococcus, Staphylococcus, Streptococcus, Acinetobacter, and Pseudescherichia. In the LBW group, Enterococcus, Staphylococcus, Klebsiella, and Streptococcus were the main gut microbiota genera, followed by Bifidobacterium and Pseudescherichia. Compared with the LBW group, the ELBW and VLBW groups' abundances of Acinetobacter were significantly increased, with a notable increase observed in the ELBW group; conversely, the abundance of Bifidobacterium was significantly reduced. We employed CPCoA to compare the differences in gut microbiota composition among the ELBW, LBW, and VLBW groups of infants. The results showed that the grouping could explain 2.65% of the variation, and the separation was relatively distinct, indicating that grouping had a certain influence on the composition of the gut microbiota (Figure 1G). We further analyzed the gut microbiota of the preterm infants in the ELBW, LBW, and VLBW groups by NMDS clustering at the genus level. The group data were compared via the Bray-Curtis index to generate the NMDS and visualize the similarity of the gut microbiota. In Figure 1H, each point represents the microbiota characteristics of an individual preterm infant in a low-dimensional space. The results showed that there were distinct clusters of gut microbiota genera among the three groups, indicating significant differences in their distribution (R2 = 0.041, p = 0.001). Simultaneously, we conducted clustering
analysis based on the clinical phenotypes of the three groups of preterm infants. The results showed significant differences in gestational age and birth weight among the three groups (Figure S1A). To further understand whether the gut microbiota components were differentially distributed among the ELBW, LBW, and VLBW groups, we performed an analysis of the gut microbiota at the genus level using volcano plots. The results showed that 118 genera with differential abundances were identified between ELBW and LBW at the genus level. Among them, 56 genera were less abundant in ELBW, while 62 genera were more abundant in the ELBW than in the LBW group (Figure 2A). Compared with VLBW infants, ELBW infants exhibited a total of 83 differentially abundant genera of gut microbiota at the genus level, with 44 genera showing lower, and 39 genera showing higher, abundances compared with those in the VLBW group (Figure 2B). A total of 67 differentially abundant genus-level enterobacteria were identified in VLBW infants compared with the findings in LBW infants, with 36 genera less abundant and 31 genera more abundant than in the LBW group (Figure 2C). We further specifically analyzed these differentially abundant gut microbiota through Manhattan plots. The results showed that, compared with the LBW group, the ELBW group exhibited more Enterococcus, Streptococcus, and Acinetobacter, but lower amounts of Klebsiella. Enterococcus, Streptococcus, and Clostridium sensu stricto abundances were predominantly lower, and that of Enterobacter was predominantly higher, in ELBW compared with the findings in VLBW. Compared with LBW infants, VLBW infants showed more Acinetobacter and less Enterococcus and Klebsiella (Figure 2D-F).
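As a sketch of the between-group similarity test reported above (Figure 1C), the ANOSIM comparison can be written as follows, again using the hypothetical genus_table and group labels rather than the study's data.

# Sketch of the ANOSIM test reported above (R package vegan); hypothetical inputs.
library(vegan)

ano <- anosim(genus_table, grouping = meta$group, distance = "bray", permutations = 999)
ano$statistic  # ANOSIM R (the study reports R = 0.0418)
ano$signif     # permutation p-value (the study reports p = 0.043)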
To further understand the interrelationships among the intestinal microbiota in each group, we employed the molecular ecological networks (MENs) method and visualization tools based on 16S rRNA high-throughput sequencing. The results showed that the gut microbial interaction network of the VLBW group consisted of 416 nodes (ASVs) and 8856 links (interactions). In the network constructed for the ELBW group, more nodes were observed, but fewer links were present (Figure 3A). Compared with non-breastfed preterm infants, breastfed preterm infants exhibited a higher number of nodes but fewer links in their gut microbiota networks. Preterm infants with jaundice had fewer nodes and even fewer links compared with those without jaundice (Figure S1B). To further understand whether the differences among all enrolled subjects affected their corresponding gut microbiota and clinical phenotypes, we used gap statistics, a clustering method based on interval statistics, and analyzed the optimal number of clusters based on the total sample set. The study subjects were grouped according to their similarities, resulting in high similarity within groups and significant differences between groups. Figure 3B displays the gap statistic plots based on clustering of the samples. Based on B = 100 iterations for each k, the results showed that k = 5 was the optimal k-value, indicating that the clustering performance was essentially optimal at this point; as k continued to increase, the performance improved only relatively slowly. Therefore, the final clustering was performed with a k-value of 5, meaning that we grouped the samples into five clusters. We further employed NMDS to analyze the five clusters identified through the clustering analysis. By calculating the Bray-Curtis index, we generated an NMDS plot to visually display the similarities among the samples. To further understand the connection between the gut microbiota and clinical phenotypes in preterm infants with low birth weight, we first conducted an analysis of CSTs based on their gut microbiota. Through
multidimensional scaling (MDS), we performed ordination analysis based on the sorted eigenvalues and visualized the first four eigenvectors using NMDS (Figure 3C,D). The five CST samples were then visualized using the NMDS method. In Figure 3E, each point on the plot represents the characteristics of a single sample in the low-dimensional space, and the results indicated that the five CSTs exhibited distinct clustering patterns. To understand the relationships among the gut microbiota abundances of the five clusters identified through the clustering analysis, we used a clustering heatmap to display the variation in the abundance of key gut microbiota across the five CST samples. This allowed us to compare the compositional similarities and differences in the gut microbiota at the genus level among the different groups. The results indicated that the gut microbiota in the five CSTs was primarily composed of harmful bacteria. The six bacterial genera with relatively high abundance in the gut microbiota were Enterococcus, Klebsiella, Staphylococcus, Streptococcus, Pseudescherichia, and Acinetobacter. The abundance of gut bacteria also varied among the different CSTs. Specifically, the abundances of Streptococcus and Pseudescherichia were higher in CST 1; Staphylococcus had a higher abundance in CST 2, Enterococcus was more abundant in CST 4, and Klebsiella was more prevalent in CST 5 (Figure 3F). We further analyzed the relationships among the five CSTs and the clinical phenotypes, and the results are presented in Figure 4A. Overall, there were significant differences (p < 0.05) between the five CSTs in terms of gestational age, parity, birth weight and length, weight and length at 1 month, weight and length at 3 months, and the percentage of neutrophils. In terms of the gestational age, birth weight, and birth length of the infants, there were significant differences between CST 3 and CST 5. In a comparison of body length at 1 month, there were significant differences between CST 1 and CST 3. In a comparison of body length at 3 months, CST 5 exhibited significant differences compared with CST 1, CST 2, and CST 3. There were also significant differences in the percentage of neutrophils between CST 4 and CST 5. In terms of parity, there was also a significant difference between CST 2 and CST 5. We further analyzed the correlation between each group and the clinical indicators (Figure 4B). The results showed a significant positive correlation between body length at 1 month and CST 1, while there was a significant negative correlation between body length at 1 month and CST 5. Additionally, there was a significant negative correlation between CST 1 and platelet count (PLT), as well as a negative correlation between CST 4 and total bile acid (TBA).
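The choice of the number of CST clusters described above (gap statistic with B = 100 iterations, yielding k = 5) can be sketched as below. This is a simplified k-means version using the cluster package; the study clustered on Bray-Curtis distances, and genus_table is again a hypothetical stand-in.

# Sketch of the gap-statistic selection of the number of CSTs (R package cluster).
# Simplified: k-means on a hypothetical "genus_table"; the study used Bray-Curtis distances.
library(cluster)

gap <- clusGap(genus_table, FUNcluster = kmeans, K.max = 10, B = 100, nstart = 25)
k_best <- maxSE(gap$Tab[, "gap"], gap$Tab[, "SE.sim"], method = "firstSEmax")  # optimal k
cst <- kmeans(genus_table, centers = k_best, nstart = 25)$cluster              # CST label per sample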
To further explore whether there were differences in the gut microbiota between ELBW infants and the other LBW infants, we initially classified the preterm infants into two groups: the ELBW group (ELBW+) and the group comprising VLBW and LBW infants, collectively termed the non-extremely-low-birth-weight group (ELBW−). Based on the classification of the gut microbiota at the ASV level, the gut microbiota of the two groups of children were compared, and the results are shown in Figure 5A. The four bacteria with significantly increased abundance in the ELBW+ group were Acinetobacter_ASV_46, Acinetobacter_ASV_49, Acinetobacter_ASV_51, and Acinetobacter_ASV_54. The abundances of the intestinal bacteria Bifidobacterium_ASV_107 and Klebsiella_ASV_2 were significantly lower in the ELBW+ group of infants. To further evaluate the clinical application value of the gut microbiota, we constructed a classifier based on a random forest model, as shown in Figure 5B,C. The top-three gut microbiota (Klebsiella_ASV_2, Enterococcus_ASV_38, Klebsiella_ASV_11) used for preterm infant classification gave an AUC value of 0.836. The AUC values for preterm infant classification using the top-5 and top-10 gut microbiota were 0.793 and 0.753, respectively. These results indicated that intestinal bacteria may be potential biomarkers for ELBW preterm infants. Discussion CSTs can be used to discover the dominant bacterial community composition in different age groups and samples. Currently, most studies have focused on analyzing the CSTs of microbiota in samples from the reproductive tract. One study categorized the female vaginal microbiota into five CSTs by 16S rRNA gene sequencing, of which CSTs I, II, III, and V were all dominated by Lactobacillus [24]. A study based on adult gut microorganisms found that they can be categorized into three distinct clusters, known as enterotypes, driven by different genera of bacteria, namely Bacteroides (enterotype 1), Prevotella (enterotype 2), and Ruminococcus (enterotype 3) [56]. A study conducted on the gut microbiota of school-age children identified three distinct enterotypes: Bacteroides, Prevotella, and Bifidobacterium [57].
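The random forest classifier and AUC evaluation described above can be sketched as follows; asv_table (samples by ASVs) and the ELBW+/ELBW− labels are hypothetical stand-ins, and the AUC of 0.836 is the study's reported value, not something this sketch would reproduce.

# Sketch of the random forest classifier and ROC/AUC evaluation described above.
# "asv_table" and "labels" (ELBW+ / ELBW-) are hypothetical stand-ins for the study's data.
library(randomForest)
library(pROC)

rf <- randomForest(x = asv_table, y = as.factor(labels), ntree = 500, importance = TRUE)
importance(rf)                                             # rank ASVs, e.g. to pick the top 3 features

prob <- predict(rf, asv_table, type = "prob")[, "ELBW+"]   # class probabilities
roc_obj <- roc(response = labels, predictor = prob)        # in practice, evaluate on held-out data
auc(roc_obj)                                               # area under the ROC curve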
There have been fewer CST analyses conducted on samples from neonates. A study focusing on the gut microbiota of infants found that infants under 6 months of age primarily had five community state types, which were dominated by the genus Bifidobacterium. There were seven main infant community state types (ICSTs) for infants aged 6-36 months; these ICSTs were characterized by typical adult bacterial genera and primarily manifested as decreased Bifidobacterium and increased Bacteroides [24]. Grier and his team conducted a longitudinal CST analysis by collecting intestinal samples from preterm and full-term infants. The results revealed the existence of CSTs potentially characterized by Enterobacteriaceae, Veillonella, Ruminococcus, Streptococcus, Prevotella, Bacteroides, and Bifidobacterium [58]. The detection of a large number of ICSTs is believed to reflect the high variability and dynamics of the microbiota during early life [59]. In this study, CST analysis was conducted on the gut microbiota of low-weight preterm infants, and we found diverse gut microbiota compositions among the VLBW, LBW, and ELBW infants. Low-weight preterm infants exhibited five distinct CSTs, primarily characterized by Enterococcus, Staphylococcus, Klebsiella, Streptococcus, Pseudescherichia, and Acinetobacter. The primary intestinal bacteria in CST 1 were Streptococcus and Pseudescherichia. CST 2 was dominated by Staphylococcus. CST 4 was primarily made up of Enterococcus, while CST 3 and CST 5 were mainly Klebsiella. It can be seen that the CSTs of the neonatal intestine were generally dominated by opportunistic pathogens. The gut microbiota of neonates is influenced by various factors, and there is a correlation between the community state types of the neonatal microbiota and clinical phenotypes. The community state types of the neonatal gut microbiota also differ depending on the mode of delivery. Infants delivered vaginally tend to have CSTs dominated by Bifidobacterium, while those delivered by cesarean section are more likely to have Bacteroides as the primary bacteria [24]. In this study, the preterm infants exhibited significant differences in gestational age, birth weight, and birth length across the CSTs, with especially significant differences between CST 3 and CST 5. There was a linear relationship between the CST and the length, PLT, and TBA of the preterm infants. However, further verification is needed to determine whether there is a causal relationship between the gut microbiota and these clinical indicators.
Particularly in preterm children, intestinal development is immature during the neonatal period, and the structure and function of the gut microbiota vary markedly. The gut microbiota of preterm infants is often dominated by facultative anaerobes and opportunistic pathogens such as Enterobacter, Enterococcus, and Staphylococcus [60,61]. In this study, we analyzed the structure of the intestinal bacteria in the different low-birth-weight preterm infants. We found that, although the intestinal bacterial composition varied among preterm infants with different low birth weights, the main bacterial genera were still opportunistic pathogens such as Enterococcus, Staphylococcus, Klebsiella, Streptococcus, and Acinetobacter. Compared with the VLBW and LBW groups, the ELBW group in this study exhibited a significant increase in the potentially harmful intestinal bacterial genus Acinetobacter. Acinetobacter is an opportunistic pathogen, is a major cause of neonatal infections and outbreaks in neonatal intensive care units (NICUs) [62], and can lead to diseases such as meningitis, bloodstream infections, and respiratory infections [63,64]. Acinetobacter, one of the major pathogens associated with drug-resistance-related mortality, is associated with high morbidity and mortality rates, and preterm and VLBW infants are highly susceptible to infection [65,66]. In this study, Klebsiella was identified as a potential biomarker genus in preterm infants, and the random forest analysis also indicated that Klebsiella could be a potential biomarker for identifying ELBW preterm infants. Klebsiella is a common intestinal microorganism during neonatal development [67]. It can act on macrophages to evade the host immune system and persist, potentially causing opportunistic infections [68]. Because of their small gestational age, low body weight, and the incomplete development of various systems, such as the digestive, absorptive, and immune systems, preterm infants are more prone to a series of infections, such as neonatal sepsis and necrotizing enterocolitis (NEC). Relevant studies have shown that Klebsiella is associated with bacterial infections and the occurrence of NEC in neonates [69,70], and elevated Klebsiella abundance is also associated with neonatal cerebral white-matter damage [40]. However, further research is needed to elucidate the specific mechanisms underlying these associations and their relevance to the health of preterm infants.
In this study, a significant decrease in the abundance of the intestinal probiotic Bifidobacterium was observed in ELBW preterm infants. Bifidobacterium are beneficial bacteria in the human gut with pro-inflammatory, anti-inflammatory, anti-viral, and immunomodulatory functions [71][72][73]. Studies have found that a higher abundance of Bifidobacterium in early infancy is associated with a better immune response to vaccination and potentially enhanced immune memory [74]. A low abundance of Bifidobacterium may lead to the development of allergies, eczema, and asthma [75]. A study of gut microbial composition and function in very preterm infants given probiotics found that Bifidobacterium can be used to predict microbial maturation and is an important factor in accelerating gut microbial maturation [35]. This showed that probiotic supplementation can promote the maturation of the gut microbiota in premature infants, thus reducing differences between microbiota. In addition, related studies found that probiotic supplementation in preterm infants can reduce mortality and improve NEC and feeding intolerance, among other benefits [76]. Evidently, changes in intestinal probiotics may affect the health of preterm infants. Using machine learning methods, we demonstrated the value of the gut microbiota composition in diagnosing extremely-low-birth-weight preterm infants. We assessed the clinical value of the gut microbiota in ELBW preterm infants with a machine learning method and found that the AUC value for the intestinal bacteria Klebsiella_ASV_2, Enterococcus_ASV_38, and Klebsiella_ASV_11 was 0.836. The AUC value for Klebsiella_ASV_2, Enterococcus_ASV_38, Klebsiella_ASV_11, Acinetobacter_ASV_51, and Acinetobacter_ASV_46 was found to be 0.793. The results show that the identification of ELBW preterm infants based on gut bacteria is reliable to some extent. With machine learning analysis methods, gut bacteria may play a significant role in characterizing ELBW preterm infants, and their ROC values can indicate diagnostic performance. This study demonstrated a certain level of innovation: CST analysis is commonly used in the structural analysis of genital tract microbiota, whereas in this study we identified five major CSTs through an analysis of community types in low-birth-weight preterm infants, and the CST was related to the clinical phenotype of the premature infants. Furthermore, machine learning methods were employed to evaluate the potential of using the bacterial composition to identify preterm infants with ELBW. As for limitations, the 16S rRNA gene sequencing method used lacks the ability to analyze the functional composition of the gut microbiota. The study also lacked an independent validation cohort to verify the potential of using the bacterial composition to identify preterm infants with ELBW. The next step will be to further investigate the functional aspects of the gut microbiota and conduct larger-scale validation studies. Conclusions The intestinal bacteria of premature infants are characterized by community state types primarily driven by harmful bacteria such as Enterococcus, Klebsiella, and Acinetobacter. ELBW preterm infants exhibit an increase in the abundance of potentially harmful bacteria in the gut and a decrease in beneficial bacteria. These potentially harmful bacteria may be potential biomarkers for ELBW premature infants.
Figure 1.Analysis of gut microbiota diversity in low-birth-weight preterm infants.(A) Visual analysis of clinical data for three groups of preterm infants; (B) saturation curve analysis based on species richness; (C) comparison of similarities and differences in ELBW, LBW, and VLBW gut microbiota composition data by ANOSIM; (D) Venn plot illustrating shared and unique OTUs among the three groups; (E,F) composition of gut microbiota at the genus level among the three groups; (G) CPCoA explained 2.65% of the total variation in gut microbiota composition among the groups, and there were significant differences between the groups (p < 0.05); (H) NMDS analysis used to rank Figure 1 . Figure 1.Analysis of gut microbiota diversity in low-birth-weight preterm infants.(A) Visual analysis of clinical data for three groups of preterm infants; (B) saturation curve analysis based on species richness; (C) comparison of similarities and differences in ELBW, LBW, and VLBW gut microbiota composition data by ANOSIM; (D) Venn plot illustrating shared and unique OTUs among the three groups; (E,F) composition of gut microbiota at the genus level among the three groups; (G) CPCoA explained 2.65% of the total variation in gut microbiota composition among the groups, and there were significant differences between the groups (p < 0.05); (H) NMDS analysis used to rank the gut microbiota of the ELBW, VLBW, and LBW groups.The Bray-Curtis index was calculated for the three groups to generate the NMDS to visualize the similarities among the gut microbiota, and the results showed that there was a significant difference in the distribution of the gut microbiota among the three groups (p < 0.05). Figure 2 . Figure 2. The distribution of bacteria at the genus level in the gut microbiota.(A-C) Volcano plots revealing differentially abundant gut microbiota among the VLBW, ELBW, and LBW groups.Red represents significantly high-abundance bacteria, while green represents significantly low-abundance bacteria.(D-F) Manhattan plots revealing the distribution of gut microbiota among the three groups. Figure 2 . Figure 2. The distribution of bacteria at the genus level in the gut microbiota.(A-C) Volcano plots revealing differentially abundant gut microbiota among the VLBW, ELBW, and LBW groups.Red represents significantly high-abundance bacteria, while green represents significantly low-abundance bacteria.(D-F) Manhattan plots revealing the distribution of gut microbiota among the three groups. Figure 3 . Figure 3.The relationships among the five clusters of samples identified through clustering analysis and the diversity of the gut microbiota.(A) MENs method based on 16S rRNA high-throughput sequencing and visualization tools to analyze the interrelationships among gut microorganisms between groups.(B) The gap statistic method was used to analyze the optimal number of clusters based on the Bray-Curtis distance of the incoming samples; the results show that 5 was the optimal k value.(C) Ordination analysis of eigenvalue obtained from MDS. (D) NMDS visualization based on the first four eigenvectors obtained by MDS.(E) Demonstration of 5 CSTs samples based on the NMDS method.(F) Heatmap showcasing the variations in the abundance of driver intestinal bacteria across the five CST sample Figure 3 . 
Figure 3.The relationships among the five clusters of samples identified through clustering analysis and the diversity of the gut microbiota.(A) MENs method based on 16S rRNA high-throughput sequencing and visualization tools to analyze the interrelationships among gut microorganisms between groups.(B) The gap statistic method was used to analyze the optimal number of clusters based on the Bray-Curtis distance of the incoming samples; the results show that 5 was the optimal k value.(C) Ordination analysis of eigenvalue obtained from MDS. (D) NMDS visualization based on the first four eigenvectors obtained by MDS.(E) Demonstration of 5 CSTs samples based on the NMDS method.(F) Heatmap showcasing the variations in the abundance of driver intestinal bacteria across the five CST sample. Figure 4 . Figure 4.The correlation analysis between the five CST samples and clinical phenotypes.(A) Significant differences between the five CST samples and clinical phenotypes (*** p < 0.001, ** p < 0.01, * p < 0.05); a, b,and care defined as using the significant difference letter marking method to arrange all the means from largest to smallest.Any difference with the same marking letter is not significant, and any difference with a different marking letter is significant.(B) Significant linear correlation between the five CST samples and clinical phenotypes. Figure 4 .Figure 5 . Figure 4.The correlation analysis between the five CST samples and clinical phenotypes.(A) Significant differences between the five CST samples and clinical phenotypes (*** p < 0.001, ** p < 0.01, * p < 0.05); a, b,and care defined as using the significant difference letter marking method to arrange all the means from largest to smallest.Any difference with the same marking letter is not significant, and any difference with a different marking letter is significant.(B) Significant linear correlation between the five CST samples clinical phenotypes.
8,997.8
2024-06-25T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
Review Article: Unraveling synergistic effects in plasma-surface processes by means of beam experiments The interaction of plasmas with surfaces is dominated by synergistic effects between incident ions and radicals. Film growth is accelerated by the ions, which provide adsorption sites for incoming radicals. Chemical etching is accelerated by incident ions when chemical etching products are removed from the surface by ion sputtering. The latter is the essence of anisotropic etching in microelectronics, as elucidated by the seminal paper of Coburn and Winters [J. Appl. Phys. 50, 3189 (1979)]. However, ion-radical synergisms also play an important role in a multitude of other systems, which are described in this article: (1) hydrocarbon thin film growth from methyl radicals and hydrogen atoms; (2) hydrocarbon thin film etching by ions and reactive neutrals; (3) plasma inactivation of bacteria; (4) plasma treatment of polymers; and (5) oxidation mechanisms during reactive magnetron sputtering of metal targets. All these mechanisms are unraveled by using a particle beam experiment to mimic the plasma-surface interface, with the advantage of being able to control the species fluxes independently. It clearly shows that the mechanisms described by Coburn and Winters [J. Appl. Phys. 50, 3189 (1979)] are ubiquitous. I. INTRODUCTION The interaction of low temperature plasmas with solid surfaces is at the core of many key technologies of the 21st century. Plasma treatment of thermolabile surfaces improves plastics by providing wear protection or better barrier properties. Plasma coatings on metal substrates provide corrosion resistance against aggressive chemicals or serve as hard coatings to extend the lifetime of tools. Plasma etching of semiconductors is the workhorse of microelectronics and the driver for being able to follow Moore's law over the past few decades; the whole field of nanotechnologies relies heavily on the capabilities of plasma processes. All this is linked to the nonequilibrium character of plasmas, where the energy is invested in ionization and dissociation of species and not necessarily in heating of the whole reaction volume. Thereby, the surfaces can remain cold while the surface reaction is triggered by incident reactive species. These heterogeneous surface reactions, however, are very complex because they may be governed by various synergisms and antisynergisms among the particles (ions, electrons, photons, and radicals) interacting at a growing or etched surface. The unraveling of these mechanisms directly in plasma experiments is difficult because all processes may occur simultaneously and any separation may remain ambiguous. This can be resolved by means of particle beam experiments using quantified beams of different species in an ultra-high-vacuum (UHV) environment with independent control of their fluxes and energies. Thanks to the independent control of the particle sources, synergistic effects can be identified. The most famous example of such a particle beam approach is the pioneering work of Coburn and Winters on reactive etching of silicon. 1,2 They showed that the etch rate of silicon or silicon oxide is significantly enhanced if reactive fluorine species and ions impact on the surface simultaneously.
They coined the expression chemical sputtering to describe a process in which either the chemical reaction creates an intermediate at the surface, which is then sputtered by the ions due to its lower surface binding energy, or the ions damage the surface and make it accessible to the incident reactive neutrals. The most direct proof of the chemical sputtering process is the observation of a significant time delay, in the range of milliseconds, between an incident ion and the desorbing species, caused by the out-diffusion of the etch products. Such a time delay was directly observed in modulated beam experiments for the etching of silicon by low-energy ions and fluorine atoms. 2 The threshold for chemical sputtering is lower than that for physical sputtering since no momentum reversal of the incoming projectile has to take place. The absolute erosion yield is much higher because the erosion products do not only originate from the physical surface, as in the case of physical sputtering, but are also formed within the whole penetration range of the incident ions. The ion-radical synergisms in silicon-containing plasmas are well studied. Here, we summarize similar experiments for an organic system, the growth and etching of hydrogenated amorphous carbon films, as well as for an inorganic system, the oxidation of metals. (1) Hydrogenated amorphous carbon films are used in very different contexts, for example as diamond-like coatings for wear protection, or they form naturally at the first wall of a nuclear fusion experiment when the graphite tiles interact with the hydrogen fusion plasma. The interaction of plasmas with hydrogenated amorphous carbon films is also an example of the atomistic processes during plasma treatment of polymers or of the interaction of plasmas with biological systems, which are organic interfaces on the atomistic scale. Finally, beam experiments have shed light on the effects of ions, metastables, and UV photons during plasma sterilization and chemical sputtering of spores. 3 (2) Oxidation of a metal target during magnetron sputtering (MS) may lead to target poisoning and a strong hysteresis of the operating parameters, voltage and reactive gas flow, during MS. Therefore, the understanding of the effects of ions, radicals, reactive neutrals, and UV photons at a metal target surface is crucial for the development of reactive sputtering models. 4 A prominent example is the ion-enhanced oxidation and the ion-induced secondary electron emission (SEE) from metal and metal oxide targets, which can be described by an extension of Berg's magnetron sputter hysteresis model. 5,6 All these systems and mechanisms are elucidated by using a particle beam experiment to mimic the plasma-surface interface. Thereby, the elementary input parameters of plasma-surface models can be measured unambiguously. It clearly shows that the ion-neutral synergisms at the plasma-surface interface are ubiquitous, as already pioneered by Coburn and Winters. II. BEAM EXPERIMENT SETUP Generally, beam experiment setups are separated into a particle beam chamber and a load-lock for the samples. The samples are then transferred into the particle beam chamber without breaking vacuum. The particle beam reactor is a UHV chamber equipped with several particle sources for the production and irradiation of known fluxes of different species. 7,8 Figure 1 shows a sketch of a beam experiment used by Jacob et al. 7 Hot capillaries and Evenson (plasma) sources are used to generate atom and radical beams from molecular precursors.
Ion guns may consist of ECR-based plasma sources with ion optics to obtain energetic ion beams or, in more sophisticated setups, of an ion source equipped with a Wien filter for mass selection, a beam decelerator, and a deflection system to discriminate fast neutrals originating from charge exchange collisions. The radical and ion beams interact with the sample usually at normal incidence or at an angle of incidence of 45°. A base pressure in the 10⁻⁷ Pa range is reached after bake-out. The working pressure usually lies between 10⁻³ and 3 × 10⁻² Pa, depending on the gas throughput. A large variety of in situ diagnostics are used in beam experiments to monitor the heterogeneous surface processes during particle bombardment. The optical properties of the samples during beam irradiation can be determined by optical in situ real-time ellipsometry. On the other hand, Fourier-transform infrared spectroscopy (FTIR) in reflection mode provides the density of active chemical groups. Hence, creation or etching rates of these chemical groups shed light on the chemical state of the sample surface. Mass variation rates in real time are obtained using a quartz crystal microbalance (QCM), so that surface coating and sputtering can be modeled through flux balance equations including sputter yields and sticking coefficients. The yields of SEE can be measured using a special electrostatic collector consisting of biased coaxial electrodes. In addition, classical in situ diagnostics providing chemical composition and crystalline structure data, such as x-ray photoelectron spectroscopy (XPS), Auger electron spectroscopy, and reflection high-energy electron diffraction, are also usually employed during beam experiments. The composition of the particle beams can be characterized by quadrupole mass spectrometry. The ion flux and energy distribution of the ion beams are measured routinely using a Faraday cup and a retarding field energy analyzer, respectively. III. EXAMPLES AND DISCUSSION A. Hydrocarbon thin film growth from CH₃ and H Amorphous hydrogenated carbon (a-C:H) films are frequently used for wear-resistant applications. They are deposited from glow discharges using a hydrocarbon precursor gas such as methane or acetylene. In the case of methane as the source gas, atomic hydrogen and CH₃ radicals represent the dominant growth species. 9,10 Diamond deposition is a variant of carbon deposition from discharges using a mixture of a few percent methane in hydrogen. It is believed that the microscopic growth mechanism consists of the adsorption of CH₃ at free surface sites, which are created via the abstraction of surface-bonded hydrogen by incoming atomic hydrogen. [11][12][13][14][15] Such a plasma process is investigated in a beam experiment using quantified beams of H atoms and CH₃ radicals, as illustrated by the growth or etch rate in Fig. 2(c) for the different combinations of the H and CH₃ fluxes shown in Figs. 2(a) and 2(b). One can clearly see that the growth rate is only high when both species interact with the surface simultaneously. The sticking coefficient of CH₃ under the "hydrogen beam on" condition is 10⁻². At point 2.2, the atomic hydrogen beam is switched off and the growth rate drops significantly although the CH₃ radical flux remains constant. The sticking coefficient under "H beam off" conditions is 10⁻⁴. This proves that CH₃ adsorption is strongly enhanced by a simultaneous flux of atomic hydrogen.
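The "hydrogen beam on/off" sticking coefficients quoted above translate directly into a simple flux-balance estimate of the growth rate, of the kind used to interpret the QCM and ellipsometry signals. The sketch below is an illustrative calculation only; the flux value and the carbon density used to convert to a thickness rate are assumed numbers, not values from the article.

```python
# Illustrative flux-balance estimate of a-C:H growth from a CH3 beam,
# comparing the "H beam on" (s = 1e-2) and "H beam off" (s = 1e-4) cases.
GAMMA_CH3 = 1.0e15      # assumed CH3 flux in cm^-2 s^-1
N_C = 9.0e22            # assumed carbon number density of the film in cm^-3

def growth_rate_nm_per_min(sticking, gamma_ch3=GAMMA_CH3, n_c=N_C):
    """Deposited carbon per area and time divided by the film's atomic density
    gives a thickness growth rate; 1 cm = 1e7 nm, 1 min = 60 s."""
    atoms_per_cm2_s = sticking * gamma_ch3
    return atoms_per_cm2_s / n_c * 1e7 * 60

print("H beam on :", growth_rate_nm_per_min(1e-2), "nm/min")
print("H beam off:", growth_rate_nm_per_min(1e-4), "nm/min")
# The two-orders-of-magnitude difference mirrors the drop in growth rate
# observed when the atomic hydrogen beam is switched off.
```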
The synergism between CH₃ and H atoms explains very well the growth rate of amorphous hydrogenated carbon thin films and that of diamond deposition. It was also shown, however, that even without any activation of the surface, the sticking coefficient of CH₃ is 10⁻⁴. This result had dramatic consequences for the design of future nuclear fusion reactors. For a long time, the first wall of a nuclear fusion experiment consisted of graphite tiles due to their heat resistance and compatibility with a hydrogen fusion plasma at millions of K. If a carbon atom enters such a fusion plasma due to sputtering of the graphite first wall, the core fusion plasma performance is not degraded. This is in contrast to the use of metals with high nuclear charge as the first wall. In a future nuclear fusion reactor, the hydrogen isotopes tritium and deuterium will be used. For safety reasons, an upper limit on the overall tritium content in the machine has to be ensured. Tritium retention may occur either as gas, bonded in surface layers, or dissolved in the metal surfaces. The interaction of a tritium-containing fusion plasma with the graphite tiles will eventually cause the formation of CT₃. These CT₃ radicals are also expected to have a low sticking coefficient of 10⁻⁴; they may survive many wall collisions and are, therefore, able to reach very remote locations of a nuclear fusion reactor. This corresponds to a permanent retention of tritium because deposited C:T layers cannot easily be removed from remote flanges or pump ducts. Consequently, the allowed limit for the tritium content in a nuclear fusion reactor may be reached because eventually all tritium is bound in C:T layers at inaccessible locations. This is not tolerable, so the nuclear fusion community eventually abandoned the predominant use of carbon as the first wall material and switched to tungsten and beryllium for the next generation of fusion reactors. B. Hydrocarbon thin film etching by ions and reactive neutrals-Chemical sputtering Synergisms at the plasma-surface interface play a role not only in film growth of a-C:H but also in etching. It is known that a-C:H film erosion by hydrogen ions can occur even at low substrate temperatures and low ion energies. This is surprising because chemical erosion by H atoms can be excluded, since it is a thermally activated process that is negligible at low temperatures. Also, physical sputtering can be excluded because the ion energies are below the threshold for this sputtering mechanism. 17 This can be resolved by using beam experiments exposing a-C:H layers to argon ions and a beam of hydrogen atoms. Figure 3 illustrates that sputtering by argon ions is drastically enhanced if an additional flux of atomic hydrogen is present. The erosion rate is far higher than the sum of the etching rates for atomic hydrogen and Ar⁺ ions alone. The prediction for chemical etching is shown as the dotted line in Fig. 3, and the prediction for physical sputtering is shown as the solid line in Fig. 3. In addition, significant film etching is observed even below the threshold energy for sputtering by argon ions alone. A direct analysis of the surfaces reveals that the hydrogen content at the surface remains high in the case of a simultaneous flux of Ar⁺ ions and H atoms, while it decreases in the case of bombardment with Ar⁺ ions only.
This indicates that the additional flux of atomic hydrogen leads to the incorporation of hydrogen in the film, thereby compensating for the release of bonded hydrogen caused by the incident Ar⁺ ions. The large difference between Y(Ar⁺) and Y(Ar⁺|H) further shows that a C:H film surface with a high hydrogen content is more susceptible to sputtering. In the case of physical sputtering, the momentum of an incident projectile has to be reversed before it transfers kinetic energy to a surface atom or group to overcome the surface binding energy. In contrast, film sputtering at ion energies below the threshold for physical sputtering is explained by the process of "chemical sputtering," as defined in detail by Winters et al. 2,19 According to this definition, chemical sputtering of C:H films by Ar⁺ ions and H atoms can only occur if ions and H atoms interact simultaneously with the surface: incident ions create broken bonds within the collision cascade. These broken bonds are instantaneously passivated by the abundant flux of atomic hydrogen. This leads to the formation of stable hydrocarbon molecules underneath the surface. They finally diffuse to the surface and desorb. Ion-induced surface processes are not only relevant for chemical sputtering but may also trigger film growth: 20 incident ions displace surface-bonded species within the collision cascades and thereby create dangling bonds at which incident radicals may chemisorb. Such a process is especially important for film growth at low ion energies, where the physical sputtering rate can be negligible and the incident radicals have a low sticking coefficient. Film growth is, therefore, very susceptible to any additional surface activation. A good agreement between measurements and modeling was found by Hopf et al. for a-C:H film growth, 21 who showed that a helium ion beam activated the surface and facilitated the chemisorption of incident CH₃ radicals at dangling bonds with a probability of unity. The best agreement was found if one also includes the recombination of adjacent dangling bonds at the surface, corresponding to the transition between sp³ and sp² bonds. The quantitative interpretation of ion-induced film growth may, however, be more difficult because incident ions not only cause chemical sputtering and surface activation but also modify the plasma-exposed surface within the penetration depth of the ions. For example, an intense ion bombardment not only depletes the surface of hydrocarbon films of hydrogen, which makes them more resistant to chemical and physical sputtering, but also reduces the ion-induced formation of dangling bonds as adsorption sites for incident radicals. Since both the rate of ion-induced etching and that of ion-induced film growth decrease, it remains difficult to isolate one effect from the other. C. Hydrocarbon thin film etching-Chemical sputtering of spores The ion-neutral synergism during the etching of a-C:H layers by incident argon ions and hydrogen atoms can also serve as a model system to understand the interaction of plasmas with biological systems in plasma sterilization. Plasma sterilization is a modern technology for inactivating even very resistant germs, 22-24 very often using argon as the plasma gas with admixtures of nitrogen, hydrogen, or oxygen. [25][26][27] The main advantage of plasma sterilization is its ability to sterilize even thermolabile medical tools made of plastics.
The validation of a sterilization technique is usually based on the demonstrated inactivation of bacterial endospores, which are known to be a very resistant biological system. The inactivation itself is caused by UV photons that induce DNA strand breaks within seconds of exposure to a plasma. 28,29 In realistic scenarios, however, the sterilization efficiency is largely reduced because the UV radiation is shadowed by multilayered stacks of spores, which makes an additional plasma-induced chemical or physical etching of the biological systems essential. Such plasma etching, however, needs to be mild enough not to harm the object being sterilized. The effect of oxygen atoms, oxygen molecules, and incident argon ions on the ability to inactivate and to etch Bacillus atrophaeus spores is determined in the beam experiment. SEM micrographs in Fig. 4(a) show that endospores exposed to Ar⁺ ions at 200 eV exhibit only a slightly altered surface texture. Apparently, Ar⁺ ion bombardment at 200 eV does not cause any significant erosion of the spore coat. This can be explained by the energy dependence of the physical sputtering yield Y(Ar⁺) of hydrocarbon compounds by argon ions. The physical sputtering process becomes significant only for ion energies well above 200 eV. 30 Endospores exposed to a simultaneous flux of argon ions and oxygen molecules exhibit, however, a very different appearance [see Fig. 4(b)]. Although the absolute size of the spores did not change noticeably, the spore coat became porous, showing etch channels with an estimated depth of approximately 100 nm after 60 min. The sputtering yield can be estimated from the known argon ion flux of 1.8 × 10¹⁴ cm⁻² s⁻¹ to be Y(Ar⁺|O₂) ≈ 1, which is well above the expected value for a pure physical sputtering process [Y(Ar⁺) ≈ 0.06 for an Ar⁺ ion energy of 200 eV]. The high sputtering yield is again explained by the simultaneous impact of ions and oxygen molecules leading to the process of chemical sputtering. 20 The repeated ion-induced bond breaking followed by reaction with oxygen molecules leads, below the surface, to the formation of presumably CO, CO₂, and H₂O as volatile components. These reaction products diffuse to the surface and desorb. 1 As a result, etching occurs, making plasma sterilization of multilayered biological samples a viable method for the health care industry. D. Plasma treatment of polymers-Chemical sputtering The interaction of ions and radicals with a-C:H films is also a model system for plasma-surface treatment of polymers for either lithography purposes (photoresist) or hydrocarbon removal (plasma cleaning). [31][32][33][34] For example, argon plasmas are extensively used to activate polymer surfaces in order to enhance their adhesion with subsequently deposited layers. 31 On the other hand, the addition of oxygen to the gas mixture facilitates the generation of polar groups on the treated surface, thereby increasing the surface energy. [35][36][37] Another expected effect of oxygen in the plasma is an increase in the etching rate due to chemical sputtering of the hydrocarbon network, which produces bond scission reactions leading to the creation and desorption of volatile CO₂ and H₂O molecules.
Surface modifications of polyethylene terephthalate (PET) and polypropylene (PP) by Ar ions, oxygen atoms and molecules, and UV photons have been investigated in beam experiments with known fluxes of argon ions and oxygen neutrals to mimic plasma treatment: (1) in the case of PET, the addition of oxygen to the incident argon ion flux does not enhance the etching rate but only changes the surface composition, as evidenced by in situ FTIR analysis and subsequent contact angle measurements. 38 Figure 5 shows how the normalized infrared reflectivity R/R₀ for the removal of the C=O groups (1720 cm⁻¹) and CHₓ groups (2960 cm⁻¹) is affected by the different bombarding conditions. In fact, the sputtering of PET is dominated by chemical sputtering with an internal source of reactive species. As a consequence, any addition of an external source of reactive species cannot alter the already high sputter yield, and the addition of the oxygen beam does not affect the etching rate. The yields at low and high ion energies have been compared with transport of ions in matter (TRIM) calculations. The measured yields remain higher than the TRIM yields irrespective of the ion energy because of the presence of the intense internal source of reactive species. This source of reactive species is represented by the displaced hydrogen and oxygen atoms in a collision cascade. 38 [FIG. 4. SEM micrographs of B. atrophaeus exposed between 15 and 120 min to (a, first column) argon ions (j_Ar⁺ = 1.8 × 10¹⁴ cm⁻² s⁻¹) at 200 eV, to (b, second column) a combined flux of argon ions (j_Ar⁺ = 1.8 × 10¹⁴ cm⁻² s⁻¹) at 200 eV and O₂ molecules (j_O₂ = 4.5 × 10¹⁵ cm⁻² s⁻¹), and to (c, third column) a combined flux of argon ions (j_Ar⁺ = 1. …] (2) Plasma treatment of PP shows a different scenario. The source of internal reactive species is reduced since no oxygen is present. On the other hand, the structure of PP is more sensitive to UV photons than in the case of PET, which leads to a synergistic effect between argon ions and UV radiation: the sputter yield by Ar⁺ ions is maximized at 200 eV. 4 This combined action of ions and photons toward a very efficient etching is consistent with the measured IR spectra of the modified top layer, which fit perfectly with untreated PP and indicate that net etching without chemical conversion takes place at this ion energy. 39 The chemical sputtering of PP films by combined bombardment of Ar ions, UV photons, and oxygen neutrals is discussed in the following. The cross-linking of PP produced by the UV photons, whose penetration depth is some tens of nanometers, is in competition with the etching induced by the same photons and the incident argon ions. Also, the dense top layer caused by energetic ion bombardment (ca. 2-3 nm of amorphous carbon) introduces an additional barrier that reduces chemical sputtering by oxygen and argon ions at the PP surface. This explains why the etch rate only increases upon addition of oxygen at low ion energies (ca. 20 eV), whereas the rate is found to be approximately constant at higher ion energies (over 200 eV). This explanation based on a hardening effect at high ion energies is in agreement with in situ FTIR measurements, which show a cross-linking of the polymer due to selective etching of methylene groups and/or exomethylenic bond formation by oxygen atoms. 39 E.
Oxidation mechanism during reactive magnetron sputtering Finally, beam experiments can also be used to analyze inorganic systems, such as the interaction of plasmas with metal surfaces during magnetron sputtering of metals or of oxides and nitrides. Plasma deposition processes using reactive magnetron sputtering of metals are of major importance for many present-day technologies. 40,41 The addition of oxygen to the discharge leads to the formation of a compound metal oxide on the growing film surface. It also causes hysteresis effects in sputtering processes due to the oxidation of the target surface, whose state can be described in terms of rate equations providing fundamental parameters like sticking coefficients and sputter yields (Berg model). 42,43 The fundamental surface processes in the Berg model are oxygen chemisorption, reactive ion implantation, sputtering of the metal and the oxide, and knock-on implantation of oxygen by the ion bombardment, the latter explaining the ion-enhanced oxidation of aluminum during reactive magnetron sputtering. 5 The sticking coefficient of ground-state oxygen atoms and molecules on aluminum is relatively low (0.015). However, the measured effective sticking values during simultaneous bombardment of Al with Ar⁺ beams and O/O₂ species remain significantly larger than zero under steady-state conditions. Such a phenomenon is a signature of ion-enhanced chemical oxidation by oxygen displacement, which can be linked to the implantation of oxygen due to the impact of energetic argon ions. The probability of this event is quantified by the so-called knock-on implantation coefficient. 44 Figure 6 shows the dependence of the sputter rate on the argon flux at different ion energies, together with the corresponding model fits for different knock-on coefficients. The synergistic effect of simultaneous irradiation with ions and oxygen is modeled with an extension of the Berg model, as illustrated by the lines in Fig. 6. This result demonstrates the reliability of this model for studying heterogeneous surface processes. The fact that the knock-on coefficient of oxygen turned out to be of the order of unity and higher reveals that surface activation by ion bombardment is a very efficient mechanism to increase aluminum reactivity with oxygen atoms and molecules. In parallel to the QCM experiments, the knock-on implantation of oxygen into the Al subsurface was investigated by in situ FTIR. 45 There, the oxide absorption band from ion-bombarded oxidized Al surfaces was compared with the oxide absorption band of an Al surface thermally saturated with oxygen. The excess signal in the former case demonstrates the significant subplantation of oxygen atoms under the metal top layer during reactive sputtering of aluminum (Fig. 7). This result is consistent with the depth of the ion-enhanced oxidation of ion-treated Al surfaces measured using XPS and confirmed using a dynamic transport-of-ions computer code. 5 In contrast, other metals like chromium show that knock-on implantation of oxygen atoms is a weaker oxidation mechanism compared to, for example, dissociative chemisorption, which is reflected by a much higher sticking probability of oxygen on chromium. 46 The application of a retarding field on the samples by means of a counter-electrode provided the secondary electron yields of metals at low to medium Ar⁺ ion energies (500-2000 eV), which were of the order of 0.1. 6,47
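The extended Berg-model description given earlier in this section can be illustrated with a small rate-balance sketch for the oxidized fraction of the target surface. This is a schematic, hedged version of such a balance (thermal chemisorption plus an ion-assisted knock-on channel versus sputter removal of the oxide); the functional form and all parameter values are illustrative and may differ from the model actually fitted in Fig. 6.

```python
# Schematic steady-state balance for the oxidized target fraction theta in a
# Berg-type model extended with knock-on implantation (all values illustrative).
def oxidized_fraction(flux_O, flux_ion, alpha=0.015, k_knock=1.0, Y_ox=0.03):
    """flux_O: O/O2 flux, flux_ion: Ar+ flux (both in cm^-2 s^-1).
    Oxidation gain: alpha * flux_O (chemisorption) + k_knock * flux_ion (knock-on);
    loss: Y_ox * flux_ion (sputter removal of the oxide)."""
    gain = alpha * flux_O + k_knock * flux_ion
    loss = Y_ox * flux_ion
    return gain / (gain + loss)

for j_ion in (1e13, 1e14, 1e15):
    with_knock = oxidized_fraction(5e14, j_ion)
    chem_only = oxidized_fraction(5e14, j_ion, k_knock=0.0)
    print(f"Ar+ flux {j_ion:.0e}: theta = {with_knock:.2f} (with knock-on) "
          f"vs {chem_only:.2f} (chemisorption only)")
# With a knock-on coefficient of order unity, the target stays strongly oxidized
# even at high ion fluxes, where the low thermal sticking alone would not suffice.
```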
The addition of oxygen molecules to the beam reactor substantially enhanced the ion-induced emission of secondary electrons, the effect being most remarkable in the case of aluminum oxide. In general, this result was interpreted as the well-known effect of higher electron emission from oxidized surfaces. In addition, however, oxidized aluminum provided an energetic component of secondary electrons associated with Auger transitions, which raised the yield values abnormally above unity. 48 IV. SUMMARY AND OUTLOOK In this article, five examples of synergistic surface reactions at the plasma-surface interface are mimicked using beam experiments. Thereby, quantitative information on sticking coefficients, chemical sputtering yields, and secondary electron emission coefficients is derived. These input parameters for plasma modeling and plasma-surface modeling are of paramount importance because the heterogeneous surface reactions are usually the big "unknowns" in the description of technological plasma processes in general. Besides the determination of fundamental parameters, beam experiments are also able to elucidate mechanisms such as chemical sputtering, as introduced by the pioneering work of Winters. It is now known that such synergisms are ubiquitous in many systems and explain not only hydrocarbon film etching and growth but also the interaction of the plasma with bacteria and cells. A task for the future will be to expand the beam-experiment approach to many further systems in order to form a quantitative basis for the understanding and description of the plasma-surface interface. In the coming years, the roadmap for beam experiments can be connected to further fields, beyond the ones described in this article, that are interested in synergistic effects among diverse plasma species incident on solids: (1) Sputtering phenomena on complex nonplanar surfaces, such as nanopatterns obtained by nanolithography techniques. Plasma micro-/nanopatterning could thus be monitored using beam experiments. Also, control of target defects, crystal phase, roughness, and porosity is required to mimic exactly the real surface processes during plasma exposure. (2) Study of ion-surface interactions in two-dimensional materials like graphene for applications such as functionalization and decoration. The role of the site occupied by, e.g., hydrogen on upper and lower surfaces and at edges in, for example, nanoribbons is the subject of intense research in plasma engineering that could be approached with beam experiments. 49,50 (3) The existence of prebiotic molecules (e.g., glycine) exposed to ionized regions in the extraterrestrial environment is a central topic in astrobiology. The study of ion and electron beam irradiation of biomolecules will provide valuable information to explain the presence and persistence of certain prebiotic elementary species in outer space, which could be connected to the origin of life in the Universe. 51
6,736.8
2017-05-10T00:00:00.000
[ "Physics" ]
Prediction and Limitations of Noise Maps Developed for Heterogeneous Urban Road Traffic Condition: A Case Study of Surat City, India Road traffic noise pollution has been recognized as a serious issue which affects human health as well as urban regions. Noise maps are very beneficial for identifying the impact of noise pollution. A noise mapping study was performed to study the propagation of noise in a tier-II city along with field measurements. The noise maps were developed using a computer simulation model (SoundPLAN essential 4.0 software). Noise prediction models like the U.K.'s CoRTN, Germany's RLS-90, and their modified versions, which are intended for homogeneous road traffic conditions, cannot be successfully applied to the heterogeneous road traffic conditions of India. In a developing country like India, road traffic noise pollution depends on the composition of the heterogeneous traffic volume, variations in road geometry, honking, un-authorized parking, and the varying density of buildings on either side of the road. These traffic compositions contain vehicles of different sizes, dimensions, and speeds. Because of fluctuating speeds, lack of lane discipline, and un-authorized parking on main road lanes, honking events become inevitable, which changes the urban soundscape of nations like India. Analysis of the noise maps showed that horn honking due to un-authorized parked vehicles contributed an additional increase of up to 11 dB(A), which is quite significant. Introduction In today's era, urban roads in India are loaded with heavy traffic composition linked with large groups of buildings spreading outside the original city zones [1]. Traffic noise has been a main issue in countries like India. Traffic noise is the most important source of environmental noise pollution in the urban region; therefore, many countries have introduced noise emission limits for vehicles and legislation to control traffic noise [2]. Prediction of noise pollution due to the traffic composition of such large cities is a topic of great interest [3]. Noise due to transportation and infrastructure poses a formidable challenge to the environmentalist [4]. Noise mapping of the urban region is a very beneficial method in view of noise control and sustainability [5]. Noise maps can be beneficial in classifying noise sources [6]. In a developing nation like India, urban environments show varying characteristics of traffic as compared to a developed nation [7]. Noise prediction models like the U.K.'s CoRTN and Germany's RLS-90 can be very useful for urban road design and assessment of current traffic noise conditions [8]. Noise indices such as Leq, L10, L50, L90, etc., set by national government bodies, are required by the prediction models to predict the sound pressure levels [9]. Many software packages are available for noise mapping, and these packages have basic tools for environmental mapping [10]. In the present study, noise mapping has been done using SoundPLAN essential 4.0 software for two different zones of Surat city. Study Area The study area selected for noise mapping was Surat city (a tier-II city). The city covers an area of about 326.515 sq. km and is divided into seven zones. These seven zones include different activities such as business, residence, commerce, and industry.
Out of the seven zones, the South-West zone and the West zone were selected for the study because these zones include diversified business, residential, and commercial activities. Zones with land use involving industrial activity were kept beyond the scope of this research work [11]. In the West zone, three road stretches of 1 km each, forming a triangle, were selected (see Fig. 1). These zones contain all types of activities which can be affected by vehicular noise pollution. Activity refers to schools, colleges, hospitals, commercial areas, and residential areas. Five different locations of Surat city were chosen for the field study (Fig. 1). The survey location along the left side of the stretch from Gujarat Gas circle to Dhanmora complex is named A1 and that along the right side B1. In the same way, the survey location along the left side of the stretch from Gujarat Gas circle to Rushbh Tower is named B2 and that along the right side A2. Also, the arterial road connecting these two arterial roads is named C. In the South-West zone, the selected study area was a 5 km stretch of the Athwa-Dumas corridor and a 4 km stretch of the Udhna-Magdalla corridor, because these corridors contain all types of activities which can be affected by vehicular noise pollution (see Fig. 2). Activity refers to schools, colleges, hospitals, commercial areas, and residential areas. Noise monitoring was done at 31 locations in and around these two arterial roads, along with traffic volume and traffic speed studies. Survey locations along the left side of the Athwa-Dumas corridor are named A1, A2, ... , A8 and those along the right side B1, B2, ... , B8. In the same way, survey locations along the left side of the Magdalla-Udhna corridor are named A9, A10, ... , A14 and those along the right side B9, B10, ... , B14. Also, the sub-arterial roads connecting these two arterial roads are named C1, C2, and C3. Methodology and Data Collection Noise monitoring was done at 5 locations in the West zone and 31 locations in the South-West zone on arterial roads of Surat city, together with traffic volume, traffic speed, horn counts, and un-authorized parking counts. Residential as well as commercial buildings are located just on the roadside, and these buildings are a minimum of 3 storeys and a maximum of 13 storeys. Measurements were carried out from Monday to Friday, the working days. Noise data were collected using the KIMO DB 300/2 automatic sound level meter (see Fig. 3) for a 24-hour duration. Monitoring was divided into two parts as per CPCB guidelines: day time (6:00 to 22:00 hrs) and night time (22:00 to 06:00 hrs) [12]. A calibrator was used to calibrate the sound level meter for each measurement. The sound level meter was mounted on a tripod 1.2 m above floor level. Vehicles were divided into five categories: 2-wheelers (motorcycles, mopeds), 3-wheelers (autorickshaws), 4-wheelers (cars), buses, and trucks. The number of vehicles that crossed the noise-measurement point from either direction was counted, and vehicles were recorded by videography [13,14]. For each road stretch, over the day time (6:00 to 22:00 hrs) and night time (22:00 to 06:00 hrs), total horn events and un-authorized parked vehicles were counted manually. The speeds of individual vehicles were also taken with a hand-held radar gun (see Fig. 4) along with the noise level.
The data extraction process consists of four parts: noise level data, traffic data (count and speed), number of horns, and un-authorized parking data. Noise levels (LAeq) and other noise indices (L10, L50, L90, L95) were collected and stored in the automatic precision sound level meter, which automatically generates a complete data sheet of all necessary noise data and statistics in a user-friendly way. Noise Mapping Process For developing a road traffic noise model, it is essential to associate noise levels with sound generation, sound propagation, and sound reception [15,16]. Every model needs significant features such as the source of noise, the path of noise propagation, and the receiver at which it is observed [17,18]. The noise source data consist of the vehicle flow rate, average vehicle speed, road gradient, and the characteristics of the road [19][20][21]. The propagation path comprises the distance of the receiver from the source, the average height of propagation above the road surface, the road surface characteristics, and the angle of view of the source from the receiver [22,23]. The geometric features of roads and buildings were measured manually [24]. Buildings were developed manually on the bitmap. The heights of the buildings were taken as 3.5 m per floor. Residential as well as commercial buildings were located just on the roadside, and these buildings were a minimum of 3 storeys and a maximum of 13 storeys. The traffic data were acquired by on-site observation for traffic counts. Vehicle speeds were taken using a radar gun and were observed to be between 35 and 45 kmph during day and night. A further assumption in the computation was that all road and motorway surfaces are constructed of impervious bitumen and that all the measurements were taken only on working days, i.e., Monday to Friday [25][26][27]. To develop noise maps, country-specific road calculation models such as RLS-90 from Germany, CoRTN:88 from the UK, NMPB:2008 from France, TNM2.5 from the US, etc., are available in SoundPLAN essential 4.0 software [28]. Two of these noise propagation models, RLS-90 and CoRTN:88, are useful for developing road traffic noise maps because they include urban road inventory features. Data required for mapping are noise data (LAeq,24hr, L10, L90, Lden, Lmax, Lmin), road inventory data, geometric features of the mapping area, category-wise traffic counts, category-wise vehicle speeds, and meteorological data such as wind velocity, humidity, temperature, and air pressure [29]. The digital ground model was developed using the tools available in SoundPLAN essential 4.0 software. The file formats supported for developing base ground models are bitmap, DXF, ESRI Shapefile, and ASCII. Bitmaps of the South-West zone and West zone of Surat city were imported and digitized in SoundPLAN. All the features of buildings were drawn and identified using tools available in SoundPLAN essential 4.0 such as the road-making tool, building-making tool, receiver tool, calculation-area selection tool, etc. Around 1600 buildings in the South-West zone and 400 buildings in the West zone of Surat city were drawn manually, and the ground elevation for all buildings was taken from Google Earth Pro. These buildings were a minimum of 3 storeys and a maximum of 13 storeys. The height of one floor was taken as 3.50 m, so the minimum building height is 10.5 m and the maximum is 45.5 m. Traffic volume data, speed, and road inventory were provided as per the model requirements.
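For reference, the noise indices extracted above can be computed from a series of A-weighted sound level samples as shown below. This is a generic calculation, not the processing chain of the KIMO meter or of SoundPLAN, and the sample values are invented.

```python
# Compute LAeq and percentile noise indices from A-weighted level samples, in dB(A).
import numpy as np

levels = np.array([62.1, 65.4, 70.2, 68.9, 74.5, 66.3, 63.8, 71.0])  # illustrative samples

# Energy-equivalent continuous level: LAeq = 10*log10(mean(10^(Li/10))).
laeq = 10 * np.log10(np.mean(10 ** (levels / 10)))

# Ln is the level exceeded n% of the time, i.e. the (100 - n)th percentile.
l10, l50, l90 = np.percentile(levels, [90, 50, 10])

print(f"LAeq = {laeq:.1f} dB(A), L10 = {l10:.1f}, L50 = {l50:.1f}, L90 = {l90:.1f}")
```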
Receiver positions were input into the base map, and their noise limits were identified as per the Central Pollution Control Board (CPCB) guidelines (see Tab. 1). Noise propagation models were selected as per requirement. The calculation area (affected area) was selected on the map. After the end of this lengthy procedure, noise maps were generated. Tabs. 2 and 3 depict the difference between the measured LAeq and the LAeq predicted by the different models. CoRTN:88 assumes a bituminous road surface and vehicle speeds greater than 75 kmph, whereas on Indian urban roads such vehicle speeds are never observed. This is reflected in the prediction values. Tab. 4 depicts a difference of up to 9 dB(A) in the South-West zone using the RLS-90 model when compared to the measured data. The results show a significant difference from the measured noise levels, which is due to the fact that these inbuilt models of SoundPLAN essential 4.0 inherently assume homogeneous traffic conditions with higher speeds, wider roads, and no congestion, whereas Indian traffic conditions are heterogeneous, with lower speeds and narrower roads. Residential as well as commercial buildings are located just on the roadside, and enough parking space is not available. Therefore, people park vehicles on the main road. Also, due to the widely varying vehicle dimensions, composition of vehicles, vehicle speeds, lack of lane discipline, and un-authorized parking on the main road lanes in heterogeneous road traffic conditions, horn honking becomes inevitable. It changes the soundscape of the city considerably as compared to other cities of developed countries. Therefore, a model considering the factors of Indian urban conditions, such as heterogeneity, un-authorized parking, and horn honking, should be developed, which may bring down the difference between predicted and measured values. Noise prediction models developed for highways and expressways cannot be applied to urban road traffic noise conditions. Hence, RLS-90, which is inbuilt in SoundPLAN essential 4.0 software, was selected. RLS-90 takes into account many variables responsible for noise generation/mitigation, viz. hourly traffic flow, vehicle speed, road type, road geometry, and obstacles. This study involved exhaustive data collection and extraction for the preparation of noise prediction models/maps, with around 0.5 million noise readings. However, in spite of this quantum of data, the error between predicted and measured noise levels is around 4 to 11 dB(A). On critical analysis, it can be concluded that the relatively high error is due to the fact that RLS-90 is a prediction model developed for homogeneous traffic conditions, whereas in a country like India heterogeneous traffic conditions prevail, in strong contrast to western countries. Due to varied vehicle dimensions and composition, accelerating-decelerating speeds, lack of lane discipline, and un-authorized parking on roads, horn honking becomes inevitable. These two significant parameters, viz. the number of horn events and un-authorized parking, may be the leading cause of the higher error between observed and predicted noise values. Kalaiselvi et al. [2] applied a horn correction factor to the existing RLS-90 prediction model using the level of service of the road as the input parameter for horn honking. There are many reasons for horn honking.
After actual observation on many urban road stretches, it was found that the major reason for horn honking is the occupancy of the side kerbs by un-authorized parked vehicles. Hence, a new horn correction factor for un-authorized parked vehicles can be developed for the RLS-90 model, which would be an extension/upgrade of the RLS-90 model for heterogeneous traffic conditions; a sketch of what such an extended formulation could look like is given below.
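The sketch below illustrates the general idea of such an extended prediction. The base level follows the commonly cited RLS-90 expression for the mean level at 25 m (this should be verified against the standard), while the horn/parking correction term, its coefficients, and the function names are purely hypothetical placeholders for the kind of extension proposed above, not a calibrated model.

```python
import math

def rls90_base_level(q_hourly, heavy_pct):
    """Commonly cited RLS-90 mean level at 25 m (verify against the standard):
    Lm(25) = 37.3 + 10*log10(Q * (1 + 0.082 * p)),
    with Q = vehicles per hour and p = percentage of heavy vehicles."""
    return 37.3 + 10 * math.log10(q_hourly * (1 + 0.082 * heavy_pct))

def horn_parking_correction(horns_per_hour, parked_vehicles, k_horn=0.02, k_park=0.05):
    """Hypothetical additive correction in dB(A) for honking driven by un-authorized
    parking; the coefficients are placeholders, not fitted values."""
    return k_horn * horns_per_hour + k_park * parked_vehicles

q, p = 2400, 8  # illustrative hourly traffic volume and heavy-vehicle percentage
base = rls90_base_level(q, p)
corrected = base + horn_parking_correction(horns_per_hour=150, parked_vehicles=40)
print(f"base Lm = {base:.1f} dB(A); with horn/parking correction = {corrected:.1f} dB(A)")
```

Such a correction would, of course, have to be calibrated against measured data, for example the 4 to 11 dB(A) discrepancies reported above, before it could be used for prediction.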
3,169.6
2021-01-01T00:00:00.000
[ "Computer Science" ]
Positive Periodic Solution for Second-Order Nonlinear Differential Equations with Variable Coefficients and Mixed Delays In this paper, we study two types of second-order nonlinear differential equations with variable coefficients and mixed delays. Based on Krasnoselskii's fixed point theorem, existence results for positive periodic solutions are established. It should be pointed out that the equations studied here are more general. Therefore, the results of this paper have better applicability. Introduction The main purpose of this paper is to consider positive periodic solutions for two classes of second-order nonlinear differential equations with variable coefficients and mixed delays as follows: x''(t) + b(t)x'(t) + a(t)x(t) = f(x(t − δ(t))) + ∫₀^∞ k(s)h(x(t − s)) ds (1) and (Ax(t))'' + b(t)x'(t) + a(t)x(t) = f(x(t − δ(t))) + ∫₀^∞ k(s)h(x(t − s)) ds, (2) where a, b, δ ∈ C(R, (0, ∞)) are T-periodic functions, f, h ∈ C(R, R), c(t) ∈ C¹(R, R) is a T-periodic function with |c(t)| = 1, τ > 0 is a constant, and k(s) is a continuous and integrable function on [0, ∞) with ∫₀^∞ k(s) ds = 1. Equation (1) is a non-neutral second-order nonlinear differential equation which has received much attention. Wang, Lian, and Ge [1] studied a second-order differential equation with periodic boundary conditions and expressed its solutions in terms of a Green function G(t, s). They obtained G(t, s) > 0 for t, s ∈ [0, ω] if the following conditions are satisfied: (A₁) There are continuous ω-periodic functions a(t) and b(t) such that ∫₀^ω a(t) dt > 0, ∫₀^ω b(t) dt > 0, and a(t) + b(t) = p(t), b'(t) + a(t)b(t) = q(t) for t ∈ R; Obviously, this G(t, s) is too complex, and the conditions ensuring G(t, s) > 0 are too strong and cannot be easily used. Bonheure and Torres [2] studied the existence of positive solutions for a model scalar second-order boundary value problem where a, b, c > 0 are locally bounded coefficients and p > 0. For that problem, the authors also obtained the Green function, which can be used for studying the homoclinic solutions and bounded solutions of a second-order singular differential equation. However, it is inconvenient to use this Green function to study the periodic solutions of (1). In order to overcome the above difficulties, we use the order reduction method for studying periodic solutions of (1) in the present paper. For more results about second-order singular differential equations with variable coefficients and delays, see, e.g., [3][4][5][6][7] and the references cited therein. Equation (2) is a neutral second-order nonlinear differential equation. Periodic solutions of higher-order differential equations have a wide range of applications, and many researchers have conducted a lot of research on them. Liu and Huang [8] studied the existence and uniqueness of periodic solutions for a kind of second-order neutral functional differential equation. Lu and Ge [9] considered periodic solution problems for a kind of second-order differential equation with multiple deviating arguments. Luo, Wei, and Shen [10] investigated the existence of positive periodic solutions for two kinds of neutral functional differential equations. Arbi, Guo, and Cao [11] studied a novel model of high-order BAM neural networks with mixed delays in the Stepanov-like weighted pseudo almost automorphic space. Xin and Cheng [12] studied a third-order neutral differential equation. In [13], the authors considered the existence of periodic solutions for a p-Laplacian neutral functional differential equation by using Mawhin's continuation theorem.
For more recent results about positive periodic solutions of neutral nonlinear differential equations, see, e.g., [14][15][16][17][18]. We found that the existing results on positive periodic solutions mostly depend on Green functions and on the properties of the neutral operator. However, it is very difficult to obtain proper Green functions. In this paper, we develop some new mathematical methods for obtaining the existence of positive periodic solutions without using Green functions. It should be pointed out that, in 2009, we obtained an important result (see Lemma 1 below) on the properties of the neutral operator, which can easily be used to study periodic solution problems of functional differential equations. This paper is devoted to studying the existence of positive periodic solutions of Equations (1) and (2) by using Krasnoselskii's fixed point theorem and some mathematical analysis techniques. The main contributions of this paper are listed as follows: (1) Equations (1) and (2) in the present paper are more general than the equations considered in [1,4-7,15-18] and include the existing classical second-order differential equations as special cases. Therefore, the results of this paper are more general and more widely applicable. (2) Since it is very difficult to obtain Green functions of second-order nonlinear differential equations with variable coefficients, we develop new methods for overcoming this difficulty. Using an appropriate variable transformation, we transform a second-order equation into an equivalent lower-order system, so we do not need to compute the Green function. The research method of this paper is different from the existing research methods; see, e.g., [1,15-18]. (3) In 2009, we obtained the important properties of the neutral operator in [19]. In the past, we mostly used these properties to study the existence of periodic solutions. In this paper, we use them to study the existence of positive periodic solutions for the first time. The following sections are organized as follows: Section 2 gives the main lemmas. Section 3 gives the existence results for positive periodic solutions of Equation (1). Section 4 gives the existence results for positive periodic solutions of Equation (2). In Section 5, an example is given to show the feasibility of our results. Finally, Section 6 concludes the paper. Main Lemmas Definition 1 ([20]). Let X be a Banach space and K be a closed, nonempty subset of X. K is a cone if (i) αu + βv ∈ K for all u, v ∈ K and α, β ≥ 0, and (ii) u, −u ∈ K imply u = 0. Positive Periodic Solution of Equation (1) Let where ξ > 0 is a constant. Then, Equation (1) is changed into the following system: Since system (7) is equivalent to (1), we just have to study the existence of positive periodic solutions to system (7). Then, X is a Banach space. Throughout this paper, we need the following assumption: where ξ > 0 is defined by (7). Let where θ = min{ǧĝ ,ˇ¯ĥ¯h }, and ǧ, ĝ, ȟ, ĥ are defined by (10) and (11). Integrate (7) from t to t + T and obtain that where It is easy to see that By assumption (H1), we have For each z = (x, y)^T ∈ X, define an operator Φ : where Φz = (Φ₁z, Φ₂z)^T, and g(t, s), h(t, s), and F(s) are defined by (8). Thus, the existence of a positive periodic solution of system (7) is equivalent to finding a fixed point of the operator Φ. Lemma 4. The mapping Φ : K → K is completely continuous.
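For completeness, the standard cone compression/expansion form of Krasnoselskii's fixed point theorem, on which existence arguments of this type rely, is sketched below in LaTeX. The exact lemma invoked by the authors may be stated slightly differently.

```latex
% Krasnoselskii's fixed point theorem (cone compression/expansion form).
\newtheorem{lemma}{Lemma}
\begin{lemma}
Let $X$ be a Banach space and $K \subset X$ a cone. Let $\Omega_1, \Omega_2$ be
bounded open subsets of $X$ with $0 \in \Omega_1$ and
$\overline{\Omega}_1 \subset \Omega_2$, and let
$\Phi : K \cap (\overline{\Omega}_2 \setminus \Omega_1) \to K$ be completely
continuous. Suppose that either
\[
  \|\Phi u\| \le \|u\| \ \text{for } u \in K \cap \partial\Omega_1
  \quad\text{and}\quad
  \|\Phi u\| \ge \|u\| \ \text{for } u \in K \cap \partial\Omega_2,
\]
or the two inequalities hold with the roles of $\partial\Omega_1$ and
$\partial\Omega_2$ interchanged. Then $\Phi$ has a fixed point in
$K \cap (\overline{\Omega}_2 \setminus \Omega_1)$.
\end{lemma}
```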
Positive Periodic Solution of Equation (2) According to the proof of the existence of the positive periodic solution of Equation (1) and Lemma 1, we can easily obtain the existence of the positive periodic solution of Equation (2). Let where ξ > 0 is a constant. Then, Equation (2) is changed into the following system: Since system (29) is equivalent to (2), we just have to study the existence of positive periodic solutions to system (29). Let X = {z ∈ C(R, R²) : z(t + T) = z(t), z = (Ax, y)^T} with the norm ||z|| = max{|Ax|₀, |y|₀}. Then, X is a Banach space. Let where θ = min{ǧĝ ,ˇ¯ĥ¯h }, and ǧ, ĝ, ȟ, ĥ are defined by (10) and (11). Integrate (29) from t to t + T and obtain that where In view of (30), for each z = (Ax, y)^T ∈ X, define an operator Φ : K → K as where Φz = (Φ₁z, Φ₂z)^T, and g(t, s), h(t, s), and F(s) are defined by (30). Thus, the existence of a positive periodic solution of system (29) is equivalent to finding a fixed point of the operator Φ. Since the proofs of Lemmas 5 and 6 are similar to the proofs of Lemmas 3 and 4, we omit them. Lemma 5. The mapping Φ maps K into K. Lemma 6. The mapping Φ : K → K is completely continuous. Theorem 2. Suppose that assumption (H1) holds. Furthermore, assume that there are positive constants r and R with r < R such that sup ||φ||=r,φ∈K 2ȟ . Proof. Similar to the proof of Theorem 1, system (29) has a T-periodic solution z = (z₁, z₂)^T such that z₁ = Ax > 0, i.e., x = A⁻¹z₁. By Lemma 1, x = A⁻¹z₁ is then a positive T-periodic solution of Equation (2). The works cited above studied equations related to (1) with periodic boundary conditions and variable coefficients; this paper proposes a new method to study such equations while avoiding the difficulty of finding Green functions. We use the order reduction method to reduce the higher-order equation to a lower-order system, so we avoid computing the Green function and can directly use the fixed point theorem to study the lower-order system. Remark 2. In recent years, a large body of literature has appeared on positive periodic solutions of neutral second-order nonlinear equations. Wu and Wang [14] studied a second-order neutral equation in which c ∈ (−1, 0) and φ ∈ (0, 1) are constants. By the use of the fixed point theorem in cones, sufficient conditions for the existence of the positive periodic solution of (34) are established. When c ∈ (−1, 1), Cheung et al. [15] discussed the existence of a positive periodic solution for a related second-order neutral differential equation. For more results about (35), see, e.g., [16,17] and related references. In a very recent paper [18], using the Leray-Schauder fixed point theorem, Cheng, Lv, and Li studied an equation of this type in which c(t) ∈ C¹(R, R) is a T-periodic function. They obtained a range of c(t) guaranteeing the existence of the positive periodic solution to (36). However, in the above papers, the authors used the properties of Green functions and neutral operators. In the present paper, we use the order reduction method to study second-order nonlinear differential equations with variable coefficients. We hope that the methods of the present paper can also be used to study positive periodic solutions of neutral second-order nonlinear equations with variable coefficients. Conclusions and Discussions In the past decades, nonlinear second-order differential equations with variable coefficients have found successful applications in scientific areas including quantum field theory, fluid mechanics, gas dynamics, and chemistry.
Hence, there is ongoing research interest in second-order differential equations with variable coefficients, and many results on existence, stability, and oscillation have been obtained; see, e.g., [22-25]. In this paper, we develop an order-reduction method for studying second-order differential equations with variable coefficients, avoiding the difficulty of finding Green functions. The methods of this paper can be extended to other types of second-order differential equations, such as stochastic differential equations, impulsive differential equations, and fractional differential equations. We hope that other authors can use the methods provided in this article to conduct more in-depth research on various types of second-order differential equations with variable coefficients.
2,661.4
2022-09-01T00:00:00.000
[ "Mathematics" ]
Type I error control for cluster randomized trials under varying small sample structures Background Linear mixed models (LMMs) are a common approach to analyzing data from cluster randomized trials (CRTs). Inference on parameters can be performed via Wald tests or likelihood ratio tests (LRT), but both approaches may give incorrect Type I error rates in common finite sample settings. The impact of different combinations of cluster size, number of clusters, intraclass correlation coefficient (ICC), and analysis approach on Type I error rates has not been well studied. Reviews of published CRTs find that small sample sizes are not uncommon, so the performance of different inferential approaches in these settings can guide data analysts to the best choices. Methods Using a random-intercept LMM structure, we use simulations to study Type I error rates with the LRT and Wald test with different degrees of freedom (DF) choices across different combinations of cluster size, number of clusters, and ICC. Results Our simulations show that the LRT can be anti-conservative when the ICC is large and the number of clusters is small, with the effect most pronounced when the cluster size is relatively large. Wald tests with the between-within DF method or the Satterthwaite DF approximation maintain Type I error control at the stated level, though they are conservative when the number of clusters, the cluster size, and the ICC are small. Conclusions Depending on the structure of the CRT, analysts should choose a hypothesis testing approach that will maintain the appropriate Type I error rate for their data. Wald tests with the Satterthwaite DF approximation work well in many circumstances, but in other cases the LRT may have Type I error rates closer to the nominal level. that observations are independent is violated. When the response variable of interest is continuous, linear mixed models (LMMs), which require that observations are independent only after conditioning on cluster membership, are a common approach to the data analysis. CRTs are a widely used experimental design (see, for example, [2-4]), and LMMs are an attractive option for data analysis. Some reasons for this attractiveness are that LMMs are robust to certain missing data mechanisms and can flexibly accommodate nested levels of clustering and/or varying cluster sizes [5]. Generalized linear mixed models (GLMMs) extend the approach to non-Gaussian data, such as binary, count, or multinomial outcomes. Issues we discuss in this paper may arise in these settings as well, though use of GLMMs introduces additional issues such as the choice of modeled distribution, link function, and the approximation of an intracluster correlation coefficient (ICC) with the natural parameters of that distribution. We do not investigate GLMMs in this article. When fitting LMMs to CRT data, inference on parameters depends on asymptotic results, and in settings where the number of clusters is small they can generate Type I error (TIE) rates well above or below the nominal level [6]. All frequentist null hypothesis significance testing (NHST) theory depends on tests having the nominal size - a test with a nominal 5% error rate should produce false rejections 5% of the time. If not, data analysts in a CRT could be led to inappropriate conclusions when evaluating a treatment effect using NHST; for example, producing too many false positives or false negatives.
Analysts evaluating associations using confidence intervals rather than null hypothesis significance testing may also be misled if asymptotic parameter distributions are incorrect with small samples. Unfortunately, small cluster counts are not uncommon in the literature, because it is often more expensive to add more clusters to a study than more individuals to a cluster. Despite common heuristics such as 'at least 30 units at each level of analysis' [7], CRTs often have as few as 20 clusters. For example, a review of 100 CRTs [8] found 37% with fewer than 20 clusters and minimal reporting of any small-sample corrections employed. Some limited investigations of the problems with (G)LMM small sample inference have been conducted. Pinheiro and Bates [6] examined a very restricted parameter space, while Schluchter and Elashoff [9] reviewed the issue from a slightly different angle, examining approaches for longitudinal data with different covariance structures, which have different interpretations than a typical CRT. Several studies [10][11][12][13] suggested improving smallsample inference by applying the Bartlett correction [14], also under a smaller set of parameters than we apply here. However, as far as we are aware there is no simple way for data analysts to implement the Bartlett correction in SAS or R. Other studies [15][16][17] examine issues around small numbers of clusters, but include both random intercepts and slopes, which may not be a structure that all CRTs utilize. Closer to our setting in this article, Leyrat et al. [18] evaluated the power and TIE rates of different degrees of freedom (DF) choices for LMMs with Wald hypothesis tests for CRT designs under various design factors. They found both conservative and anti-conservative results, depending on the DF method chosen. Kahan et al. [8] reviewed small sample issues, but limited investigation to a small set of parameters and methods. Johnson et al. [19] examined LMM TIE rates, but only for Wald tests with two DF choices, and did not break down their results by design factors. In the GLMM context, for binary outcomes only, Li and Redden [20] examined TIE rates under different DF choices and found that the rates varied widely by method and design factors. The work discussed above either does not break down the small-sample problems by design factor combinations (the effect of the ICC may vary depending on the number of clusters and cluster size, for example), does not compare results to the likelihood ratio test, and/or examines a limited set of data-generating parameters. Our work aims to add to this literature by examining in more detail the TIE control of several LMM inference approaches in a variety of plausible CRT scenarios. We examine both likelihood ratio test and Wald test results, including different DF choices for the latter. We also vary cluster size, number of clusters, and intracluster correlation coefficient, looking at how results vary under the different approaches. We hope to provide enough detail to alert data analysts to the situations that may lead to incorrect TIE rates with LMMs, and give guidance on which methods have the best error control given those factors. Methods We performed a Monte Carlo simulation study to examine the TIE control of different LMM inference approaches under varying, plausible CRT circumstances. First, we describe the statistical model in question and the difficulties with small-sample inference, then we outline our specific study design. 
For all data analysis in this article, we used the SAS/STAT 15.1 (SAS Institute Inc., Cary, NC) and R 3.6.0 (R Foundation for Statistical Computing) software packages. Model We consider here a version of the linear mixed-effects model of Laird and Ware [21]: where Y ij is a continuous response variable for individual j in cluster i, X T ij are that individual's covariates for a vector of fixed effect regression parameters β, Z T ij are the cluster-level values for a vector of random effects b i for cluster i, and ε ij is the residual error of the observation. In our case, matching common practice in CRTs, we restricted the random-effects structure to include only a random intercept term, so the term Z T ij b i reduces to b 0i . We let ε ij ∼ N(0, σ 2 ) for all individuals, and the cluster-level random intercept b 0i was distributed N(0, σ 2 b ), with b 0i independent of ε ij . We further assumed that cluster size is uniform for all clusters, and that there are two treatment arms with an equal number of clusters in each arm, modeled with an indicator variable x i ∈ {0, 1} for control or treatment arm, with β 1 being the treatment effect. Thus, for the remainder of the article, our model is: Impact of clustering on inference In a CRT, there are typically two assumed sources of variability in outcomes: between-cluster, denoted here as σ 2 b , and within-cluster, denoted as σ 2 . The marginal variance of y ij is σ 2 b + σ 2 . One way of quantifying the amount of clustering is via the intra-cluster correlation coefficient, ICC = σ 2 b /(σ 2 b + σ 2 ), the proportion of total variance due to the cluster-level variability. If one were to incorrectly analyze the data using a linear model rather than a linear mixed model, standard errors for the coefficient estimates would have to be adjusted, since observations are correlated in violation of the model assumptions. An approximation of this adjustment, the design effect [22], is a multiplier for the sampling variance of the treatment effect estimator. It is defined as 1 + (n − 1) × ICC, where n is the number of subjects per cluster. For example, with 10 observations per cluster and an ICC of .01, the design effect is 1.09, meaning that the treatment effect coefficient standard errors would have to be multiplied by roughly √1.09 ≈ 1.04 to account for clustering. However, with 100 observations per cluster and the same ICC, the standard error multiplier increases to √2 ≈ 1.41, and for 1000 observations per cluster it increases to √11 ≈ 3.31, meaning that even a very small ICC can drastically change inferences when the cluster size is large. This approximation demonstrates the necessity of accounting for between-cluster variation in the data analysis, even if the ICC is expected to be small. Inference with LMM fixed effect estimators Two ways of fitting a linear mixed model are by maximum likelihood (ML) and restricted maximum likelihood (REML), and most major statistical software packages can perform estimation by either method. Inference about β 1 can be made using the likelihood ratio test (LRT) if fitting via ML, or by a Wald test if fitting via REML. A third test based on the maximum likelihood, the score test, is rarely used in this setting and is not discussed here. The LRT compares the log-likelihood of a model without β 1 (ℓ 0 ) to a model that includes it (ℓ 1 ), and the test statistic λ = −2(ℓ 0 − ℓ 1 ) has a χ 2 p distribution, asymptotically, with degrees of freedom p equal to the difference in parameter dimension between the two models. In our case, as in many CRTs, there is one treatment effect parameter, so p = 1.
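As a quick numerical check of the design-effect arithmetic above, the short Python sketch below (an illustration, not code from the paper) computes the design effect 1 + (n − 1) × ICC and the implied standard-error multiplier for the cluster sizes quoted in the text.

```python
# Illustrative sketch of the design-effect calculation described above.
# Assumes the usual definition DEFF = 1 + (n - 1) * ICC; not the authors' code.

import math

def design_effect(n_per_cluster: int, icc: float) -> float:
    """Variance inflation for the treatment-effect estimator under clustering."""
    return 1.0 + (n_per_cluster - 1) * icc

def se_multiplier(n_per_cluster: int, icc: float) -> float:
    """Factor by which naive (independence-based) standard errors must be scaled."""
    return math.sqrt(design_effect(n_per_cluster, icc))

if __name__ == "__main__":
    icc = 0.01
    for n in (10, 100, 1000):
        print(f"n = {n:>4}, ICC = {icc}: design effect = {design_effect(n, icc):5.2f}, "
              f"SE multiplier = {se_multiplier(n, icc):4.2f}")
    # Reproduces the values in the text: 1.09 -> ~1.04, ~2 -> ~1.41, ~11 -> ~3.31
```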
In general, the LRT is recommended over the Wald test, as its asymptotic properties are superior [23]. Unfortunately, the χ 2 distribution may be a poor approximation of the distribution of λ when the amount of information in a sample, for example, cluster count, is small. Alternatively, a Wald test statistic under the null hypothesis H 0 : β 1 = 0 can be generated by dividing the estimated treatment effect by its standard error: t* = β 1 /SE(β 1 ). This value can then be compared to a central t distribution. Unfortunately, for many designs, it is unclear what the appropriate degrees of freedom (DF) for that distribution should be [24]. Choices include: • Residual: N − p, where N is the total number of observations and p is the number of fixed-effects coefficients to be estimated in the model. In the CRT design assumed here, p = 2. Since the number of observations is usually much larger than the number of parameters in the model, this will generate similar results to the 't as z' approach described below. • Between-within: The residual DF are partitioned into between-subject and within-subject groups, equivalent in this case to a one-way ANOVA decomposition, meaning DF = K − 2, where K is the number of clusters. • Satterthwaite approximation: This method, generalizing the ideas of Satterthwaite [25], is quite complex, but it essentially uses the variance of the β 1 estimate in its calculation of the DF. For more detail, see McCulloch et al. [26], Ch. 6. • Kenward-Roger approximation: This method [27] inflates the fixed and random effects variance-covariance matrix, and calculates Satterthwaite DF based on these inflated values. Under our model with one treatment effect, it generates DF equivalent to the Satterthwaite approximation. • Infinite ('t as z'): The statistic is compared to a standard normal distribution, equivalent to a t distribution with infinite DF. Alternative inferential approaches The Wald and likelihood ratio tests are not the only options for generating confidence intervals and performing inference in CRTs. Bayesian methods have been implemented with mixed models [28,29], but we do not include Bayesian methods in this analysis. Alternatively, confidence intervals for LMM fixed effects can be generated by a parametric, semi-parametric, or non-parametric bootstrap. All are computationally intensive and require careful implementation due to the clustered nature of the original sample, so we chose not to investigate those approaches, though the parametric bootstrap has been recommended by some authors [30]. Data generation We generated clustered, balanced data sets from the null model for clusters i = 1, 2, ..., K and individuals j = 1, 2, ..., N within each cluster. The random intercept b 0i for cluster i was distributed N(0, σ 2 b ), and the residual error term ε ij ∼ N(0, σ 2 ). b 0i and ε ij were generated as independent pseudorandom variates. We also generated values of x ij such that for clusters i = 1, ..., K/2, x ij = 0, and for i = K/2 + 1, ..., K, x ij = 1. This variable represents the treatment indicator, though it was not used in the data generation, as there is no treatment effect under the null hypothesis. For each data set, we then fit the model shown in equation (2) using SAS PROC MIXED and the lme4 and lmerTest packages in R. The coefficient of interest in these fitted models, β 1 , represents the estimated treatment effect. We gathered p-values for the β 1 coefficient using the LRT and the Wald test using the various DF options.
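A minimal sketch of the data-generating step described above is shown below, written in Python/NumPy rather than the SAS/R code actually used in the study; function and variable names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative null-model data generator for a balanced two-arm CRT
# (random intercept only), mirroring the description above.

import numpy as np
import pandas as pd

def simulate_crt_null(K: int, N: int, sigma2_b: float, sigma2: float = 1.0,
                      beta0: float = 0.0, rng=None) -> pd.DataFrame:
    """Simulate one balanced CRT data set with K clusters of N subjects under H0: beta1 = 0."""
    rng = np.random.default_rng() if rng is None else rng
    cluster = np.repeat(np.arange(K), N)                # cluster index i for each observation
    x = (cluster >= K // 2).astype(int)                 # first K/2 clusters control, rest treatment
    b0 = rng.normal(0.0, np.sqrt(sigma2_b), size=K)     # random intercepts b_0i
    eps = rng.normal(0.0, np.sqrt(sigma2), size=K * N)  # residual errors eps_ij
    y = beta0 + b0[cluster] + eps                       # no treatment effect under the null
    return pd.DataFrame({"cluster": cluster, "x": x, "y": y})

# Example: 10 clusters of 50 subjects; ICC = sigma2_b / (sigma2_b + sigma2) = 0.1 / 1.1 ~ 0.09
df = simulate_crt_null(K=10, N=50, sigma2_b=0.1)
```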
We assessed the rejection rate under each test for the null hypothesis that β 1 = 0 with α = .05. Since the data-generating mechanism had a true β 1 value of zero, this estimates the TIE rate for the nominal α = .05 level. We performed our analysis on 10,000 simulated data sets for all possible combinations of the following data-generating parameters: In preliminary simulations, we tested several different magnitudes for σ 2 b and σ 2 that produced the same ICC, and found that they generated the same Wald and LR test statistics. Based on this, we simplified the number of parameter combinations to investigate by fixing σ 2 at 1 and only varying σ 2 b . Determining p-values Both PROC MIXED and lme4 report β 1 estimates, their associated standard errors, and t* statistics. This allows for easy testing of the β 1 coefficient via a Wald test, fitting with REML. The t* statistics generated were compared to t distributions with three choices of DF: between-within, Satterthwaite/Kenward-Roger, and residual, as described earlier. We then collected the p-values and calculated TIE rates under the three DF choices. Both software packages also allow for model fitting using ML, allowing for model comparison and p-value determination for β 1 via the LRT. First, a null model (4) was fit, with the only fixed effect being an intercept term: Second, a model was fit with an added fixed effect for x ij , as in model (2). The doubled difference in maximized log-likelihood was compared to a χ 2 1 distribution since there was a one-parameter difference in model dimension. P-values from the χ 2 1 distribution were collected and TIE rates calculated. Results Both software packages generated identical β 1 estimates and standard errors when fitting with REML, and identical differences in likelihoods when fitting with ML. Reported results are from SAS. In addition, since the Kenward-Roger and Satterthwaite approximations were indistinguishable in this setting, they are both labeled as "approximate." Results are displayed in Fig. 1. Under all approaches, departures from the nominal α level were most pronounced when the number of clusters is small. When the number of observations per cluster is small, and there is a relatively small ICC, the LRT demonstrated appropriate TIE control. Regardless of the number of observations per cluster, the LRT is anti-conservative as the ICC rises. However, the anti-conservatism of the LRT was most apparent with smaller ICC when the number of observations per cluster was larger. Even with as many as 40 clusters and 50 observations per cluster, the LRT was noticeably anti-conservative once the ICC rose above .1. Worse, even when the ICC was very small (.01, .02), the LRT was anti-conservative with as few as 20 clusters of 50 observations per cluster. As for the Wald tests, the between-within DF option led to conservative TIE rates when the ICC was small and/or the cluster size was small, but maintained the appropriate TIE rate with large clusters or a large ICC. The residual DF choice was less conservative in the case of a small ICC, but produced anti-conservative results as the ICC increased, and was more anti-conservative when the cluster size was large. Notably, depending on how the model is fit, the default method for determining DF in SAS may be 'containment', which under this study design leads to SAS assigning residual DF. Since this choice leads to the most anti-conservative results, it may be a concern for SAS analysts.
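To make the degrees-of-freedom bookkeeping from the "Determining p-values" subsection above concrete, here is a small Python sketch showing how a single Wald statistic and LRT statistic would be converted to p-values under the residual, between-within, and infinite ('t as z') DF choices. It illustrates only the p-value step; the model fitting itself is left to SAS or R as in the paper, and the numeric statistics in the example are hypothetical.

```python
# Illustrative p-value bookkeeping for one fitted model, assuming the Wald
# statistic t_star and the LRT statistic lam have already been obtained from
# REML and ML fits (e.g., in SAS PROC MIXED or lme4/lmerTest).

from scipy import stats

def wald_pvalues(t_star: float, K: int, N: int, p_fixed: int = 2) -> dict:
    """Two-sided Wald p-values under three degrees-of-freedom choices."""
    total_obs = K * N
    return {
        "residual":       2 * stats.t.sf(abs(t_star), df=total_obs - p_fixed),
        "between-within": 2 * stats.t.sf(abs(t_star), df=K - 2),
        "t as z":         2 * stats.norm.sf(abs(t_star)),
    }

def lrt_pvalue(lam: float, df: int = 1) -> float:
    """LRT p-value: lam = -2*(l0 - l1) compared to a chi-square with df = 1."""
    return stats.chi2.sf(lam, df=df)

# Hypothetical statistics from a K = 10, N = 50 design:
print(wald_pvalues(t_star=2.1, K=10, N=50))
print(lrt_pvalue(lam=4.4))
```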
The Satterthwaite approximation for our simulation estimated the DF as equal to the between-within DF in some cases and to residual DF in other cases, depending on the data set. This is why the TIE rates labeled "approximate" in Fig. 1 are bounded by those other two options. We also tested the effect of an ICC of .09 generated with σ 2 b = 1 and σ 2 = 10 rather than the values discussed above. The results did not differ notably, which suggests that this pattern of TIE rate inflation with the LRT, as with the Wald test, is insensitive to the absolute size of the σ 2 b and σ 2 values, only their relative size. Finally, given the balanced nature of our data and the lack of other covariates, we could have used a t-test on the cluster means of each treatment arm to perform a hypothesis test. Using this approach, we achieved close to the nominal .05 alpha level in all cases. However, since most CRTs include covariates, a t-test would be inappropriate, and hence these results are omitted from the plot. The Wald test with the between-within DF choice is almost equivalent to this t-test [31], the only difference being that the LMM estimates two variances (σ 2 b andσ 2 ), while the t-test only estimates their sum, leading to slighly different inferences. Conclusions To our knowledge, the effect of different combinations of design factors and analysis approach on Type I error rates have not been examined comprehensively in previous reports. Our results show that none of the approaches meet the nominal alpha level in all cases examined, and the departures from the nominal level are directionally different based on the approach and data structure. Hence, there is no one-size-fits-all recommendation for data analysts in these small-sample cases. The likelihood ratio test, based on an asymptotic χ 2 distribution, does not perform well in these finite-sample cases, especially when the clusters contain many observations. This extends other studies that found the LRT to be anti-conservative [6,32] in smaller explorations of the possible parameter combinations. Alternatively, with a Wald test, some choices of DF, such as between-within or the data-adaptive Satterthwaite, can avoid anti-conservatism. However, a tradeoff exists, as they are too conservative when the ICC, the number of clusters, and/or cluster size is small. After collecting our TIE rate results as outcomes, we formally tested the interactions between our design factors, using a three-way ANOVA within each analysis type and breaking the 10,000 simulations of each condition into 10 sets of 1,000, giving 10 outcomes per condition. Most of these three-way interactions were statistically significant, and given the strong patterns see in Fig. 1, we expect that we could show significance of all the interactions if we grew the number of simulations arbitrarily. The results here suggest that data analysts should choose an approach that best suits their data. For example, if the ICC is expected to be small and the number of observations per cluster is small, the likelihood ratio test should perform well. For cases where the number of observations per cluster is large, a Wald test with the Satterthwaite DF approximation is better, though it can be conservative in some situations. 
One perhaps unsatisfying conclusion is that analysts may want to generate their own small simulation studies to evaluate different approaches before fitting their final data models, since they will likely know the model structure, number of clusters, and cluster size by that point. Finally, we caution analysts to be careful when using default setting in software. For example, with Wald tests, SAS PROC MIXED may default to the poorly-performing residual DF choice, and the lmerTest package in R defaults to the Satterthwaite approximation, which may be too conservative in some cases. It is unclear how aware data analysts may be about the small-sample problems that may arise in making inference from mixed models. A review of linear mixed model applications in education and social sciences [33] found minimal reporting of estimation and inference methods and assumptions, and that cluster sizes could be as low as 2 and the number of clusters as low as 8. Our own review, and that of Kahan et al. [8], confirmed that small cluster counts are not unusual in biomedical settings as well. Therefore, we hope this will provide analysts with some recommendations of which approaches control TIE at appropriate rates under different circumstances, and we encourage more reporting of DF choices and analytic methods in CRT publications. Our results, while limited to models with one random intercept, are in concordance with comparable LMM simulation studies with similar data-generating parameters but including random slopes [15][16][17], though only Luke [15] explored the same range of DF options considered here. Given that small sample sizes are not uncommon in CRT literature, there is need for more investigation of which methods control TIE in other contexts. One limitation of our result is that we did not include any scenarios with repeated measures (for example, baseline, post-treatment, and follow-up), which are common in biomedical settings, and deserve similar scrutiny. Additionally, more parameters could have been added to the simulations, such as unbalanced cluster sizes or varying ICC by treatment arm. Previous simulation studies [34] demonstrated that unbalanced cluster sizes can result in inflated TIE rates. We suspect that the relatively good performance of the approximate DF will persist in these unbalanced cases. Another potential avenue for exploration, following on the work of Li and Redden [20], would be to examine TIE rates under Wald tests and the LRT for GLMMs, in particular binomial, Poisson, and negative binomialdistributed outcomes, including various link functions. Further, a generalized ICC has been derived [35] and validated [36] for the negative binomial distribution, so the analysis could be replicated in a straightforward way. Type II errors may also be a concern for researchers, and investigating the role of different analytic methods on these could be an area for future work. Finally, the impact of these data/approach effects on statistical power should be determined so that analysts can make appropriate sample size calculations during the design phase of a CRT.
5,499.2
2019-12-02T00:00:00.000
[ "Computer Science" ]
Endoplasmic reticulum stress mediates the myeloid-derived immune suppression associated with cancer and infectious disease Myeloid-derived suppressor cells (MDSCs), which are immature heterogeneous bone marrow cells, have been described as potent immune regulators in human and murine cancer models. The distribution of MDSCs varies across organs and is divided into three subpopulations: granulocytic MDSCs or polymorphonuclear MDSCs (G-MDSCs or PMN-MDSCs), monocytic MDSCs (M-MDSCs), as well as a recently identified early precursor MDSC (eMDSCs) in humans. Activated MDSCs induce the inactivation of NK cells, CD4+, and CD8+ T cells through a variety of mechanisms, thus promoting the formation of tumor immunosuppressive microenvironment. ER stress plays an important protecting role in the survival of MDSC, which aggravates the immunosuppression in tumors. In addition, ferroptosis can promote an anti-tumor immune response by reversing the immunosuppressive microenvironment. This review summarizes immune suppression by MDSCs with a focus on the role of endoplasmic reticulum stress-mediated immune suppression in cancer and infectious disease, in particular leprosy and tuberculosis. Introduction Myeloid-derived suppressor cells (MDSCs) are immature myeloid suppressor cell populations that are derived from the bone marrow. MDSCs accumulate and exert immune suppressive effects during pathologic conditions such as cancer, inflammation, infection, autoimmune disease, and obesity [1]. The MDSCs suppress T cell activation by downregulation of L-selectin and sequestration of cysteine, which the T cells cannot synthesize spontaneously and that they require to become activated. The development, expansion, and activation of MDSCs were triggered by the tumor microenvironment, particularly the immune microenvironment, and regulated by differential intracellular signaling molecules [2]. The microenvironment during these pathologic conditions is characterized by a low pH, hypoxia, nutrient deprivation, and free radicals. This microenvironment disrupts protein folding, which triggers cellular "endoplasmic reticulum (ER) stress" [3]. ER stress impacts inflammatory and tumor microenvironment-induced immune suppression [4][5][6]. Furthermore, tumor cells can transmit ER stress to immune cells recruited to inflammatory tissues [7,8]. Most noteworthy, there is compelling evidence that ER stress can transform immune cell populations into immunosuppressive phenotypes [6,9], with MDSCs from cancer patients and tumor-bearing mice producing a robust ER stress response. Many factors can induce ER stress in MDSCs. Reactive oxygen species (ROS), one of the main inducers of the ER stress response, are a significant product of MDSCs [10]. Lipids can also induce ER stress [11], with lipid accumulation associated with MDSCs [12]. Currently, MDSCs are becoming the main immunotherapeutic targets. How ER stress regulates the biological properties of MDSCs in the tumor microenvironment is critical for MDSCs-targeted immunotherapy. This review summarises the ER stress effect on the immunosuppressive function of MDSCs in different kinds of tumors and infectious diseases, focusing on Mycobacterium leprae and Mycobacterium tuberculosis infections. We also summarized investigated molecules as the immunotherapy targets aiming to provide a more comprehensive theoretical basis for targeted MDSCs immunotherapy in the clinic. 
MDSCs, their expansion, roles, and mechanisms in immunosuppressive function in pathological conditions The terminology of MDSCs was first defined in 2007 and referred to the origin and the suppressive function of these cells. On physiological conditions, hematopoietic stem cells (HSCs) first develop into common myeloid progenitors (CMPs) and then into immature myeloid cells (IMCs). IMCs further differentiate into mature functional granulocytes, macrophages, and dendritic cells (DCs). However, during pathological conditions, IMCs differentiate into MDSCs within the bone marrow and then migrate to peripheral tissues [13] . MDSCs are a heterogeneous population composed of monocytes, polymorphonuclear leukocytes and immature myeloid cells. MDSCs are broadly divided into three subgroups: granulocytic MDSCs or polymorphonuclear MDSCs (G-MDSCs or PMN-MDSCs), monocytic MDSCs (M-MDSCs) [14], as well as a recently identified early precursor MDSC (eMDSCs) in humans [15]. In mice, Gr-1 and CD11b are used to identify MDSCs. Ly6G and Ly6C are used to distinguish M-MDSCs (CD11b + Ly6G − Ly6C high ) from G-MDSCs (CD11b + Ly6G + Ly6C low ) [16]. In humans, the common MDSC phenotype is CD11b + HLA-DR −/ low . CD33 is the common myeloid marker for humans, while CD14 and CD15 are used to distinguish M-MDSCs (CD11b+HLA-DR −/low CD33+CD15−CD14+), G-MDSCs (CD11b + HLA-DR −/low CD33 + CD15 + CD14 − ), and eMDSCs (CD11b + HLA-DR −/low CD33 + CD15 − CD14 − ). It is difficult to identify G-MDSCs from neutrophils in mice or humans, as they have a similar phenotype. However, the two cell populations can be distinguished by density gradient centrifugation, which has limitations [15,17]. Recently, it is found that Lectin-type oxidized LDL receptor-1 (LOX-1) is a unique surface marker of human G-MDSCs, which can be used as a distinguish marker of G-MDSCs. Meanwhile, S100A9 has been used to refine identification of M-MDSCs in human [18]. Work from several groups has demonstrated that the key immunosuppressive feature does not distinguish MDSCs from conventional myeloid cells during inflammation [19]. A combination of molecular markers is considered being the most accurate means by which to identify different subtypes of MDSCs, with the caveat that different methods for collection and analysis of MDSCs can influence outcomes. When MDSCs expanded to the tumor or inflammatory sites, activation signals were launched and endowed MDSCs to carry out the inhibitory function. The NF-κB signaling pathway is essential to MDSC activation. IL-1β activates MDSC recruitment and promotes IL-6 and TNF-α production through the NF-κB pathway. M-MDSCs from cancer patients produced a high level of TGF-β secretion when treated with PGE 2 activating p38 MAPK/ERK signaling [27,28]. For G-MDSCs, eIF2 and eIF4 were related to ER stress, mTOR and MAPK pathway upregulation [29]. T cell immunosuppression is due to the depletion or sequestration of amino acids. The stress of low extracellular amino acid levels promotes the activation of metabolic sensor (GCN2 kinase, FATP2, and AMPK) and the accumulation of metabolic waste products within the tumor microenvironment (TME) [2]. Furthermore, MDSC-mediated immunosuppression can be induced by the metabolic conversion of the amino acid l-arginine by arginase 1 (ARG1) or by the production of inducible nitric oxide synthase (iNOS). The iNOS degrades l-arginine to produce nitric oxide (NO) and citrulline. ARG1 uses l-arginine as a substrate to produce l-ornithine and urea. 
As a result of l-arginine starvation, T-cell proliferation and the synthesis of T-cell effector molecules are impaired, leading to severe T-cell dysfunction [21,22]. Further, NO production by iNOS prevents IL-2 production by activated leukocytes in that the stability of IL-2-encoding mRNA is impaired. The loss of l-cystine and l-cysteine inhibits activated T cell synthesis of the anti-oxidant glutathione, which impairs proliferation and activation of T cells [23]. In addition, MDSCs induce tryptophan depletion via indoleamine-pyrrole 2,3-dioxygenase (IDO). IDO catalyzes the conversion of extracellular l-tryptophan to kynurenine. Tryptophan depletion and kynurenine exposure hinder T cell proliferation and facilitate the expansion of regulatory T cells [2]. MDSCs also produce reactive nitrogen species (RNS) and ROS as well as other suppressive molecules that blunt TCR signaling and reduce T cell survival [30]. Furthermore, persistent ER stress promotes tumor progression by affecting malignant cells and infiltrated MDSCs. The regulation of MDSC expansion and suppressive function is shown in Table 1. In addition, ferroptosis can promote an anti-tumor immune response by reversing the immunosuppressive microenvironment. A comprehensive index of ferroptosis and immune status (CIFI) was derived from twenty-seven prognostic ferroptosis- and immune-related signatures in hepatocellular carcinoma and could identify a subgroup of patients with a worse prognosis. These patients have higher fractions of cancer-associated fibroblasts (CAFs) and MDSCs [31]. A manganese porphyrin-based metal-organic framework (Mn-MOF), FAP gene-engineered tumor cell-derived exosome-like nanovesicles (eNVs-FAP), NC06, and dihydroartemisinin (DHA) have been explored to treat hypoxic tumors or as candidate tumor vaccines designed to reduce the number of MDSCs by targeting ferroptosis [32-37]. This indicates that inducing ferroptosis to control MDSC polarization or population size is a promising immunotherapeutic strategy [38]. The ferroptosis-based immunotherapy targets affecting the MDSC population in different kinds of cancers are shown in Table 2. Unfolded protein response (UPR) and ER stress The ER is a closed plumbing system within the eukaryotic cytoplasm. It is divided into rough ER and smooth ER. The rough ER performs functions related to membrane synthesis and secretion of proteins and is widespread in cells with high secretory capacity. The smooth ER is responsible for the synthesis and transport of lipids. The ER is a crucial organelle involved in the regulation of calcium homeostasis, protein synthesis, lipid metabolism, post-translational modification, and transport, and it is essential for the synthesis and folding of secreted and transmembrane proteins. However, cellular stressors such as hypoxia, nutrient deficiency, and disturbed Ca 2+ homeostasis induce ER dysfunction, which leads to the accumulation of unfolded or misfolded proteins in the ER lumen. If proteins are not properly folded, they are ubiquitinated on the ER membrane and subsequently degraded in a process known as ER-associated protein degradation (ERAD). When accumulated misfolded proteins are not eliminated by ERAD, the ER activates the unfolded protein response (UPR). UPR signaling has important roles in immunity, inflammation, and different types of cancer [39,40].
The UPR has three important sensors: inositol requiring enzyme 1 (IRE1ɑ), protein kinase RNA-activated (PKR)-like ER kinase (PERK), and activating transcription factor 6 (ATF6), which are transmembrane proteins associated with the ER [41]. As well, glucose-regulated protein 78 (GRP78), also referred to as BiP or HSPA5, is an ER chaperone that serves as a master regulator of UPR activation. Table 1. Regulation of MDSC expansion and suppressive mechanisms. ER stress or exposure to tumor-related ER stress augments the immunosuppressive potential of MDSCs The tumor microenvironment (TME) comprises tumor cells, immune cells, the extracellular matrix and chemokines, cytokines, growth factors, and extracellular vesicles. MDSCs, DCs, and macrophages accumulate within infectious or tumor microenvironments in which hypoxia, nutrient starvation, low pH, and increased levels of free radicals trigger a state of ER stress in cancer cells and in infiltrating myeloid cells. The UPR triggered by ER stress protects cells from damage. However, when damage is excessive, the UPR signals self-destruction, which removes bacteria and prevents further damage. In response to ER stress, cancer cells and MDSCs activate the UPR to promote cell survival and adaptation during adverse environmental conditions [2]. MDSCs infiltrate tumor tissues and display immunosuppressive function by suppressing NK cells, T cells, and Treg cells. MDSCs can also mount an ER stress response to survive in the hypoxic tumor microenvironment. The surviving MDSCs can produce Arg1, NO, and TGF-β and play roles in immunosuppression [21,22]. Thus, ER stress plays an essential protective role in the survival of MDSCs, which aggravates immunosuppression in tumors (Fig. 1). The role of ER stress in immune modulation has not been fully characterized, but the effects of ER stress on MDSCs during infection and cancer are described below. Immunosuppressive effects of ER stress and UPR on MDSCs in infectious disease When inflammation or infection occurs, MDSCs rapidly expand and travel to the injury sites and regulate the host's immune system. Therefore, it is crucial to have a thorough understanding of the immunomodulatory mechanisms of infection and inflammatory diseases, which may also assist in exploring therapeutic targets. The current data highlight the contribution of ER stress to MDSC immunosuppressive function. Indeed, ER stress can also occur in infectious diseases. Therefore, exploring the role of ER stress in MDSC-mediated regulation of inflammation may help overcome bacterial infection. To date, research concerning the interaction of ER stress and MDSCs in infectious disease has mainly focused on Mycobacterium leprae and Mycobacterium tuberculosis infections. ER stress activates MDSCs and mediates immunosuppression during Mycobacterium leprae and Mycobacterium tuberculosis infections Leprosy and tuberculosis are caused by intracellular M. leprae and M. tuberculosis, respectively. Various immune system components such as M1 and M2 macrophages, natural killer (NK) cells, DCs, and diverse subtypes of lymphocytes are involved in these infections. Infection can also trigger the accumulation of MDSCs at inflammatory sites [44-46]. However, M. tuberculosis and M. leprae can escape and evade the host's innate immune system [47,48]. Tuberculoid leprosy (T-lep) is self-limiting with few bacilli. The host response is a Th1 type. Lepromatous leprosy (L-lep) is a progressive form of disease that is characterized by a high bacillary load within macrophages.
The host response to L-lep is a Th2 type [49], with the number of MDSC greater in L-lep patients than in T-lep patients. MDSCs from L-lep patients suppress T-cell proliferation of M. leprae-specific T cells and reduce the production of IFN-γ, which allows bacterial growth and disease progression. Therefore, immunosuppression by MDSCs may worsen M. leprae infection and contribute to the progression of leprosy [44,45,50]. GM-CSF and M-CSF drive the expansion of myeloid immune cells within the bone marrow and spleen. MDSCs can be recruited and activated by many factors, such as the proinflammatory cytokines IL-1β, IL-6, and IFN-γ. ER stress can also activate MDSCs and trigger these cells to produce iNOS, ROS, and Arg-1, that are immune suppressive [51][52][53]. Kelly-Scumpia et al. [54] found an increase in immature myeloid cells displaying a granulocytic MDSC cell-surface phenotype (HLA-DR-CD33+CD15+) and T-cell suppressive activity in the blood of patients with disseminated/progressive leprosy when compared to self-limited T-Lep. In terms of mechanism, ER stress significantly regulates the T cell inhibitory activity of MDSCs. Further, ER stress promotes IL-10 secretion, which contributes to MDSC activity and highlights the role of ER stress and IL-10 in MDSC-mediated effects during human M. leprae infection [55]. Further, MDSC ER stress can be caused by circulating IL-1α, IL-6, and IFN-γ [53] in L-lep and tuberculosis infections. These cytokines also cause ER stress in macrophages, DCs, and T cells in L-lep patients, suggesting ER stress may be another factor contributing to the exacerbation of leprosy and tuberculosis [6,54]. Uncontrolled bacterial growth worsens the ER stress in MDSCs, resulting in increased production of IL-10 and enhanced immunosuppressive activity [5,56]. Taken together, MDSC mediated immune-suppression is a leading cause of M. leprae and tuberculosis infection, with ER stress activating MDSC immunosuppressive activity. Crispr-cas9, ZFNs, and TALENS are new genetic tools [55,57] that can block IRE1α and XBP1 signaling and stabilize the ER of MDSCs. Previous studies have shown that reducing the expression of CHOP in MDSCS can promote immune activity and stimulate T cells [8]. Therefore, targeting the UPR could regain or reduce ER stress in tuberculosis and leprosy, thereby reducing the immunosuppressive activity of MDSCs [58,59]. Breaking ER homeostasis in MDSC may be a potential strategy to combat and eradicate leprosy and tuberculosis. Immunosuppressive effects of ER stress and UPR on MDSCs in cancers MDSC was initially described as immunosuppressive myeloid cells that evade cancer. The MDSCs, which accumulate in tumor-bearing mice and cancer patients, are site-specific inflammatory and immunosuppressive agents that contribute to cancer progression in different cancers. MDSCs accumulated in the TME under chronic inflammation conditions and cancer contributed to the growth of tumors. Furthermore, the population of immunosuppressive MDSCs decreased after radiotherapy. Thus, preventing MDSC development and/or interfering with their immunosuppressive functions in cancer could reduce immunosuppression, thereby increasing antitumor immunity. In this part, we will discuss ER stressactivated MDSCs and enhanced immunosuppression, which may serve as targets in immunotherapy for different kinds of tumors. 
ER stress and MDSCs as therapeutic targets for ulcerative colitis and colorectal cancer Colorectal cancer (CRC) is one of the primary causes of cancer-related deaths globally, with more than 2.2 million new cases projected by 2030 [60]. Ulcerative colitis is a chronic inflammation of the colon and a complex, recurrent, and remitting form of intestinal inflammation [61]. In ulcerative colitis and CRC, MDSCs are a main component of the inflammatory microenvironment, infiltrating the intraepithelial and lamina propria layers. When activated, MDSCs reduce T cell immune function and recruit tumor-associated macrophages (TAMs) that down-regulate immune activity in the colonic epithelial barrier [62]. Furthermore, MDSCs secrete M-CSF and GM-CSF that recruit tumor-associated neutrophils (TANs) and TAMs in inflamed colon intraepithelial, lamina propria, and cancerous tissues [63]. In addition, studies have shown that UPR activation and ER stress are involved in colitis and tumorigenesis. During colitis, the stable status of the ER protein-folding environment is disrupted by physiologic, pathologic, or environmental injury, which results in the accumulation of misfolded proteins. When the accumulation of misfolded proteins exceeds the tolerance threshold, the ER-resident sensors trigger the UPR, resulting in transcriptionally enhanced ER protein folding capacity [64]. Colonic mucosal cells undergo apoptosis if these corrections are insufficient. However, if cells successfully limit the pro-apoptotic UPR, ER stress can promote tumorigenesis [65]. Thus, continuous activation of robust ER stress sensors can promote tumorigenesis. Studies have shown that controlling robust ER stress is an effective therapeutic strategy for the prevention of colitis and tumorigenesis [5]. Wang et al. demonstrated that M10, a new derivative of myricetin, inhibits ulcerative colitis and colorectal neoplasms by weakening gross ER stress, inhibiting the ER stress-induced UPR pathway through direct regulation of mTOR expression. Therefore, M10 may be a promising drug for the chemoprevention of colitis and tumorigenesis [66]. In addition, MDSCs may be an effective therapeutic target in that emerging evidence suggests critical roles for GM-CSF and M-CSF in chronic, relapsing, and complex inflammatory states in colonic tissues [67]. MDSCs can also produce IL-6 and TNF-ɑ, which are involved in IL-6/STAT3 signaling, playing an immunosuppressive role in the tumor microenvironment [68,69]. It has been reported that naringin inhibits MDSCs, proinflammatory mediators (GM-CSF/M-CSF, IL-6, TNF-ɑ), and the NF-κB/IL-6/STAT3 cascade in colorectal tissue, reducing the severity of colitis and colorectal adenoma. Naringin inhibits the ER transmembrane proteins (GRP78, ATF6, and IRE1), as well as activated PERK and phosphorylated eIF-2α, in colorectal mucosal cells. Further, naringin prevents the secretion of the ATG3, ATG5, ATG7, ATG12, ATG16, and ATG16L1 complex, thus preventing the occurrence of colitis and colorectal cancer [70]. ER stress, MDSCs, and breast cancer Triple-negative breast cancer (TNBC) accounts for 15.0−25.0% of all breast cancers. TNBC cells do not express the estrogen receptor (ER), the progesterone receptor (PR), or the human epidermal growth factor receptor-2 (HER-2). TNBC is an early-onset and highly aggressive malignant tumor with a poor prognosis and a high distant metastasis rate [71,72].
Activated PERK, one of the ER-membrane-resident sensors, can phosphorylate eIF2 and induce a comprehensive stress response that results in global translation inhibition and selective translation of repair proteins [73,74]. Overexpression of P-EIF2A has been associated with tumor progression [75,76] and a protective clinical effect [77,78]. Thus the effect of the tumor PERK/P/EIF2A signaling pathway is controversial. In breast cancer (BC), P-EIF2A has been reported to predict disease-free survival in patients with TNBC [79]. Zou et al. reported EIF2A mRNA levels to be negatively associated with TNBC relapse-free survival and negatively related to metastasis. P-EIF2A promotes the activity of tumor-infiltrating T cells and inhibits the activity of MDSCs by inhibiting PDL1 and CXCL5, thereby regulating TNBC metastasis. The PERK/EIF2A pathway also regulates carboplatin resistance in highly metastatic TNBC. IRE1α, one of the ER-membrane-resident sensors, remodels the TME in TNBC by increasing pericyte levels and vascular normalization while decreasing CAFs and MDSCs [80]. Matrix cellular proteins, a group of extracellular matrix (ECM) proteins, are transducers and modulators of the interaction between cells and the extracellular microenvironment. These proteins include osteopontin (OPN), thrombospondins (TSPs), osteonectin, tenascins, periostin (POSTN), and CCNs [81]. POSTN is highly expressed in many tissues but is significantly associated with the degree of tumor malignancy, metastasis, hyperplasia, and fibrosis of inflammatory tissue. POSTN is expected to become a detection index for diagnosing and treating of many tumors and inflammatory diseases. It has recently been reported that lung fibroblast-derived POSTN is an important limiting factor of metastatic breast cancer cells within the lung by promoting of the self-renewal of breast cancer stem cells [82]. Furthermore, POSTN is reported to be associated with a poor prognosis for basal-like breast cancer, with POSTN-integrin ɑvβ3 signaling required to establish a micro-environmental niche for breast cancer stem cells [83]. It is interesting to note that POSTN can also be produced by bone MDSC cells and their derived cells, which indicates that POSTN promotes MDSC-mediated pulmonary pre-metastatic niche formation. Breast cancer metastasis could occur through the accumulation of MDSCs within the lungs. These results provide new and promising avenues to develop practical therapeutic approaches for breast cancer treatment, especially TNBC. ER stress, a key regulator of LOX-1+ PMN-MDSCs derived from nasopharyngeal carcinoma survivors with chronic hepatitis B virus Lectin-type oxidized LDL receptor-1 (LOX-1) is a specific marker for human PMN-MDSCs [7] that can separate and identify PMN-MDSC cells. CD15 is also a marker for neutrophils as such LOX-1+ and CD15+ cells in human blood are PMN-MDSC. In contrast, CD15+ but LOX-1− cells are normal neutrophils (PMNs) [29,57,84]. survivors with CHB. These observations suggest that ER stress may affect the survival of LOX-1+ PMN-MDSCs and disease progression. LOX-1+ PMN-MDSCs from NPC survivors with CHB had higher NOX2 mRNA levels, a critical ROS-related gene, suggesting that ROS mediates the immune suppressive effect of LOX-1+ PMN-MDSCs. These results suggest that PMN-MDSCs play an immunosuppressive role in the host immune response to CHB through ER stress/ROS effects [85]. 
ER stress may be the key regulator of PMN-MDSCs in hepatocellular carcinoma patients PMN-MDSCs (LOX-1+CD15+) is significantly up-regulated in the peripheral blood of hepatocellular carcinoma (HCC) patients compared to healthy controls. T cell activation is significantly suppressed by LOX-1+CD15+ PMN-MDSCs, inhibiting CD4+ and CD8+ T cell proliferation as well as IFN-γ production. This immune suppression is mediated by the cellular production of ROS and by the activation of arginase I. Moreover, LOX-1 expression and suppressive function are mediated by ER stress that increases the expression of XBP1, ATF3, and CHOP [86]. These results suggest ER stress may be an essential regulator of PMN-MDSC in HCC. In addition, PMN-MDSCs of cancer patients exhibit signs of an ER stress response [29,87], with some myeloid cells in peripheral blood exhibiting ER stress. These peripheral blood cells were distant from the tumor site, which suggests tumor-induced ER stress in myeloid cells in a remote manner. However, neither serum nor the TCM from HCC patients induced healthy donor CD15 + cells to differentiate into PMN-MDSC, nor was ER stress-induced. The underlying mechanism for this phenomenon warrants further investigation [57]. ER stress may mediate prostate cancer tumorigenesis by regulation of MDSC immune suppression Prostate cancer is the most common urological malignancy in men, with three-quarters of cases in patients over 65. Compared with the United States, prostate cancer incidence and mortality are relatively low in China, although incidence and mortality have increased in recent years [88,89]. The use of anti-CTLA-4 as an immune checkpoint blockade for prostate cancer treatment has not been clinically successful [90][91][92], which may be due to TME immunosuppression [93]. Myeloid-derived cells are essential components of the TME and may contribute to treatment failure in prostate cancer patients. Clinical studies have demonstrated increased numbers of infiltrating macrophages in primary prostate tumors, which may be associated with failure of androgen ablation [11]. The proportion of M-MDSCs in the peripheral blood of prostate cancer patients is significantly increased compared to age-matched controls [94,95]. Mechanistically, T cell-suppressed proliferation, and high IL-10 levels have been confirmed in vitro [96]. Therefore, targeting MDSCs or regulating their recruitment has the potential for immunotherapeutic treatment of prostate cancer patients [97]. Recently, ER stress has been shown to be transmitted from tumor cells to myeloid cells. When cultured in the conditioned medium of ER-stressed tumor cells, macrophages also demonstrate an ER stress response with Hspa5 and XBP1 up-regulated. The proliferation of prostate cancer cell lines can be regulated by XBP1s [98], but how XBP1s regulate MDSCs is unknown and requires future investigation. ER stress-sensitive factor, XBP1, can induce the expression of Arg1 and Nos2, which are essential regulators of the immunosuppressive function of MDSCs [99]. ER stress may play an important role in prostate cancer, mediating tumorigenesis and tumor development by regulating the immunosuppressive phenotype of prostate cancer MDSCs [100,101]. ER stress and MDSCs as therapeutic targets in cancer and inflammatory disease MDSCs play an essential role in tumor immunosuppression. More and more studies have shown that MDSCs are closely related to the effect of tumor immunotherapy. 
Therefore, it is of great significance to reverse tumor immunosuppression by inhibiting the function of MDSCs. Tumor-derived ER stress in MDSCs mediates their immunosuppressive activity. Therefore, researchers have proposed that ER stress-related proteins in MDSCs could be potential therapeutic targets in infectious diseases and cancers. ERK, AKT, and STAT3 decreased in periostin (POSTN)-deficient MDSCs. The pro-metastatic role of POSTN is limited to ER-negative breast cancer patients, which indicates that POSTN is a potential target for the prevention and treatment of breast tumor metastasis [91]. M10, a novel derivative of myricetin, prevents ER stress-induced autophagy in inflamed colonic mucosal cells by targeting the NF-κB/IL-6/STAT3 pathway, which supports M10 as a promising regimen for the chemoprevention of colitis and colorectal cancer [66]. Insights from these studies might substantiate PMN-MDSCs as a potential therapeutic target for lung carcinoma [97], hepatocellular carcinoma 6, and chronic hepatitis B (CHB) with nasopharyngeal carcinoma (NPC) [85]. Further research is warranted to confirm ER stress-related proteins, including PERK, CHOP, IRE1α, and XBP1s, as potential therapeutic targets in cancers [102-104]. ER stress sensors or signals triggering MDSC activation could be investigated as therapeutic targets in cancers and infectious or inflammatory diseases, as shown in Table 3 and Fig. 2. Future perspectives The pathologic microenvironment associated with inflammation and tumors is characterized by hypoxia, nutrient deprivation, low pH, and free radicals that can trigger ER stress and the accumulation of MDSCs, resulting in immunosuppression. Reactive oxygen species and lipids are significantly elevated in MDSCs and are the main causes of the ER stress response. Inhibition of MDSCs has been shown to be a potential and promising cancer therapy, based on their complex role in promoting tumorigenesis, tumor development, and metastasis in the tumor microenvironment. Over the past few years, many preclinical studies have focused on exploring drugs, such as sunitinib [105] and phosphodiesterase-5 inhibitors [106], to inhibit their immunosuppressive activity. New strategies that remodel tumor-associated myeloid cells into mature immune cells will greatly improve the efficacy of tumor-targeted therapies.
5,962.6
2023-01-02T00:00:00.000
[ "Medicine", "Biology" ]
Distinct Biochemical Activities of Eyes absent During Drosophila Eye Development Eyes absent (Eya) is a highly conserved transcriptional coactivator and protein phosphatase that plays vital roles in multiple developmental processes from Drosophila to humans. Eya proteins contain a PST (Proline-Serine-Threonine)-rich transactivation domain, a threonine phosphatase motif (TPM), and a tyrosine protein phosphatase domain. Using a genomic rescue system, we find that the PST domain is essential for Eya activity and Dac expression, and the TPM is required for full Eya function. We also find that the threonine phosphatase activity plays only a minor role during Drosophila eye development and the primary function of the PST and TPM domains is transactivation that can be largely substituted by the heterologous activation domain VP16. Along with our previous results that the tyrosine phosphatase activity of Eya is dispensable for normal Eya function in eye formation, we demonstrate that a primary function of Eya during Drosophila eye development is as a transcriptional coactivator. Moreover, the PST/TPM and the threonine phosphatase activity are not required for in vitro interaction between retinal determination factors. Finally, this work is the first report of an Eya-Ey physical interaction. These findings are particularly important because they highlight the need for an in vivo approach that accurately dissects protein function. TPM, the PST domain alone, and the TPM alone, respectively. Eya and So bind to each other through the ED of Eya and the Six domain of So 8,11 to form a transcriptional activator complex. In addition, a series of Drosophila S2 cell-based transcriptional activation assays defined the PST/TPM domain as essential for Eya/So-mediated transactivation of a reporter. UAS-eya transgenes that lack both the PST-rich region and the TPM have drastically reduced ectopic eye-inducing capacity, with induction efficiency dropping from 98% to 1.5% 10 . In addition to regulating transcription, Eya has predicted tyrosine and threonine phosphatase activities in the ED and TPM, respectively [17][18][19][22][23][24] . In Drosophila, tyrosine phosphatase-dead mutations lead to strongly reduced activities in ectopic eye induction and in vivo genetic rescue using the GAL4-UAS system 18,19,24 . In contrast to these studies, our previous findings revealed that eya genomic rescue (GR) constructs carrying mutations in two key tyrosine phosphatase active-site residues fully restore viability as well as eye formation and function in an eya null mutant background 25 . In mouse and Drosophila, the threonine phosphatase activity has been suggested to play an important role in the innate immune system 17 and a recent study using the GAL4-UAS system reported that Eya threonine phosphatase activity is not required for normal Drosophila eye development 24 . Although previous cell culture and in vivo GAL4-UAS based expression studies have suggested specific functions for conserved Eya domains, we have shown that such assays may not always be reliable. In particular, we have developed a genomic rescue (GR) system that provides an accurate method for assessing the functional significance of individual protein domains in vivo 25,26 . In this study, we have used the GR strategy to conduct functional studies of Eya domains during Drosophila eye development. 
Interestingly, we found that a major function of Eya is transcriptional coactivation, while the threonine phosphatase activity plays only a minor role during Drosophila development. Results The threonine phosphatase activity of Eya plays only a minor role in normal Drosophila development. To study eya function in vivo, we introduced a series of eya genomic rescue constructs (eyaGR) via site-specific transgenesis 27,28 to investigate the transcriptional activation and threonine phosphatase activity of Eya. A wild-type eya genomic rescue construct (eya + GR) is known to fully rescue viability and eye formation in an eya null mutant background, therefore serving as a positive control throughout our studies 25,26,29 . The eya Y4 GR construct has tyrosine-to-alanine substitutions for four key tyrosine residues known to be required for threonine phosphatase activity 17,24 (Fig. 1a). The eya ΔTPM GR construct has the entire TPM deleted but leaves the PST domain intact. Surprisingly, a single copy of each construct is able to substantially rescue eya 2 or eya cliIID mutant phenotypes, restoring viability and rescuing eye size to ~90% (Y4) or ~60% (ΔTPM) of wild-type, albeit with some mild disorganization (Fig. 1d,e and Fig. S1). While there appears to be a largely normal complement and arrangement of rhabdomeres in ommatidia of eya 2 ; eya Y4 GR/+ flies (Fig. 2e), eye discs from late third instar larvae (Fig. 2h) and 24 hrs after puparium formation (Fig. 2j) show defects in the number of cone cells and/or ommatidial fusion. Larval eye discs from eya Y4 GR and eya ΔTPM GR rescued animals are smaller and show a reduction in Eya and So staining anterior to and within the MF, while expression levels are normal posteriorly (Fig. 2b',b'',c',c''), suggesting that the threonine phosphatase activity does play a role during Drosophila eye development, but this role is relatively minor as the eya Y4 GR construct can restore up to 90% of the eye size. The expression of the core RD genes Dachshund (Dac) and Eyeless (Ey) appears similar in eye discs of positive control and eya Y4 GR-rescued larvae (Figs 2a-c and 3a,b). In addition, we found no difference in photoreceptor axon projections between wild-type and eya 2 ; eya Y4 GR/+ flies (Fig. S2c), which show a regular pattern of projections in the lamina of the optic lobe. eya plays an important role in the developmental events associated with morphogenetic furrow movement. Specifically, clonal analysis has shown that eya is required for the initiation and propagation of the MF and for regulation of the cell cycle 4,8,30,31 . Since loss of threonine phosphatase activity leads to a reduction of Eya expression anterior to and within the MF, we analyzed the effects of eya Y4 on both G1-arrest and induction of the proneural gene atonal (ato). We used the cell cycle marker Cyclin B to monitor G1 arrest. Normally, Cyclin B is exclusively expressed in cells in the G2 and M phases 32 . Immunohistochemistry shows that eya Y4 GR rescued animals have largely normal Cyclin B and Ato expression patterns (Fig. 3c-f), implying that the threonine phosphatase-inactive mutations do not adversely affect G1 arrest and initiation of retinal differentiation. This is not surprising since the loss of retinal cells in flies rescued with a single copy of eya Y4 GR is relatively mild; therefore, strong alterations in the expression of markers of cell cycle progression or photoreceptor differentiation are not expected.
The Eya threonine phosphatase-inactive mutation does not abolish interaction of Eya with Ey, So, or Dac. Phosphorylation is well known in other systems to regulate protein complex formation and protein stability via ubiquitin-mediated degradation 33,34 . Accordingly, we hypothesized that one or more of the RD proteins are direct substrates for Eya threonine phosphatase and that loss of this activity either disrupts the formation of RD protein complexes and/or destabilizes the RD proteins themselves. Furthermore, this effect may be specific to complexes involving Eyeless (Ey), thereby limiting effects anterior to the MF where Ey is expressed. We tested this hypothesis by doing co-immunoprecipitation (co-IP) in S2 cultured cells transiently transfected with epitope-tagged Eya, Ey, So, and Dac expression constructs. Similar amounts of RD proteins are expressed in transfected cells with or without Eya threonine phosphatase activity, and neither the Y4 mutation nor the TPM deletion affects Eya protein expression levels in S2 cells (data not shown). As shown in Fig. 4, Eya Y4 and Eya ΔTPM co-IP with Ey, So, and Dac with no obvious change in efficiency compared with wild-type Eya. Notably, this is the first report that Eya can bind to Ey. Previous studies also found that both Eya and Ey proteins interact with So 8,35 , suggesting Ey, Eya, and So may form a complex to mediate Drosophila eye development. Taken together, these observations suggest that the threonine phosphatase activity of Eya is not essential for interactions with other RD proteins. The threonine phosphatase motif of Eya has transcriptional activation function. In addition to threonine phosphatase activity, previous cell culture transactivation reporter assays showed that the TPM has transcriptional activation function 10 . To test the hypothesis that this function is biologically relevant in vivo, we replaced only the TPM with VP16, a well-known heterologous transcriptional activation domain (Chasman et al., 1989). The resulting construct, eya ΔTPM+VP16 GR, was tested for rescue activity. Remarkably, while eya ΔTPM can restore about 60% of eye size, VP16 is able to largely complement loss of the TPM and restore eye development to approximately 90% of wild-type, both in eya 2 (Fig. 1a,e,f) and eya cliIID mutant backgrounds (Fig. S3a). The external eye morphology of eya ΔTPM+VP16 GR rescued eyes shows only minor disorganization compared to eya ΔTPM GR. As shown in Fig. 2f, loss of the TPM causes abnormal ommatidial morphology in adult compound eyes. Flies rescued by one copy of eya ΔTPM GR have a reduced number and unusual arrangement of rhabdomeres compared with the normal trapezoidal array of photoreceptors in wild-type animals. Tangential sections of eya 2 ; eya ΔTPM+VP16 GR/+ adult eyes reveal ommatidia with the correct number and largely normal arrangement of rhabdomeres (Fig. S3b). Moreover, in contrast to wild-type (Fig. S2a) and eya 2 ; eya ΔTPM+VP16 GR/+ (Fig. S2e) flies, axon terminations in the lamina plexus have irregular gaps and breaks (yellow arrows) in eya ΔTPM GR rescued flies, reminiscent of the photoreceptor axon defects in eya loss-of-function mutants 36 . These observations suggest that a major role of the TPM during Drosophila eye development is to provide transactivation function, that this activity is required for normal ommatidial development and photoreceptor axon projections, and that this function can be largely substituted by the VP16 domain.
The entire PST/TPM domain of Eya is critical for transcriptional activation during eye development. The PST/TPM domain of Eya is critical for transactivation in cell culture reporter assays 10 . In order to characterize the Drosophila Eya transcriptional activity in its native context in vivo, we generated four genomic rescue constructs: eya ΔPST/TPM GR (deletion of the PST/TPM domain), eya ΔPST GR (deletion of the PST domain alone), eya ΔPST/TPM +VP16 GR (substitution of both the PST and TPM domains with the VP16 activation domain) and eya ΔPST+VP16 GR (substitution of the PST domain alone with the VP16 activation domain) (Fig. 1a). We found that eya ΔPST/TPM GR completely fails to rescue eya 2 or eya cliIID mutant phenotypes, even when the transgene is present in two copies ( Fig. 1h and data not shown). We can readily detect the predicted, truncated eya ΔPST/TPM transcript and protein ( Fig. 5a-f) in late second instar eye discs prior to MF initiation, suggesting that although the transgene is expressed, at least initially, the Eya ΔPST/TPM protein is non-functional. While the eya ΔPST/TPM GR construct completely fails to rescue eya 2 mutant animals, the eya ΔPST GR retains slightly more function and can rescue about 5% of normal eye size (Fig. 1j). Previous S2 cell culture studies have suggested that both the PST and TPM domains contribute transcription activation function 10 and our GR data are consistent with these results. In addition to eya ΔTPM+VP16 GR, our other VP16 substitution genomic rescue results also confirm these findings. Specifically, the eya ΔPST/TPM +VP16 GR is sufficient to rescue about 5% of eye size in an eya 2 background (Fig. 1i), similar to that of the eya ΔPST GR construct alone. eya ΔPST+VP16 GR is able to restore eye development to ~30% of wild-type (Fig. 1k). Two copies of eya ΔPST/TPM +VP16 GR or eya ΔPST+VP16 GR consistently rescue eya 2 eye size better than one copy (Fig. S4). Moreover, eya ΔPST/TPM +VP16 GR, eya ΔPST GR, and eya ΔPST+VP16 GR fail to rescue eya cliIID mutants. These functional dissection studies reveal that the transactivation domain PST/TPM is essential for eye formation and viability in Drosophila. In addition, the PST domain is likely playing a more significant role than the TPM during Drosophila development since eya ΔTPM GR rescues 60% of the eye size compared to 5% of the eye size rescued by eya ΔPST GR and eya ΔTPM GR is able to restores viability to eya cliIID null mutants. The PST/TPM domain regulates retinal determination gene expression. Eya can act as a transcriptional coactivator and physically interact with other RD proteins to regulate multiple developmental processes [7][8][9][10]37 . Therefore, we were interested in understanding the role of PST/TPM in RD gene regulation since it is critical for Eya function. Since eya ΔPST/TPM GR fails to rescue the eye phenotype of eya 2 animals and little Eya expression is detected at late third instar (data not shown), we used second instar larvae to assess the function of the PST/TPM when Eya ΔPST/TPM protein is still expressed (Fig. 5f). eya 2 flies rescued with two copies of eya ΔPST/TPM GR show slightly lower Eya expression compared to wild-type or eya GR-rescued animals at 68 hrs after egg laying (AEL) (Fig. 5c-f). We also found that Eya expression in eya 2 ; eya ΔPST/TPM GR eye discs is lower than that of wild-type discs at 56 hrs AEL (Fig. 5g-j). Similar reductions are observed for the expression of the retinal determination protein Dac, a known downstream target of Eya (Fig. 
6a-h). In addition, in eya ΔPST/TPM clones (eya cliIID null clones rescued by a single copy of eya ΔPST/TPM GR) at 72 hrs AEL, Dac expression is reduced while the expression of Eya ΔPST/TPM is normal (Fig. 6i-l, yellow arrows). Taken together, these data imply that the PST/TPM domain of Eya is required for normal Dac expression. Moreover, ey-Gal4 induced So expression in eya 2 animals rescued by one copy of eya ΔPST/TPM GR partially restores Dac expression (Fig. 7a-d), but has no effect on expression of Eya (Fig. 7e-h). These observations suggest that the PST/TPM positively regulates expression of Dac through the Eya binding partner So. To test if the PST/TPM deletion affects Ey regulation and photoreceptor differentiation, we assayed Ey and Elav expression in eya ΔPST/TPM rescued eya null mutant clones. We found that eya ΔPST/TPM clones show a complete loss of Elav expression, a marker of photoreceptor differentiation 38 , posterior to the MF (Fig. 8a-d). In eya ΔPST/TPM clones posterior to the furrow, we found strong Ey expression (Fig. 8a'-d'), suggesting the PST/TPM domain of Eya is required for Ey suppression. Additionally, eya ΔPST/TPM clones result in the loss of photoreceptor development and black overgrowths in adults (Fig. S5d). Deletion of the PST/TPM does not abolish interactions between Eya and Ey, So, or Dac. So and Dac are known binding partners of Eya 7,8,11 . Since eya ΔPST/TPM GR rescued flies have no eyes, similar to the loss-of-function phenotypes of the core RD genes (ey, so, and dac), we hypothesized that the PST/TPM domain may mediate specific, essential interactions between Eya and Ey, So, or Dac. To test this hypothesis, we carried out co-immunoprecipitation (co-IP) experiments. As shown in Figs 4 and 9, both wild-type and Eya ΔPST/TPM can co-IP with Ey, So, and Dac, suggesting that deletion of the PST/TPM does not abolish the interactions between Eya and these three RD proteins. These observations are consistent with previous findings that Eya-So and Eya-Dac interactions are mediated via the ED of Eya 7,8,11 . The Eya domain that mediates Eya-Ey physical interaction remains to be determined. [Figure legend (co-IP of wild-type and threonine phosphatase-dead Eya): (a,b) Co-immunoprecipitation (co-IP) studies between wild-type and threonine phosphatase-dead Eya (Eya Y4 and Eya ΔTPM) and Ey or So are shown. Flag-tagged Eya was co-expressed with HA-Ey or Myc-So in S2 cells, and co-IP with anti-Flag beads followed by western blotting (WB) was performed. Ey, So, and Eya (wild-type and mutants) were detected by anti-HA, anti-Myc, and anti-Flag antibodies, respectively. Lanes 1, 2, and 3 show that Eya WT, Eya Y4 and Eya ΔTPM can pull down So and Ey, respectively. Empty vector is the negative control.] Discussion In this paper we report that loss of threonine phosphatase activity has little effect on Drosophila eye development, since eye development in eya Y4 GR rescued flies proceeds relatively normally. On the other hand, the essential function of the PST and the threonine phosphatase motif (TPM) is transcriptional activation that can be largely complemented by the heterologous activation domain VP16. Together with our findings that the PST and TPM are required for normal Drosophila eye development, we conclude that a major function of Eya during Drosophila eye development is as a transcriptional coactivator. Although the tyrosine phosphatase activity of the Eya Domain (ED) is dispensable for Eya function 25 , the specific role the ED plays in vivo has not been reported. The retinal determination (RD) network is a small group of highly conserved transcriptional regulators that are both necessary for eye development and sufficient to trigger ectopic eye formation when overexpressed in other imaginal discs [1][2][3][4][5][6][7][8]14,21,39 . As a vital member of the RD network, a unique feature of the Eya proteins is that they have several distinct biochemical activities. In Drosophila, previous cell culture reporter assays and cDNA-based Gal4-UAS genetic rescue studies suggested that the PST-rich region is a transactivation domain and plays a role in ectopic eye induction, while the TPM and ED possess threonine and tyrosine phosphatase activity, respectively 10,18,19,24 . Intriguingly, our results using genomic rescue constructs are consistent with previous studies of the PST/TPM transactivation domain, but are contrary to previous reports that the tyrosine phosphatase domain, but not the threonine phosphatase domain, governs Drosophila eye development. In our work, we have found that both the TPM and PST contribute transcriptional activation for normal eye development. Substituting the heterologous activation domain VP16 for the TPM and PST domains substantially restores Eya function. Two reasons could account for the failure of complete rescue by VP16. First, the TPM or PST may have other, distinct functions. Although we have excluded the possibility that the TPM and PST are required for Eya binding with Ey, So, or Dac in this report, we cannot rule out other possibilities. For example, previous findings identified the PST/TPM domain of Eya as the primary target of Nmo and Abl-mediated phosphorylation in kinase assays 36,40 . Second, there may be insufficient activation function provided by VP16, perhaps due to an inability to make specific contacts with other proteins, or because the fusion proteins do not have the proper conformation to interact properly via other domains. The transcriptional role of Eya has been studied in Drosophila through genetic and/or biochemical interaction with the transcription factors So and Dac 7,8 . In this paper, we further indicate that the PST/TPM domain positively regulates Dac expression and that this regulation may be mediated via So. Moreover, the PST/TPM is required to suppress Ey expression posterior to the furrow. These observations are consistent with previous reports that dac expression requires both so and eya 7,14,39,41 and that both Eya and So are necessary to mediate Ey repression posterior to the MF 42 . Our studies localize these functions of Eya to the PST/TPM domain. Although genetic interactions between Eya and Ey have been widely reported, physical interactions between these two RD proteins have not. In this paper, we report that Eya physically interacts with Ey for the first time. Previous studies also found physical interactions between Eya-So 8 and Ey-So 35 , suggesting that Ey-Eya-So may form a ternary complex. In addition, previous findings show that ectopic eye induction by Ey requires the presence of Eya and So 43 , and the expression patterns of all three genes overlap extensively and are nearly identical anterior to the MF 43 . Moreover, misexpression of Eya and So induces the formation of ectopic eyes; however, this effect is lost in an ey mutant background 8,21 . Finally, ey is a direct target of Eya and So 11,44 and vice versa: eya and so are direct targets of Ey 45,46 .
Since Groucho is a repressor of the Eya-So complex 10 , Ey may act as an activator of Eya-So to increase transcriptional output of Dac. Consistent with this hypothesis, loss of ey, eya, or so function causes loss of Dac expression, suggesting that Ey, So, and Eya are primary regulators of Dac 7,8,47 . Similar relationships have been observed with Pax6, Eya1/2 and Six3, mouse orthologs of ey, eya, and so, respectively. Specifically, mouse Pax6 mutants have reduced levels of Eya1 and Eya2 in the optic vesicle and overlying ectoderm 48,49 and Pax6 induces expression of Six3 when ectopically expressed in mice 50 . In addition, we used STRING 51 , a database of known and predicted protein interactions, to predict protein-protein interactions for Ey, Eya and So (an illustrative query sketch is given at the end of this passage). As expected, we found equally high associations for all three pairs of complexes (Fig. S6), providing further evidence of strong interactions among these RD proteins, which may act together in a ternary complex. In addition, our genomic rescue assays show that the threonine phosphatase activity is largely but not entirely dispensable for Drosophila eye development. Our threonine phosphatase-inactive GRs can robustly rescue eye formation in eya null mutants, but the rescued eyes show disorganized external and internal morphology compared with wild-type rescue controls. This result is in contrast to another report based on the GAL4-UAS system that finds the threonine phosphatase activity of Eya to be dispensable during eye development 24 . The reason for this difference is that our GR system offers higher resolution, thereby allowing detection of more subtle defects in morphology, whereas the GAL4-UAS system is a less accurate approach. In particular, Liu et al. did in fact observe a disorganized eye phenotype in eya 2 flies rescued by UAS-eya Y4 . However, this phenotype appeared similar to the imperfect rescue achieved with the wild-type UAS-eya transgene. For this reason, they could not uncover the requirement for the threonine phosphatase activity during differentiation. This report highlights the need for careful interpretation of results based on the GAL4-UAS system and the superior sensitivity of the GR method. Although the threonine phosphatase activity of Eya plays only a minor role during eye development, it has been reported to be involved in the innate immune response in both Drosophila and mouse 17,24 . In summary, we have shown that both the transcriptional activation and threonine phosphatase activity of Eya are required for normal Drosophila eye development. However, a primary function of Eya during this process is transcriptional coactivation, while the phosphatase activity plays only a minor role. [Figure legend (RT-PCR, western blot, and Eya staining panels): (a) Lanes 1-3 show RT-PCR on RNA prepared from eye discs 68 hrs after egg laying (AEL). Lane 1: wild-type; Lane 2: eya 2 ; eya ΔPST/TPM GR; Lane 3: eya 2 ; Lane 4: water. A truncated ΔPST/TPM transcript is readily detected (Lane 2). (b) An anti-Eya western blot on extracts prepared from 68 hrs AEL eye discs (n = 40/lane) from either eya cliIID /CyO; eya + GR/+ or eya cliIID /CyO; eya ΔPST/TPM GR/+ animals shows a readily detectable, truncated ΔPST/TPM protein (*). Heterozygous eya cliIID animals were used to obtain enough tissue for the experiment. The western blot presented in (b) is cropped to improve clarity; the full-length blot is presented in Supplementary Fig. S10. (c-f) Eye discs prepared from larvae 68 hrs AEL are stained for Eya expression. (g-j) Eya staining of 56 hrs AEL eye discs.]
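Since the passage above describes looking up predicted Ey-Eya-So associations in STRING, a minimal query sketch follows. This is purely illustrative and not the authors' procedure: the REST endpoint, parameter names, gene identifiers, and column names are assumptions based on the public STRING web API and may need adjustment.

import requests

genes = ["ey", "eya", "so"]  # Drosophila gene symbols (assumed identifiers)
url = "https://string-db.org/api/tsv/network"
params = {
    "identifiers": "\r".join(genes),  # STRING expects a carriage-return-separated list
    "species": 7227,                  # NCBI taxonomy ID for Drosophila melanogaster
}
resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()

# Each returned row describes one predicted interaction with a combined confidence score.
lines = resp.text.splitlines()
header = lines[0].split("\t")
for row in lines[1:]:
    rec = dict(zip(header, row.split("\t")))
    print(rec.get("preferredName_A"), "-", rec.get("preferredName_B"), "score:", rec.get("score"))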
Our study provides an accurate approach to assess the functional significance of individual protein domains in vivo, highlighting the importance of the transactivation function of Eya during Drosophila development. As Eya is conserved and plays important roles in retinal development throughout the metazoa, the underlying mechanisms of Eya function are likely to be conserved in vertebrates as well. Methods Fly strains and maintenance. All flies were maintained with standard corn meal and yeast extract medium at 25 °C. Canton-S was used as a wild-type control. Heat shocks were performed at 37 °C as described previously 52 . To test the function of the mutant eyaGR during eye development, we crossed transgenes into the following mutant backgrounds: eya 2 , which completely lack eyes due to a deletion of an enhancer required for eya expression during eye development 4 , and eya cliIID , which is a null allele caused by a premature stop codon that causes recessive embryonic lethality 53 . Wild-type clones and eya ΔPST/TPM clones were generated by crossing w/Y; FRT40A and w/Y; eya clillD FRT40A/CyO; eya ΔPST/TPM GR with ywhs-flp; w+ ubiGFP, FRT40A animals, respectively. Recombineering-induced mutagenesis of eya + GR and Drosophila transgenesis. A two-step recombineering method was used to create the Y4, ΔTPM, ΔTPM+ VP16, ΔPST/TPM, ΔPST/TPM+ VP16, ΔPST and ΔPST+ VP16 mutations in the eya + GR construct as described previously 54 . Recombineering products were verified by DNA sequencing and restriction enzyme fingerprint digestion prior to transgenesis. Constructs were inserted into the attP2 docking site on the third chromosome using PhiC31-mediated transgenesis and site-specific integration was confirmed by genomic PCR with attP/attB primers 28 . Transgenic flies were confirmed by genomic DNA PCR sequencing. Primer sequences are available on request. Construction of cell culture expression plasmids. We used the Q5 Site-Directed Mutagenesis Kit (NEB) to introduce a series of mutations in cell culture expression plasmids which were confirmed by DNA sequencing. These mutations include: pMT-Flag-Eya Y4 , pMT-Flag-Eya ΔTPM , pMT-Flag-Eya ΔPST/TPM and pMT-HA-Dac. pAHW-Ey was generated from destination vector pAHW and pUAST-Ey (a gift from Dr. Rui Chen, Houston, TX) according to the Gateway protocol provided by the Drosophila Genomics Resource Center. pMT-Flag-Eya, pMT-Myc-So, pMT-dac, and pAHW were kindly provided by Dr. Ilaria Rebay (Chicago, IL). Primers used in this report are listed in Table S1. Co-IP and western blots. Transfected cells were lysed by rocking at 4 °C for 30 min in Pierce IP lysis buffer (Thermo Fisher Scientific) with a Roche Complete, Mini, EDTA-free protease inhibitor cocktail tablet. The lysates were subjected to immunoprecipitation with anti-Flag-conjugated agarose beads (Sigma) for 2 h at 4 °C. After washing three times with lysis buffer, immunoprecipitates were boiled in 4× NuPAGE LDS sample buffer (Novex), and western blotting was carried out according to the NuPAGE electrophoresis (Novex) protocol with rabbit anti-Flag (1:1000, Sigma), rabbit anti-MYC (1:100, Santa Cruz Biotechnology), and rabbit anti-HA (1:200, Santa Cruz Biotechnology) antibodies. For tissue preparation, 68 hrs AEL eye discs (n = 40) were collected in cold RIPA lysis buffer (Thermo Fisher Scientific). After centrifuge at 20000 g for 10 min at 4 °C, the supernatant was transferred to a new tube and ready for western blot analysis. Histology and immunohistochemistry. 
Staining of eye discs and imaging of the adult eye were conducted as described previously 42 . Immunohistochemistry on 48 hr pupal eye discs was performed, and tangential sections of adult eyes were generated, as previously described 55 . For antibodies used, please refer to Table S2. RT-PCR. RNA was extracted from 56 hrs AEL eye discs using the PureLink RNA Mini Kit (Ambion). Reverse transcription was performed according to the instructions of the SuperScript One-Step RT-PCR kit (Invitrogen).
6,284.6
2016-03-16T00:00:00.000
[ "Biology" ]
Targeting triple‐negative breast cancer with an aptamer‐functionalized nanoformulation: a synergistic treatment that combines photodynamic and bioreductive therapies Background Areas of hypoxia are often found in triple-negative breast cancer (TNBC); it is thus more difficult to treat than other types of breast cancer and may require combination therapies. A new strategy that combined bioreductive therapy with photodynamic therapy (PDT) was developed herein to improve the efficacy of cancer treatment. Our design exploited the ability of protoporphyrin IX (PpIX) molecules to react with and consume O2 at the tumor site, which led to the production of cytotoxic reactive oxygen species (ROS). The resulting low microenvironmental oxygen levels enabled activation of a bioreductive prodrug, tirapazamine (TPZ), into a toxic radical. The TPZ radical not only eradicated hypoxic tumor cells, but also promoted the therapeutic efficacy of PDT. Results To achieve the co-delivery of PpIX and TPZ for advanced breast cancer therapy, thin-shell hollow mesoporous Ia3d silica nanoparticles, designated as MMT-2, were employed herein. This nanocarrier, designed to target the human breast cancer cell line MDA-MB-231, was functionalized with PpIX and a DNA aptamer (LXL-1) and loaded with TPZ, resulting in the formation of the TPZ@LXL-1-PpIX-MMT-2 nanoVector. A series of studies confirmed that our nanoVectors (TPZ@LXL-1-PpIX-MMT-2) facilitated in vitro and in vivo targeting, and significantly reduced tumor volume in a xenograft mouse model. Histological analysis also revealed that this nanoVector killed tumor cells in hypoxic regions efficiently. Conclusions Taken together, the synergism and efficacy of this new therapeutic design were confirmed. Therefore, we concluded that this new therapeutic strategy, which exploited a complementary combination of PpIX and TPZ, functioned well in both normoxia and hypoxia, and is a promising medical procedure for effective treatment of TNBC. presence of high levels of specific reductases, to develop new therapeutic strategies. Rapidly growing tumor cells are often exposed to hypoxia, a common sign of stress. Tumor cells may multiply farther away from the blood supply, which results in a relatively low oxygen tension of <8 mm Hg (1%) compared with a normal blood oxygen pressure of 70 mm Hg or 9.5% [14]. Photodynamic therapy (PDT), one of the clinically approved treatments, is a minimally invasive approach for treating various cancers [15]. The mechanism of PDT is based on the activation of photosensitizers (PSs) to the excited singlet state after visible light absorption, followed by intersystem crossing to the excited triplet state [16]. The excited PSs undergo photochemical reactions with oxygen (O2) to form cytotoxic reactive oxygen species (ROS) [17], which eradicate not only cancer cells but also normal cells such as blood vessel epithelial cells. PDT has been implemented as an anti-breast cancer strategy [18], but the hypoxic tumor microenvironment in tumorous breast tissues often impedes the utility of PSs, such as PpIX, and causes undesirable consequences, such as angiogenesis, low cancer cure rates, and increased likelihood of tumor recurrence [19]. In other words, PDT alone may enable a blockade of nutrients and oxygen in the cancerous areas, resulting in the killing of cancerous cells, but the formation of hypoxic areas somehow alters cancer cell metabolism and may thereby contribute to therapy resistance [20].
Therefore, it is clear that tumor hypoxia remains one of the greatest challenges in treating solid tumors, because cancer cells in such regions are a potent barrier to effective radiation therapy and immunotherapy [21]. Since oxygen consumption is a limiting factor for PDT [22], several strategies have been developed to improve its therapeutic efficacy by increasing radical formation. Xia and co-workers [23] introduced oxygen-independent free radicals produced by a polymerization initiator system to destroy hypoxic cancer cells. Additionally, bioreductive drugs (BD) and hypoxia-activated prodrugs (HAP) have also been pursued as alternatives to eliminate hypoxic cancer cells. It is known that both BD and HAP are inactive but can be converted into potent toxins under conditions of either low oxygen tension or in the presence of high levels of specific reductases [24,25]. For example, the cytotoxicity of the drugs RB-6145, SR-4233 (TPZ), and EO9 (Apaziquone) used in a hypoxic environment was approximately 50-200 fold higher than that in an aerobic environment [26]. TPZ belongs to a class of cytotoxic drugs with selective toxicity towards hypoxic mammalian cells; it can be activated by NADPH:cytochrome c (P450) reductase to form toxic hydroxyl and benzotriazinyl radicals, followed by the generation of ROS that damage DNA when a cell is deprived of oxygen [27]. TPZ has been evaluated in clinical trials in non-small cell lung cancer, head and neck cancer, cervical cancer, and metastatic melanoma [25]. Regrettably, it did not show results as satisfactory as originally anticipated in clinical trials due to low cellular uptake efficiency, unsatisfactory pharmacokinetics, and adverse side effects [25]. Based on the hypoxic tendency of TNBC and the complementary functions of PS and BD, we were motivated to target normoxic and hypoxic tumor areas by adopting PDT in conjunction with bioreductive therapy to evaluate the synergistic antitumor effects of this new nanoVector-assisted therapeutic strategy for TNBC. Co-delivery of PpIX and TPZ can be realized readily using hollow mesoporous silica nanoparticles (HMSNs), which are an ideal type of drug carrier because of their biocompatibility, degradability, high loading capacity and versatile surface chemistry [28][29][30][31][32]. In this study, MMT-2, a novel type of thin-shell HMSN with three-dimensionally interconnected mesopores that we previously developed [33], was applied to integrate the therapeutic utilities of PpIX and TPZ and the targeting capability of the DNA aptamer LXL-1 for the TNBC cell line MDA-MB-231. PpIX and LXL-1 were covalently attached to the mesopores and the external surface of MMT-2, respectively, and TPZ was finally loaded, largely into the hollow interior of the functionalized MMT-2 (designated LXL-1-PpIX-MMT-2), by impregnation (Scheme 1). In vitro and in vivo studies showed that the obtained nanoVector TPZ@LXL-1-PpIX-MMT-2 accumulated selectively at the tumor site and demonstrated high efficacy in killing MDA-MB-231 cells in normoxic and hypoxic areas under 630 nm irradiation; the synergistic effect was manifested clearly in a complete tumor eradication with enhanced efficiency. Our results confirmed that this reliable nanomedical platform offers a promising strategy for TNBC targeted therapy, and it provides a solution to the limited therapeutic efficacy that is often associated with PDT due to oxygen deprivation in cancer cells.
Characterization of materials The as-synthesized MMT-2 displayed a characteristic XRD (X-ray diffraction) pattern that corresponded to Ia3d symmetry (cf. Figure S1), and the hollow morphology and the thin shell with ordered mesostructure of each nanoparticle could be observed by TEM (transmission electron microscopy) (Figure 1a). The ordered mesostructure was retained after subsequent steps of functionalization, as evidenced by the TEM image of LXL-1-PpIX-MMT-2 (Figure 1b). Successful functionalization of maleimide groups on the external surface and PpIX on the mesopores was confirmed by TGA (thermogravimetric analysis) and FTIR (Fourier-transform infrared spectroscopy), and the final conjugation of the DNA aptamer LXL-1 with the maleimide groups was supported by the change in surface potential. The relative organic content of M-PpIX-MMT-2 (maleimide-functionalized, PpIX-anchored MMT-2) was higher than that of M-MMT-2 (maleimide-functionalized MMT-2), with weight losses of 29.8 wt% for M-PpIX-MMT-2 and 12.7 wt% for M-MMT-2, as revealed by TGA (Figure 1c). Characteristic IR signals confirmed the presence of the maleimide groups and PpIX (Figure 1d). The amount of PpIX in M-PpIX-MMT-2 was estimated to be ~0.343 mole per gram of the sample by analyzing the absorbance at 405 nm [34] in the UV-vis spectrum of the sample. After conjugation of the highly negatively charged DNA aptamer, the zeta potential measured in PBS changed from -19 mV for M-PpIX-MMT-2 to -38 mV for LXL-1-PpIX-MMT-2 (Figure 1e). The hydrodynamic size of LXL-1-PpIX-MMT-2 was around 345.0 ± 3.4 nm as measured by dynamic light scattering (DLS) (Figure 1e). In addition, the cell viability of MMT-2 and LXL-1-MMT-2 on MDA-MB-231 cells is shown in Figure S2. Cellular uptake and in vivo targeting efficiency of LXL-1-PpIX-MMT-2 The cellular uptake and targeting efficiency of LXL-1-PpIX-MMT-2 toward MDA-MB-231 breast cancer cells (a TNBC cell line) were investigated. After treatment with either free PpIX or LXL-1-PpIX-MMT-2 for 5 h, cells were harvested and lysed to determine the uptake of PpIX with a fluorospectrometer. In addition, a laser confocal microscope was used to monitor targeting efficiency. The LXL-1-PpIX-MMT-2-treated group demonstrated four times more PpIX accumulation than the cell group treated with PpIX alone (Figure 2a). The confocal microscopy imaging also indicated that cells treated with LXL-1-PpIX-MMT-2 (for 5 h) showed greater enhanced fluorescence than the free PpIX-treated group (Figure 2b). These results suggested that LXL-1-PpIX-MMT-2 was able to be taken up and accumulated in MDA-MB-231 cells (Figure 2a, b). Next, the targeting efficiency of LXL-1-PpIX-MMT-2 was studied in three breast cancer cell lines. Due to its structural characteristics, the hydrophobic PpIX is expected to undergo hepatic rather than renal excretion [35]. Our LXL-1-PpIX-MMT-2 was designed to increase PpIX accumulation in the tumor but to reduce it in the other organs. In vivo targeting was assessed by IVIS imaging (Figure 2e). Aside from the tumor site, free PpIX accumulated significantly in the liver, lung and kidney. On the other hand, LXL-1-PpIX-MMT-2 was largely retained in the tumor, with less accumulation in the other organs. There was no doubt that our LXL-1-PpIX-MMT-2 helped to deliver the cargo drug to the targeted region, which enhanced the accumulation of the PS in the tumor.
These results showed that LXL-1-PpIX-MMT-2 was able to target MDA-MB-231 xenografts without significant residuals in other organs, which promoted safety and therapeutic efficacy. Effect of oxygen level on photodynamic cytotoxicity Considering the basic principles of PDT, factors including oxygen level, irradiation time, and the concentration of photosensitizer are presumably key elements that decide the efficacy of PDT. Optimal conditions to maximize the effectiveness of PDT were, therefore, investigated. Figure S3 showed the efficiency of PpIX for generating singlet oxygen upon irradiation under different oxygen level. Figure S4 revealed the effect of oxygen level on in vitro production of ROS in cancer cells. In addition, MDA-MB-231 cells were cultured at either a normal oxygen level or under hypoxic conditions (at 5%, 2%, and 1% oxygen) to examine the minimum oxygen level to obtain an acceptable photodynamic therapeutic outcome. Significant photodynamic cytotoxicity of PpIX at oxygen level of 21% and 5% was observed, whereas reduced cytotoxicity was exhibited under hypoxic conditions (2% and 1% oxygen level) (Figure 3). With sufficient oxygen supply, cytotoxicity increased with the elevated amount of photosensitizer and irradiation time. In contrast, under hypoxic conditions, decreased cytotoxicity occurred at a relatively high photosensitizer concentration (0.8 μM of PpIX, equivalent to 0.46 μg/mL) and long irradiation period (4 min). Our results agreed with previous studies [36] indicating that satisfactorily high oxygen level was required to photoactivate PpIX to induce photodynamic cytotoxicity. Based on our results, 0.4 μM PpIX at 2% oxygen was able to eradicate ~50% of treated cells, thus, 0.4 μM PpIX was selected for further study (Figure 3b). Effect of oxygen level on TPZ cytotoxicity We know that high levels of oxygen caused cytotoxic TPZ radicals to become less harmful TPZ molecules. The low oxygen level ranging from 0.3% to 4.2% in tumor microenvironments has been discussed extensively [14], which encouraged us to examine TPZ cytotoxicity under various hypoxic conditions (oxygen levels: 1%, 2%, 5%) and under normoxia (oxygen level: ~21%). As anticipated, the observed cytotoxicity was enhanced with the increase in TPZ concentration and lower oxygen level (Figure 3e). TPZ displayed low toxicity (cell viability ~80% at 60 μM) at a high oxygen level (~21%). With a limited supply of oxygen, improved cytotoxicity was revealed even at a low TPZ concentration (cell viability ~50% at 20 μM). In addition, significant cytotoxicity (<50% cell viability) was found at a higher TPZ concentration (60 μM, equivalent to 11 μg/mL) with low oxygen levels (such as, 5%, 2%, and 1%). Therefore, a TPZ concentration of 60 μM was identified as the optimum effective dosage for further studies. Our observations agreed with the results as reported in previous studies [24,37,38], in which the cytotoxicity of TPZ was inversely associated with oxygen level. Synergistic effect of PDT-and TPZ-based combination therapy The antitumor effects of PDT highly depend on the tumor oxygen level, but are hindered by hypoxic tumor microenvironments. To improve poor effectiveness of PDT associated with tumor hypoxia, we established a new therapeutic approach that combined two cancer drugs that work in a complementary fashion. 
PDT requires sufficient oxygen to generate toxic radicals that are harmful to tumor cells, so bioreductive prodrugs that can be activated to become highly toxic under low-oxygen conditions were a perfect match. Therefore, the combination treatment of PpIX and TPZ was conducted in vitro. As the oxygen level was varied, the cytotoxicity of PpIX and TPZ showed opposite trends (Figure 4a). Cell viability increased from 31% to 88% for the PDT-only group as the oxygen level was lowered from 5% to 1%, whereas cell viability in the TPZ-only group decreased from 42% to 35% over the same range. When free PDT was combined with free BD, elevated cytotoxicity was generally observed for all groups at the different oxygen levels. However, cell viability increased from 4% to 22% with the decrease in oxygen level from 5% to 1%, which indicated that PDT played a dominant role in determining therapeutic efficacy. Moreover, a synergistic effect of this new combination treatment was observed, because CDI (coefficient of drug interaction) values of 0.3, 0.49, and 0.7 were obtained at oxygen levels of 5%, 2%, and 1%, respectively, whereas CDI values greater than one or equal to one would instead indicate antagonistic or additive effects, respectively [39] (Figure 4b) (a worked example of this calculation is given at the end of this section). It was also claimed previously [37,40] that the combination of PDT and HAP prodrugs increased cell cytotoxicity synergistically. Furthermore, it is noteworthy that the combination treatment with the two free drugs exhibited less cytotoxicity at a low oxygen level of 1% compared with higher oxygen levels (5% and 2% O2); however, the combination treatment with our nanoVector, TPZ@LXL-1-PpIX-MMT-2, further decreased cell viability not only at the 1% oxygen level, but also at 2% and 5%, which holds promising potential for use in hypoxic tumor environments. We believe that this was due to the effective targeted delivery of PpIX and TPZ to MDA-MB-231 cells. Although numerous reports have provided evidence of the utility of nanoDrug Delivery Systems in vitro and/or in vivo, limited research has been conducted to evaluate the therapeutic efficacy of nanotherapy with respect to hypoxia formation and cytotoxicity in hypoxic regions. The use of nanoVector-mediated combination therapy based on the complementarity of PDT and BD to enhance therapeutic efficacy against cancer, especially for tumor hypoxia, was addressed herein. We again confirmed that a low oxygen level impaired PDT cytotoxicity but promoted the activity of TPZ (cf. Figures 3 and 4), in agreement with previous findings [25,38,40,41]. TNBC is aggressive, has high mortality, and is difficult to treat [42]. The unsatisfactory therapeutic outcomes of conventional chemotherapy and therapeutic agents, primarily poly(ADP-ribose) polymerase inhibitors and EGFR inhibitors, argue for the development of an effective targeted therapy for this tumor type, which lacks ER, PR, and HER2 expression. A genetic mutation in p53 has recently been revealed in TNBC that could be a therapeutic target [43]. Interestingly, the cytotoxicity of TPZ was observed previously in p53-dysfunctional epidermoid carcinoma (A431) cells [41]. In fact, there are a number of studies that utilized TPZ in combination with cisplatin to treat head and neck cancer, lung cancer, and breast cancer [44]. The utility of our nanoVector, together with findings obtained from previous studies [40,41], validated the effectiveness of PDT/BD combination therapy to eradicate cancer cells with the TP53 mutation, which offers an alternative approach for TNBC treatment.
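The CDI values reported above are given without the underlying formula being spelled out in this text; they are, however, consistent with the commonly used definition CDI = AB / (A x B), where A and B are the cell viabilities (as fractions of untreated control) of the two single treatments and AB is the viability of the combination, with CDI < 1 indicating synergy, CDI = 1 additivity, and CDI > 1 antagonism. The short sketch below, using the viabilities quoted earlier in this paragraph, reproduces the reported values at 5% and 1% O2; treat the formula as an assumption rather than the authors' stated method.

def cdi(viability_a: float, viability_b: float, viability_ab: float) -> float:
    """Coefficient of drug interaction: CDI < 1 synergy, = 1 additive, > 1 antagonism."""
    return viability_ab / (viability_a * viability_b)

# Viabilities quoted in the text for MDA-MB-231 cells (fractions of untreated control).
print(round(cdi(0.31, 0.42, 0.04), 2))  # 5% O2: PDT 31%, TPZ 42%, combination 4%  -> ~0.31 (reported 0.3)
print(round(cdi(0.88, 0.35, 0.22), 2))  # 1% O2: PDT 88%, TPZ 35%, combination 22% -> ~0.71 (reported 0.7)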
Antitumor activity of LXL-1-PpIX-MMT-2 in a MDA-MB-231 xenograft tumor model Conventionally, chemotherapy is often given after surgery because information collected from postsurgical pathology is necessary to determine the optimum regimen for cancer treatment. Today, given the increasing interest in local/regional therapy, localization of the tumor is feasible [45]. Numerous molecular approaches for diagnosis and characterization of breast tumors are available to provide detailed information to predict chemotherapy outcomes before surgery [46]. With the precise localization of tumors, we believe that the direct injection of chemotherapeutic drugs at the site of the tumor will enable the relief of serious systematic toxicity caused by the drugs themselves. Accordingly, intratumoral administration was performed in our in vivo study, which attempted to further improve the survival and quality of life for patients. To evaluate further the therapeutic effectiveness of this novel nanotherapeutic strategy, we used NU/NU female mice (4 wk old) that carried human breast tumor xenografts in two thighs. NanoVectors and free drugs were administrated i.t. as described previously (Figure 5a). Treatment with TPZ@LXL-1-PpIX-MMT-2 demonstrated the best therapeutic efficacy among all experimental animal groups (Figure 5b, c). Additionally, no significant body weight loss was observed during the study period ( Figure 5d). Moreover, as evidenced by H&E staining (Figure 5e), tumors treated with our nanoVectors showed reduced cell density compared with those groups treated with single free drugs (PpIX or TPZ), or a combination of free drugs (PpIX + TPZ). The tumor hypoxic area was also examined by immunohistochemical staining of pimonidazole-protein adducts in hypoxic areas (Figure 5e). The hypoxic zone in the PpIX-treated group was larger than that of the PBS-treated and TPZ-treated groups. TPZ@LXL-1-PpIX-MMT-2 not only restrained the formation of notable hypoxia, but also promoted cell death in the same region as observed by reduced cell density compared with the PBS group. PDT increased hypoxia due to its inherent cytotoxic mechanism, where photosensitizers interacted with oxygen to form ROS that led to the formation of a hypoxic tumor microenvironment. In summary, MMT-2 comprising thin-shell hollow mesoporous silica nanoparticles was selected as the drug vector for PDT/BD combination therapy. The material featured large hollow interior, thin mesoporous shell and uniform particle size, and was promising for the development of drug delivery systems. The interstitial hollow cavities served as depots to accommodate various therapeutic agents, and mesopores enabled therapeutic agents to diffuse through the shell. Furthermore, the surface silanol groups on the mesopores and external surface enabled versatile and selective functionalization for anchoring targeting (e.g. DNA aptamer LXL-1) or functional (e.g. photosensitizer PpIX) moieties. In short, we developed a novel nano combination therapeutic approach that targeted TNBC. The combination of PDT and TPZ eradicated cancer cells synergistically and effectively in both normoxic and hypoxic regions of tumor tissues. This nanotherapy enhanced the retainment of chemotherapy drugs in tumors, yet decreased drug accumulation in the other non-target organs, which suggested it is a promising strategy for treating TNBC. 
Our study not only verified the feasibility of PDT/BD combination therapy in cancer treatment, but also paved the way for the development of a therapeutic strategy for malignant neoplasms in hypoxic regions. Conclusion Given the lack of effective treatments for TNBC, numerous efforts have been devoted in the past to augment therapeutic opportunities for TNBC patients. The phase III IMpassion130 trial using chemotherapy plus atezolizumab (a fully humanized, engineered monoclonal antibody of IgG1 against the programmed cell death ligand 1, PD-L1) compared with chemotherapy plus placebo brought breast cancer into the era of antibody-based therapeutic approaches; however, limitations of the therapeutic antibody approach included high medical cost, poor tissue accessibility, insufficient pharmacokinetics, and imperfect interactions with the immune system. Previous studies have reported on the applicability of nanoDrug Delivery Systems; however, the effectiveness of PDT/BD combination nanotherapy against tumor hypoxia has been less frequently discussed. Herein, we successfully developed a synergistic approach to target TNBC under both normoxic and hypoxic conditions. HMSNs modified with the aptamer LXL-1 were confirmed to target TNBC and release TPZ to eradicate tumors under hypoxic conditions. On the other hand, a photosensitizer that was fixed inside the HMSNs generated a sufficient level of radicals to shrink tumors under normoxic conditions with PDT. This design, which combines the mechanisms of action of two medicines, demonstrated promising potential for TNBC therapy. These observations encourage us to conduct further investigations of our nanoVector to treat hypoxia-associated diseases, because hypoxia-induced heterogeneous environments promote tumor invasiveness, angiogenesis, drug resistance, and metastasis, and impair therapeutic efficacy. Chemicals and reagents All chemicals and reagents were of analytical grade and were used as received without further purification. Benzylcetyldimethyl-ammonium chloride (BCDAC, 97%), bovine serum albumin (BSA), Preparation of TPZ@LXL-1-PpIX-MMT-2 MMT-2 was synthesized following the procedures reported previously [33]. In a typical synthesis, TEOS that contained 5% CO2 at 37 °C. The growth medium was changed every 48 h, and cells were trypsinized (using 0.1% trypsin) and subcultured when they grew to about 90% confluence. In vitro effectiveness of PpIX, TPZ, and TPZ@LXL-1-PpIX-MMT-2 To investigate the in vitro effectiveness of PpIX, TPZ, and the nanoVectors, cell viability studies under both normoxic and hypoxic conditions were conducted. For normoxic conditions, MDA-MB-231 cells were first seeded in a 96-well plate at a density of 10^4 cells per well and then placed directly in a cell incubator for 18 h after addition of the designated drugs. To test effectiveness under hypoxia, home-made double-layered atmosphere bags (atmobags) filled with the desired gas composition (N2 with 5% CO2 and an oxygen level of 5%, 2%, or 1%) were used to mimic hypoxic conditions experimentally. We treated the tested cells with TPZ@LXL-1-PpIX-MMT-2, which was suspended in PBS originally and mixed with culture medium at an appropriate ratio prior to use, or with an appropriate concentration of free PpIX, which was dissolved in DMSO originally and mixed with culture medium at a ratio of 1:99 prior to use. Histological analysis of tumors For histological studies, tumor tissue was fixed in 10% formalin for one week and embedded in paraffin.
Tumor tissue was sectioned (3 m) before being fixed on glass slides and allowed to dehydrate overnight. Sections were subjected to the dewaxing and rehydration through the use of xylene and a series of decreasing alcohol concentrations (100%, 95%, 90%, 80% ethanol/ddH2O, and finally ddH2O). For hematoxylin and eosin stain (H&E) analysis, sections were stained with hematoxylin and eosin to confirm the cell density and to observe the details of cellular and tissue structures. To visualize hypoxic areas immunohistochemically, a commercially available hypoxyprobe kit (Hypoxyprobe™-1 Omni kit, Hypoxyprobe, USA) was used according to the manufacturer's protocol. In brief, each group of animals 20 was i.p. administered with 60 mg/kg Hypoxyprobe™-1 solution (Pimonidazole) 1 h before sacrifice; the tumor tissue that was intended to be analyzed for the amount of hypoxia was prepared as described above. Next, the deparaffinized tissue sections were treated with 3% H2O2 to block endogenous peroxidase activity, followed by incubation with FBS to reduce non-specific binding. The primary antibody (PAb2627A) (1:200, Hypoxyprobe, Inc, USA) was added to the tissue section-mounted slide and allowed to react overnight at 4 °C. After washing three times with Tris-buffered Saline (TBS) with tween-20, the slide was subsequently incubated with the secondary antibody for 1 h to complete tissue preparation for immunostaining. Statistical analysis Experiments were performed in triplicate and repeated at least three times. Data were presented as means ± standard deviation (SD). The t-test was used to evaluate whether there was any statistical significance between the means of two independent groups. In this study, p-values of <0.05 represented results that were statistically significant, and p-values of <0.01 were considered to be highly statistically significant. Supplementary information Supplementary information accompanies this paper at https :// Additional file 1: Figure S1 XRD pattern of MMT-2. Figure S2. Competing interests The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Ethics approval and consent to participate All animal experiments conducted in current study were performed in compliance with the NHMRC Taiwan Code of Practice for the care and use of animals for scientific purposes, and approved by the institutional animal care and use committee (IACUC) of National Taiwan University. Consent for publication Not applicable. Availability of data and material All data generated or analyzed during this study are included in this manuscript. Funding The authors gratefully acknowledge the financial support from the Ministry of Education of Taiwan
5,567.8
2021-03-29T00:00:00.000
[ "Medicine", "Engineering" ]
PUResNetV2.0: a deep learning model leveraging sparse representation for improved ligand binding site prediction Accurate ligand binding site prediction (LBSP) within proteins is essential for drug discovery. We developed ProteinUNetResNetV2.0 (PUResNetV2.0), leveraging sparse representation of protein structures to improve LBSP accuracy. Our training dataset included protein complexes from 4729 protein families. Evaluations on benchmark datasets showed that PUResNetV2.0 achieved an 85.4% Distance Center Atom (DCA) success rate and a 74.7% F1 Score on the Holo801 dataset, outperforming existing methods. However, its performance in specific cases, such as RNA, DNA, peptide-like ligand, and ion binding site prediction, was limited due to constraints in our training data. Our findings underscore the potential of sparse representation in LBSP, especially for oligomeric structures, suggesting PUResNetV2.0 as a promising tool for computational drug discovery. Supplementary Information The online version contains supplementary material available at 10.1186/s13321-024-00865-6. Introduction Proteins are dynamic molecules that play essential roles in various biological processes by interacting with other molecules, such as organic compounds, nucleotides, metal ions, and other proteins. A full understanding of the function of a protein often requires the identification of its ligand binding sites, which are specific sites on a protein that interact with ligand molecules. A classic example of the importance of understanding protein-ligand binding sites is the development of targeted therapies in the field of oncology. Precise knowledge of binding sites [1,2] has allowed for the creation of drugs that specifically target and inhibit cancer-promoting proteins, revolutionizing cancer treatment. Furthermore, insights into the binding sites of enzymes involved in bacterial replication have facilitated the development of potent antibiotics. These examples underline the critical role of accurate protein-ligand binding site identification in scientific and therapeutic advancements. However, the experimental determination of binding sites, such as by mass spectrometry and mutagenesis, is costly and time-consuming, necessitating the development of computational methods for ligand binding site prediction (LBSP).
Over the years, a plethora of computational methodologies have emerged to improve LBSP, including geometry-based, energy-based, consensus-based, and template-based paradigms. While these paradigms have advanced the field of LBSP, they come with their own sets of limitations. For instance, spatial geometry-based methods [3][4][5][6][7][8][9] rely heavily on intricate geometric calculations derived from protein structure information, which are computationally expensive and may not always accurately capture the dynamic nature of protein-ligand interactions. Energy-based techniques [10][11][12] involve detailed calculations of the interaction energies between proteins and chemical probes, but these methods can struggle with multisite interactions and may not adequately represent all biological conditions that influence these interactions. Template-based methods, whether evolutionary-based [13], sequence-based [14], or structure-based [15,16], are heavily dependent on the quality and availability of reference datasets and may overlook novel binding sites that do not match known templates. These challenges highlight the need for the development of more advanced, efficient methods, such as those based on machine learning and deep learning, for LBSP. The ever-increasing volume of data in the LBSP field has enabled significant advances through the incorporation of machine learning and deep learning techniques. Notable machine learning methods [17,18] critically hinge on the accuracy of the designed features and can often lead to false-positive predictions, such as the identification of regions that are not feasible targets for drug interactions. Deep learning methods [19][20][21][22][23] that do not necessitate manual feature engineering employ 3D convolutional neural networks (CNNs) that represent protein structures as fixed-sized voxels. In general, these methods can be broadly categorized into two distinct groups based on their approach to problem formulation: binding pocket prediction and binding residue prediction. In the case of binding pocket prediction, the focus is on identifying potential pockets on the protein structure where ligands could bind. P2Rank and DeepSurf calculate Solvent Accessible Surface (SAS) points and predict the ligandability score of these points. P2Rank employs Random Forest classifiers, while DeepSurf uses a 3D CNN for this purpose. Both methods then cluster SAS points based on ligandability scores to form and rank predicted pockets. On the other hand, DeepSite and PUResNetV1.0 conceptualize protein structures as 3D images, where each voxel represents atoms. DeepSite adopts a subgrid sampling strategy using a sliding window with a step of four voxels and employs deep convolutional neural networks (DCNNs) to classify these subgrids as being proximal to the actual binding pocket, whereas PUResNetV1.0 utilizes a 3D segmentation technique based on the UNet architecture, classifying each voxel to determine whether it belongs to the binding pocket.
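To make the voxel-based representation just described concrete, the following minimal sketch (illustrative only, not code from DeepSite or PUResNetV1.0; the 70-voxel box, 1 Å resolution, and nine feature channels are assumptions) bins per-atom features into a dense, fixed-size 3D grid. Running it on a toy 5,000-atom protein shows why such grids are wasteful: the overwhelming majority of voxels never receive an atom but must still be stored and convolved over, which is the drawback discussed in the next paragraph.

import numpy as np

def voxelize(coords: np.ndarray, feats: np.ndarray, box: int = 70, res: float = 1.0) -> np.ndarray:
    """coords: (N, 3) atom positions in angstroms; feats: (N, C) per-atom features.
    Returns a dense (box, box, box, C) grid; atoms falling outside the box are dropped."""
    grid = np.zeros((box, box, box, feats.shape[1]), dtype=np.float32)
    # Center the protein in the box and convert angstroms to voxel indices.
    idx = np.floor((coords - coords.mean(axis=0)) / res).astype(int) + box // 2
    inside = np.all((idx >= 0) & (idx < box), axis=1)
    for (i, j, k), f in zip(idx[inside], feats[inside]):
        grid[i, j, k] += f  # accumulate features of atoms sharing a voxel
    return grid

# Toy protein: 5,000 atoms, 9 features each.
rng = np.random.default_rng(0)
coords = rng.normal(scale=15.0, size=(5000, 3))
feats = rng.random((5000, 9)).astype(np.float32)
grid = voxelize(coords, feats)
occupied = int(np.any(grid != 0, axis=-1).sum())
print(grid.shape, f"occupied voxels: {occupied} / {70**3} ({100 * occupied / 70**3:.2f}%)")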
In contrast, binding residue prediction methods such as DeepCSeqSite and GRaSP specialize in identifying specific residues on the protein surface that are likely to engage in ligand binding. DeepCSeqSite embeds each residue in a multidimensional feature space, comprising seven types of features. Utilizing a 1D DCNN, DeepCSeqSite classifies each residue as either a binding or non-binding residue, effectively discerning the potential interaction sites on the protein surface. On the other hand, GRaSP adopts a comprehensive approach by generating a feature vector for each residue; employing the Extremely Randomized Trees algorithm, it predicts the likelihood of each residue being involved in ligand binding. These diverse methodologies, from P2Rank and DeepSurf's solvent accessible surface analysis to DeepCSeqSite and GRaSP's intricate residue-level feature engineering, collectively represent significant strides in LBSP. They demonstrate how leveraging large datasets and complex structural features through advanced computational techniques can overcome the limitations of traditional methods, leading to more accurate and insightful predictions in protein-ligand interaction studies. Despite these advancements, deep learning techniques are significantly impeded by the sparse nature of protein structures. Here, 'sparse nature' refers to the fact that protein structures are mostly empty space, with atoms occupying only a small fraction of the total volume. Typically, these techniques utilize dense representations of protein structures as fixed-sized voxels, much like the pixels in a 3D image. However, this approach has two main drawbacks. First, it involves substantial computational costs, as it requires information to be stored and processed for all voxels, including those that do not contain any atoms. Second, it can lead to a loss of information because proteins have diverse, complex shapes that cannot be accurately represented within fixed-size voxels. Thus, dense representations are less suited for modeling the full complexity of protein structures given their inherent sparsity. Applying sparse representation techniques to protein structures finds parallels in fields where high-dimensional data are represented in a sparse manner to perform more effective computations. Notably, light detection and ranging (LiDAR)-based semantic segmentation [24,25] in autonomous vehicle navigation and robotics is a pertinent example. LiDAR semantic segmentation labels each point in a sparse 3D point cloud generated from LiDAR sensors with a class label that describes the object to which it belongs (such as a road, a pedestrian, or a vehicle). The challenge lies in the sparsity of the given point cloud data, much like the sparse nature of protein structures. In the realm of LBSP, one can draw an analogy in which atoms in a protein structure are equivalent to points in a LiDAR point cloud, and the goal is to classify which of these atoms belong to the binding site. The Minkowski Convolutional Neural Network (MCNN), a specific type of sparse convolutional neural network that operates on a Minkowski SparseTensor, is particularly well suited to handling such tasks.
In this work, we introduce ProteinUNetResNetV2.0 (PUResNetV2.0), a cutting-edge LBSP approach that fundamentally addresses the inherent sparsity of protein structures, which is a major obstacle in the field. Inspired by LiDAR semantic segmentation, our strategy is centered around representing protein complexes as Minkowski SparseTensors and utilizing MCNNs. The developmental workflow encompasses five stages: generating training data by applying a tailored parser for Minkowski SparseTensor representations of the protein structures obtained from the RCSB [26] database based on information provided in the BioLip [27] database; implementing PUResNetV2.0 based on MCNNs; optimizing PUResNetV2.0 using Optuna [28]; evaluating PUResNetV2.0 in terms of success rate based on the distance from the center of the predicted binding pocket to the center of the ligand (DCC) and the minimum distance from the center of the predicted binding pocket to any atom in the ligand (DCA), precision, recall, F1 score, and MCC; and deploying PUResNetV2.0, accessible at https://nsclbio.jbnu.ac.kr/tools/jmol. We show that by representing protein structures as Minkowski SparseTensors, PUResNetV2.0 exhibits remarkable capabilities in terms of handling diverse scenarios, such as oligomeric structures and structures interacting with peptides. Furthermore, PUResNetV2.0 outperformed established methods such as P2Rank, DeepSurf, PUResNetV1.0, DeepSite and GRaSP, as evidenced by evaluations across four distinct benchmark datasets: Coach100, which focuses on monomeric protein structures; Holo801, featuring ligand-bound oligomeric structures; Apoholo45, encompassing both 'apo' and 'holo' protein structures; and PDBBind1681, which provides high-quality protein complexes. These results position PUResNetV2.0 as a promising tool in the realm of LBSP. Data acquisition and processing In this study, as shown in Fig. 1, we acquired a nonredundant set of biologically relevant protein-ligand interaction information from the BioLip database. We then downloaded the relevant protein structures from the RCSB database. We discarded any structures that had resolutions above 2 Å, that contained multiple models with different numbers of atoms, or that included DNA or RNA. Next, we parsed the atomic records according to the specifications mentioned in the WorldWide Protein Data Bank (wwPDB) [29]. Each parsed atom was featurized using Open Babel [30,31]; this process entailed downloading the residues (in SDF format) from the RCSB ligand database (https://www.rcsb.org/ligand/) and loading them as Open Babel molecule objects. This process allowed us to acquire a diverse and accurate dataset for the experiment. To construct a sparse representation model, we represented each protein structure as a Minkowski SparseTensor using atomic coordinates and the associated features and labels to formulate a semantic segmentation problem. The featurization process was carried out in the same way as that used by our previous method (PUResNetV1.0), in which each atom was described based on nine atomic features, namely, hybridization, heavy atoms, heteroatoms, hydrophobicity, aromaticity, partial charges, acceptors, donors, and rings. Consequently, we represented each protein structure as a sparse tensor with a coordinate matrix C and a feature matrix F, where each row of C is (b_i, x_i, y_i, z_i) and each row of F is f_i; here (x_i, y_i, z_i) denotes the 3D coordinates of the i-th atom of the t_i-th protein structure in the b_i-th batch, and f_i is the feature vector of the i-th atom.
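To make this representation concrete, the following is a minimal sketch, not the authors' code, of how one parsed structure could be turned into the coordinate matrix C and feature matrix F described above and wrapped as a Minkowski SparseTensor. It assumes the MinkowskiEngine library (which implements MCNNs); the function name and the 1 Å quantization step are illustrative assumptions.

```python
import numpy as np
import torch
import MinkowskiEngine as ME  # assumed sparse-convolution backend

def to_sparse_tensor(atom_xyz, atom_feats, batch_index=0, voxel_size=1.0):
    """atom_xyz: (N, 3) atomic coordinates in Angstrom.
    atom_feats: (N, 9) per-atom features (hybridization, heavy atoms,
    heteroatoms, hydrophobicity, aromaticity, partial charge,
    acceptor, donor, ring membership)."""
    # C: one row [b_i, x_i, y_i, z_i] per atom; sparse convolutions expect
    # integer coordinates, so positions are quantized to a voxel grid.
    quantized = np.floor(atom_xyz / voxel_size).astype(np.int32)
    batch_col = np.full((len(atom_xyz), 1), batch_index, dtype=np.int32)
    C = torch.from_numpy(np.hstack([batch_col, quantized]))
    # F: one feature row f_i per atom.
    F = torch.from_numpy(np.asarray(atom_feats, dtype=np.float32))
    return ME.SparseTensor(features=F, coordinates=C)

# Toy usage with random atoms; real inputs would come from the PDB parser.
example = to_sparse_tensor(np.random.rand(100, 3) * 50.0, np.random.rand(100, 9))
print(example.F.shape)  # feature matrix stored only at occupied coordinates
```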
For each protein structure, if any atom was within 5 Å of a ligand atom, then it was labeled as a binding site atom; the binding site atoms were represented as a matrix L. Finally, we prepared a dataset of 61,691 protein complexes, which included 25,780 biologically relevant small molecule binding sites. Careful curation was performed to exclude HETATM records from the PDB files. Overall, the protein complexes in this dataset were sourced from 4729 different protein families, providing a diverse set of protein structures for the experiment. Curating the benchmark datasets To conduct a comprehensive evaluation of diverse methodologies, we generated three benchmark datasets, Holo801, Coach100 and PDBBind1681, which were derived from the extensively employed Holo4k [32], Coach420 and PDBBind [33] datasets, respectively. To facilitate an accurate comparison among the various methods, we eliminated the protein structures found in both our training datasets and the sc-PDB [34] dataset. Consequently, the newly formed Holo801, Coach100 and PDBBind1681 datasets comprised 801, 100 and 1681 protein complexes, respectively. Furthermore, we established the Apoholo45 dataset, derived from D3PM [35], an extensive collection encompassing 45 pairs of bound and unbound structures. This dataset was curated by excluding any protein structures, or structures with binding sites, that were present in the PUResNetV2.0 training dataset. Model PUResNetV2.0, based on MCNNs, features an encoder-decoder framework [36] with 171 layers and 10,861,601 trainable parameters, tailored for binary segmentation, as illustrated in Fig. 2a. The architecture integrates an encoder and a decoder, both of which are constructed from multiple blocks. The encoder's role is to reduce the dimensionality of the input, a SparseTensor depiction of a protein structure, by incorporating an assembly of convolution and basic blocks. The decoder, conversely, aims to upscale the encoder-produced feature maps. It uses a series of transpose and basic blocks, which are augmented by concatenating the corresponding feature maps from the skip pathway. These skip connections equip the decoder with detailed information from the encoder. Figure 2b presents the convolution block, which is an integral part of the encoder pathway. The block incorporates a Minkowski convolutional layer [37], followed by a Minkowski batch normalization layer and a Minkowski ReLU activation function. Collectively, these components reduce the input dimensionality while simultaneously drawing out significant features. The combined use of Minkowski convolution, batch normalization, and ReLU activation empowers PUResNetV2.0 to learn complex input representations. The Minkowski convolution layer takes as input a 4-dimensional tensor, with three spatial dimensions (x, y, z) and one temporal dimension (t), and uses a hybrid kernel (non-hypercubic, non-permutohedral) of arbitrary shape for feature extraction [24]. The convolution operation in the Minkowski convolution layer can be written as x^out_u = Σ_{i ∈ N^D(u, C_in)} W_i x^in_{u+i} for u ∈ C_out, where N^D is a set of offsets that define the shape of the kernel, N^D(u, C_in) = {i | u + i ∈ C_in, i ∈ N^D} is the set of offsets from the current center u that exist in C_in, and C_in and C_out are the predefined input and output coordinates of the sparse tensors. The Minkowski batch normalization layer and Minkowski ReLU activation function are adaptations of the conventional batch normalization and ReLU activation functions for sparse tensors. The decoder's fundamental component, the transpose block, is visualized in Fig.
2c. It employs a Minkowski convolution transpose layer, Minkowski batch normalization, and a Minkowski ReLU activation function to upscale the input. These layers work in harmony to heighten the input's spatial resolution and concurrently isolate pertinent features. The integration of Minkowski transposed convolution, Minkowski batch normalization, and Minkowski ReLU activation layers facilitates efficient upsampling, enhancing the model's ability to distinguish between binding and non-binding atoms. Figure 2d highlights the ResNet [38]-inspired basic block, which features skip connections between the input and output and is applied in both the encoder and decoder pathways of PUResNetV2.0. The deployment of the ResNet-derived basic block allows PUResNetV2.0 to extract both high-level and low-level features, effectively circumventing the issue of vanishing gradients. Optimizing PUResNetV2.0 using Optuna To optimize PUResNetV2.0, we used Optuna, an automated hyperparameter optimization framework. We began the optimization process by defining the hyperparameter search space for PUResNetV2.0, incorporating parameters such as batch size, learning rate, number of output planes and number of basic blocks. Our aim was to maximize the PRC AUC on the validation set, hence we established an objective function accordingly. Leveraging the Tree-structured Parzen Estimator (TPE), a Bayesian optimization algorithm, Optuna recommended hyperparameters by building a probabilistic model of the objective function. This approach involves iteratively modeling and updating the probability distributions of hyperparameters to balance exploration and exploitation, ultimately guiding the search towards promising regions in the hyperparameter space with each iteration. We also incorporated an early stopping strategy to prevent overfitting, halting training if no improvement was seen in the validation loss over a predefined number of epochs. Postprocessing the predictions yielded by PUResNetV2.0 In the postprocessing phase of PUResNetV2.0, we utilized the density-based spatial clustering of applications with noise (DBSCAN) [39] algorithm, known for not requiring a predetermined number of clusters, which was ideal for our scenario where the number of binding pockets was not known a priori. This algorithm took the xyz coordinates of the predicted binding atoms and grouped the proximate predictions, forming distinct binding pockets under the criterion of a minimum of five atoms within a spatial distance of 5.5 Å. Implementing the kd_tree algorithm and setting a leaf size of 100 enhanced its computational efficiency. This transformation of atomic-level predictions into identifiable binding pockets enables a more comprehensive analysis in protein-drug interaction studies.
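The clustering step just described maps directly onto a standard DBSCAN call. Below is a minimal sketch (assumed, not the authors' exact code) using scikit-learn with the stated settings: a 5.5 Å neighbourhood, at least five atoms per pocket, a kd-tree index, and a leaf size of 100.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def atoms_to_pockets(binding_atom_coords):
    """binding_atom_coords: (N, 3) xyz positions of atoms predicted as
    binding-site atoms. Returns per-pocket member coordinates and centers."""
    labels = DBSCAN(
        eps=5.5,             # atoms within 5.5 Angstrom count as neighbours
        min_samples=5,       # a pocket needs at least five atoms
        algorithm="kd_tree",
        leaf_size=100,
    ).fit_predict(binding_atom_coords)
    pockets, centers = [], []
    for label in sorted(set(labels)):
        if label == -1:      # -1 marks noise points not assigned to any pocket
            continue
        members = binding_atom_coords[labels == label]
        pockets.append(members)
        centers.append(members.mean(axis=0))  # pocket center, later used for DCC/DCA
    return pockets, centers
```

The Optuna search described earlier in this section can be sketched in a similarly hedged way; the search-space values and the training stub below are illustrative assumptions, not the paper's actual settings.

```python
import optuna

def train_and_validate(params):
    # Stand-in for training PUResNetV2.0 with early stopping and returning
    # the validation PRC AUC; replaced here by a dummy score so the sketch runs.
    return 0.5

def objective(trial):
    params = {
        "batch_size": trial.suggest_categorical("batch_size", [4, 8, 16]),
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True),
        "out_planes": trial.suggest_categorical("out_planes", [16, 32, 64]),
        "n_basic_blocks": trial.suggest_int("n_basic_blocks", 1, 4),
    }
    return train_and_validate(params)  # objective value: validation PRC AUC to maximize

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```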
Performance benchmarking against PUResNetV1.0, DeepSurf, DeepSite, GRaSP and P2Rank To benchmark PUResNetV2.0's performance, we extracted predictions from several established models, including PUResNetV1.0, DeepSurf, DeepSite, GRaSP, and P2Rank. For P2Rank and DeepSite, we obtained the predictions directly from the P2Rank datasets, which are available at https://github.com/rdk/p2rank-datasets. We implemented the models for PUResNetV1.0 and DeepSurf using their respective GitHub repositories (PUResNetV1.0: https://github.com/jivankandel/PUResNet; DeepSurf: https://github.com/stemylonas/DeepSurf) and subsequently generated predictions. For GRaSP, we sourced the predictions through its dedicated webserver at https://grasp.ufv.br/submit. With P2Rank, each predicted binding pocket was ranked, and we chose the highest-ranked pocket for evaluation. We then calculated the pocket's center based on its atomic coordinates. DeepSite provided the center for each predicted pocket, simplifying our extraction process. Similarly, DeepSurf also provided the predicted centers for the binding pockets. In the case of PUResNetV1.0, we used the atomic coordinates derived from the predicted pockets to calculate the center, facilitating standardization across the models for comparison purposes. Finally, with GRaSP, the predictions were supplied as residues within a CSV file, which we used to extract the necessary information for our analysis. Success rate based on DCA, success rate based on DCC, precision, recall, F1 score, and MCC metrics were utilized to compare the methods. A. Pocket-centric metrics. 1. Success rate based on DCA: DCA is the minimum distance from the center of the predicted binding site to any atom of the actual ligand. If this distance is ≤ 4 Å, the site is determined to be correctly predicted, and the success rate is given by the fraction of correctly predicted sites, i.e., success rate = (number of sites with DCA ≤ 4 Å) / (total number of sites). 2. Success rate based on DCC: DCC is the distance from the center of the predicted binding site to the center of the actual binding site. If this distance is ≤ 4 Å, the site is determined to be correctly predicted, and the success rate is computed analogously as success rate = (number of sites with DCC ≤ 4 Å) / (total number of sites). B. Residue-centric metrics. A true positive (TP) is a residue correctly predicted as a binding residue; a false positive (FP) is a residue incorrectly predicted as a binding residue; a true negative (TN) is a residue correctly predicted as a non-binding residue; a false negative (FN) is a residue incorrectly predicted as a non-binding residue. The following scores are computed per structure and averaged, where n is the total number of protein structures: Precision = (1/n) Σ TP/(TP + FP); Recall = (1/n) Σ TP/(TP + FN); F1 score = (1/n) Σ 2 · Precision · Recall/(Precision + Recall); MCC = (1/n) Σ (TP · TN − FP · FN)/√((TP + FP)(TP + FN)(TN + FP)(TN + FN)). Implementation of the Web Server for PUResNetV2.0 To facilitate the application of PUResNetV2.0, we implemented a web server (https://nsclbio.jbnu.ac.kr/tools/jmol/) utilizing the Django Python web framework.
The user interface was designed to provide options for uploading a PDB file or entering a PDB ID or UniProt ID. The platform also provides flexible preprocessing settings: a user-submitted protein structure can be processed as a single complex, individual chains can be treated as separate complexes, or selected chains (identified by a comma-separated list of their identifiers) can be treated as a single complex. Once the necessary inputs and selections are provided by the user, the backend of the web server initiates the conversion of the protein structure into a Minkowski SparseTensor, as elaborated in the "Data acquisition and processing" section. When the preprocessing setting is set to 'single complex', the given protein structure is converted into a single Minkowski SparseTensor. If 'individual chains as separate complexes' is selected, each chain is represented by an individual Minkowski SparseTensor, thus creating a batch. In scenarios where specific chains are selected, these chains are converted into a single Minkowski SparseTensor. Subsequently, the PUResNetV2.0 model is activated to generate predictions. The predicted binding atoms are then postprocessed by following the steps discussed in the "Postprocessing the predictions yielded by PUResNetV2.0" subsection, readying them for visualization. In the final step, the predicted binding pockets are visually represented on the front-end side of the web server using the JSmol [40] Java-based viewer. This facilitates an interactive visualization of the 3D molecular structures of proteins and their predicted binding pockets, thereby providing the user with a graphical representation of the predictions. Furthermore, a list of the identified amino acids within each predicted binding pocket is also made available for download in the form of a PDB file. This comprehensive workflow from input processing to result visualization allows for a seamless and user-friendly experience on the web server, thereby maximizing the utility of PUResNetV2.0 for users. Diverse training dataset curated using a tailored parser In the initial phase of our research, we curated a dataset crucial for the development of the PUResNetV2.0 model, with a focus on protein-ligand interaction sites. This dataset comprised 61,691 protein complexes, encompassing 25,780 unique ligand-binding sites across 4729 protein families. We extracted PDB and ligand IDs from the BioLip database and downloaded the corresponding structures from the RCSB database. To concentrate on small molecule ligand-binding sites, we excluded binding sites associated with ions, water molecules, small peptides, and polynucleotides. Moreover, we removed the HETATM records from each PDB file in the dataset preparation phase. This methodology offered a detailed perspective on vital ligand-binding sites. For example, among the protein complexes, 3.8% contained a HEM binding site, 2.3% an ADP binding site, and another 2.3% an III binding site. In terms of protein families, 5.2% of the complexes were from the Pkinase family, 2.7% from the PK_Tyr_Ser-Thr family, and 2.3% from the Hormone_recep family, illustrating the dataset's diversity, as shown in Fig.
3. While preparing our dataset, we acknowledged the inherent imbalance that characterizes real-world protein-ligand interactions. Specifically, atoms involved in interactions are far outnumbered by those that do not partake in such interactions. To ensure a fair model performance evaluation and eliminate data leakage, we implemented an 80/20 split for the training and validation sets, ensuring that protein complexes interacting with the same ligand were exclusive to either set. This careful dataset preparation and division process laid the foundation for the successful training and optimization of our PUResNetV2.0 model. To aid in the development of our dataset, we created a custom parser specifically designed for converting given PDB files to Minkowski SparseTensor representations. Our parser, based on the specifications provided by the WorldWide Protein Data Bank (wwPDB), parsed the atomic records from the associated PDB files. One of the key features of our parser is its capability to directly access and process protein structures from the RCSB database, streamlining the data input process for users and facilitating the efficient preparation of data for PUResNetV2.0 training and evaluation. More detailed usage examples are available in our GitHub repository. The PUResNetV2.0 optimization process improved the validation PRC AUC from 46 to 71%. In the quest to improve our model's performance on the highly imbalanced dataset, we initiated our optimization process using the Optuna library. The weighted adaptive moment estimation (AdamW) optimizer [41] was used as the optimizer. Initially, we used the Dice loss function [42] due to its effectiveness in balancing the contributions of the foreground (atoms involved in interactions) and the background (atoms not involved in interactions) by considering both precision and recall in its calculation. Despite its well-regarded ability to manage disparities between classes, the application of the Dice loss function led to a validation area under the precision-recall curve (PRC AUC) of 46% and an F1 score of 61%, as shown in Table 1, indicating that further optimization was required to enhance the performance of our PUResNetV2.0 model. The loss graphs labeled a, b, c, and d correspond to the respective hyperparameter configurations delineated in Table 1. Each graph presents a comparative analysis of training vs validation loss, substantiating the assertion that the model exhibits neither overfitting nor underfitting. In response, we pivoted our approach to focus on the focal loss function [43]. Noted for its capacity to handle imbalanced datasets by concentrating on challenging examples, the implementation of the focal loss function served as the turning point in our model's optimization process. Through rigorous hyperparameter tuning performed using the Optuna library, we observed a significant leap in our model's predictive ability, with the validation PRC AUC of our optimized PUResNetV2.0 model reaching an impressive 71% and its F1 score improving to 65%, as shown in Table 1. As shown in Fig.
4, the training and validation loss curves show that the model is stable and well tuned for each hyperparameter configuration. Using the best hyperparameters identified, a 10-fold cross-validation was performed. The results, as detailed in Table 2, showed an average PRC AUC of 70.00% with a standard deviation of 0.011%, and an F1 score of 64.03% with a standard deviation of 0.016%, underscoring the model's steady performance across different data segments. This substantial improvement in the validation PRC AUC underscores the effectiveness of the focal loss function in cases with highly imbalanced protein-ligand interaction data. Notably, the high validation PRC AUC score indicates the ability of PUResNetV2.0 to correctly predict the atoms involved in interactions. PUResNetV2.0 identifies binding pockets of complex protein structures Our examination of PUResNetV2.0's prediction results obtained across the Holo801 and Apoholo45 datasets elucidated the model's capability to navigate the intricacies of protein structures. Evidently, the model's predictions varied significantly depending on the context in which we presented the protein structures: as individual chains each forming separate complexes, as groups of two or more chains that constituted a larger complex, or as an integrated structure treated as a single complex. A noteworthy aspect of our analysis was the model's approach for handling the protein structures in the Holo801 dataset that incorporated peptide-like ligands, specifically structures 1a2c, 8lpr, 1eoj, 1eol, 1i4f, 1eb1, 1p12, 1iht, and 1f0c, as shown in Fig. 5. As these peptide-like ligands were not represented in our training dataset, they introduced an element of novelty to the test scenario. If we presented the peptide-containing structures as a unified complex, the model refrained from providing predictions. In contrast, when we interpreted these structures as a complex excluding the peptides, PUResNetV2.0 successfully discerned the binding pockets for these peptide-like ligands. Additionally, in the cases with the antibody-antigen complexes 1k4c and 1k4d from the Holo801 dataset shown in Fig. 5, when considering the heavy and light chains as a complex, PUResNetV2.0 successfully predicted the binding region where the antigen and antibody bound. Additional remarkable insights were derived from the analysis conducted on the Apoholo45 dataset, more specifically, structures 6m7j, 6v9y, 6wj5, and 5urv depicted in Fig. 5. For instance, when we treated chains C and D of the bound structure 6m7j as a larger complex, PUResNetV2.0 accurately identified the binding pocket for the COL ligand, which interacted with both chains. Additionally, when treating structures 6v9y, 6wj5, and 5urv as single complexes, PUResNetV2.0 precisely predicted the binding pockets. In conclusion, the performance of PUResNetV2.0 in accurately identifying binding pockets across various scenarios and structures, including the nuanced complexities of structures with peptide-like ligands and antigen-antibody complexes in the Holo801 dataset and the ligand interactions across multiple chains in the Apoholo45 dataset, demonstrated PUResNetV2.0's versatility and adaptability, positioning it as a powerful tool in the field of protein structure analysis and ligand binding site prediction.
Comparative benchmark analysis reveals PUResNetV2.0's better performance In this study, we evaluated the performance of PUResNetV2.0, our proposed LBSP model, against established methods such as P2Rank, DeepSurf, PUResNetV1.0, DeepSite, and GRaSP. This analysis used four distinctive benchmark datasets, Coach100, Holo801, Apoholo45 and PDBBind1681, each offering unique challenges. Coach100, comprising only monomeric protein structures, assessed the models' proficiency in handling simpler, individual protein structures. Conversely, Holo801, laden with ligand-bound oligomeric structures, tested the models' abilities to interpret complex interactions across multiple protein chains. PDBBind1681 is a high-quality dataset, originally used for developing and validating scoring functions and docking methods, which contains binding residue information for each target protein. The Apoholo45 dataset stood out by incorporating both 'apo' and 'holo' protein structures, pushing the models to discern and differentiate between these critical states for accurate ligand binding site prediction. A comparison of the model performances achieved across the benchmark datasets revealed insightful trends. The performance of various models on the Coach100 dataset, characterized by simpler monomeric structures and ion binding sites, was generally lower, with DeepSite exhibiting the poorest results among them. Conversely, when tested against the Holo801 dataset, composed of complex oligomeric structures, most models showed higher success rates, except for PUResNetV1.0, which exhibited a significant drop, indicating its challenges in managing such complex structures. The Apoholo45 dataset, which comprises both 'apo' and 'holo' protein structures, presented an added challenge that led most models, including DeepSurf and GRaSP, to struggle. PUResNetV2.0 consistently demonstrated improved performance over the P2Rank, DeepSurf, PUResNetV1.0 and DeepSite methods by attaining elevated DCA and DCC success rates across the Coach100, Holo801, and Apoholo45 benchmark datasets, as highlighted in Table 3. In the case of the Coach100 dataset, PUResNetV2.0 achieved 59.0% DCA and 38.0% DCC success rates. Regarding the Holo801 dataset, PUResNetV2.0 yielded 85.4% DCA and 53.7% DCC success rates. As shown in Table 4, compared to GRaSP, the PUResNetV2.0 method exhibited substantial performance enhancements on the Coach100, Holo801, Apoholo45 and PDBBind1681 datasets, particularly in terms of the F1 score, MCC, and recall metrics. Nevertheless, GRaSP excelled with respect to precision on the Coach100 dataset. In the case of the PDBBind1681 dataset, PUResNetV2.0 notably outperforms GRaSP, as evidenced by its enhanced metrics, achieving a precision of 78.68%, a recall of 35.90%, an F1 score of 47.10%, and an MCC of 46.70%. In summary, PUResNetV2.0 consistently yielded superior results for most metrics, achieving a minimum MCC increase of 10% and a 10% F1 score improvement over GRaSP.
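The pocket-centric criteria used throughout these comparisons, DCA and DCC with the 4 Å cutoff defined in the benchmarking section, can be restated as a short sketch. The helper below is an illustrative assumption rather than the authors' evaluation script, and it approximates the actual binding-site center by the mean of the ligand atom coordinates.

```python
import numpy as np

def dca_dcc_hit(pred_center, ligand_coords, threshold=4.0):
    """pred_center: (3,) center of the top-ranked predicted pocket.
    ligand_coords: (M, 3) coordinates of the ligand atoms.
    Returns (dca_hit, dcc_hit) booleans for one structure."""
    dists = np.linalg.norm(ligand_coords - pred_center, axis=1)
    dca = dists.min()                                                # closest ligand atom
    dcc = np.linalg.norm(ligand_coords.mean(axis=0) - pred_center)   # center-to-center
    return dca <= threshold, dcc <= threshold

# Benchmark success rates are then the fraction of structures whose top-ranked
# pocket satisfies each criterion, e.g. np.mean(dca_hits) over all structures.
```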
Discussion The insights garnered from this research have demonstrated the remarkable potential of PUResNetV2.0 for accomplishing the challenging task of LBSP. Through the careful curation of a comprehensive dataset encompassing a wide range of protein complexes and the optimization of PUResNetV2.0, we achieved a remarkable improvement in the validation PRC AUC from 46 to 71% in the presence of a highly imbalanced dataset. The key findings from our study revealed the ability of PUResNetV2.0 to adeptly predict binding pockets, especially for complex structures housing peptide-like ligands. Additionally, it consistently outperformed other methods across benchmark datasets, including Holo801, Coach100, Apoholo45, and PDBBind1681. These findings, characterized by PUResNetV2.0's enhanced performance, not only underscore the significant strides our study has made in the field of LBSP but also set the stage for an in-depth exploration of our findings, their implications, and potential avenues for future research. The transition from PUResNetV1.0 to PUResNetV2.0 represents an important journey of continuous evolution in protein structure representation and feature extraction for LBSP. A critical insight gained during this process is the importance of the quality and diversity of the utilized training dataset in driving the predictive power and generalizability of the resulting model. Both versions are rooted in the robust UNet [36] and ResNet [38] architectures, yet their feature extraction methods differ: PUResNetV1.0 utilizes 3D CNNs, while PUResNetV2.0 adopts MCNNs. MCNNs are specifically designed for the efficient processing and extraction of features from sparse representations. However, transitioning to MCNNs and sparse representations was not sufficient for ensuring success; we also needed to address the imbalance issues that are frequently found in the datasets of this field. In PUResNetV1.0, we managed these issues with the Dice loss, a strategy carried forward to PUResNetV2.0. However, the Dice loss only yielded a 46% PRC AUC on our validation dataset. Upon switching to the focal loss, our performance improved significantly, achieving a 71% PRC AUC on the validation dataset. This stark difference emphasizes the need for utilizing appropriate loss functions when handling imbalanced data, leading to our model's improved performance. PUResNetV1.0 was trained on the 2017 version of the sc-PDB dataset [34]. Although comprehensive for its time, we found this dataset to be limited in terms of representing the diversity and complexity of protein structures, particularly oligomeric structures, which directly impacted the performance of PUResNetV1.0. PUResNetV1.0 had difficulties dealing with the Holo801 dataset, which is composed of complex oligomeric structures. However, it was more adept at handling the monomeric structures in the Coach100 dataset. The shortcomings of PUResNetV1.0 pushed us toward a more advanced approach for PUResNetV2.0, adopting a sparse representation method inspired by the Minkowski SparseTensor's application in LiDAR segmentation.
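The Dice-to-focal-loss switch discussed above is a standard remedy for class imbalance. The following is a minimal sketch of a binary focal loss in PyTorch, offered as an illustration under assumed alpha and gamma values rather than as the authors' implementation.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """logits: raw per-atom scores; targets: 0/1 binding-site labels (floats)."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                       # probability assigned to the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    loss = alpha_t * (1 - p_t) ** gamma * bce   # down-weights easy, well-classified atoms
    return loss.mean()

# Example: heavily imbalanced labels, as with binding vs non-binding atoms.
scores = torch.randn(1000)
labels = (torch.rand(1000) < 0.05).float()      # roughly 5% positives
print(binary_focal_loss(scores, labels))
```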
The proposed method outperformed existing ones across a variety of datasets, including Coach100, Holo801, Apoholo45 and PDBBind1681. The superiority of PUResNetV2.0 can be partially attributed to its ability to avoid the errors observed in other models. For example, while treating oligomeric structures as surface representations of a protein with a set of local 3D voxelized grids placed on the protein's surface, DeepSurf introduced errors in its predictions: it identified residues of peptide-like ligands as binding residues (as shown in Additional file Table 1). A similar inability was observed with the machine learning based P2Rank method, while GRaSP simply failed to process such structures. In contrast, PUResNetV2.0, leveraging the advantages of Minkowski SparseTensors to represent protein structures, was able to process such structures and refrained from predicting residues of peptide-like ligands as binding residues, as evidenced in "PUResNetV2.0 identifies binding pockets of complex protein structures". This reflects the potential of sparse representation in LBSP, not only in terms of improving performance but also in advancing the field, especially in scenarios involving complex molecular interactions. While PUResNetV2.0 has demonstrated a significant advancement over its predecessor and other methodologies, it does not come without its own limitations. A primary constraint is that the model's performance is contingent upon the input training dataset. Currently, our dataset does not account for all types of binding sites, with notable exclusions being ion, DNA/RNA, and peptide-like ligand binding sites. These ligand types demonstrate unique interaction patterns with proteins. For example, ions typically interact with proteins through salt bridges or coordinate bonds. Interactions between DNA/RNA and proteins typically engage larger surface areas, commonly occurring in the grooves or channels of the protein. Peptide-like ligands present a spectrum of interaction patterns, which are largely dependent on their lengths and sequences. In addition, our present training dataset primarily encompasses orthosteric sites, neglecting allosteric sites that play a vital role in protein-ligand interactions. The omission of these entities constitutes a significant limitation, as their unique interaction patterns can substantially influence the precision and applicability of our model's predictions. Consequently, PUResNetV2.0 in its current form may lack effectiveness in terms of predicting binding sites that involve these omitted ligand types.
Addressing these limitations necessitates a more nuanced approach in future research. An integral part of this approach would be the expansion of the training dataset to include more diverse types of binding sites, especially those involving ions, DNA/RNA, and peptide-like ligands. Given their distinct interaction patterns with proteins, these ligand types could benefit from dedicated models trained on specialized datasets curated specifically for each ligand type. Our custom parser, developed to convert protein structures into Minkowski SparseTensor representations, provides a robust tool for streamlining the curation of these specialized datasets. However, we must not overlook the complexity that accompanies this approach. Developing and validating separate models for each ligand type could pose significant challenges, particularly in maintaining the balance between specialization and generalizability. This task also demands the careful tuning of model parameters and loss functions for each ligand-specific model. Nevertheless, the potential rewards (improved accuracy, broader applicability, and greater insights into unique interaction patterns) make this a promising direction for future research. Conclusion In conclusion, this study combines well-curated training datasets, innovative protein structure representations via Minkowski SparseTensors, and a strategically selected loss function, all geared toward addressing the intricate challenges of LBSP. The transition from dense to sparse data representations has significantly elevated PUResNetV2.0's ability to manage complex protein structures, outperforming previous methods across diverse datasets. Although some areas demand further refinement, specifically the representation of the complete range of protein-ligand interactions, the potential of PUResNetV2.0 in facilitating the drug discovery process, coupled with our user-friendly web server, stands as a significant achievement. This account highlights the strides we have made thus far. As we continue to refine our methodologies and broaden our training datasets, we expect to uncover deeper insights and achieve even higher levels of accuracy and inclusivity in predicting protein-ligand interactions, propelling the field of LBSP to new heights. Fig. 1 Flowchart illustrating the overall process of preparing the training dataset. We initiate the process by procuring protein structures from the RCSB PDB database using the BioLip database as a reference. Subsequently, these structures undergo a parsing process using our customized PDB parser, followed by featurization through Open Babel. The final step involves the transformation of these structures into Minkowski SparseTensor representations. Within the figure, brown arrows signify the acquisition of information from external databases, blue arrows illustrate the directional flow of data processing, and red arrows denote the endpoints of data flows. Fig.
2 Key components of the PUResNetV2.0 architecture. a The overall architecture, highlighting the integration of an encoder path for input downsampling and a decoder path for upsampling the feature maps, with skip connections between the corresponding blocks in both paths. b Illustration of the convolution block used within the encoder for input downsampling and feature extraction, which is composed of a Minkowski convolutional layer, a batch normalization layer, and a ReLU activation function. c Presentation of the transpose block, which is deployed in the decoder path for input upsampling and consists of a Minkowski convolution transpose layer, a Minkowski batch normalization layer, and a Minkowski ReLU activation function. d Depiction of the ResNet-inspired basic block, which possesses skip connections for effective feature extraction and is utilized in both the encoder and decoder paths. Fig. 3 Distribution of the training dataset employed for PUResNetV2.0, categorized by its ligand types and protein families. These pie charts offer detailed insight into the breadth and depth of our dataset, encompassing 61,691 protein complexes. a Visualization of the distribution across 4729 protein families, emphasizing prevalent families such as Pkinase, PK_Tyr_Ser-Thr, and Hormone_recep, which represent a significant proportion of the dataset. b Illustration of the diversity of the 25,780 unique ligand binding sites included in the dataset, pointing out commonly found ligand binding sites such as HEM, ADP, and III. Fig. 5 Visual representation of PUResNetV2.0's capabilities in terms of accurately predicting diverse protein-ligand binding sites across a variety of complex protein structures. The structures of proteins are displayed in cartoon format, with the corresponding ligands represented by stick models. The meshes overlaid on these structures signify the predicted binding pockets as determined by PUResNetV2.0. Each protein structure's PDB ID is provided at the bottom right side of the structure. This figure illustrates the model's competence in identifying potential binding sites across a range of protein-ligand complexes. Table 1 Hyperparameter optimization results obtained for PUResNetV2.0. Fig. 4 Training loss vs validation loss for each configuration in Table 1. Table 2 10-fold cross-validation results. Table 3 Comparison among the performances of P2Rank, DeepSurf, PUResNetV1.0, DeepSite, and PUResNetV2.0 on benchmark datasets; top values for each benchmark dataset are represented in bold. Table 4 Comprehensive assessment of PUResNetV2.0 and GRaSP on benchmark datasets; top values for each benchmark dataset are represented in bold.
8,701
2024-06-07T00:00:00.000
[ "Computer Science", "Biology" ]
Health professionals’ acceptance of mobile-based clinical guideline application in a resource-limited setting: using a modified UTAUT model Introduction Clinical guidelines are crucial for assisting health professionals to make correct clinical decisions. However, manual clinical guidelines are not readily accessible, and this increases the workload. So, a mobile-based clinical guideline application is needed to provide real-time information access. Hence, this study aimed to assess health professionals’ intention to accept mobile-based clinical guideline applications and to verify the unified theory of acceptance and use of technology model. Methods An institution-based cross-sectional study design was used among 803 study participants. The sample size was determined based on structural equation model parameter estimation criteria with stratified random sampling. Amos version 23 software was used for analysis. Internal consistency of latent variable items, and convergent and divergent validity, were evaluated using composite reliability, AVE, and a cross-loading matrix. Model fitness of the data was assessed based on a set of criteria, and it was achieved. A P-value < 0.05 was considered for assessing the formulated hypotheses. Results Effort expectancy and social influence had significant effects on health professionals’ attitudes, with path coefficients of (β = 0.61, P-value < 0.01) and (β = 0.510, P-value < 0.01), respectively. Performance expectancy, facilitating condition, and attitude had significant effects on health professionals’ acceptance of mobile-based clinical guideline applications, with path coefficients of (β = 0.37, P-value < 0.001), (β = 0.44, P-value < 0.001) and (β = 0.57, P-value < 0.05), respectively. Effort expectancy and social influence were mediated by attitude and had a significant partial relationship with health professionals’ acceptance of mobile-based clinical guideline application, with standardized estimation coefficients of (β = 0.22, P-value = 0.027) and (β = 0.19, P-value = 0.031), respectively. All the latent variables accounted for 57% of health professionals’ attitudes, and the latent variables together with attitude accounted for 63% of individuals’ acceptance of mobile-based clinical guideline applications. Conclusions The unified theory of acceptance and use of technology model was a good model for assessing individuals’ acceptance of mobile-based clinical guideline applications. So, enhancing health professionals’ attitudes and computer literacy through training is needed. Mobile application development based on user requirements is critical for technology adoption, and people’s support is also important for health professionals to accept and use the application. Keywords Mobile device, Clinical guideline, Acceptance, Application, UTAUT model Supplementary Information The online version contains supplementary material available at 10.1186/s12909-024-05680-z.
Introduction Clinical practice guidelines are methodically developed statements to assist health professionals' and patients' decisions about suitable healthcare for specific clinical conditions. When it comes to particular therapy, diagnosis, and pharmaceutical processes in patient care, clinical practice guidelines play a major role [1]. A medical guideline is not a fixed protocol that must be followed; it is a recommendation for healthcare professionals to consider for correct patient diagnosis and treatment [2], as well as a written document that swiftly offers technical assistance, advice on the definition and operationalization of medical terms, and certain aspects of planning for implementation and evaluation [3]. A clinical guideline has several benefits and opportunities for healthcare practitioners, institutions, and patients. It enhances health professionals' communication and evidence-based practice [4][5][6]. It serves as a common standard in all health institutions for diagnosis and treatment, ensuring the consistency of patient care, and is critical for quality audits and evaluations [7]. In addition, clinical guidelines are part of the work of health professionals' consultants and serve as references for health professionals to access the right information for patient care when and where it is needed. Additionally, well-trained health professionals are not equally accessible in all health institutions in low-income countries; their educational and training qualifications vary; providing training is expensive [8]; their job performance is limited; and treatment and medication errors are common in healthcare practice [9,10]. Therefore, clinical guidelines are critical to solving such problems. However, they remain manual (paper-based), even though they are vigorously promoted as a means to improve the effectiveness of the healthcare system, patient outcomes, and healthcare costs [11]. Paper-based guidelines need huge physical space for storage, are exposed to fire and easily lost, and are often inaccessible to health professionals [12]. The manuals are poorly designed, present incomplete explanations that are difficult to read, have comprehension levels beyond the user's capabilities, lack explicit workflow, and increase the user's workload [13][14][15]. Moreover, the clinical guidelines are available in voluminous text files and are very laborious and time-consuming to access [16]. Therefore, this may promote distorted health information, so that health professionals cannot access appropriate guidelines at the point of patient care [17]. Currently, technology has become commonplace in healthcare settings, and there has been rapid growth in the development of medical application software [18][19][20]. Several platforms are available to assist health professionals, such as patient information management and access, communication, and consulting [21,22], reference and information gathering, distance medical education and training, and clinical support systems for accurate decision-making [23,24]. Mobile devices and mobile health applications are also among the fastest and most convenient ways for health professionals to access educational materials, including medication information, electronic clinical guidelines, and books [25,26].
In Sweden, a variety of wireless technologies such as mobile computing, wireless networks, and global positioning systems have been applied to ambulance care [27], and these are also functional for emergency patient care in the Netherlands [28]. In Finland, an authorized and secured mobile healthcare services system was tested in 2003 and is available nationwide; it is used for consultation, electronic prescription, and easy access to health information via mobile devices [29]. Though information technologies are an essential tool that fosters and promotes progress in healthcare and drastically reforms healthcare practices, the healthcare system in low-income countries is recognized as having lagged behind other industries in the use and adoption of information communication technologies [30,31]. Therefore, mobile-based clinical guideline applications are used as job aid tools for real-time information and knowledge access and updating, improving health professionals' performance by directing and guiding them in an interactive and structured manner using mobile devices [32,33]. In low-income countries, mobile devices are not widely utilized for daily healthcare practice in terms of providing real-time access to clinical guidelines for healthcare practitioners. Mobile-based clinical guidelines add valuable functions for health professionals in terms of presenting complete information and reducing their workload. However, healthcare professionals have not adequately used mobile devices and related applications for healthcare systems. The development of mobile-based medical applications and technology-based healthcare practices is still in its premature stages [34]. Information and communication technologies (ICT) are efficient and effective in many industries. However, they are not yet fully implemented and integrated into existing patient care systems, and healthcare institutions, particularly professionals, are noticeably lagging in accepting and adopting technologies [35]. Lack of awareness of mobile-based clinical guideline applications, lack of system-user self-efficacy, lack of outcome expectations, health professionals' attitudes and perceptions [36,37], lack of commitment and motivation [34,38], lack of organizational support, the constructs of the technology acceptance model (TAM) [34,38], and socioeconomic characteristics of the health professionals [39] are factors affecting the acceptance and utilization of mobile-based clinical guideline applications in healthcare practice. So, understanding why healthcare professionals could not accept and use mobile-based healthcare systems would accelerate hospital competition and enhance the acceptance and utilization of mobile devices and the Internet in healthcare practices [27,40]. It is also important to provide critical insight for the development of effective strategies to increase the efficiency and effectiveness of healthcare personnel [41,42].
In Ethiopia, several eHealth technologies that could support healthcare practices have been introduced. An electronic medical record system, the district health information system version 2 (DHIS2), a routine health information system [43,44], an interactive voice response system, a patient appointment reminder system, an electronic community-based health information system, and the international classification of diseases version 10 (ICD-10) for disease coding and classification have mainly been introduced in Ethiopia to support healthcare system processes and to enhance the documentation and reporting system [45,46]. The implementation process of these systems is extremely costly and uncertain. As a result, eHealth technology adoption and dissemination in Ethiopia are still in their infancy [39,47,48]. So, there is a high demand for an easily accessible electronic system for daily healthcare practice, and there are persistent challenges to patient care [47]. Therefore, before starting the mobile-based clinical guideline implementation process, creating a clear understanding of the gaps in the manual guidelines, and of the benefits of mobile-based clinical guidelines, would create awareness for system users. This would also support an effective and efficient system development process that could make the practitioners agree and be willing to accept mobile-based clinical guidelines [49]. To the best of our knowledge and literature search, there are no adequate studies about health professionals' acceptance of mobile-based clinical guidelines in Ethiopia. Therefore, this study would have implications for policy design, facilitating the dissemination and updating of clinical guidelines, receiving users' feedback, and enhancing clinical guideline standards. This study is also significant for health professionals' theoretical learning, enhancing the understanding that a mobile-based clinical guideline application would help them access previous work experience and patient history to provide accurate and consistent patient care. Hence, health policy implementers and practitioners were informed that medical errors could be reduced, the accuracy of patient care could be ensured, and health professionals could be easily supported by a hand-held clinical guideline application. The study would also serve as a framework for further similar research. Therefore, this study aimed to assess health professionals' acceptance of mobile-based clinical guideline applications and to test the unified theory of acceptance and use of technology (UTAUT) model.
Theoretical background and hypothesis development In the last decade, numerous theoretical models have been proposed to assess and explain end-users' acceptance of information and communication technology (ICT) [50]. The unified theory of acceptance and use of technology (UTAUT) is one of the known theoretical models that has been extensively used and practically tested on a wide range of ICT applications from the end-users' viewpoint [51]. UTAUT is a combination of activity theory and technology acceptance models (TAM) and has been constructed as a framework to study end-users' acceptance and use of new ICT applications [52]. The UTAUT model proposes that the actual acceptance and use of technology are affected by end-users' behavioural intentions (BI) [53]. The UTAUT model is an extension of other models and therefore has a strong ability to explain the acceptance and use of technology compared with other single models [54,55]. The UTAUT model consists of four key construct elements that directly affect users' BI to accept mobile-based clinical guideline applications: performance expectancy, effort expectancy, social influence, and facilitating conditions [51,56]. BI is additionally affected by individuals' attitudes toward the acceptance and use of new ICT applications, which are in turn directly affected by the four key constructs [39]. Age, sex, and experience were used as moderator factors in this study. Various information communication technologies, mobile-based information systems, and integrated components that would test health professionals' behavioural intention toward acceptance of mobile-based clinical guidelines were considered for the articulation of the study. The modified UTAUT model has been applied to test users' acceptance of, and intention to use, various technologies for healthcare practice in low-income countries. For instance, a study conducted in Burundi states that the UTAUT model is critical to explaining users' intention to adopt mobile-based information systems [57]. In Tanzania, the UTAUT model was used to test an accredited drug dispensing outlet program and to identify factors that would impact system users [58]. In Ethiopia, various studies confirmed that the modified UTAUT model is suitable for studying the acceptance of electronic medical and personal health record systems from the health professionals' perspective [59,60], the adoption of e-learning [61], and the sustainable adoption of the eHealth system [39]. Moderators such as age [62,63], sex [64][65][66], and experience could influence the model predictors and health professionals' intention to accept mobile-based clinical guideline applications. The practical utilization of mobile-based clinical guideline applications in resource-limited settings has not been initiated and implemented in Ethiopia. Therefore, actual system use was not measured, and experience was removed from the structural equation model analysis, as the study participants had no familiarity with mobile-based clinical guideline applications. The actual modified UTAUT model framework of the study is presented in Fig. 1. Based on the above UTAUT model, the following hypotheses were developed.
Performance expectancy Performance expectancy (PE) is the degree to which individuals believe that using ICT applications has the benefit of enhancing one's job performance [67]. PE has been identified as a strong determinant of BI to use ICT applications in different settings [67][68][69]. Many studies have proven that using mobile-based applications in healthcare practice has benefits for one's health and enhances health practitioners' job performance [70][71][72]. Performance expectancy is one of the possible predictors of mHealth adoption in Burundi [57]. However, a study in Australia confirmed that performance expectancy does not affect individuals' intention to use cloud-based mHealth services [73]. Accordingly, the following hypotheses were developed. H1 PE has positive effects on health professionals' attitudes toward mobile-based clinical guideline applications. H2 PE has a positive effect on health professionals' BI of mobile-based clinical guideline application acceptance. Effort expectancy Effort expectancy (EE) is one of the crucial elements of technology acceptance in the UTAUT model, and it addresses the question of how easy the new ICT technology is to use [56]. Studies have depicted that EE influences users' BI to accept and use new ICT applications that do not require much effort to work through [39,74,75]. A study in a low-resource setting shows that effort expectancy is a key determinant of health professionals' intention toward telemedicine [76]. Another study in Canada shows that information systems and technology acceptance and use are significantly influenced by effort expectancy [77]. Therefore, the following hypotheses were developed. H3 EE has significant effects on health professionals' attitudes toward mobile-based clinical guideline applications. H4 EE has significant effects on health professionals' BI to accept mobile-based clinical guideline applications. Social influence Social influence (SI) is the degree to which system users assume that others would encourage them to use the new ICT technology [56]. According to studies, SI has a positive association with BI to accept and use new mobile health applications for healthcare practice [78,79]. Accordingly, the following hypotheses were formulated. H5 SI has significant effects on health professionals' attitudes toward mobile-based clinical guideline applications. H6 SI has significant effects on health professionals' BI to accept mobile-based clinical guideline applications. Facilitating conditions Facilitating conditions (FC) is one of the construct elements in the UTAUT model [56]. It reflects the belief as to whether ICT, technical infrastructure, and trustworthy support are available in the organization for system users [56,80]. FC provides system users with a sense of psychological control that, in turn, influences their willingness to adopt a particular behavior. Hence, users receiving mobile-based clinical guidelines are required to have specific basic skills, such as how to operate and use mobile phones and how to carry out the basic functions of a mobile device (phone calls, sending and receiving text messages) [81,82]. If system users do not have these required operational skills and basic mobile functions, they will not accept and adopt mobile-based clinical guideline applications. So, the following hypotheses were developed. H7 FC positively affects health professionals' attitudes toward mobile-based clinical guideline applications. H8 FC positively influences health professionals' acceptance of mobile-based clinical guideline applications.
Computer literacy
Computer literacy (CL) refers to health professionals' basic information communication technology skills and knowledge, the abilities they have, and how technically good system users are at using mobile-based clinical guideline applications [60,83]. Individuals can also seek, evaluate, and communicate information using media across a range of digital platforms, which influences the acceptance of mobile-based clinical guideline applications [59,84,85]. H9 CL has a positive effect on health professionals' attitudes toward mobile-based clinical guideline applications. H10 CL has a positive effect on health professionals' acceptance of mobile-based clinical guideline applications.
Attitude
Attitude (ATT) is a psychological construct that shows how people think, feel, and tend to behave about an object or a phenomenon [86]. It is a predisposed state of mind regarding the importance of a new system in reducing workload, enhancing work performance, and accomplishing tasks efficiently and effectively [39,87]. According to studies, attitude is appropriate for studying behavioural intention to accept and use new technologies, and it is one of the fundamental constructs for the successful implementation and adoption of a new technology [88][89][90]. Therefore, health professionals' attitudes are crucial for the acceptance of mobile-based clinical guideline applications in the study setting. H11 ATT directly affects the BI of health professionals' acceptance of mobile-based clinical guideline applications. H12 ATT mediates the relationship between PE and health professionals' BI towards the acceptance of mobile-based clinical guideline applications. H13 ATT mediates the relationship between EE and health professionals' BI towards the acceptance of mobile-based clinical guideline applications. H14 ATT mediates the relationship between SI and the BI of health professionals to accept mobile-based clinical guideline applications. H15 ATT mediates the relationship between FC and the BI of health professionals to accept mobile-based clinical guideline applications. H16 ATT mediates the relationship between CL and the BI of health professionals to accept mobile-based clinical guideline applications.
The effects of moderators (age and sex)
Studies in China show that age has significant moderating effects on effort expectancy and behavioural intention to use health technology [62], home telehealth acceptance [69], and mobile health services adoption [63]. Other studies show that age has a moderating effect on performance and effort expectancy, social influence, and behavioural intention to use health information communication technology, smart equipment, and wearable devices [91,92]. Similarly, sex has moderating effects on the modified UTAUT model's construct elements [69,93]. For instance, being female has a significant influence on the performance expectancy of behavioural intention to use wearable technology [93]. Therefore, the following hypotheses for the moderators (age and sex) were formulated. H17 The effect of performance expectancy on health professionals' intention to accept mobile-based clinical guideline applications is moderated by age. H18 The effect of effort expectancy on health professionals' intention to accept mobile-based clinical guideline applications is moderated by age. H19 The effect of social influence on health professionals' intention to accept mobile-based clinical guideline applications is moderated by age.
H20 The effect of facilitating conditions on health professionals' intention to accept mobile-based clinical guideline applications is moderated by age. H21 The effect of computer literacy on health professionals' intention to accept mobile-based clinical guideline applications is moderated by age. H22 The effect of performance expectancy on health professionals' intention to accept mobile-based clinical guideline applications is moderated by sex. H23 The effect of effort expectancy on health professionals' intention to accept mobile-based clinical guideline applications is moderated by sex. H24 The effect of social influence on health professionals' intention to accept mobile-based clinical guideline applications is moderated by sex. H25 The effect of facilitating conditions on health professionals' intention to accept mobile-based clinical guideline applications is moderated by sex. H26 The effect of computer literacy on health professionals' intention to accept mobile-based clinical guideline applications is moderated by sex.
Study design
An institution-based cross-sectional study design was employed among health professionals.
Study setting and period
The study was conducted among health professionals working in the Ilu Aba Bora Zone of the Oromia regional state, from July 04 to August 19, 2022. The Ilu Aba Bora Zone is located in Southwest Ethiopia, about 600 km from Addis Ababa, the capital city of Ethiopia. Its public health facilities provide various health services to more than a million people in the southwestern parts of Ethiopia.
Study population and eligibility criteria
All healthcare professionals working in the public health facilities of the study area constituted the source population, and all permanently employed healthcare professionals constituted the study population. Healthcare professionals who were not present during the data collection period, who had a serious health problem, or who were on annual leave were excluded.
Sample size determination and sampling procedures
The sample size was determined based on structural equation model parameter criteria, which consider the number of variances of the independent variables, covariances of exogenous variables, direct and indirect regression coefficients between latent variables, and loadings of the items on the latent variables. Accordingly, we estimated 33, 10, 16, and 14 free parameters of these types in the hypothesized model, respectively, giving a total of 73 free parameters. In structural equation model analysis, a minimum of 10 observations is required per free parameter [94,95]. Hence, a minimum sample of 730 was required; allowing for a 10% non-response rate, a total sample size of 803 was estimated. A stratified simple random sampling method was used. Once the sample was stratified by type of facility, it was allocated to each stratum proportionally. Then, a simple random sampling technique was used to select the study subjects in each public health facility.
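As a quick check of the free-parameter counting described above, the short Python sketch below reproduces the 10-observations-per-parameter rule and the 10% non-response allowance; the category labels are paraphrased from the text, and the arithmetic is the only thing the sketch demonstrates.

```python
# Free parameters of the hypothesized model, as reported above.
free_parameters = {
    "variances_of_independent_variables": 33,
    "covariances_of_exogenous_variables": 10,
    "regression_coefficients": 16,
    "item_loadings": 14,
}

total_parameters = sum(free_parameters.values())            # 73
minimum_sample = 10 * total_parameters                       # 10 observations per parameter -> 730
non_response_rate = 0.10
estimated_sample = round(minimum_sample * (1 + non_response_rate))  # 803

print(total_parameters, minimum_sample, estimated_sample)
```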
Data collection and quality management
A pretested, self-administered tool was used. The tool was adapted from previously published similar studies [39,75,96]. It had two parts: the first part contained the sociodemographic characteristics of the study participants, and the second part contained the key constructs of individuals' behavioural intention to accept technology in the UTAUT model [67]. The questionnaire was constructed to test the formulated hypotheses. As shown in SI 1, a total of 26 items were used for the second part. Of these, 4 items measured "performance expectancy", 4 items "effort expectancy", 4 items "facilitating conditions", 4 items "computer literacy", 4 items "attitude", 3 items "social influence", and 3 items "BI of acceptance". All the items used to measure the key constructs of BI were rated on a Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Two-day intensive training was delivered to the data collectors and supervisors. A pre-test was done outside of the study area (Buno Bedele Zone of the Oromia region) with 10% of the total estimated sample units to check the readability and consistency of the tool. The data obtained from the pre-test were used to check the validity and reliability of the tool. During the pre-test, health professionals' experience of using mobile-based clinical guidelines was also assessed; as a result, the study participants were found to have no experience of using mobile-based clinical guideline applications.
Mobile-based clinical guideline applications
In this study, clinical guidelines are considered to be any clinical statements, guidelines, procedures, and handbooks developed by governmental and non-governmental agencies and experts to assist healthcare practitioners in making consistent and accurate evidence-based decisions. Mobile-based clinical guideline applications therefore refer to handling these clinical guidelines through easily accessible mobile-based applications, in a format that makes the guidelines accessible and readable efficiently and effectively regardless of the health professional's location [97,98].
Health professionals
In this study, health professionals include certified health practitioners from recognized governmental and private institutions who are concerned with diagnosing, treating, and preventing human illness, injury, and other physical, social, and mental health issues according to the needs of the populations they serve, following standard principles and procedures [99].
Data processing and analysis
A statistical analysis technique based on the Structural Equation Model (SEM) was used to test and validate the formulated hypotheses. The data from the questionnaire were exported into SPSS software version 25, and Amos version 26 software was used to analyze the data. Descriptive statistics of the study participants were calculated and presented as frequencies and percentages. Composite reliability was used to assess the internal reliability of each item of the constructs, with a value of at least 0.6 considered acceptable [100,101]. Convergent validity was assessed using the Average Variance Extracted (AVE) and factor loadings; the AVE for each construct should exceed 0.50, and item loadings should be above 0.6 [102,103]. Discriminant validity was assessed using the Fornell-Larcker criterion, which compares the square root of the AVE with the cross-loading matrix: the square root of the AVE in the diagonal elements must be greater than all values in the corresponding columns and rows to satisfy discriminant validity [104]. To investigate the relationships between the constructs, path coefficients (beta coefficients), 95% confidence intervals, and p-values were used to test the hypotheses.
For moderator testing, two models, an unconstrained and a constrained model, were compared. For both models, each moderator (age, sex) was assessed for whether it had a significant effect on, or produced a significant difference in, the influence of a given construct on the outcome variables. Accordingly, if a significant difference between the two models existed (p-value < 0.05), the moderator was considered to have a significant effect on how the construct variables influence health professionals' intention to accept mobile-based clinical guideline applications.
Socio-demographic characteristics of the study participants
A total of 769 health professionals participated in this study and returned the questionnaire, for a 95.8% response rate. Of the 769 respondents, around one-half (52%) were male, and the majority (63%) were degree or diploma holders. More than half of the respondents (55.7%) were less than 30 years of age, and the majority (62%) of the health professionals had up to ten years of work experience. Nearly half of the study participants (45.30%) had a monthly salary of ≤ 600 birr (Table 1).
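To illustrate the measurement checks described under "Data processing and analysis", the following Python sketch computes composite reliability, AVE, and the Fornell-Larcker diagonal from standardized factor loadings. The loading values are hypothetical placeholders rather than the study's data, and the formulas shown are the conventional ones rather than the exact output of the Amos software used here.

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability (CR) from standardized factor loadings,
    assuming an item error variance of 1 - loading**2."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam**2
    return lam.sum()**2 / (lam.sum()**2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam**2)

# Hypothetical standardized loadings for two constructs (illustration only).
constructs = {"PE": [0.72, 0.81, 0.76, 0.69], "EE": [0.80, 0.78, 0.74, 0.83]}

for name, lam in constructs.items():
    cr = composite_reliability(lam)
    ave = average_variance_extracted(lam)
    print(f"{name}: CR={cr:.3f} (accept >= 0.6), AVE={ave:.3f} (accept >= 0.5), "
          f"sqrt(AVE)={ave**0.5:.3f} for the Fornell-Larcker diagonal")
```

With helpers of this kind, the checks behind Tables 2 and 3 reduce to comparing each construct's CR, AVE, and square root of the AVE against the stated thresholds.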
Descriptive results of the constructs of the modified UTAUT model
In this study, 46.9%, 53.3%, and 61.1% of health professionals strongly agreed that they intended to learn, to use, and planned to use their smartphones for mobile-based clinical guideline applications, respectively. Regarding computer literacy, 32.0%, 25.6%, and 27.0% of health professionals strongly disagreed that they could properly search for information from online databases, correct and fix problems occurring on their computers and smartphones, and download and install applications, respectively. However, 31.9% of the participants strongly disagreed that they would lack the skills to practice and use the basic functions of the computers and smartphones they have. Regarding attitude, 46.2%, 48.5%, 45.5%, and 49.5% of participants agreed that mobile-based clinical guideline applications would be important for accessing the right information, useful for the quality and consistency of patient care, and that they would not hesitate or fear to use the application, respectively. Regarding facilitating conditions, 33.1% and 36.5% of participants strongly disagreed that they would lack adequate skills and knowledge to use the application, and that the application would not be compatible with their smartphone, respectively. Also, 56.4% and 43.1% of participants strongly disagreed that they had the resources, and that the organization was supportive enough, to use the application, respectively.
Regarding social influence, 39.8%, 42.8%, and 37.3% of the participants strongly agreed that other people's influence, motivation, and opinions would be important for using mobile-based clinical guideline applications, respectively. Regarding effort expectancy, 49%, 38.8%, 54.7%, and 43.3% of the study participants strongly agreed that mobile-based clinical guideline applications would be easy to use, not difficult, clear and understandable, and would allow practitioners to become skilful, respectively. Regarding performance expectancy, 30.9%, 42.7%, 43.6%, and 31.7% of the participants agreed that mobile-based clinical guideline applications would be useful, would enable them to share information and update themselves, would support accurate and consistent patient care, and would ensure the quality of patient care with reduced waiting time, respectively (SI 2).
Measurement model
The convergent validity of the structural model assessment is presented in Table 2.
Based on the results, the internal consistency of each item of the latent variables was assessed by composite reliability. Composite reliability is acceptable and considered good if it ranges between 0.60 and 0.90 [104,105]. The composite reliability values of the latent variables ranged from a minimum of 0.750 to a maximum of 0.890, indicating that the respondents' answers to each item of the latent variables were consistent and had strong internal reliability. Factor loading values for each latent variable ranged from a minimum of 0.63 to a maximum of 0.96, above the minimum acceptable value (0.6). The degree of variation of each latent variable was measured by the average variance extracted (AVE); the AVE values ranged from a minimum of 0.582 to a maximum of 0.778, so each latent variable showed a strong share of explained variance. Consequently, the conditions for convergent validity were satisfied in this study. Furthermore, the factor loading of each item was significant on its respective construct (p-value < 0.001).
The results of discriminant (divergent) validity between the different constructs are presented in Table 3. The elements on the matrix diagonal represent the square roots of the AVEs and are greater than the values in their corresponding rows and columns. As a result, all constructs in this study supported the discriminant validity of the data (Table 3).
Model goodness of fit
The goodness of fit of the model to the data was checked using the Chi-square test (P-value < 0.05), the goodness of fit index (GFI > 0.9), the adjusted goodness of fit index (AGFI > 0.8), the normed fit index (NFI > 0.95), the Tucker-Lewis index (TLI > 0.9), the comparative fit index (CFI > 0.95), the root mean square error of approximation (RMSEA < 0.08), and the root mean square residual (RMR < 0.08) as model fit assessment criteria [86,106]. For model goodness of fit to be achieved, the values of Chi-square, GFI, AGFI, TLI, RMSEA, and RMR should meet these cut-off points. All the required criteria were achieved, and the data fitted the model well (Table 4).
The structural model analysis
As shown in Table 5, the analysis of the structural model showed that performance expectancy, facilitating conditions, and computer literacy did not have positive effects on health professionals' attitudes toward mobile-based clinical guideline applications. Effort expectancy and social influence had significant positive effects on health professionals' attitudes, with path coefficients (β = 0.61, P-value < 0.01) and (β = 0.510, P-value < 0.01), respectively. Performance expectancy, facilitating conditions, and attitude had a significant effect on health professionals' BI of mobile-based clinical guideline application acceptance (path coefficients in Table 5). All the latent variables (performance expectancy, effort expectancy, social influence, facilitating conditions, and computer literacy) together accounted for 57% of the variance in health professionals' attitudes toward mobile-based clinical guideline applications. The same latent variables, together with health professionals' attitude, accounted for 63% of the variance in health professionals' BI of mobile-based clinical guideline application acceptance (Fig. 2).
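The fit assessment summarised above amounts to comparing each index with its cut-off. The sketch below shows one way to automate that comparison; the index values are hypothetical placeholders standing in for those reported in Table 4.

```python
# Hypothetical fit indices (illustration only), checked against the cut-offs cited above.
fit_indices = {"GFI": 0.93, "AGFI": 0.88, "NFI": 0.96, "TLI": 0.95,
               "CFI": 0.96, "RMSEA": 0.045, "RMR": 0.032}

cutoffs = {"GFI": (">", 0.90), "AGFI": (">", 0.80), "NFI": (">", 0.95),
           "TLI": (">", 0.90), "CFI": (">", 0.95),
           "RMSEA": ("<", 0.08), "RMR": ("<", 0.08)}

for index, value in fit_indices.items():
    direction, threshold = cutoffs[index]
    met = value > threshold if direction == ">" else value < threshold
    print(f"{index}: {value} (cut-off {direction} {threshold}) -> {'met' if met else 'not met'}")
```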
Mediation analysis
In the mediation analysis shown in Table 6, the relationship between effort expectancy and health professionals' acceptance of mobile-based clinical guideline applications showed significant partial mediation through attitude. In addition, the relationship between social influence and health professionals' acceptance of mobile-based clinical guideline applications showed significant partial mediation through attitude. Accordingly, effort expectancy and social influence had indirect effects on health professionals' BI towards mobile-based clinical guideline application acceptance, with standardized estimation coefficients of (β = 0.22, P-value = 0.027) and (β = 0.19, P-value = 0.031), respectively.
Moderating effects of sex and age of health professionals on intention to accept mobile-based clinical guideline applications
The effects of sex and age on the relationships between performance expectancy, effort expectancy, social influence, facilitating conditions, and computer literacy and health professionals' intention to accept mobile-based clinical guideline applications were investigated. The moderators were estimated in both the constrained and unconstrained models. Accordingly, the effects of performance expectancy, facilitating conditions, and social influence on health professionals' intention to accept mobile-based clinical guideline applications were not significantly moderated by the sex of health professionals. However, the effects of computer literacy and effort expectancy on health professionals' intention to accept mobile-based clinical guideline applications were significantly moderated by sex. Being male had a significant effect on the relationship between effort expectancy and health professionals' intention to accept mobile-based clinical guideline applications, with a path coefficient of 0.712 and a p-value of 0.018. Being female had a significant effect on the relationship between computer literacy and health professionals' intention to accept mobile-based clinical guideline applications, with a path coefficient of 0.316 and a p-value of 0.001 (Table 7). Therefore, H23 and H26 were supported in this study.
For measuring the effects of age on the constructs, the average age (36 years) was used as a cut-off point to dichotomize age into young (< 36 years) and old (≥ 36 years). Age had a significant effect on the relationship between computer literacy and health professionals' intention to accept mobile-based clinical guideline applications, with being young positively influencing acceptance of mobile-based clinical guideline applications with a path coefficient of 0.718 and a p-value of 0.031 (Table 8). Therefore, H21 was supported.
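A minimal sketch of the constrained-versus-unconstrained comparison behind these moderation tests follows, assuming the usual chi-square difference test for nested models; the model statistics used here are invented for illustration and are not the study's actual values.

```python
from scipy.stats import chi2

def moderation_chi_square_difference(chisq_constrained, df_constrained,
                                     chisq_unconstrained, df_unconstrained):
    """Chi-square difference test between the constrained model (path held equal
    across groups) and the unconstrained model (path free to differ by group)."""
    delta_chisq = chisq_constrained - chisq_unconstrained
    delta_df = df_constrained - df_unconstrained
    p_value = chi2.sf(delta_chisq, delta_df)
    return delta_chisq, delta_df, p_value

# Hypothetical values for one path (e.g., EE -> BI compared across sex groups).
d_chi, d_df, p = moderation_chi_square_difference(412.8, 301, 406.9, 300)
print(f"Delta chi-square = {d_chi:.1f}, df = {d_df}, p = {p:.3f}; "
      "p < 0.05 indicates a significant moderating effect")
```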
Discussion
This study was conducted to determine the effects of the constructs of the UTAUT model on health professionals' acceptance of mobile-based clinical guideline applications before the actual use of the applications. A total of 803 health professionals were invited to participate in this study. The study therefore differed from other similar studies in terms of the representative sample size used, which is important for saving resources when making decisions based on this study. In addition, the study verified that the constructs (PE, EE, SI, FC, CL, and ATT) of the UTAUT model explain individuals' attitudes towards mobile-based clinical guideline applications and health professionals' acceptance before the actual use of the application. Convergent and divergent validity were assessed, and the model goodness of fit was also tested; all the mentioned criteria of the structural equation model were achieved.
A hypothesis was formulated for each of the constructs, and their effects on health professionals' acceptance of mobile-based clinical guideline applications were tested. As a result, performance expectancy, facilitating conditions, and computer literacy had no positive effects on health professionals' attitudes toward mobile-based clinical guideline applications (H1, H7, and H9). Additionally, computer literacy had no positive effect on health professionals' acceptance of mobile-based clinical guidelines (H10). Performance expectancy and effort expectancy had significant effects on health professionals' behavioural intentions and attitudes toward mobile-based clinical guideline applications, respectively (H2 and H3). In addition, facilitating conditions and social influence had significant effects on health professionals' behavioural intentions and attitudes towards mobile-based clinical guideline application acceptance, respectively (H8 and H5). According to hypothesis H11, health professionals' attitudes had a direct effect on their behavioural intentions toward the mobile-based clinical guideline application. In the mediation analysis, effort expectancy and social influence had significant indirect, partially mediated relationships with health professionals' acceptance of mobile-based clinical guideline applications.
Effort expectancy had a significant effect on health professionals' attitudes towards mobile-based clinical guideline applications, and its relationship with health professionals' acceptance of mobile-based clinical guideline applications was mediated by the health professionals' attitudes. This finding was supported by similar studies conducted in different geographical areas [107,108]. Other studies have also shown that effort expectancy has a significant influence on the adoption of healthcare information technology and mHealth applications [71,108,109]. The finding opposes a report stating that mobile applications are difficult to use, that the benefits of using mobile applications are offset by the effort needed to use them, and that the more complex an innovation is, the lower its rate of acceptance and adoption [110,111]. However, effort expectancy has a positive influence on individuals' acceptance of new technology (mobile-based clinical guideline applications) and an indirect effect through attitude [112]. This might be because health professionals' attitudes, their belief that using the new application is easy, and their intention to use mobile-based clinical guideline applications are positively influenced when little effort is required to use mobile applications [39]. In addition, effort expectancy is associated with reductions in diagnosis and medication errors [113], and with applications' flexibility, friendliness, familiarity, and ease of use. Additionally, mobile phones are now routinely used in education, entertainment, communication, and healthcare facilities [67], so using them might not require much effort, and users might not face technical problems.
Social influence had a significant effect on health professionals' attitudes toward mobile-based clinical guideline applications, and its relationship with health professionals' acceptance of mobile-based clinical guideline applications was mediated by the health professionals' attitudes. This was congruent with other similar studies [60,75,86,114]. It has been concluded that the viewpoints and opinions of others regarding the use of information technology in education and learning affect health professionals' behavioural intentions for the frequent and daily use of technology [115,116].
Performance expectancy had a significant effect on health professionals' acceptance of mobile-based clinical guideline applications. This could be because mobile-based clinical guideline applications can be useful for assisting health professionals in monitoring a patient's disease progression and managing disease [117]. Additionally, mobile clinical guideline applications can provide health professionals with real-time information on the patient's specific health condition [118,119], so mobile-based clinical guidelines could be effective for better healthcare outcomes. Performance expectancy reflects enhanced productivity of health professionals and efficiency in the time spent on operations and patient management, and relates to the care provider's intention and attitude toward mobile-based clinical guideline application acceptance [39]. This study's findings were similar to those of previous studies [72,120,121].
Facilitating conditions had a significant effect on health professionals' BI of mobile-based clinical guideline application acceptance. This finding was consistent with similar studies conducted in Ethiopia [60,86], Nigeria [122], South Africa [123], and Malaysia [124]. Facilitating conditions such as the organizational setting, preliminary skills and knowledge of mobile devices, resources, and the availability of training for information sharing [122], as well as system quality, might have an important role in predicting users' actual acceptance of mobile-based clinical guideline applications [86]. All these facilitating conditions might make mobile-based clinical guideline applications user-friendly, comprehensive, and easily available for acceptance by individuals.
Attitude had a significant effect on health professionals' acceptance of mobile-based clinical guideline applications. This finding was consistent with previous studies [39,86]. This might be because health professionals' attitudes toward using mobile-based systems have improved over time, and because individuals' sociodemographic characteristics and educational level affect their attitudes, which further affect their behavioural intention of technology acceptance [125,126].
Conclusions and recommendations
This study showed that the unified theory of acceptance and use of technology (UTAUT) model is a suitable model for assessing health professionals' attitudes and behavioural intentions towards the acceptance of mobile-based clinical guideline applications. Social influence, effort expectancy, and facilitating conditions were significant constructs for health professionals' acceptance of mobile-based clinical guideline applications. Health professionals' attitude toward mobile-based clinical guideline applications was another strong construct in the UTAUT model for the acceptance of mobile-based clinical guidelines. In addition, effort expectancy and social influence had a positive effect on health professionals' attitudes toward mobile-based clinical guideline applications. The development of user-friendly mobile-based clinical guideline applications, based on users' requirements and in line with national standards for clinical guidelines, is encouraged to support consistent and accurate decision-making by health professionals. Stakeholders and policymakers are therefore advised to build the capacity and technical skills of health professionals to enhance their overall computer literacy. Moreover, resources and organizational support for health professionals will be critical for the acceptance of mobile-based clinical guideline applications.
Implications of the study and future research directions
Theoretical implications
This study contributes to the growing body of literature on the application of mobile devices for healthcare practice and education promotion. The extended UTAUT model applied here proved suitable for predicting the acceptance of mobile-based clinical guidelines. This study assessed the acceptance of mobile-based clinical guideline applications from health professionals' perspectives, which can aid in the development and enhancement of locally relevant clinical practice guidelines. The study may also address readers' concerns about the UTAUT model and mobile-based clinical guidelines, and it serves as a baseline for researchers, since there is insufficient evidence on similar topics.
Practical implications
This study provides valuable implications for fostering the future implementation of mobile-based clinical guidelines. Based on the significant predictors, the current study may be important for offering tailored programs that increase users' digital knowledge and ensure that using mobile-based clinical guideline applications is easy and simple. Performance expectancy is a significant predictor of the acceptance of mobile-based clinical guidelines, which indicates that it is vital to demonstrate the advantages of mobile-based clinical guidelines to healthcare professionals.
Implications for future research direction
Future research should concentrate on approaches to improving the acceptance of mobile-based clinical guidelines and removing technical barriers. It should also explore further suitable and specific predictors to enhance the viability of the UTAUT model in health-related contexts. The proposed predictors could also be applied in studies on the actual use of locally available mobile-based systems in healthcare practice, enabling researchers to examine their ultimate predictive power. Researchers are also encouraged to conduct similar studies in governmental and non-governmental health institutions. Decision-makers, healthcare providers, and system developers could use this study's findings to increase the adoption of mobile-based clinical guidelines in the future.
Strengths and limitations of the study
This study provides input for future research and for the implementation and adoption of mobile-based clinical guideline applications in low-income settings. Additionally, this study showed that the constructs of the UTAUT model affect health professionals' intention to accept new technology. Since the study is cross-sectional, a temporal relationship between the effects of the constructs and individuals' behavioural intentions to accept mobile-based clinical guideline applications cannot be established. This study also did not attempt to control for the impact of confounding variables on health professionals' intention to accept mobile-based clinical guideline applications.
Table 1 Sociodemographic characteristics of study participants
Table 2 Constructs' convergent validity for healthcare professionals' acceptance of mobile-based clinical guidelines in a resource-limited setting, northwest Ethiopia 2023 (AVE: Average variance extracted)
Table 6 Mediation analysis result
Table 7 The moderating effect of the sex of healthcare professionals on the intention to accept mobile-based clinical guideline application
Table 8 The moderating effect of the age of healthcare professionals on the intention to accept mobile-based clinical guideline application
9,478
2024-06-25T00:00:00.000
[ "Medicine", "Computer Science" ]
Theoretical Study of the β-Cyclodextrin Inclusion Complex Formation of Eugenol in Water
The interaction between eugenol and β-cyclodextrin in the presence of water is studied by molecular mechanics and dynamics simulations. A force field model is used in molecular mechanics to determine the interaction energy and the complex configuration at the absolute minimum. The van der Waals term is the main contribution to the total energy, and so directly determines the configuration of the inclusion complex. The formation of inclusion complexes is simulated by molecular dynamics, in which their configurations are deduced from the position probability density that represents the preferred location and orientation of the guest in the simulation. When eugenol approaches from the rims of β-cyclodextrin, it tends to enter the cavity, remain inside for a short period and then exit from it. The guest tends to include the phenyl ring inside the cavity in the most probable configurations. Two inclusion complex configurations are proposed, each with the hydroxyl and methoxyl groups pointing towards a different rim of β-cyclodextrin. The initial guest orientation is the main factor determining these configurations. The model presented in this study reproduces the experimental findings on inclusion complex formation and proposes two possible complex configurations, one previously suggested by different authors.
Introduction
Cyclodextrins (CDs) are macrocyclic molecules composed of glucose units (six for α-CD, seven for β-CD, eight for γ-CD, etc.) forming truncated cone-shaped compounds. These give rise to cavities of different internal diameters, capable of containing molecules of different structure, size, and composition [1][2][3]. The ability of CDs and derivatized cyclodextrins to form inclusion complexes makes them useful in catalysis and in the chiral resolution of racemic compounds. Such processes are extensively employed in various research fields and technological applications. A well-known experimental outcome is that the size of the guest must be adequate to achieve maximum binding affinity in each CD, depending on molecular properties such as its composition and geometry [4][5][6][7]. There are certain general characteristics of the types of molecules capable of being included totally or partially inside the cavity of CDs, but each case must be analyzed individually. CDs and their inclusion complexes have been theoretically studied using several computational methods: molecular mechanics (MM) [8,9], molecular dynamics (MD) [6,10], and Monte Carlo simulations (MC) [11,12]. Eugenol (EG) is a phenol derivative that can be extracted from certain essential oils such as clove oil, basil, bay leaf, nutmeg, or cinnamon. It is used in the food, perfume, and pharmaceutical industries due to its biological activities, such as antibacterial, antifungal, anaesthetic, antiallergic, antioxidant, anticarcinogenic, and anti-inflammatory properties, among many others [13,14]. Despite its multiple applications, EG presents some disadvantages such as light sensitivity and poor water solubility. The formation of inclusion complexes with CDs can increase its aqueous solubility and reduce these undesirable effects. There is experimental evidence for eugenol inclusion complex formation with β-CD and some of its derivatives, both in the solid state and in aqueous solution [15][16][17]. These studies demonstrated that the molecular volume of EG fits the cavity size of β-CD, and that host and guest form inclusion complexes.
They also suggested that the phenyl ring of EG is partially inside the cavity, with the hydroxyl and methoxyl groups projecting outside the wider rim of β-CD. Among the studies related to EG, there are conformational studies of EG using semiempirical and Density Functional Theory methods [18][19][20]. There are also theoretical studies of the inclusion complexes formed between water-soluble CD-grafted chitosan derivatives and EG by means of molecular dynamics simulation [21]. In the inclusion complexes formed in this latter case, the guest also orients the hydroxyl and methoxyl groups towards the wider rim of the cavity. However, there are no previous molecular simulations of EG and β-CD with water. The aim of the present study is to theoretically examine the interaction between EG and β-CD in the presence of water, based on molecular mechanics (MM) and molecular dynamics (MD) simulations. The MM simulation calculates the interaction energy between EG and β-CD and deduces their configuration at the absolute energy minimum, but this interaction occurs in processes where the molecules are moving and cannot always reach such a configuration. The MD simulation studies the molecular movements due to their mutual interactions, this method being more appropriate to describe the process of complex formation. The model attempts to reproduce the capacity for inclusion complex formation and establish the complex configuration. The method applied was previously used to determine the interaction energy and β-cyclodextrin inclusion complex formation of molecules with different sizes, shapes, and compositions [22][23][24][25]. The interaction potential and simulation method used in this study are presented in Section 3. Section 2 evaluates the interaction energies between β-CD and EG, and discusses the main results of a molecular dynamics simulation regarding the formation of the inclusion complex. The results obtained are corroborated by the experimental findings. Figure 1 represents the penetration potential (W) along with its contributions. W resembles a potential well because the interaction energy is deeper inside than outside the cavity, which represents the force attracting EG into β-CD. The values of W and the van der Waals (LJ) term are nearly the same because the electrostatic energy (ELE) is of the order of 2 × 10^-2 kcal/mol, and the H-bond term only contributes to E_inter at some positions of the guest outside the cavity. The small electrostatic energy is due to the presence of water, whose dielectric constant (ε) is 80, although this contribution is similar for smaller values of ε. For instance, the electrostatic contribution for solvents like ethanol (ε = 26) is about 7 × 10^-2 kcal/mol.
Molecular Mechanics Simulation
The minimum value of the interaction energy (E_min) is −10.22 kcal/mol, greater than the minimum value of E_inter (−15.91 kcal/mol) because E_intra is positive (5.69 kcal/mol). The main contribution to E_intra is the torsional energy (3.98 kcal/mol), followed by the bond term (1.60 kcal/mol). Eugenol is located inside the cavity in the E_min configuration (the inclusion complex configuration), parallel to the cavity axis, with its centre of mass near the narrower rim of β-CD and the O atoms pointing towards the wider rim (Figure 2a) [26]. The guest molecule is superimposed on the CD in Figure 2 for clarity.
However, Figure 1 shows that the values of W near the cavity centre are similar to the minimum value of E_inter, with differences of less than 1 kcal/mol. This means that EG can form β-CD inclusion complexes with similar energies but different configurations. In these complexes the guest centre of mass is located nearer the wide rim of β-CD and with different orientations, but its phenyl ring tends to stay within the cavity (Figure 2b) [26]. The potential energy surfaces for the different regions along the cavity axis are shown in Figure 3. The size of the regions where the energy is attractive increases with the diameter of the cavity, and near the wider rim the guest tends to locate its centre of mass outside the β-CD (Figure 3d). The energy is lower around the cavity walls than near the centre of the host and is again seen to be similar in those regions (Figure 3b,c), although the absolute minimum energy is located in the region of Figure 3b.
Molecular Dynamics Simulation
The movements of EG in the trajectories, and therefore the residence time t, interaction energy E, binding free energy F and position probability density, depend on the initial values of the guest disposition and velocities. Whereas the velocities hardly influence the simulation, the initial centre of mass and orientation of EG determine the subsequent process. The initial dispositions of EG in the simulation are represented in Figure 4 [26], with three trajectories in each relative position between host and guest. The results obtained in the simulation show that if the guest approaches the β-CD from the cavity rims, it tends to enter the cavity, remain inside for a short period (residence time t) and then exit from it, although not always passing through the cavity. When the guest is partially inside the cavity in the trajectories, it tends to include the phenyl ring because it thus adopts more stable configurations. The mean value of the energy in each trajectory, and therefore in the simulation, depends on the movements of the guest. The value of E_mean is −2.39 kcal/mol, with the contribution of E_inter being −8.06 kcal/mol and that of E_intra 5.67 kcal/mol. E_mean is greater than E_min because the energy of every position with a probability other than zero contributes to the average energy in the trajectories. The van der Waals term contributes the most to E_inter, and the torsional energy to E_intra, in the simulation. However, these energies do not tell us whether the complex formation is energetically favourable with respect to the reagents, whereas the binding free energy F does. F varies from −7.02 to −10.34 kcal/mol in the simulation, with F_mean = −9.15 kcal/mol. The initial guest disposition influences F_mean and E_mean, because they result from the movements of EG in the trajectories, where the guest positions and orientations depend on the initial conditions. As seen in Section 2.1, there are different inclusion complexes formed by β-CD and EG with similar interaction energies and different configurations; the phenyl ring of EG is always located inside the host, along the Z-axis. The preferred location of the guest in the MD simulation, and therefore the capacity to form inclusion complexes, is deduced from the position probability density (Figure 5). EG tends to locate its centre of mass preferably near either rim of β-CD: near the centre at the narrower rim (about 12.5%) but closer to the cavity wall at the wider rim (about 20%). The guest orientation in these zones of highly probable presence does not remain constant; it varies continually according to the small energy differences between the complex configurations.
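As an illustration of how a position probability density of this kind can be accumulated, the sketch below bins hypothetical centre-of-mass samples on a regular grid and normalises the counts; the grid extent, bin number, and sample data are assumptions made only for the example, not values taken from the paper.

```python
import numpy as np

def position_probability_density(com_positions, bins=50, extent=5.0):
    """Histogram of guest centre-of-mass positions (angstrom) on a regular 3D grid,
    normalised so that the binned fractions sum to one."""
    edges = [np.linspace(-extent, extent, bins + 1)] * 3
    hist, _ = np.histogramdd(np.asarray(com_positions, dtype=float), bins=edges)
    return hist / hist.sum()

# Hypothetical example: pool the centre-of-mass samples of a subset of trajectories
# (e.g., only those whose initial orientation points a given group towards the rims).
rng = np.random.default_rng(0)
fake_samples = rng.normal(loc=[0.0, 0.0, 3.0], scale=1.0, size=(5000, 3))
density = position_probability_density(fake_samples)
print(density.max(), density.sum())  # peak bin fraction and total (1.0)
```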
However, the size of the guest does not permit a rotation of 180° with respect to the cavity axis inside the β-CD; this rotation only occurs outside the β-CD, before entering the cavity. Once inside, the guest orientation is under strong restrictions due to the inner cavity size, and the variations are small. Some of the most probable guest orientations corresponding to the preferred centre of mass positions are shown in Figure 6 [26]. It can be concluded from the MD simulation that EG can form inclusion complexes with β-CD in which the phenyl ring is always inside the cavity, in agreement with the experimental findings. There are two types of inclusion complexes deduced from MD, whose configurations (Figure 6a,b) are very different because the hydroxyl and methoxyl groups point towards either rim of β-CD. Nevertheless, none of these configurations agrees with that of minimum energy, because the sizes of the host and guest do not allow the latter to move freely inside the cavity to adopt the minimum energy configuration. The inclusion complex in which the hydroxyl and methoxyl groups project outside the wider rim of β-CD (Figure 6b) is the one suggested by several other authors [15][16][17]. If these results are validated by experimental findings, it can also be concluded that there are other possible inclusion complex configurations. The two possibilities are obtained by considering the same number of trajectories in the MD for each of the mentioned initial relative positions, so that all the different ways the molecules can approach each other appear in the simulation. To demonstrate that all the trajectories do not contribute equally to the more probable configurations, different position probability densities are determined (Figure 7). The density represented in Figure 7a corresponds to the trajectories in which the propene group of EG points towards the rims of β-CD, independently of the guest centre of mass (Figure 4a,b). In this case EG would form an inclusion complex like that of Figure 6b, after moving through the cavity with the initial orientation or changing it before entering β-CD. However, from the position probability density (Figure 7b) calculated with the trajectories in which the initial orientation of the phenyl group of EG is towards the rims of β-CD (Figure 4c,d), the more probable inclusion complex is like that of Figure 6a. Therefore the initial guest orientation decisively influences the configuration and probability of forming one of these types of inclusion complexes. To assess the influence of the initial centre of mass of EG in the simulation, position probability densities are determined by considering the trajectories with initial positions near either rim of β-CD, independently of the guest orientation. If the guest approaches β-CD from the narrower rim (Figure 7c) or the wider rim (Figure 7d), it can reach both types of configurations, thus proving that the main factor determining inclusion complex formation is the initial guest orientation. This result also implies that establishing the inclusion complex configuration from experimental findings reveals the way host and guest preferably approach each other. The initial guest disposition also influences the time during which the host-guest interaction is attractive enough for them to remain close to each other. The shorter residence times correspond to the trajectories with initial dispositions near the narrower rim of the CD.
Residence times in the simulation vary from 96 ps to 860 ps (t_mean = 360.3 ps), and the guest usually spends this residence time inside the cavity, as noted above.
Molecular Mechanics Simulation
The driving forces contributing to the formation of complexes with CDs are due to the electrostatic, van der Waals, hydrophobic and H-bond interactions. Whereas the electrostatic, van der Waals and H-bond interactions are modelled by different analytical functions in molecular simulation methods, the hydrophobic interaction is one of the least understood. Traditionally, the negative enthalpy and negative entropy changes observed in experimental studies have been associated with a small contribution of the hydrophobic effect to CD complexation. However, recent investigations with supramolecular complexes show the importance of the replacement of high-energy water by guest molecules for the formation of complexes in cavities, although they also show that this contribution is much smaller for flexible hosts like CDs than for other cavities [27,28]. Modelling this replacement process would provide a method to theoretically take the hydrophobic effect into account in the interaction with CDs. However, the presence of water in the process is not represented in the present study by discrete solvent molecules but by a uniform continuous medium, and thus the hydrophobic interaction cannot be included in the computational model. The interaction energy E between EG and β-CD is modelled by the sum of the intramolecular E_intra and intermolecular E_inter energies, as in the Assisted Model Building with Energy Refinement (AMBER) force field [29,30]. The intramolecular energy is modelled by a sum of the torsional energy, bond stretching and bending functions, and represents the conformational adaptation of the guest and host. The intermolecular energy is determined by a sum of the van der Waals (Lennard-Jones potential), electrostatic, and H-bond terms:
E_intra = Σ_bonds K_r (r − r_eq)^2 + Σ_angles K_θ (θ − θ_eq)^2 + Σ_dihedrals (V_n/2)[1 + cos(nφ − γ)]
E_inter = Σ_i Σ_j [A_ij/R_ij^12 − B_ij/R_ij^6 + q_i q_j/(ε R_ij)] + Σ_H-bonds [C_ij/R_ij^12 − D_ij/R_ij^10]
where r represents bond lengths, θ bond angles, φ torsional angles of the molecules, and R_ij the distance between the ith atom of the guest and the jth atom of β-CD. The presence of water in the process is represented by a uniform continuous medium, with a dielectric constant ε = 80 in the electrostatic contribution to E_inter. All atoms in the guest molecule are considered because some H atoms can contribute decisively to the formation of H-bonds between host and guest, and this may be reflected in the interaction energy E. The atomic coordinates of β-CD, its net atomic charges [31] and the AMBER force field parameters are taken from the literature [32,33]. The molecular configuration and atomic point charges of EG are calculated by the Hartree-Fock method using the 6-31G** basis set, implemented in the MOLPRO package [34,35]. The origin of the reference system is located at the centre of mass of the CD, with the space-fixed frame over the principal axes of the β-CD, where the Z axis is collinear with the cone axis (thus the XY plane is parallel to the cone base). The configuration of EG is given by the coordinates of its centre of mass and the molecular orientation, defined by the Euler angles formed with respect to the absolute frame (X, Y, Z). The method is the same as previously applied to study the interaction between β-CD and various molecules; the energy E is therefore calculated for different positions and orientations of the guest centre of mass, inside and outside the CD [22][23][24][25].
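The pairwise intermolecular sum described above can be illustrated with a minimal numerical sketch. The atom coordinates, charges, and A/B coefficients below are invented for the example, the H-bond 12-10 term is omitted, and the Coulomb conversion factor assumes distances in angstrom, charges in units of e, and energies in kcal/mol.

```python
import numpy as np

COULOMB = 332.0636  # kcal*angstrom/(mol*e^2) conversion factor for the Coulomb term

def intermolecular_energy(guest_xyz, host_xyz, A, B, q_guest, q_host, eps_r=80.0):
    """E_inter as a double sum over guest atoms i and host atoms j:
    A_ij/R_ij**12 - B_ij/R_ij**6 + q_i*q_j/(eps_r*R_ij)."""
    e_vdw, e_ele = 0.0, 0.0
    for i, ri in enumerate(guest_xyz):
        for j, rj in enumerate(host_xyz):
            r = np.linalg.norm(ri - rj)
            e_vdw += A[i, j] / r**12 - B[i, j] / r**6
            e_ele += COULOMB * q_guest[i] * q_host[j] / (eps_r * r)
    return e_vdw + e_ele, e_vdw, e_ele

# Hypothetical toy system: 2 guest atoms and 3 host atoms (illustration only).
guest_xyz = np.array([[0.0, 0.0, 0.5], [0.0, 1.4, 0.5]])
host_xyz = np.array([[3.0, 0.0, 0.0], [0.0, 3.0, 0.0], [-3.0, 0.0, 0.0]])
A = np.full((2, 3), 1.0e5)          # invented repulsive coefficients
B = np.full((2, 3), 5.0e2)          # invented attractive coefficients
q_guest = np.array([-0.2, 0.1])
q_host = np.array([0.15, -0.1, 0.05])
print(intermolecular_energy(guest_xyz, host_xyz, A, B, q_guest, q_host))
```

In a full calculation this sum would run over all guest and host atoms with the published AMBER parameters; the sketch only shows how the continuum dielectric (ε = 80) suppresses the electrostatic term relative to the van der Waals term.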
The complex configuration of EG with β-CD in water is determined from MM as the position and orientation of the guest at the absolute energy minimum E_min. To obtain this minimum value, a grid is defined (−5 Å ≤ X ≤ 5 Å, −5 Å ≤ Y ≤ 5 Å, −5 Å ≤ Z ≤ 5 Å) in which the distance between two consecutive points is 0.1 Å. At each grid point, E is determined for different orientations (about 23,000) and the minimum value is assigned to each location. The results obtained from the simulation are shown by the penetration potential (W), the potential energy surface (PES), the complex configuration, the minimum value of E (E_min) and its different contributions. The penetration potential W is the curve joining the minimum intermolecular energy for every plane Z = constant, and represents the variation of E_inter through the cavity. The capacity of EG to form a β-CD inclusion complex in water is deduced from the MM simulation from the configuration at E_min (the complex configuration), and it is considered an inclusion complex if the guest is totally or partially located inside β-CD. The PES is calculated at each grid point from the average Boltzmann energy corresponding to the different guest orientations, instead of the lowest energy [36,37], because EG does not always reach the minimum energy orientation whilst moving inside and around the CD. The PES is represented by partitioning the range of Z-axis variation in the β-CD into four parts, depending on whether the guest molecule's centre of mass is near the narrower rim, the centre, or the wider rim of the cavity. The length of each domain is about 2.5 Å, and the potential energy surface for each region is determined as the minimum value of the average Boltzmann energy for every point on the plane in the corresponding interval of the Z-axis.
Molecular Dynamics Simulation
The classical equations of motion for the molecules are solved in MD to obtain the trajectories of EG due to its interaction with β-CD. A basic result of classical mechanics is that the translational motion of the molecule's centre of mass is governed by the total force acting on the body, whereas the rotation about the centre of mass depends on the total applied torque. The total force on the molecule is determined as the sum of the forces acting on each of its atoms. In order to avoid the problem of divergence in the orientational equations of motion, four quaternion parameters have been used as generalized coordinates. The trajectories are determined with different initial values of the guest disposition (centre of mass and orientation) and velocities (translational and rotational). The magnitude of the initial velocities depends on the temperature of the process (293 K), but their directions, as well as the initial orientation of EG in each trajectory, are determined randomly. When the initial guest centre of mass is located outside the CD near the cavity walls, the guest does not enter the CD, but rather continues moving around the host, tending to move away. When the starting position of EG in the simulation is located near the cavity rim, it tends to enter the cavity and remain inside for a short period, forming a stable complex (residence time t); it then moves away from the CD, as previously found in the MD simulations of different molecules with β-CD [22][23][24][25]. Basically, there are four relative positions between the molecules: the guest centre of mass near either rim of the CD, with one end of the guest (the phenyl ring or the radical) pointing towards the cavity.
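A minimal sketch of one standard way to draw the random initial orientations (as unit quaternions) and velocity directions mentioned above is given below; normalising Gaussian samples is an assumed sampling scheme chosen for illustration, not a procedure taken from the paper.

```python
import numpy as np

rng = np.random.default_rng()

def random_unit_quaternion():
    """Uniformly distributed random orientation as a unit quaternion,
    obtained by normalising four independent standard normal samples."""
    q = rng.normal(size=4)
    return q / np.linalg.norm(q)

def random_unit_vector():
    """Random direction for the initial translational velocity."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

# One hypothetical initial condition: an orientation plus a velocity direction
# whose magnitude would then be set from the 293 K target temperature.
print(random_unit_quaternion(), random_unit_vector())
```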
Twelve trajectories are calculated in the present study, three starting from each relative position between the molecules. In this way, the contributions of the different initial guest dispositions are considered equally in the simulation. Moreover, we determined some trajectories with the same initial orientation but different starting centres of mass of EG (near either cavity rim), so as to analyse separately the influence of these factors on the simulation. The length of each trajectory is not defined by the simulation time or the number of steps; we stopped integrating the equations of motion when the guest was located outside the β-CD, in positions where its interaction with the CD was not attractive enough for it to re-enter the cavity [25]. The configuration and the kinetic and potential energies were registered every 100 steps of 1 fs. We used an in-house program written in Fortran, and the equations of motion for constant-temperature molecular dynamics were integrated numerically using a variant of the leap-frog scheme (proposed by Brown and Clarke) [38], constraining the rotational and translational kinetic energies separately [39]. The results obtained for each trajectory were the interaction energy E and its different terms, the binding free energy F, the residence time t and the position probability density. The average values of E, its different terms, F and t over the simulation were also determined (E_mean, F_mean, t_mean). Whereas the residence time represents the time during which the interaction between β-CD and EG is attractive enough for them to remain close to each other (inside or outside the cavity), the capacity to form inclusion complexes is deduced from the position probability density, which represents the preferred location of the guest in the simulation. A guest molecule is able to form an inclusion complex with β-CD when it has a greater probability of remaining totally or partially inside the cavity. This position probability density was calculated by dividing the number density in a volume element by the total number of possible centre of mass positions for the guest. The number densities of presence, or number of guest positions in each volume element, were defined by a grid [36,37]. There are several generalized Born models to calculate electrostatic binding free energies [40], but the electrostatic contribution to E_inter is nearly constant inside the cavity (Figure 1) and thus in the trajectories. Moreover, the van der Waals term is about 10^3 times greater than the electrostatic potential energy, making the process of inclusion complex formation essentially dependent on this contribution. Therefore, the total energy of the complex during the trajectories was used to determine the binding free energy F in the simulation as F = −k_B T ln⟨exp(−W_i/k_B T)⟩, where W_i is the energy of the complex during the trajectories, T the temperature of the process (293 K) and k_B Boltzmann's constant [29].
Conclusions
A molecular mechanics simulation of the interaction between eugenol and β-cyclodextrin in water was presented in this study. The van der Waals term is the main contributor to the total energy, particularly inside the cavity, and so directly determines the configuration of the inclusion complex. The small electrostatic contribution to the total energy is due to the presence of water, whereas the intramolecular energy reflects the structural relaxation of the host and guest molecules.
The molecular mechanics simulation of the interaction between EG and β-CD in the presence of water demonstrates the capacity of EG to be included in β-CD, forming stable complexes in which the hydroxyl and methoxyl groups are pointing towards the wider rim of the cavity. The process of forming inclusion complexes was simulated by MD, showing that when EG approaches from the rims of β-CD, it tends to enter the cavity, remain briefly inside then exit from it, spending in the process a mean time of about 360.03 ps. The guest is not static inside β-CD, it varies in both its centre of mass position and orientation, although the latter is restricted by the molecular size. The guest is partially inside the cavity in the most probable configurations, although it always tends to include the phenyl ring. Two types of configurations are proposed for the inclusion complexes, each with the hydroxyl and methoxyl groups pointing towards a different rim of β-CD. The model presented in this study reproduces the capacity of eugenol to form inclusion complexes with β-CD, in agreement with experimental findings. It proposes two possible configurations of the complex, one of them suggested previously by several authors. This indicates that the main factor influencing the type of inclusion complex formed is the initial guest orientation.
5,735.8
2018-04-01T00:00:00.000
[ "Chemistry" ]
Effect of Teriparatide on Bone Mineral Density and Bone Markers in Real-Life: Argentine Experience Purpose To evaluate the effect of teriparatide (TPTD) on bone mineral density (BMD) and bone markers under clinical practice conditions. To assess whether the results in real-life match those published in clinical trials. Methods Cross-sectional study of postmenopausal women treated with TPTD for at least 12 months. Results 264 patients were included in the study. Main characteristics are as follows: age: 68.7 ± 10.2 years, previous fractures: 57.6%, and previously treated with antiresorptive (AR-prior): 79%. All bone turnover markers studied significantly increased after 6 months. CTX and BGP remained high up to 24 months, but total and bone alkaline phosphatase returned to basal values at month 18. There was a significant increase in lumbar spine (LS) BMD after 6 months (+6.2%), with a maximum peak at 24 months (+13%). Femoral neck (FN) and total hip (TH) BMD showed a significant increase later than LS (just at month 12), reaching a maximum peak at month 24 (FN +7.9% and TH +5.5%). A significant increase in LS BMD was found from month 6 to month 24 compared to basal in both AR-naïve and AR-prior patients (+16.7% and +10.5%, respectively), without significant differences between the two groups. Comparable results were found in FN and TH BMD. Main conclusions. As reported in real-life clinical studies, treatment of osteoporotic postmenopausal women with TPTD induced a significant increase in bone turnover markers from month 6 onward and an increase in BMD from months 6–12 with continuous gain up to month 24. The real-life results of our study matched the results of randomized clinical trials. In addition, TPTD induced an increase in BMD, regardless of the previous use of AR. Introduction Osteoporosis is a chronic condition characterized by lower bone mass and bone microarchitecture deterioration, which compromises bone strength and increases fragility fractures. Currently available treatments for osteoporosis are antiresorptive (AR) or anti-catabolic medications, such as bisphosphonates (BP), denosumab (Dmab), oestrogens, and selective oestrogen receptor modulators, as well as bone-forming agents, such as parathyroid hormone (PTH 1-84 or its fragment PTH 1-34) and abaloparatide. Another treatment recently approved in several countries is romosozumab. In Argentina, TPTD is approved as an initial treatment for severe osteoporosis, very low bone mass (T-score −3 SD), and previous fragility fracture in patients with remarkably high or imminent risk of fracture, or in cases of intolerance or failure of other treatments (intratreatment fracture or decrease in bone mineral density). Although TPTD is not always prescribed to patients when necessary, for several reasons such as affordability and insufficient medical knowledge, among others, the vast majority have previously been prescribed AR agents. Treating patients with BP or Dmab before TPTD has been reported to induce a smaller increase in bone mineral density (BMD) and less anti-fracture efficacy, especially at the hip [9][10][11]. Randomized clinical trials are the gold standard to demonstrate treatment efficacy, but observational trials based on daily clinical practice have expanded efficacy and safety data in the real world and provided additional information. It is estimated that 80% of patients following osteoporosis treatment would not meet the inclusion criteria to participate in clinical trials, even when these patients tend to be more compliant [12,13].
This study aimed at evaluating the effect of TPTD on BMD and biochemical markers of bone turnover under clinical practice conditions in patients treated at centers specialized in bone metabolism. Additionally, the effect of TPTD on BMD in AR-naïve patients was compared with the effects on patients previously treated with AR (AR-prior). Patients and Methods This is a retrospective, cross-sectional, and multicenter study (11 centers in Argentina) in 264 postmenopausal women treated with TPTD for at least 12 months between 2006 and 2018. All women had either a T-score of less than −2.5 at the hip or lumbar spine or a T-score of less than −2.0 plus other risk factors for fracture. As an inclusion criterion, we also required that patients had at least a basal BMD measurement and another one after 12 months of treatment. All patients simultaneously received calcium (at least 1000 mg/day) and vitamin D (at least 800 IU/day). In addition, patients were analyzed considering the previous use of AR and were divided into AR-naïve (n = 56) and AR-prior (n = 208). The use of TPTD was considered after a lack of response to treatment with BP (38.3%), multiple fractures (17.8%), extremely low BMD (16.3%), or combinations thereof (24.6%). The remaining 3% were due to atypical fractures (n = 6) and delays in fracture healing (n = 2). BMD (g/cm²) was measured by dual-energy X-ray absorptiometry (DXA) with the GE Lunar Prodigy (GE Lunar, USA) at the lumbar spine (LS, L1-L4), femoral neck (FN), and total hip (TH) at 0 (basal), 6, 12, 18, and 24 months under TPTD treatment. Scans were performed according to the recommendations provided by the manufacturer, and the coefficient of variation was less than 2% at all centers [14]. Clinical vertebral fractures were studied with radiography, tomography, or magnetic resonance imaging. The study was conducted in accordance with the Helsinki declaration. Each participant was identified by a number to keep their identity confidential. Data Analysis The Kolmogorov-Smirnov test was used to assess the normality of the data distribution, and tests were then applied as appropriate. Student's t-test or the Mann-Whitney test was used to compare the two groups. A Wilcoxon signed-rank test was used to compare paired data. Data are expressed as the mean ± SD or mean ± SEM. Differences were considered significant if p < 0.05. Statistical analyses were performed with GraphPad Prism 5.01 (GraphPad, San Diego, USA). Almost 79% (208/264) of the patients had used BP or Dmab treatment previously, with a mean treatment duration of 5.9 ± 3.6 years: 65.4% of patients who used BP (n = 136) received only one BP, 25.9% (n = 54) were switched to another BP, and 8.7% (n = 18) were switched to Dmab (treatment duration 1.6 ± 0.7 years) before TPTD. In patients who received only one BP (65.4%), 82.4% were oral BP (59.8% alendronate, 30.8% ibandronate, and 9.4% risedronate), and 17.6% were intravenous (mainly zoledronate). In patients who were switched to a second BP before TPTD, 42.1% were switched to another oral BP, 45.6% were switched from oral BP to intravenous BP, and 12.3% were switched from intravenous BP to oral BP. Changes in Biochemical Parameters after the TPTD Treatment. tAP and bAP significantly increased from months 6 and 3, respectively, returning to basal values at month 18. BGP significantly increased from month 3 and remained elevated at month 24, with a maximum peak at month 6 (+160.8%).
Dpyr significantly increased from month 6, returning to basal values at month 24, while s-CTX significantly increased from month 3 and remained high up to month 24, with a maximum peak at month 6 (+78.1%) (Figure 2). PTH decreased significantly at month 6 and returned to baseline values at month 18. A sustained increase from month 3 to 24 was found for serum and urinary calcium, without clinical hypercalcemia or nephrolithiasis. No significant differences were found in serum phosphate and 25OHD during treatment. Magnesium significantly decreased from month 3 until month 24 without hypomagnesemia, while uric acid showed an inverse pattern of behavior (Table 2). Discussion TPTD has both a direct action on osteoblast receptors and an inhibitory effect on sclerostin production by osteocytes, causing an increase in the proliferation and differentiation of osteoblast precursors through canonical Wnt signalling. This process, together with the stimulation of osteoblast function, leads to increases in bone volume and a substantial proportion of new bone matrix on the trabecular and endocortical surfaces [15]. Further, TPTD also increases the production of RANK ligand by osteoblasts, resulting in osteoclast activation and, consequently, bone resorption. In this real-life study, we observed an early increase in bone formation markers. BGP significantly increased from month 3 and remained elevated over the 24 months of treatment. tAP and bAP increased from months 6 and 3, respectively, returning to basal values at month 18. Bone resorption markers also increased early: s-CTX significantly increased from month 3 and remained high up to month 24; Dpyr significantly increased from month 6, returning to basal values at month 18. The increase in bone formation markers was proportionally higher than the increase in bone resorption markers (BGP +160.8% and s-CTX +78.1%). Many studies show that P1NP is a bone formation marker that increases earlier and, therefore, is the most useful marker to assess TPTD action. This marker is not yet available in Argentina for clinical practice; thus, it was not included in our study [16,17]. In those studies, P1NP rises early by 150%-300% from the basal values, similar to our results with BGP in this real-life observation [18][19][20][21]. The percentage increase in s-CTX and its time course in this study were similar to those observed in other randomized trials [17,[21][22][23]. To our knowledge, there are no studies on bone turnover markers in real-life patients with which to compare our results. BGP was the bone marker that showed the highest increase from the early months of the study and remained high during the 24 months of treatment. This might be useful to verify compliance and to anticipate treatment response before densitometric measurements. The earlier and higher rise in bone formation than in resorption allows for a significant gain in bone mass, especially in the vertebrae, with positive changes in microarchitecture and, consequently, in trabecular resistance [22]. As bone resorption increases, cortical porosity also increases, causing apparent detrimental changes in areal bone density in cortical regions, such as the hip and radius, thus explaining why bone mass in those areas may decrease when measured with DXA. In line with the medical literature, we observed a greater increase in LS BMD, significantly higher from month 6 (+5.3%), with a peak at month 24 (+12.3%), similar to data already published [5,21,23,24].
The major increase observed at month 24 is similar to that reported in randomized controlled trials (Neer et al.) in 1637 patients, where an increase close to 10% was noted at month 21; in the EUROFORS study, the increase was 11.2% [5,24]. TH and FN showed a significant increase from months 6 and 12, respectively, reaching a peak at month 24 (TH +5.0% and FN +7.2%). In randomized trials, hip BMD gain was also lower than LS, but in an even smaller proportion than in our study: +3% in the pivotal study and +4.2% in EUROFORS [5,24]. There was a difference between patients in our study and the randomized trial patients, since we also included patients with 2 or more fragility fractures (not only vertebral fractures like those in the randomized trials). Other real-life studies assessed the risk of fracture, but they did not take into account BMD or bone markers in comparison with our study [24][25][26][27]. For the reasons explained above, the hip BMD increase may not be the same as that reported in other clinical trials. Such differences in daily practice might be expected when compared to randomized controlled clinical trials [28]. Observational studies would provide valuable additional information for further clinical trial conclusions. There is evidence that TPTD treatment responses may be different in patients previously treated with AR vs. naïve patients [12]. AR reduces bone turnover, preventing bone tissue repair and causing its aging and hypermineralization. Newly formed bone tissue from anabolic therapy is less mineralized than older bone tissue. As is widely known, DXA measures mineralized bone; thus, it may underestimate TPTD-induced bone mass changes while overestimating those produced by AR [2,29,30]. The EUROFORS study included a cohort of women previously treated with BP who switched to TPTD for 24 months; those patients had an increase of 10.2% versus 13.1% without previous treatment. This is similar to our results: 10.5% vs. 16.7% [31]. That study also showed smaller increases of BMD in TH and FN than in LS, as well as less response capacity in those previously treated with AR vs. naïve patients (TH: 3.8% vs. 2.35%, FN: 4.8% vs. 3.9%). Similarly, we also found less BMD gain in AR-prior patients (TH: 7.2% vs. 5.2%, FN: 8.3% vs. 4.8%). Compared with that study, our results showed greater gains in TH and FN BMD, but the differences were not significant. These differences between studies may be due to our smaller sample size, in addition to the fact that we conducted a real-life study with a smaller number of DXA measurements, which may lead to these statistical differences. A recent report by Lyu et al., with real-life data, analyzed patients who, after a median of 7 years under BP treatment, were switched to TPTD (n = 110) or Dmab (n = 105).
Those on TPTD showed a reduction of hip BMD to below basal values in the first 12 months and then regained up to basal values, whereas those on Dmab did not reach the expected gain. We did not observe a decrease in hip BMD in AR-prior patients while on TPTD [32]. Sequential treatment study results suggest that TPTD as the first drug followed by AR [11,17,18,33] is the ideal combination to obtain major gains in bone mass, as observed in our real-life study. Most of our patients, despite the presence of multiple fractures and very high risk, had been previously treated with AR for at least two years; as is common with real-life patients, they were not always referred to specialists, and the concept of best sequential treatment was not well known. Adverse events were reported in our study, such as small increases in serum calcium, serum uric acid, and 24 h urinary calcium, with a small decrease in serum magnesium; none of them were clinically significant, and they were similar to those found in the pivotal TPTD studies [5]. This could suggest that these parameters should be monitored during teriparatide treatment and that magnesium supplements should eventually be prescribed when a deficiency appears clinically relevant. The limitations of our study are those of real-life evaluation, since DXA and bone markers were not measured at a single center, even though the same methods were used. Not all patients had all the DXA or bone marker measurements. TPTD compliance was not evaluated. AR-naïve patients were fewer in number than AR-prior patients, which reduces statistical power. Most of the AR-prior postmenopausal women had been switched to TPTD because of a poor clinical response to AR. The fracture outcome was not reported because this was a retrospective real-life study, and not all patients had a spine X-ray to evaluate morphometric fractures at the end of the treatment with teriparatide. Since we conducted a retrospective study, we were unable to collect exact information about dietary calcium intake and ongoing glucocorticoid treatment. Conclusion As reported in clinical studies, treatment of osteoporotic postmenopausal women with TPTD in real life induced a significant increase in bone turnover markers from month 6 on, with a higher impact on the BGP and s-CTX markers, and an increase in lumbar spine, femoral neck, and total hip BMD from months 6-12 with continuous gain up to month 24. This increase was earlier and higher in the lumbar spine. In addition, TPTD in real life induced an increase in bone turnover markers and BMD regardless of the previous use of AR, although this was less evident in those who previously used AR. The fact that our biochemical results showed a safety profile similar to that of the pivotal studies should be highlighted. Data Availability Data available on request. The data are available on request through a data access committee, institutional review board, or the authors themselves to Lucas Brun (e-mail: lbrun@unr.edu.ar). Conflicts of Interest The authors declare that they have no conflicts of interest.
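For readers who want to reproduce the style of paired comparison described in the Data Analysis subsection, a minimal sketch is given below. Only the choice of test (Wilcoxon signed-rank on paired BMD values, significance at p < 0.05) follows the text; the variable names and numbers are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the paired comparison described in Data Analysis:
# basal vs. month-24 lumbar spine BMD compared with a Wilcoxon signed-rank test.
# The arrays below are illustrative placeholders, not data from the study.
import numpy as np
from scipy import stats

bmd_basal = np.array([0.812, 0.790, 0.845, 0.760, 0.801, 0.778])  # g/cm2, hypothetical
bmd_m24   = np.array([0.905, 0.871, 0.930, 0.842, 0.880, 0.861])  # g/cm2, hypothetical

pct_change = 100.0 * (bmd_m24 - bmd_basal) / bmd_basal
stat, p = stats.wilcoxon(bmd_basal, bmd_m24)

print(f"mean change: {pct_change.mean():.1f}% +/- {pct_change.std(ddof=1):.1f}%")
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.4f} (significant if p < 0.05)")
```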
3,801
2023-01-13T00:00:00.000
[ "Medicine", "Biology" ]
A review on key challenges in intelligent vehicles: Safety and driver-oriented features The huge advantages of intelligent vehicles (IVs) in improving road safety and operating efficiency have made them a research focus in the industry. IVs have made significant progress in recent years, but they still face great challenges before they can be accepted by users on a large scale. In this regard, the authors propose that research on IVs can be developed along the lines of safety, comfort and economy, gradually overcoming the existing dilemmas. First of all, safety is the most basic requirement of IVs. The authors sort out the key technologies and challenges of the basic architecture of IVs and summarize existing attack and defence strategies for IV information security. Secondly, comfort is largely a matter of people's subjective feelings. From the two aspects of physiological comfort and psychological comfort, the paper studies anthropomorphic decision making to overcome mechanized speed control, human-computer interaction design, personalized driving style and ethical decision-making methods. Then, aiming at the micro- and macro-levels of economy, it outlines technologies such as economical driving behaviour, collaborative control of people, vehicles and roads, and IV sharing. Finally, the authors summarize the challenges and future development directions in the three stages of IV development. INTRODUCTION With rapid economic growth, the number of motor vehicles and the number of traffic accident deaths in China have become the highest in the world. According to the data analysis of the China Statistics Bureau, in 2019 China's automobile traffic accident rate dropped by 4.5% year-on-year, but there were still 159,335 automobile traffic accidents, resulting in 43,413 deaths and 157,157 injuries and causing huge economic losses [1,2]. According to the United States National Highway Traffic Safety Administration (NHTSA) report, 94% of accidents are caused by driver error [3] and are related to the driver's age and dangerous driving behaviour [4,5]. Studies have shown that intelligent vehicles (IVs) have great potential in improving road safety, passenger comfort, road traffic efficiency, energy conservation and emission reduction [6]. Therefore, the field of IVs has become the research focus of scientific research teams all over the world and has achieved remarkable results. Secondly, according to data from the World Health Organization, the number and proportion of the global elderly population continue to grow, and the scale is expected to reach about 2 billion by 2050 [7]. The aging trend of the world population is irreversible, and a series of traffic problems will occur at the same time. In the event of an accident, older drivers are unable to respond quickly in a short period of time, especially when cognitive function begins to decline; simplifying operation tasks helps to improve safety [8]. In addition to the elderly, there are also vulnerable groups with difficulties in driving vehicles, such as people with disabilities, children and women. In summary, the development of IVs plays an important role, not only in meeting social interests but also in simplifying the way people travel. Although the development of IVs has many benefits, there is still a long way to go before large-scale application. First of all,
as the most basic requirement of IVs, safety has always been a research hotspot, and recent IV accidents have forced us to re-examine this topic. Secondly, comfort means considering human factors on the basis of safety. Studies show that most people feel uneasy and uncomfortable with IVs; they are afraid of the potential danger of losing control [9]. Thirdly, economy (energy saving and environmental protection) is also an important factor affecting the deployment of automated driving, provided that the user's safety and comfort requirements are met. The safety, comfort and economy of IVs interact with each other and are all indispensable. Therefore, it is necessary to study the development mechanism of these three factors. In addition, existing literature reviews [10][11][12][13] are relatively few, and most cover a relatively narrow scope, mainly environmental perception, decision control and related aspects; a specific comparison is shown in Table 1. This paper aims to provide a comprehensive and systematic overview of the research status in the field of IVs, so that a new understanding can be gained. The paper is divided into five parts; the rest of its structure is as follows: In Section 2, we summarize the safety framework of IVs and discuss the communication security technology of IVs. In Section 3, we study the comfort of IVs and related concepts, including riding comfort and psychological comfort, focusing on anthropomorphic driving, human-computer interaction, and moral and ethical decision making. In Section 4, we study economy-related technologies, including economical driving behaviour, IV sharing, intelligent connected platooning and so on. In Section 5, we make a systematic summary of the status quo of IV development research and look forward to possible future development directions in view of the existing challenges. SAFETY OF IVS Safety [29] is the foundation of IV deployment and the most basic requirement of users. The safety problems of IVs mainly include functional safety, safety of the intended functionality (SOTIF) and information security. As a supplement to functional safety, SOTIF emphasizes avoiding unreasonable risks due to limitations of the intended functional performance. The basic architecture of IVs is an important part of solving the safety problem, mainly including environmental perception, intelligent decision making and control execution [19]. The specific architecture and functional safety issues are shown in Figure 1. In this section, we discuss the research status of the above topics, as well as the existing challenges and future trends. Environmental perception system security The environmental perception technology is equivalent to the driver's eyes and ears and needs to perceive external environmental information in real time. Its main task is shown in Figure 2. In the early stage, the environmental sensing system completed the detection task through various sensor systems, such as vision, radar and lidar systems. Due to a single sensor's own parameters and external environment interference, its sensing is limited (field of view, range, direction, number of scanning rays, weather etc.), and it cannot provide reliable 360° environment sensing [30]. A single sensor often provides insufficient perception, resulting in false detections and missed detections.
The sensing range of the environmental sensing system is expanded by fusing data from multiple sensors [31,32]. In a sensor fusion system, the selection and combination of sensors directly affect the reliability and robustness of the environmental perception of IVs, as well as vehicle production costs. Sensor fusion mainly includes radar-lidar fusion, radar-vision fusion, lidar-vision fusion and multi-sensor fusion. Through Bayesian theory, Kalman filtering and Dempster-Shafer (DS) evidence theory, the fusion algorithm can accurately and reliably describe the external environment. In order to improve perception accuracy in the unsafe scenes addressed by SOTIF, the development and verification of fusion algorithms in complex scenes are being further strengthened on the basis of existing sensor fusion algorithms. Although sensor information fusion and redundancy design can effectively mitigate sensor failures and recognition algorithm errors and misses, the problems of motion compensation, time synchronization and real-time requirements still need to be solved [32]. Secondly, existing sensor technology only provides the vehicle with the ability of 'seeing', while remote communication technology and V2X technology can provide interactive perception to make up for the constraints of environment and distance. Jung et al. [33] proposed an over-the-horizon sensing system based on V2X communication technology; through data fusion with the vehicle's own sensors, it can realize all-weather, uninterrupted and accurate sensing. The application of 5G network technology in V2X communication can speed up data transmission, improve data security, and reduce perception delay and instability [34]. Security of intelligent decision system The intelligent decision system is the brain of IVs and is mainly composed of global planning, behavioural selection and local planning [19]. Its main research framework is shown in Figure 3. Common global planning algorithms include the Dijkstra and A* algorithms, as well as various improvements based on these two algorithms. To deal with complex urban road networks and ensure safe and effective arrival at the destination, classic path planning algorithms are obviously not enough. Multimodal route planning through V2X communication technology, considering traffic congestion, public safety, traffic management and weather factors, is the future development direction [35]. In order to generate an obstacle-free trajectory, behavioural decision making needs real-time risk assessment of the surrounding environment, together with the relevant road traffic rules, to select the driving mode. The application scenarios of time indices and dynamic indices as early risk assessment indicators are relatively simple. Wang et al. [36] proposed the concept of a 'driving risk field', considered the influence on driving risk of the various traffic elements in the closed-loop people-vehicle-road system, and predicted the driving safety trend through its dynamic changes. Gao et al. [37] used a stochastic environment model and a Gaussian distribution model, which not only accurately assess and predict the risk within the prediction range but can also assess the risk of scenes outside the prediction range. Katrakazas et al. [38] constructed a joint risk assessment framework based on an interaction-aware motion model and a dynamic Bayesian network (DBN); in the risk assessment, other traffic participants' movements are predicted to calibrate the assessment results.
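To make the "time index" style of early risk indicator mentioned above concrete, here is a minimal sketch of two common quantities, time headway (THW) and time-to-collision (TTC), for a car-following situation. The thresholds and variable names are illustrative assumptions, not values taken from the cited works.

```python
# Minimal sketch of simple time-based risk indicators for car following.
# gap: bumper-to-bumper distance to the lead vehicle (m)
# v_ego, v_lead: speeds of the ego and lead vehicles (m/s)
# The 1.0 s / 1.5 s / 3.0 s thresholds are illustrative assumptions.

def time_headway(gap: float, v_ego: float) -> float:
    return float("inf") if v_ego <= 0.0 else gap / v_ego

def time_to_collision(gap: float, v_ego: float, v_lead: float) -> float:
    closing_speed = v_ego - v_lead
    return float("inf") if closing_speed <= 0.0 else gap / closing_speed

def risk_level(gap: float, v_ego: float, v_lead: float) -> str:
    ttc = time_to_collision(gap, v_ego, v_lead)
    thw = time_headway(gap, v_ego)
    if ttc < 1.5 or thw < 1.0:
        return "high"
    if ttc < 3.0 or thw < 1.5:
        return "medium"
    return "low"

print(risk_level(gap=20.0, v_ego=25.0, v_lead=18.0))  # short headway -> "high"
```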
The vehicle's movement on the road is generally regular, and we can predict the vehicle's next trajectory with data-driven models (Gaussian mixture models, hidden Markov models) based on steering, acceleration and deceleration, and distance from the lane [39]. However, pedestrians can quickly change their speed and direction because of their agility, so both the accuracy and the real-time performance of intention estimation are very challenging. Ahmed et al. [40] proposed a DL approach to estimate the future intention of pedestrians, taking into account a dynamic motion model (DMM) of the motion trajectory and the skeleton characteristics of pedestrians. In previous methods, pedestrians were regarded as independent individuals. Vemula et al. [41] proposed a social attention trajectory prediction model, in which pedestrians adjust their own trajectory according to the movements of other people around them. Finally, according to the behavioural decision results, local planning chooses the optimal collision avoidance trajectory in limited time and under various constraints. In conclusion, the existing decision-making algorithms are mainly based on empirical rules, data-driven approaches, utility functions, interaction and uncertainty. However, these algorithms need a large amount of calibration data and their computation is complex, resulting in low real-time performance, a tendency to fall into locally optimal solutions and other functional limitations, leading to decision-making errors in unknown scenes. By establishing a typical scene library that integrates diverse types of scenarios with high complexity and uncertainty, the robustness, adaptability and generalization ability of the vehicle decision function can be tested across different scenes. On this basis, the decision algorithm can be continuously optimized to improve the processing ability of IVs in SOTIF scenarios. Control execution system security The control execution system is the key to realizing the autonomous driving of IVs, and the level of control is directly related to the safety of the vehicle. The control module is equivalent to the hands and feet of the human driver and is used to implement decisions to realize the lateral and longitudinal control of the vehicle. Its control framework is shown in Figure 4. In control algorithm development, longitudinal and lateral decoupling is often used under normal conditions. Zheng et al. [42] decoupled longitudinal and lateral motion planning to realize trajectory replanning in the normal lane-changing process to avoid collision. In the face of a complex traffic environment, single lateral/longitudinal control and a simple coupling relationship lead to weak robustness of the system. Aiming at the non-linear coupling of vehicle lateral/longitudinal motion, the lateral and longitudinal dynamic models of the vehicle are integrated in the same control framework, which has strong robustness [43]. At the same time, the advantages of model predictive control (MPC) are outstanding: it is not only simple in structure but can also deal with complex process models with input constraints and non-linearity, and can handle constraints on the system's inputs, states and outputs [44].
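As a toy illustration of the receding-horizon idea behind MPC discussed above: in the unconstrained quadratic-cost case, the receding-horizon controller reduces to a finite-horizon LQR solved by a backward Riccati recursion. The double-integrator lateral-offset model, the weights and the horizon below are assumptions chosen for illustration, not any controller from the cited works.

```python
# Toy sketch: unconstrained receding-horizon control of lateral offset.
# Assumed model: x = [lateral offset (m), lateral velocity (m/s)],
# u = lateral acceleration command, discretized with step dt.
import numpy as np

dt, N = 0.1, 20                       # step size, prediction horizon
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([10.0, 1.0])              # penalize offset and lateral velocity
R = np.array([[0.1]])                 # penalize control effort

def first_control_gain(A, B, Q, R, N):
    """Backward Riccati recursion; return the gain applied at the current step."""
    P = Q.copy()
    K = None
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

x = np.array([[1.5], [0.0]])          # start 1.5 m off the reference path
for _ in range(50):                   # re-solve and apply only the first move each step
    u = -first_control_gain(A, B, Q, R, N) @ x
    x = A @ x + B @ u
print(f"lateral offset after 5 s: {x[0, 0]:.3f} m")
```

In a full MPC the same receding-horizon loop would additionally enforce the input and state constraints mentioned in the text; the unconstrained case shown here is only the simplest instance.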
In addition to deviations from the ideal target caused by the limitations of its own model, the control execution system also faces SOTIF issues caused by the external environment, for example the limitation that the vehicle dynamic model is not sufficient for all conditions. The authors of [45] used a two-degree-of-freedom dynamic model to design a linear quadratic regulator (LQR) controller, which can effectively reduce the steady-state tracking error in curve driving. Ji et al. [46] proposed an IV path tracking framework based on multi-constraint MPC, considering the geometric constraints of the road and the dynamic constraints of the vehicle. Although researchers have improved the existing models, research on control execution under extreme conditions is still insufficient, for example the response capability of the system under boundary conditions such as the minimum road adhesion rate, the maximum allowable lateral force disturbance, the maximum longitudinal slope and the maximum execution deviation. Secondly, failure control of the actuator itself is still a big problem, and functional redundancy of actuators provides an opportunity to reduce the safety requirements on a single actuator. Communication and information security More and more electronic control units and external communication interfaces are used in IVs, and the accompanying hacker attacks and network communication security issues threaten vehicle safety and users' information privacy. In recent years, the vulnerabilities in BMW's digital service system, remote Tesla intrusion incidents and the Nissan LEAF API leak have proved the necessity of information security research. The current sources of security risk mainly include external mobile communications, vehicle-mounted networks, vehicle-mounted terminals and cloud platforms, as shown in Figure 5. In order to respond accurately to information security threats, it is necessary to have a correct understanding of existing attacks. There are many types of attacks that affect the communication security of smart vehicles. Dibaei et al. [11] divided the types of attacks into denial-of-service attacks, distributed denial-of-service attacks, black hole attacks, replay attacks, Sybil attacks, impersonation attacks, malware, falsified-information attacks and timing attacks. Considering that the degree of intelligence and networking of IV systems is constantly improving, there are multiple attack entry points across the vehicle life cycle, and a secure system needs to detect the type of attack on the system in real time. Early machine learning methods are widely used to identify various types of attacks, but their accuracy decreases as the number of classification tasks increases. Yin et al. [47] established an intrusion detection system model based on DL and proposed a DL method using a recurrent neural network for intrusion detection. Experimental results show that the Recurrent Neural Network-Intrusion Detection System (RNN-IDS) is well suited to creating classification models with high accuracy, and its performance is better than that of traditional machine learning classification methods in both binary and multi-class classification. In addition to DL as the main method of intrusion detection, there are signature-based detection, anomaly-based detection, malware detection and software vulnerability detection. As the number and complexity of intrusions increase, a single or isolated IDS is ineffective in many cases. In this regard, Meng et al.
[48] designed collaborative intrusion detection systems/networks (CIDSs/CIDNs) to allow intrusion detection system nodes to collect and exchange the information they require from each other. For the IV communication system itself, encryption and authentication features are effective countermeasures to reduce hostile attacks. Symmetric encryption, asymmetric encryption and attribute-based encryption are conventional methods, but problems of data leakage, high computational load and long delay times remain. To solve this problem, Ying and Nayak [49] proposed an anonymous lightweight authentication scheme based on a smart card protocol, which uses low-cost encryption operations to verify the legitimacy of users (vehicles) and of data messages. In the next stage, we can improve the accuracy and response time of the detection system, establish a unified communication protocol standard, optimize the computing resources of machine learning and obtain data sets that can be trained effectively. Secondly, blockchain technology was first applied to the cryptocurrency Bitcoin and is an immutable peer-to-peer distributed database containing cryptographically secured information [50]. Considering the IV communication network environment, establishing a reliable blockchain-based trust network between IVs could be another communication security revolution [51]. COMFORT OF IVS The biggest challenge of IVs is not only safety but also human factors. The comfort of IVs mainly concerns the subjective feelings of passengers, including physiological comfort and psychological comfort [52]. At the level of physiological comfort, it is necessary to ensure that the impact of speed changes on the user's body stays within a certain limit. From the perspective of psychological comfort, the driving mode is selected according to users' preferences, and information interaction with other traffic participants mainly involves trust and ethical issues. The research framework of IV comfort is shown in Figure 6. Physiological comfort The causes of motion sickness in automated driving are mainly unstable speed, excessive acceleration and deceleration, unstable posture and sensory conflict [53]. In particular, IVs deprive users of control and of the ability to predict the future trajectory, which increases the incidence and severity of carsickness [54]. Secondly, reducing the user's workload during the ride and personalized design can also improve the user's physiological comfort. Therefore, anthropomorphic decision-making control and human-computer interface interaction are used to alleviate physiological discomfort. Anthropomorphic decision making Comfort performance can be effectively improved by learning the decision-making control of skilled human drivers in complex and potentially dangerous situations. Guo et al. [55] proposed a method for local trajectory planning by generating a mixed potential map of anthropomorphic behaviour. The method uses a Bayesian network to generate the trajectory-induced potential energy of the surrounding environment, and considers the driving skills and experience of drivers in real traffic environments as well as traffic rules. He et al. [56] proposed a new cost function that considers the safety and comfort of the trajectory, referring mainly to the lane-change decisions of human drivers in natural driving data for the lane-change incentive. Li et al.
[57] proposed a socially intelligent, experience-based decision-making network that imitates human beings to deal with the coexistence of human drivers and IVs on existing roads, where misunderstandings between the two cause traffic conflicts and affect comfort. The traditional path tracking method is always eager to correct the error between the planned trajectory and the current state of the vehicle, which forces the vehicle to continuously perform steering, braking and acceleration operations; this process makes intelligent driving appear unnatural and overly cautious. In this regard, Wei et al. [52] proposed a risk-corridor-based vehicle motion control framework. A nonlinear model predictive control (NMPC) model was established using the lateral offset tolerance and the existing vehicle dynamic constraints while considering the comfort and safety of passengers. The results show that the method can track the planned path with a smooth trajectory. Zhu et al. [58] proposed a deep deterministic policy gradient (DDPG) algorithm in which the reward function was learned from natural car-following data; the optimal strategy, or car-following model, was finally obtained from the anthropomorphic mapping of the vehicle speed, the relative speed between the front and rear vehicles, the inter-vehicle distance and the acceleration of the rear vehicle. In addition, speed control strategies can effectively improve comfort. Du et al. [59] proposed an annoyance-rate model to describe individual vibration sensitivity; the theoretical speed was calculated to maximize passenger comfort, and the annoyance rate was used to modify the evaluation results. González et al. [60] used Bézier curves to smooth the acceleration and jerk curves and improved the riding comfort of automated vehicles by limiting the global acceleration over the whole driving process. In order to ensure vehicle comfort when the vehicle speed changes significantly, Wu et al. [61] proposed an adaptive cruise control (ACC) system with an active braking algorithm and an upper-level decision controller based on the MPC algorithm. The results show that the speed and distance of the vehicle always remain within the specified comfort range. Interaction decision As human drivers relinquish control of the vehicle, human-computer interaction becomes more important; it is mainly divided into internal interaction and external interaction [62]. As the user's role changes, more and more demands arise, and human-computer interaction serves as the link for information transmission with the IV system. In order to meet the interaction needs of a variety of people and reduce the user's cognitive load, multiple human sensory channels (visual, auditory, smell, touch, taste, body sensation) are fused to generate interaction with the system [63]. Research by Manawadu et al. [64] has shown that multi-modal human-computer interaction can achieve a comfortable riding experience by promoting efficient interaction and reducing user workload. To further improve physiological comfort, the intelligent cockpit has a powerful situational awareness system, which assesses the user's heart rate, respiratory rate, age, gender, body shape etc. through biosensor technology, so as to provide users with a riding experience tailored to different scenes [65]. Differences in users' visual information, sense of balance and expectations may also cause physiological discomfort [66]. In this regard, Sawabe et al.
[67] proposed a 'reduction of reality' method based on acceleration stimuli, which can reduce motion sickness in IVs by showing the vehicle's intention to the user before the actual acceleration occurs and guiding the passenger's centre of gravity to shift. Wang et al. [68] proposed a vehicle collision pre-warning algorithm based on a driving safety field model. The algorithm can effectively express collision risk in various car-following and lane-changing scenarios and warn users in real time. Users need to be kept regularly aware of the dynamic environment around them so that they can act in time in an emergency, thereby avoiding the long-term maladjustment caused by sudden changes in the system. Psychological comfort Psychological comfort is to a large extent a matter of people's subjective feelings, which existing theory and technology find difficult to quantify. In this regard, research on psychological comfort studies the personalized decision making of IVs, interaction design with other participants and ethical decision making, given that the reliability and accuracy of the system have been addressed, so as to improve users' trust and acceptance. Security and trust Research shows that IVs that take the user's personalized driving style into account can effectively improve users' trust and acceptance [69]. According to driving behaviours such as the safe distance, the acceleration curve and the lane-changing speed, drivers' styles can be classified as aggressive, normal or cautious [70]. In order to reduce users' tension during driving, Lu et al. [71] used on-board sensing information to learn the driver's speed control experience online and proposed a personalized behavioural learning system (PBLS) to improve the comfort performance of traditional motion planning. The system is based on neural reinforcement learning (NRL) and can adapt to the driving behaviour of different drivers and different driving scenarios. Sama et al. [72] used a DL autoencoder to process large amounts of driving data from experienced drivers to extract latent features, then clustered the features into driving behaviours and created a speed profile to allow IVs to drive according to the user's driving style. Xu et al. [73] proposed a motion planning method that learns from natural driving data; on the basis of considering trajectory comfort, efficiency, safety and other factors, and combined with the lane-change decision making of human driving, a lane-change incentive cost function was established. This method can approach the trajectories of human drivers. Vallon et al. [74] proposed an automatic lane-change algorithm based on a support vector machine classifier, which can directly capture and reproduce the natural driving behaviour of human beings and learn whether to keep the lane or start to change lanes according to the driver's performance preferences. For other road participants, the anthropomorphic driving of IVs can effectively improve the sense of safety. Hang et al. [75] proposed an anthropomorphic decision-making framework based on non-cooperative game theory, which not only considers personalized driving style but also adds social interaction characteristics with other traffic participants, and can cope with complex mixed traffic flow.
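As a toy illustration of the aggressive / normal / cautious style classification mentioned above, one simple approach is to cluster per-driver summary statistics such as mean time headway and acceleration variability. The feature choice and the synthetic numbers below are assumptions for illustration only, not the method or data of the cited studies.

```python
# Toy sketch: cluster drivers into three style groups from two summary features.
# Assumed features: mean time headway (s) and std of longitudinal acceleration (m/s^2).
# The data are synthetic; in practice the features would come from on-board logs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
aggressive = np.column_stack([rng.normal(0.8, 0.1, 30), rng.normal(1.8, 0.2, 30)])
normal_    = np.column_stack([rng.normal(1.5, 0.2, 30), rng.normal(1.0, 0.2, 30)])
cautious   = np.column_stack([rng.normal(2.5, 0.3, 30), rng.normal(0.5, 0.1, 30)])
X = np.vstack([aggressive, normal_, cautious])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X)
)
for k in range(3):
    thw, acc = X[labels == k].mean(axis=0)
    print(f"cluster {k}: mean headway {thw:.2f} s, accel. std {acc:.2f} m/s^2")
```

In a real system the cluster assignment would then be used to select or adapt the planner's speed and gap preferences, along the lines of the personalized approaches cited above.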
In addition, external human-computer interaction (EHMI) enables traffic participants to understand the vehicle's intention more intuitively, so as to avoid psychological panic, as shown in Figure 7. To further study pedestrians' requirements for status information from IVs, Faas et al. [76] explored the acceptability to traffic participants of different EHMI variants (no EHMI, state, state + perception, state + intention, state + perception + intention). The results show that the state + intention EHMI increases user experience, perceived intelligence and transparency for pedestrians more than the other EHMI variants. Rettenmaier et al. [77] proposed that, in a future mixed traffic environment, when an IV using EHMI communicates with other traffic participants, the time taken will be significantly shortened and there will be fewer collisions. Morality and ethics The development of IVs cannot avoid a series of dilemmas in moral and ethical decision making. For example, in an extreme situation a smart car may need to decide whether to endanger the lives of its users or of multiple pedestrians, and either choice raises social and ethical dilemmas [78]. In addition, IVs will operate in mixed traffic flow for a long time and will face the problem of attributing responsibility after accidents with other, unexpected traffic participants caused by insufficient perception, violation of traffic rules to avoid obstacles, and so on. In this regard, Waldrop [79] insisted that without a clear moral code to guide the decision making of IVs it will be difficult to change users' current distrust, and public controversy will be triggered. In order to quantify moral decision making, Gerdes and Thornton [80] proposed an analogy between the ethical frameworks of consequentialism and deontology in philosophy and the use of cost functions or constraints in optimal control theory. Thornton et al. [81] used a moral framework to map the design decisions of an MPC problem to philosophical principles; by studying how to prioritize path tracking, vehicle occupant comfort and traffic law, obstacle avoidance and the vehicle turning rate were treated as constraints, providing guiding principles for the responsible planning of self-driving vehicles. Riaz et al. [82] put forward a new idea for improving the collision avoidance performance of autonomous vehicles by using human social norms and human emotions, and designed social norms with emotions as the compliance mechanism, so that in possible collisions IVs choose the collision with less harm as the decision-making mechanism. In order to obtain data on, and analyse, the ethical and legal factors in IV driving decisions in moral dilemmas, Li et al. [83] carried out a series of experiments in a virtual reality environment, combined with current traffic laws and cases of accident liability judgments. They concluded that the number of collision targets and compliance with traffic rules were the most important factors affecting the decision, and grey correlation entropy analysis was used to quantify the severity of collision injury to the collision targets. In order to alleviate the severity of inevitable collisions, Wang et al. [84] considered adding potential severity and artificial potential field theory to the controller objective to realize IV emergency obstacle avoidance. On this basis, Wang et al.
[85] proposed a lexicographic-optimization-based model predictive controller, which can avoid obstacles with high assumed priority and addresses the problem of moral decision making in vehicle accidents. ECONOMY OF IVS Energy saving and emission reduction have always been a focus of the automotive industry and have become a key step in the large-scale application of IVs. IVs can improve economy in many respects, including the effect of anthropomorphic driving style on fuel consumption, optimizing driving behaviour through vehicle-road collaborative control, cruise control of automated platoons, and the electrification and sharing of intelligent connected vehicles. Economical driving behaviour Driving style has a significant impact on vehicle economy. In order to analyse in detail the relationship between differences in driver behaviour and energy consumption, Stogios et al. [86] found that, when driving on highways and under different traffic flows on main roads, highway fuel consumption can be reduced by 26% when the IV drives aggressively, whereas it increases by 35% under cautious driving. Fleming et al. [87] proposed an eco-driving system that considers the driver's personalized preferences; by modelling longitudinal driver behaviour in an optimal control framework, a balance between driver preference and the energy efficiency goal can be achieved. The control objectives and control protocols of IVs need to be adjusted adaptively according to different driving styles. Lv et al. [88] proposed a collaborative design optimization framework for the device parameters and controller parameters of intelligent electric vehicles based on a cyber-physical system. At the same time, the driving conditions required by existing driving styles are difficult to meet. Malikopoulos and Aguilar [89] analysed the driving style factors affecting fuel economy and proposed a polynomial metamodel framework for optimizing driving style. With the development of people-vehicle-road collaborative control, short-term traffic trends such as road geometry, traffic flow changes, traffic signal status and the movement of adjacent vehicles can be predicted, so as to optimize driving behaviour and achieve higher fuel economy, as shown in Figure 8. Research shows that, based on prior knowledge of road speed limits, safe speeds on curves and estimates of the average traffic speed, speed transitions can be made more energy-efficient when the expected speed constraint changes [90]. Therefore, Ding and Jin [91], based on the curvature information of the road ahead extracted from a high-precision digital map and combined with an established fuel consumption model and vehicle dynamics model, applied a dynamic programming algorithm to calculate the optimal speed profile for an entire curved road. Moreover, knowing the slope of the road in advance, the vehicle can coast or choose an appropriate speed. In addition, the impact of eco-driving at signalized intersections on energy efficiency has been a hot topic in recent years. It is reported that in 2015 traffic congestion in the United States caused nearly 7 billion hours of delay and more than 3 billion gallons of wasted fuel, a large part of which was caused by congestion at signalized intersections [92]. Hu et al. [93] proposed an optimal vehicle routing algorithm considering waiting times at signalized intersections and an eco-driving model.
The algorithm is suitable for intersections with dense traffic lights, and the higher the density of traffic lights, the more obvious the advantages of the algorithm. Xu et al. [94] proposed a vehicle speed optimization method based on traffic signal control in a connected environment, which can simultaneously optimize traffic signal timing and vehicle speed trajectories. The method minimizes the total travel time of all vehicles by calculating the optimal traffic signal timing and vehicle arrival times, and optimizes engine power and braking force to minimize the fuel consumption of a single vehicle. The current challenge lies in planning traffic signals and vehicle speed trajectories for local areas or the whole road network; it is necessary to optimize the traffic signal timing and the average speed of the traffic flow at each intersection to improve economy and traffic efficiency. Economical travel mode The platoon is the prototype of multi-vehicle cooperation. By following each other closely, the air resistance of all vehicles can be reduced, road traffic capacity can be increased and the fuel economy of the vehicles can be improved [95]. Guo and Li [96] proposed a fuel-time optimization approach based on Pontryagin's minimum principle to calculate the optimal speed of each vehicle and the speed set-point of the platoon. However, when the intelligence level of vehicles is low and information cannot be shared between vehicles, dense formations easily cause traffic accidents. He et al. [97] proposed a multi-stage optimal control scheme considering the length of the vehicle queue and the state of the traffic lights to obtain the optimal vehicle trajectory on the planned route. Although the queue length and the change of signal state are considered as constraints, it is difficult to estimate the queue length in real time. In this regard, Gao et al. [98] proposed a model based on shock-wave perception and a back-propagation neural network, which can predict the queue length of waiting vehicles at signalized intersections in real time under mixed traffic conditions. In order to improve the driving efficiency of IVs while accounting for fuel consumption, Chen et al. [99] proposed a platoon path planning strategy based on deep reinforcement learning at network edge nodes, considering the joint optimization of task duration and vehicle fuel consumption. Hao et al. [100] proposed a framework combining driving state recognition with platoon operation and risk prediction to reduce the interference caused by driving state jitter, so as to improve the evaluation speed, efficiency and fuel economy of a multi-platoon system. IVs arranged in a long formation must maintain the desired formation while keeping safe distances and speeds, which requires specific algorithms, controllers and strategies. Soni and Hu [101] summarized a variety of distributed and decentralized vehicle formation control methods, divided into the leader-follower method, the behaviour-based method and the virtual structure method. The longitudinal control of vehicle platoons has been studied for many years, and merging and lane changing have always been hot spots in lateral control research. Early merging methods are based on single-platoon MPC, but there are few merging strategies for two platoons on adjacent lanes. Min et al. [102] proposed a double-platoon merging strategy and designed a distributed MPC (DMPC) control strategy to handle the platoon merging problem on expressways.
Compared with the traditional single-vehicle merging method, the platoon merging process is more accurate and time-saving. The non-linear dynamics and safety constraints in vehicle platoons are also research hotspots. He et al. [103] proposed a new distributed economic model predictive control (EMPC) method; by reducing the fuel consumption cost, the strategy ensures the asymptotic stability and leader-follower stability of the platoon, and also ensures the fuel economy of the whole platoon. In addition to the concept of IV platooning, IV sharing is considered to be the economic travel mode of the future. Intelligent shared vehicles (ISV) reduce travel time, reduce passengers' costs per day and per kilometre and reduce carbon emissions to a certain extent [104]. Fagnant and Kockelman [105] proposed an ISV dynamic ride-sharing (DRS) model, which allows two or more users with similar origins, destinations and departure times to carpool. The results show that each ISV can replace about 10 traditional family cars, which can effectively reduce the overall vehicle mileage and improve economy. With the spread of ISV electrification, electric vehicles have limited range and relatively long charging times. In order to ensure the timeliness of travel, vehicle scheduling and system rebalancing must consider the charging problem of electric vehicles. Hu et al. [106] proposed a state of charge (SOC) estimation method for series-connected battery packs based on a fuzzy adaptive federated filter, which can accurately estimate the remaining power of an electric vehicle. Considering the scale and uncertainty of the system prediction, Hu et al. [107] proposed a cost-optimal MPC framework, which accurately estimates the degradation of the fuel cell and battery system. Tang et al. [108] used drivers' real travel route information to train a speed predictor, which can further improve prediction accuracy and thus improve vehicle economy through IV system control. Ammous et al. [109] modelled the routing problem between multiple charging stations as a multi-server queuing system and cast the goal as a stochastic convex optimization problem that minimizes the average total travel time of all users relative to their actual travel time. In fact, the concrete realization of vehicle-to-grid (V2G) technology reduces the energy loss of the whole system by optimizing the charging times and path planning of vehicles. More and more research is focusing on the coupling characteristics of the transportation network and the power grid. The optimization problem is shifting from the single vehicle to the joint optimization of the transportation network and the power grid, and performance analyses of charging networks based on intelligent shared platoons have been derived. CONCLUSION IVs can make up for the shortcomings of human drivers, thus reducing traffic accidents, improving road traffic efficiency, providing convenience for vulnerable groups and changing the way humans travel. We review the development of IVs from three important aspects (safety, comfort and economy). On this basis, through an extensive literature survey, we present the development status and challenges of IVs at each stage. First of all, safe driving is the foundation of IV deployment. We outline the framework and the information security technology needed to ensure vehicle safety. Insufficient perception in existing environment sensing systems causes detection errors and omissions.
Improving the accuracy of sensing algorithms and fusing multi-source heterogeneous sensing data remain the primary problems in perception research. Moreover, in complex traffic environments and changing weather, on-board sensor systems alone cannot meet the requirements of beyond-line-of-sight sensing. In the foreseeable future, cooperative sensing that combines communication technology with on-board sensors will substantially improve the accuracy of sensing data. At the same time, the decision-making system must cope with uncertain rules in unknown risk scenarios, so reliable risk assessment and prediction models are urgently needed, with pedestrian trajectory prediction being the main difficulty. There are also deviations between the control targets output by the decision system and the goals achievable by the control-execution system, as well as functional limitations of the actuators. Establishing control models that account for the various vehicle dynamic-instability constraints and designing redundancy mechanisms for actuator failure are challenging problems. In addition, although vehicle networking can compensate for sensor perception defects and improve the accuracy of decision making and control, information security problems cannot be ignored. Building multi-domain layered intrusion detection systems, active-protection information security models and 'end management cloud' information security protection systems will become the next research focus. Secondly, from the perspective of human factors and social acceptance, we discussed the current status and challenges of IV comfort. The factors affecting users' physiological comfort mainly stem from the mechanical character of decision-and-control algorithms, large variations in speed and the impact of unknown scenarios. One of the biggest challenges for IVs is not only to drive safely but also to drive as smoothly as an experienced driver. It is therefore necessary to use DL algorithms that draw on human driving experience for path planning and trajectory tracking, so as to improve the driving proficiency of IVs, while also keeping speed control within the physiological tolerance of most people. However, it is difficult to fully account for people's psychological feelings. Learning a driving style accepted by individual users or by most users, together with EHCI that is easier for other traffic participants to understand, can effectively improve trust. In moral and ethical dilemmas, developing a complete set of decision-making standards that meets a level acceptable to users and society, and improving users' tolerance of faults, are key research objectives for the future. Finally, regarding the economy of IVs, we described economical driving behaviour and travel modes at the micro and macro levels. Although adapting driving style can improve economy, it is still difficult to balance driving conditions, energy efficiency and personalized preferences; people-vehicle-road integrated cooperative control can address different road scenarios and optimize driving behaviour. For IV platoons, longitudinal-lateral coupled control, multi-platoon cooperative control and platoon stability control need to be studied in depth. With the sharing and electrification of IVs, it is necessary to solve the vehicle-sharing mechanism, maximize traffic efficiency and handle the coupling between the transportation network and the power grid brought about by V2G technology.
In the next stage in particular, IVs will rely on the coordinated control of vehicle, road, network and cloud. Accelerating the exploration of the integration of smart city, smart transportation and smart vehicle (SCSTSV) can improve the economy of the whole transportation system. This paper systematically reviews the key technologies in all aspects of IV development, with the aim of promoting the rapid development of IVs and providing a systematic understanding for researchers across disciplines.
9,592.8
2021-06-15T00:00:00.000
[ "Computer Science" ]
Bioassay-Guided Fractionation of a Leaf Extract from Combretum mucronatum with Anthelmintic Activity: Oligomeric Procyanidins as the Active Principle Combretum mucronatum Schumach. & Thonn. is a medicinal plant widely used in West African traditional medicine for wound healing and the treatment of helminth infections. The present study aimed at a phytochemical characterization of a hydroalcoholic leaf extract of this plant and the identification of the anthelmintic compounds by bioassay-guided fractionation. An EtOH-H2O (1:1) extract from defatted leaves was partitioned between EtOAc and H2O. Further fractionation was performed by fast centrifugal partition chromatography, RP18-MPLC and HPLC. Epicatechin (1), oligomeric proanthocyanidins (OPC) 2 to 10 (mainly procyanidins) and flavonoids 11 to 13 were identified as main components of the extract. The hydroalcoholic extract, fractions and purified compounds were tested in vitro for their anthelmintic activity using the model nematode Caenorhabditis elegans. The bioassay-guided fractionation led to the identification of OPCs as the active compounds with a dose-dependent anthelmintic activity ranging from 1 to 1000 μM. Using OPC-clusters with a defined degree of polymerization (DP) revealed that a DP ≥ 3 is necessary for an anthelmintic activity, whereas a DP > 4 does not lead to a further increased inhibitory effect against the helminths. In summary, the findings rationalize the traditional use of C. mucronatum and provide further insight into the anthelmintic activity of condensed tannins. Introduction Approximately 1.5 billion people worldwide suffer from infestations with soil-transmitted helminths (STH) [1], with Ascaris lumbricoides, Trichuris trichiura and Ancylostoma duodenale being the most common parasites [2]. Most people affected live in less developed countries of Sub-Saharan Africa, South America and South East Asia, where poverty, along with poor sanitary conditions, give rise to infections with intestinal helminths. Although not lethal in most cases, these parasites can cause considerable morbidity, such as anaemia and malnutrition, leading to decreased growth and cognitive retardation, especially in children in endemic countries [3,4]. The WHO is currently tackling these problems by setting up Mass Drug Administration (MDA) programs that aim at preventively treating school-aged and preschool-aged children with broad spectrum anthelmintics. Although providing access to effective treatments is desirable for all people affected by these parasites, the long term efficacy remains undetermined and large-scale preventive actions also bear the risk of resistances against the respective drugs to emerge [5][6][7]. This in turn will strongly limit the effective use of the very limited number of drugs against STH we are mainly relying on, namely albendazole, mebendazole, levamisole and pyrantel pamoate [8]. While at present the situation regarding resistances is not as severe as in veterinary medicine, monitoring of the drug efficacy should be improved and efforts in the development of new drugs be stepped up [9]. Natural products have ever since been a valuable source for the identification and the development of new lead structures against various targets, including helminths [10,11]. 
One approach to discover new active compounds is the investigation of plants based on their traditional usage by an in vitro confirmation of their respective bioactivity followed by advanced functional and phytochemical studies leading to an isolation of the potential active principles [10]. Therefore, an ethnopharmacological field study was carried out from October 2012 to February 2013 in the Ashanti region in central Ghana which revealed a leaf extract of Combretum mucronatum Schumach. & Thonn. to be among the most frequently used plant preparation against helminths [12]. The in vitro activity of a crude ethanolic extract was shown to be superior to other plant preparations against different kinds of nematodes, including Caenorhabditis elegans [12,13], but despite an entry of this plant in the Ghana Herbal Pharmacopoeia, knowledge about its phytochemistry and functionality is very limited. Recently, phytochemical investigations by Kisseih et al. revealed the presence of procyanidins and flavonoids, fatty acids, organic acids and carbohydrates as major components of the leaves of C. mucronatum [14]. Additionally, extracts from several other Combretum species have been assessed for their anthelmintic properties [15], but to our knowledge, no linkage has been established between defined compounds from the investigated extracts of the Combretum species and a potential anthelminthic bioactivity. This study aims at gaining further insight into the phytochemical composition of a hydro-ethanolic leaf extract of C. mucronatum and at the identification of the active principles responsible for the anthelmintic activity by a bioassay-guided fractionation. Phytochemical Characterization of a Hydroethanolic Leaf Extract from C. mucronatum Although the identification of the active compounds was one of the goals in this study, we did not perform a bioassay-guided fractionation in the strict sense. This method is one of the most common techniques to identify bioactive compounds from complex mixtures such as extracts by one or more separation steps accompanied by activity tests to select active fractions for further subfractionation (for review see [16]). In our case this would mean that while focusing entirely on the bioactivity, inactive components of the plant extract which have not been characterized yet would remain unexplored. For that reason, we performed the fractionation by testing the anthelmintic activity after each separation step, but additionally included the isolation and identification of inactive or less active compounds for an improved phytochemical characterization of the extract. As summarized in Figure 1, dried leaves were defatted and extracted by ethanol-water (1:1), followed by partitioning of the extract between ethyl acetate (EtOAc) and water. This protocol yielded an EtOAc fraction, mainly composed of flavonoids and oligomeric proanthocyanidins (OPC) with a degree of polymerization (DP) ≤ six, and a more hydrophilic H2O fraction containing higher oligomeric and polymeric proanthocyanidins, flavonoids and carbohydrates. The EtOAc partition was further fractionated by FCPC to yield 11 fractions (I to XI) from the mobile phase and one additional fraction (XII) formed by the remaining stationary phase. TLC analysis indicated the presence of flavan-3-ols, dimeric and trimeric proanthocyanidins and flavonoids. 
Subsequent fractionation and isolation of purified compounds was performed by preparative HPLC on an RP18 stationary phase, followed by identification of the purified compounds by NMR and spectroscopic means (CD, ESI-MS). All dimeric proanthocyanidins were identified by 1 H-NMR of the respective peracetates in comparison to published data [17,18]. Spectroscopic (NMR, ESI-MS, CD) identification of this trimer and further OPCs 6 to 10 obtained during subsequent isolation steps was performed after derivatization to the respective peracetates and comparison to published data. Due to a better resolution of the spectra it was also possible to assign the signals for the protons of the catechol ring for each of the three units in 6a and 7a. Additionally, signals of the carbon spectrum could be assigned, completing the spectroscopic data set for the peracetylated compounds [18,19]. An unusual dimeric procyanidin epicatechin-(6′→8)-epicatechin (5, Figure 3) with a linkage between position 6′ of the B-ring of the upper epicatechin unit and position 8 of the lower epicatechin unit was isolated from fraction XII. This compound has been described as a product formed from catechin or epicatechin by autoxidation, chemical or enzymatic oxidation via formation of an ortho-quinone and reaction with a hydroquinone (e.g., epicatechin) in a 1,4-Michael-addition [24,25]. The C-C linkage is preferably formed between position 6′ of the quinone, which can be easily attacked by a nucleophile, and position 8 of the hydroquinone which is sterically better accessible than position 6 [25]. Nevertheless, we could not determine the exact position of the linkage in ring D directly from the spectroscopic data obtained, but concluded from comparison to literature that the two rings are linked via position 6′ and 8. This compound or similar derivatives consisting of two catechin units have been synthesized enzymatically [26,27] and non-enzymatically [28,29] and have been also isolated from grape pomace [30] and oak bark [31]. Because of these findings it seemed possible that similar B-ring linked compounds with a higher DP might be present in the C. mucronatum extract. However, intensive HPLC and HPLC-MS investigations gave no hints for the occurrence of such oligomers, which means that the dimer seems to be the only biflavonoid of this type. It still remains unclear, whether such biflavonoids occur in genuine plant material or whether they are formed during the drying process of the plant material, during the extraction procedure or the storage. From the H2O partition a MeOH-soluble fraction was obtained which was further separated by MPLC in 4 subfractions H1 to H4 ( Figure 1). Fraction H4 was further purified by preparative HPLC and yielded pure isoorientin (13). H3 turned out to contain high amounts of OPCs. Analytical HPLC of H3 on diol stationary phase revealed a wide and homologues distribution of OPCs with different DPs (Figure 4). Subsequently, H3 was fractionated by preparative HPLC using a diol stationary phase for separation of distinct OPC clusters with defined DP [20][21][22][23]. This protocol yielded procyanidin clusters from DP2 to DP10 and a polymer fraction in good yields ( Figure 1). All clusters isolated were investigated by LC-MS concerning their respective masses which indicated the presence of B-type procyanidins; the existence of A-type linkages was excluded. 
The tetrameric procyanidin cinnamtannin A2 (9; despite the term "A", cinnamtannins "A2" and "A3" are B-type procyanidins) and the pentameric cinnamtannin A3 (10) were identified as the major compounds obtained from the OPC clusters DP4 (obtained from the H2O partition) and DP5 (obtained from the EtOAc partition) and were identified in form of the respective peracetates (9a and 10a). Data for 9a corresponds well to literature [25], whereas 10a could not be identified unambiguously, due to the limited amount of substance available for NMR and the lack of reference data for the peracetylated derivative. Based on the findings of the isolated OPC DP2 to DP4, the major component of each cluster consists of (4β→8)-linked epicatechin building blocks, whereas OPCs with a (4β→6) linkage were obtained in much lower yields. Therefore, we assumed that the main peak in the chromatogram of the OPC cluster DP5 should correspond to an epicatechin pentamer with a (4β→8) linkage. In the next step we tried to confirm this assumption by 1D ( 1 H, 13 C) and 2D (COSY, NOE, HMBC and HSCQ) NMR experiments, still, it was not possible to completely assign signals for all of the protons and carbons of the molecule due to the low amount of substance available for structure elucidation. All OPC procyanidin clusters with defined DP were used in the following functional investigations for potential anthelminitic activity. Triterpene saponins of the dammarane type [33], oleanane type [34] or cycloartane type [35] have been previously described for various other Combretum species, yet, investigations of the EtOH-H2O extract and various fractions by mass spectrometry did not reveal the presence of saponins in the leaf extract of C. mucronatum. Concerning the flavonoid content of the extracts, isovitexin (11) was isolated from fraction V as the most abundant flavonoid in the extract. Additionally, isoorientin (13) was obtained from fraction H4 of the H2O-partition and identified by spectroscopic analysis (NMR and MS). TLC and HPLC analysis of fractions VII to XI also revealed the presence of different flavonoids, which unfortunately could not be isolated on a preparative scale due to their similar retention times in preparative HPLC. Therefore, these flavonoids were identified by analytical HPLC as vitexin (12), isoorientin (13) and isoquercitrin (14) by spiking of the test solutions with a set of flavonoid reference compounds. Bioassay-Guided Fractionation L4 larvae and young adults of the free-living nematode C. elegans were used to assess the survival rate of the worms in vitro. Although not parasitic, it is closely related to certain parasites and is used worldwide as a well-established model organism for anthelmintic tests. Its short life cycle and easy maintenance under lab conditions are the main advantages compared to parasitic nematodes that usually require animal hosts for maintenance and propagation [36,37].The hydroethanolic extract of the leaves of C. mucronatum showed moderate anthelmintic activity, with an LC50 of 1.67 mg/mL. Subsequent testing of the EtOAc and H2O partition indicated that the active components were mainly located in the more lipophilic ethyl acetate fraction with an LC50 = 1.73 mg/mL compared to the aqueous fraction for which an LC50 could not be determined due to its weak activity. Generally, the LC50 values obtained in this study might seem quite high compared to results from other test systems, but despite its advantages in lab work C. 
elegans is known to be more resistant to drug treatment than other nematodes. For example, standard anthelmintic drugs such as albendazole and ivermectin were either shown to be inactive in vitro or require incubation over several days at concentrations in the mM range [38]. This also applies to the positive control levamisole-HCl (40 mM, approx. 14.5 mg/mL) used in this study for which the concentration is approximately 10-fold that of its therapeutic use. Further fractionation of the EtOAc partition by FCPC was performed and fractions V to XII showed anthelmintic effects with the activity increasing from V to XII. Phytochemical investigations by TLC and UHPLC revealed condensed tannins and flavonoids to be the major constituents of these fractions. As all active fractions were dominated by flavan-3ols (V) and oligomeric procyanidins (VI to XII) it was assumed that the OPCs contribute significantly to the anthelmintic activity. To prove this hypothesis OPCs were quantitatively removed from the EtOAc partition using polyvinylpyrrolidone (PVPP), followed by functional testing of the remaining OPC-free fraction. As expected, this OPC-depleted fraction (absence of OPCs had been proven by TLC and HPLC studies) had no anthelmintic activity at all (concentrations tested up to 5 mg/mL). This clearly indicates that condensed tannins are responsible for the anthelmintic activity of C. mucronatum leaves. Nevertheless, results from other investigations showed an activity of flavonols and flavonolglycosides against Haemonchus contortus [39], therefore we cannot rule out any synergistic effects by the flavonoids found in C. mucronatum, although they were shown not to be directly active. As observed in previous studies, the H2O partition obtained from the hydroalcoholic extract was expected to contain proanthocyanidins of higher molecular weight [17,23] and the molecular size of OPCs has been reported to be one major factor responsible for the bioactivity of condensed tannins in general [40] as well as for their anthelmintic activity [41,42]. Therefore, the MeOH-soluble subfraction of the H2O partition was further fractionated by MPLC despite its limited activity against C. elegans to yield OPC clusters with distinct DPs from 3 to 10 and a polymeric fraction ( Figure 4). With the exception of one dimeric proanthocyanidin with an epiafzelechin unit (compound 2) all other OPCs are entirely composed of epicatechin as building blocks (Figure 1). Compared to extracts from other tannin-rich plants which often show a broader variety in their molecular composition, e.g., catechin units beside epicatechin, mono-, di-and trihydroxylation of the B-ring or A-type linkages, the only variation among the OPCs isolated from C. mucronatum seems to be the type of linkage between the epicatechin units. This uniform pattern is an advantage for the determination of structure-activity relations, since it is possible to focus entirely on the role of the molecular size of the procyanidins. Until now, effects of condensed tannins against different kinds of nematodes have been subject of several investigations, but either the compounds tested did not exceed a DP ≥ 5 [41,43] or bioassays were performed using purified and well characterized fractions of condensed tannins [42,44,45], but not isolated compounds. Compounds 1, 4, clusters of DP 3 to 10 and the polymeric fraction were assayed under in vitro conditions to determine the influence of the respective molecular size on the anthelmintic activity ( Figure 5). 
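As an aside on how such dose-response parameters can be obtained, the sketch below fits a two-parameter log-logistic (Hill) curve to mortality data and reads off the LC50; the concentrations, mortality values, model choice and use of SciPy are illustrative assumptions, not the curve-fitting procedure used in this study.

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, lc50, slope):
    # Two-parameter log-logistic (Hill) curve for mortality vs. concentration
    return 1.0 / (1.0 + (lc50 / conc) ** slope)

# Hypothetical concentrations (mg/mL) and observed mortality fractions
conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0])
mortality = np.array([0.02, 0.05, 0.20, 0.45, 0.65, 0.95])

(lc50, slope), _ = curve_fit(hill, conc, mortality, p0=[1.0, 1.0], maxfev=10000)
print(f"Estimated LC50 = {lc50:.2f} mg/mL (Hill slope = {slope:.2f})")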
While epicatechin and the dimeric procyanidin B2 turned out to be inactive at all concentrations tested (10 to 1000 μM), the survival rate of the worms decreased significantly when placed in contact with OPCs of a DP ≥ 3. On the other hand, no significant differences among OPC clusters with DP4 to DP10 were observed, indicating that a certain degree of oligomerization is necessary for the anthelmintic activity of OPCs, but once the number of epicatechin units exceeds four, no further increase in bioactivity against C. elegans occurs. Interestingly, the OPC polymer was not significantly different to the OPC clusters DP4 to 10, although we had expected its activity to be superior. A similar finding has previously been explained by the strongly reduced solubility of such polymers in aqueous systems [46]. Figure 6 correlates the DP of the different OPCs against their respective anthelmintic activity, confirming the observation that the best anthelmintic activity is mediated by procyanidin clusters DP > 3. These findings are in accordance with general observations regarding the ability of tannins to precipitate proteins for which the chain length seems to be the major factor. A DP > 3 is necessary for an astringent effect and the number of flavan-3-ol units is reported to be more important for an interaction with proteins than the hydroxylation pattern of the B-ring or the cis/trans ratio of positions 2 and 3 of ring C [39]. Furthermore, previous investigations lead to similar findings regarding the role of the OPCs' molecular size; while a certain chain length was required for astringent effects, the capacity to precipitate proteins seems to reach a plateau for OPC above a certain DP [47]. The impact of the molecular size for the anthelmintic effect of condensed tannins has also been a topic of recent investigations [41][42][43][44][45]. Williams et al. observed similar effects using purified fractions from different plant sources against Ascaris suum. Generally, fractions with a higher mean DP (> 5.4) were shown to be more effective than those consisting of smaller molecules (mDP 2.3 to 4.9), but no significant difference in the activity of the higher oligomeric fractions were observed [42], which is in accordance to our findings. On the other hand, Mohamed et al. observed a significant increase in the activity of OPCs from Paeonia suffruticosa against C. elegans with a molecular weight from 2100 to 4530 Da [41]. Nevertheless, investigations using purified fractions of condensed tannins [42,44,45] cannot describe the relationship between the DP and the anthelmintic as precisely as in this study using clusters of a defined DP. An increase in the activity with the molecular size up to a certain degree is typical for an unspecific tannin-like interaction of OPCs with proteins [40,46] therefore it seems likely that the active OPC clusters agglutinate certain proteins of the worms. This assumption is supported by investigations using hydrolysable tannins that revealed anthelmintic effects against different kinds of nematodes [41,48,49]. However, differences in the susceptibility among species or developmental stages within the same species [44,45,50] raise the question how "unspecific" these typical tannin-protein interactions are in nematodes. For example, Williams et al. recently observed damages in the cuticle and hypodermis of Ascaris suum L4 larvae [42] and adult Oesophagostomum dentatum [45] treated with fractions or extracts from hazelnut skin rich in condensed tannins, whereas Mori et al. 
did not observe any changes in the cuticle of C. elegans after incubation with ellagitannins [49]. Also, none of the OPC clusters tested in our assays caused any disruption of the cuticle of the free-living nematode C. elegans, which has previously been shown to be far more resistant to external factors than the cuticle of parasitic nematodes [51]. Plant Material and Chemicals Leaves from C. mucronatum were harvested in April and May 2011 from the Bosomtwi-Atwima-Kwanwoma area in the Ashanti region of Ghana, located between 0.15-2.25°W and 5.50-7.46°N. After botanical authentication the material was air dried for two weeks at room temperature and reference samples were stored at the Institute for Pharmaceutical Biology and Phytochemistry, Muenster, Germany (voucher no. IPBP-324). If not stated otherwise, all chemicals were purchased from VWR (Darmstadt, Germany). NMR spectra were recorded on Agilent DD2 400 MHz or 600 MHz spectrometers (Agilent Technologies, Santa Clara, CA, USA). Samples were dissolved in chloroform-d1 or methanol-d4 and the solvent peaks were set as reference at 7.260 ppm or 4.870 ppm, respectively. Peracetylation of the oligomeric procyanidins was performed in pyridine/acetic anhydride (1:1) at room temperature for 24 h in the dark [17]. UHPLC-ESI-qTOF-MS: Separation was performed on a Dionex Ultimate 3000 RS Liquid Chromatography System (Thermo Fisher, Oberhausen, Germany) over a Dionex Acclaim RSLC 120, C18 column (2.1 × 100 mm, 2.2 µm) with a binary gradient (A: water with 0.1% formic acid; B: acetonitrile with 0.1% formic acid) at 0.8 mL/min. 0 to 9.5 min: linear from 5% to 100% B; 9.5 to 12.5 min: isocratic at 100% B; 12.5 to 12.6 min: linear from 100% to 5% B; 12.6 to 15.0 min: isocratic at 5% B. The injection volume was 2 µL. Eluted compounds were detected using a Dionex Ultimate DAD-3000 RS over a wavelength range of 200-400 nm and a Bruker Daltonics micrOTOF-QII time-of-flight mass spectrometer (Bruker, Bremen, Germany) equipped with an Apollo electrospray ionization source in negative mode at 5 Hz over a mass range of m/z 50-2000 using the following instrument settings: nebulizer gas nitrogen, 5 bar; dry gas nitrogen, 9 L/min, 220 °C; capillary voltage 3500 V; end plate offset −500 V; transfer time 100 µs, prepulse storage 10 µs; collision cell RF settings were combined into each single spectrum of 1000 summations as follows: 500 summations with 1400 Vpp + 500 summations with 350 Vpp. Internal dataset calibration (enhanced quadratic mode) was performed for each analysis using the mass spectrum of ESI-L low-concentration tune mix (Agilent Technologies) that was infused during LC re-equilibration using a divert valve equipped with a 20 µL sample loop. Preparation of Plant Extract and Partitions (see Figure 1) Dried and pulverized plant material (1 kg) was defatted for 18 h by Soxhlet extraction with petroleum ether, yielding 3.6 g of extract. The remaining material (995 g) was successively extracted with ethanol-water (1:1 v/v) at a drug-solvent ratio of 1:10 using an Ultra-Turrax ® (IKA, Staufen, Germany) at 9500 rpm for 10 min under ice cooling. The suspension was centrifuged at 3000× g for 10 min, concentrated in vacuo and lyophilized. The crude extract (yield: 267 g) and all fractions obtained from it in the subsequent fractionation were stored at −20 °C. A portion of the EtOH-H2O extract (265 g) was partitioned between ethyl acetate and water by dissolving portions of 15 g of EtOH-H2O extract in 500 mL of water and extracting five times with 500 mL of EtOAc.
The aqueous and organic phases were filtered (filter paper 595, S & S, Dassel, Germany) and lyophilized. Yield: 42 g of the EtOAc phase and 161 g of the aqueous phase, corresponding to 15.7% and 60.4% of the EtOH-H2O extract respectively. All signals were in accordance with literature [30], the assignment of the protons at position 4 of the C-ring and of the F-ring was revised according to correlations obtained from NOE and HMBC spectra. final concentration of DMSO did not exceed 1% (v/v). Aliquots of the stock solution were added to a 24-well microtiter plate containing culture medium (50 mL 20 % (w/v) dextrose solution, 500 µL of a solution from cholesterol 5 mg/mL in ethanol, 500 µL 1 M CaCl2, 500 µL 1 M MgSO4, 12.5 mL 1 M KH2PO4/K2HPO4 and 500 µL penicillin/streptomycin solution (10,000 U/10,000 µg/mL) in 500 mL M9 buffer solution) to a final volume of 500 µL per well. Test concentrations ranged from 0.05 to 5 mg/mL for fractions and from 1 to 1000 µM for purified compounds. Each substance was tested in 4 replicates per treatment and each experiment was independently performed in triplicate. A solution of levamisole hydrochloride (40 mM) (AppliChem, Darmstadt, Germany) served as a positive control; DMSO 1% (v/v) was used as a negative control. 10 to 20 worms (L4 larvae or young adults) were incubated with the respective test substance at 20 °C and the mortality was assessed after 72 h by counting the number of dead worms under a dissecting microscope: worms that were immotile and completely straight were counted as dead if they did not respond when prodded with an eyelash. The percentage of dead worms was calculated as the number of dead worms in relation to the total number of worms per well. Statistical Analysis Data obtained from the in vitro assay were analyzed using GraphPad Prism ® Ver. 3 (GraphPad Software, Inc., La Jolla, CA, USA). Mean values of mortality rates were compared by a one-way ANOVA test followed by a Tukey's Test for multiple comparison. A p-value < 0.05 compared to the negative control was considered to be significant. Conclusions Unsubstituted oligomeric procyanidin units were found to be the active components of a hydroethanolic leaf extract of C. mucronatum, a plant which is traditionally used in West Africa as an anthelmintic remedy. Structure elucidation of the isolated OPCs revealed that they are almost entirely composed of epicatechin units with 4β→8 and 4β→6 linkages. The activity of these compounds increased with their molecular size showing a maximum activity from DP4 to DP10. These findings point towards an interaction of OPCs with so far unidentified proteins of the target organism. Our findings confirm and rationalize the traditional use of C. mucronatum and provide further insight into the anthelmintic activities of condensed tannins. Further studies evaluating the potential of extract and isolated clusters against different parasitic nematodes would be desirable. Additionally, the precise mode of action of condensed tannins apart from few microscopic observations remains to be investigated.
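The mortality statistics described above were computed in GraphPad Prism; the sketch below re-creates the same workflow (per-well mortality percentage, one-way ANOVA, Tukey's multiple-comparison test) with open-source tools and entirely made-up numbers, purely as an illustration of the analysis.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical mortality percentages per treatment (4 wells x 3 independent runs)
groups = {
    "DMSO_control": [3, 5, 2, 4, 6, 3, 2, 5, 4, 3, 4, 5],
    "OPC_DP3":      [38, 42, 35, 40, 44, 39, 37, 41, 36, 43, 40, 38],
    "OPC_DP5":      [61, 58, 65, 60, 63, 59, 62, 64, 57, 66, 61, 60],
}

# One-way ANOVA across treatments, then Tukey's HSD for pairwise comparisons
f_stat, p_val = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3g}")

values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))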
5,870.2
2015-08-01T00:00:00.000
[ "Biology", "Chemistry", "Environmental Science", "Medicine" ]
Improving the efficiency of soybean breeding with high-throughput canopy phenotyping Background In the early stages of plant breeding programs, high-quality phenotypes are still a constraint on improving genetic gain. New field-based high-throughput phenotyping (HTP) platforms have the capacity to rapidly assess thousands of plots in a field with high spatial and temporal resolution, with the potential to measure secondary traits correlated to yield throughout the growing season. These secondary traits may be key to selecting soybean lines with high yield potential more quickly and more efficiently. Soybean average canopy coverage (ACC), measured by unmanned aerial systems (UAS), is highly heritable, with a high genetic correlation with yield. The objective of this study was to compare direct selection for yield with indirect selection using ACC and with selection using ACC as a covariate in the yield prediction model (Yield|ACC) in early stages of soybean breeding. In 2015 and 2016 we grew progeny rows (PR) and collected yield and days to maturity (R8) in the standard way, and canopy coverage using a UAS carrying an RGB camera. The best soybean lines were then selected using three criteria, Yield, ACC and Yield|ACC, and advanced to preliminary yield trials (PYT). Results We found that for the PYT in 2016, after adjusting yield for R8, there was no significant difference between the mean performances of the lines selected based on ACC and on Yield. In the PYT in 2017 we found that the highest yield mean came from the lines directly selected for yield, but this may be due to environmental constraints on canopy growth. Our results indicated that PR selection using Yield|ACC selected the most top-ranking lines in advanced yield trials. Conclusions Our findings emphasize the value of aerial HTP platforms for early stages of plant breeding. Though ACC selection did not result in the best-performing lines in the second year of selections, our results indicate that ACC has a role in the effective selection of high-yielding soybean lines. Background Breeders are challenged to increase the rate of genetic gain. Genetic gain in a crop breeding program can be defined as ΔG = h² i σ_p / L, where h² is the narrow-sense heritability, i is the selection intensity, σ_p is the phenotypic standard deviation and L is the breeding cycle time or generation interval [1]. This equation translates theoretical quantitative genetics into parameters that breeders can manipulate in their breeding pipelines [2]. In this context genetic gain can be increased in a number of ways, including: increasing population size to increase selection intensity, shortening the breeding cycle, ensuring suitable genetic variation in the population, and obtaining accurate estimates of the genetic values [3][4][5]. Phenotyping directly or indirectly influences these parameters, which emphasizes the need for accurate, precise, relevant and cost-effective phenotypic data [6]. Plant phenotyping has recently integrated new technology from the areas of computer science, robotics, and remote sensing, resulting in high-throughput phenotyping (HTP) [6][7][8][9]. Platforms have been developed based on high capacity for data recording and speed of data collection and processing in order to capture information on structure, physiology, development, and performance of large numbers of plants multiple times throughout the growing season [8,10].
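As a brief aside, a worked illustration of the breeder's equation quoted above is given below; it computes the expected gain for hypothetical parameter values and shows how shortening the cycle length L raises the annual rate of gain. All numbers are assumptions chosen for illustration, not values from this study.

def expected_genetic_gain(h2, i, sigma_p, L):
    # Breeder's equation: response to selection per unit time, dG = h^2 * i * sigma_p / L
    return h2 * i * sigma_p / L

# Hypothetical inputs: h^2 = 0.3, top 10% selected (i ~ 1.755),
# phenotypic SD of 400 kg/ha, and a 4-year versus a 3-year breeding cycle
for cycle_years in (4, 3):
    gain = expected_genetic_gain(h2=0.3, i=1.755, sigma_p=400.0, L=cycle_years)
    print(f"L = {cycle_years} yr: expected gain ~ {gain:.1f} kg/ha per year")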
Compared with other platforms, imagery-based field HTP using unmanned aerial systems (UAS) has the advantage of high spatial and temporal resolution [11] and is non-destructive. There are a number of applications of a trait that can be precisely phenotyped with an HTP platform in a breeding pipeline. Secondary traits may increase prediction accuracy in multivariate pedigree or genomic prediction models [12][13][14]. Alternately, traits measured with HTP can be used in selection indices or for indirect selection for yield [15]. Indirect selection may be preferable when the secondary trait is easier or less expensive to measure than yield and if it can be selected out-of-season or in earlier developmental stages or generations, accelerating decision-making steps, and consequently decreasing the breeding cycle [16,17]. In a typical soybean breeding program, after reaching desired homozygosity, a common procedure is to select individual plants and then grow the next generation in progeny rows (PR) trials [18]. At this stage, there is usually a large number of entries but a small number of seeds, limiting the experiment to unreplicated onerow plots at one location [19]. Due to these limitations, yield measurements in PR are inaccurate and may require a large investment of resources. In this scenario, HTP has the potential to remotely measure in a nondestructive manner traits correlated to yield in early stages of development, improving data quality and reducing time or cost, or, for selection [20,21]. Several studies have demonstrated that attaining full canopy coverage, and thus maximum light interception (LI), during vegetative and early reproductive periods is responsible for yield increases in narrow-row culture due to enhanced early growth [22][23][24]. As management practices change over time, more recent studies using different plant populations found that rapid establishment of canopy coverage improves the interception of seasonal solar radiation, which is the foundation for crop growth and yield [25,26]. LI efficiency, measured as leaf area index (LAI), was significantly correlated to yield in a study comparing soybean cultivars released from 1923 to 2007 [27]. In addition, the rapid development of canopy coverage can decrease soil evaporation [28] and suppress weeds [29][30][31]. Purcell [32] showed that soybean LI can be measured as a function of canopy coverage from images taken from above the plot using a digital camera. In addition, soybean canopy coverage can also be effectively extracted automatically from UAS-based digital imagery [33]. Xavier et al. [33] observed that average canopy coverage (ACC) measured early season was highly heritable (h 2 = 0.77) and had a promising genetic correlation with yield (0.87), making it a valuable trait for indirect selection of yield. In the same study, they found a large effect quantitative trait locus (QTL) on soybean chromosome 19 that resulted in an estimated increase in grain yield of 47.30 kg ha −1 with no increase in days to maturity (− 0.24 days). Candidate genes associated with growth, development, and light responses were found in genome-wide association analysis of imagery-based canopy coverage during vegetative development [34]. Jarquin et al. [12] found that early season canopy coverage, used to calibrate genomic prediction models, improved the predictive ability for yield, suggesting that it is a valuable trait to assist selection of high yield potential lines. 
Thus, early season canopy coverage has the potential to be used as a secondary trait for indirect selection for yield or as covariables to improve yield estimations in quantitative genetic models [21]. While several studies have shown the value of UAS to phenotype various traits for a number of crops [35][36][37][38][39][40], to our knowledge there is no study showing the use of UAS-derived phenotypes for applied breeding purposes. In addition, no empirical studies have reported on the efficacy of using canopy coverage phenotypes in a soybean breeding pipeline. Selection experiments are useful for comparing breeding methods by enabling the assessment of realized gains of different selection categories to identify the most effective method. Our aim was to perform a selection experiment to compare the yield performance of soybean lines selected from PR based on yield with those selected based on ACC from imagery acquired with UAS. Description of breeding populations This study used 2015 and 2016 F 4:5 progeny rows (PR) populations from the soybean breeding program at Purdue University. These trials were grown under a modified augmented design with replicated checks at the Purdue University Agronomy Center for Research and Education Phenotypic data For all trials, grain yield and days to maturity (R8) were collected for every plot. Grain yield (g/plot) was converted to kg ha −1 using harvest-timed seed moisture to adjust all plot values to 13% seed moisture. R8 was expressed as days after planting when 50% of the plants in a plot had 95% of their pods mature [41]. For PR 2015 and 2016 we quantified canopy coverage from aerial images collected using a fixed-wing Precision Hawk Lancaster Mark-III UAS equipped with a 14-megapixel RGB Nikon 1-J3 digital camera. Flights were performed at an altitude of 50 m, which resulted in a spatial resolution of 1.5 cm per pixel. We used eight sampling dates of early-season canopy development, ranging from 15 to 54 DAP (15,29,34,37,44,47,51,54 DAP) in 2015 PR, and seven sampling dates, ranging from 20 to 56 DAP (20,27,31,37,42,52, 56 DAP) in 2016 PR. The trials were maintained free of weeds to ensure that the images captured only soybean canopy. Image analysis, plot extraction, and classification were performed using a multilayer mosaic methodology described by Hearst [42]. This methodology allows for the extraction of the plots from ortho-rectified RGB images using map coordinates, resulting in several plot images of different perspectives from the same sampling date due to overlapping frame photos. The number of plot images from the same date varies from plot to plot. Image segmentation was done using Excess Green Index (ExG) and Otsu thresholding [42] to separated canopy vegetation from the background. Canopy coverage was calculated as the percentage of image pixels classified as canopy pixels. Median of canopy coverage values from replicated plot images was calculated for each sampling date. For each plot, average canopy coverage (ACC) was obtained by averaging the median canopy coverage among sampling dates. Figure 1 summarizes the process from image acquisition to the calculation of ACC. 
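Plot extraction and classification in this study followed the multilayer mosaic methodology of Hearst [42]; as a simplified sketch of only the segmentation and aggregation steps described above (Excess Green Index, Otsu thresholding, median over replicate plot images, mean over sampling dates), the following code is one possible implementation. The function names and the use of scikit-image are illustrative choices, not the authors' implementation.

import numpy as np
from skimage.filters import threshold_otsu

def canopy_coverage(rgb):
    # Fraction of plot pixels classified as canopy via Excess Green Index + Otsu
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., k] / total for k in range(3))  # chromatic coordinates
    exg = 2.0 * g - r - b                              # Excess Green Index
    mask = exg > threshold_otsu(exg)                   # canopy vs. background
    return float(mask.mean())

def average_canopy_coverage(plot_images_by_date):
    # ACC: median coverage over replicate images of a plot, averaged over dates
    per_date = [np.median([canopy_coverage(img) for img in imgs])
                for imgs in plot_images_by_date]
    return float(np.mean(per_date))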
Statistical data analysis and selection methods of PR PR 2015 and 2016 yield, R8, and ACC phenotypes were fitted in a pedigree-based mixed model to estimate variance components and breeding values, using Gibbs sampling implemented in the R package NAM [43]:

y_i = µ + g_i + e_i (1)

where y_i is the phenotype, µ is the mean, g_i (i = 1, ..., number of genotypes) is the random genotype effect with g_i ~ N(0, Aσ²_a), where A is the relationship matrix calculated using pedigrees traced back to the PR founders and σ²_a is the additive genetic variance, and e_i is the residual term with e_i ~ N(0, Rσ²_e), where R is a field correlation matrix included to account for spatial variation, calculated from the average phenotypic value of neighbouring plots [44], and σ²_e is the residual variance. For yield, an additional model was fitted in order to adjust for ACC (Yield|ACC), in which a fixed ACC effect (covariate) β_i (i = 1, ..., number of genotypes) was added to the previous model. Yield|ACC is treated as a different trait from yield. The solutions for g_i for each trait are the best linear unbiased predictors (BLUPs). To estimate phenotypic correlations, we calculated Pearson's correlations among the BLUPs for the different traits. Narrow-sense heritability (h²) was calculated as h² = σ²_a / (σ²_a + σ²_e), where σ²_a and σ²_e are as described previously. For the selection experiment, the selection categories (traits) used in this study were Yield BLUPs, as the traditional selection method, ACC BLUPs, and Yield|ACC BLUPs. Lines were selected based on BLUP rankings within each selection category. For PR 2015 we selected approximately 9% of progenies for each selection category. Since some lines were selected by more than one selection category, the total number of lines selected was 523. In 2016, since we had more progeny lines, we decreased the selection to 7.5%. Due to the overlap of lines selected among the selection categories, we selected 705 lines. There was some deviation from the intended selection intensities due to seed limitations, field space, or logistics in the breeding pipeline. Figure 2 shows the summary of lines selected by each selection category for PR 2015 and 2016. As described above, selected lines were divided into early and late PYT. Evaluation of PYT and AYT To evaluate PYT line performance, yield and R8 phenotypes across locations were fitted using a restricted maximum likelihood (REML) approach, implemented in the R package lme4 [45]:

y_ijkl = µ + g_i + loc_j + r_k(j) + b_l(k(j)) + (g*loc)_ij + e_ijkl (3)

where y_ijkl is the phenotype, µ is the mean, g_i (i = 1, ..., number of genotypes) is the random genotype effect with g_i ~ N(0, σ²_g), where σ²_g is the genetic variance, loc_j (j = 1, ..., number of environments) is the random location effect with loc_j ~ N(0, σ²_loc), where σ²_loc is the location variance, r_k(j) is the random effect of the kth replication nested within the jth location with r_k(j) ~ N(0, σ²_r), where σ²_r is the replication-within-location variance, b_l(k(j)) is the random effect of the lth incomplete block nested within the kth replication and jth location with b_l(k(j)) ~ N(0, σ²_b), where σ²_b is the block variance, (g*loc)_ij is the random genotype-by-location interaction effect with (g*loc)_ij ~ N(0, σ²_gxloc), where σ²_gxloc is the genotype-by-location variance, and e_ijkl is the residual term with e_ijkl ~ N(0, σ²_e), where σ²_e is the residual variance. Adjusted values for yield and R8 were calculated as µ + g_i, to express the phenotypes with units.
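Returning to the PR-stage computations above: the variance components and breeding values were obtained with Gibbs sampling in the R package NAM [43]; purely to make the two downstream steps concrete (heritability from the variance components and BLUP-ranking selection), here is a minimal sketch with invented numbers.

import numpy as np

def narrow_sense_h2(var_additive, var_residual):
    # h^2 = sigma_a^2 / (sigma_a^2 + sigma_e^2)
    return var_additive / (var_additive + var_residual)

def select_top(blups, names, fraction):
    # Rank lines by their BLUPs and keep the top fraction (e.g. ~9% as in PR 2015)
    order = np.argsort(blups)[::-1]
    n_keep = max(1, int(round(fraction * len(blups))))
    return [names[k] for k in order[:n_keep]]

# Hypothetical variance components and BLUPs for five lines
print("h2 =", round(narrow_sense_h2(120.0, 480.0), 2))
blups = np.array([310.0, -45.0, 120.0, 87.0, -10.0])
print(select_top(blups, ["L1", "L2", "L3", "L4", "L5"], fraction=0.4))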
Maturity is a confounding factor that influences yield, which may lead to misinterpretation of the yield potential of a line; therefore, we also calculated yield adjusted for R8 by including R8 as a covariate in Eq. 3. In a breeding program, the method that increases the population mean the most from one generation to the next is the preferred method; therefore, to evaluate the performance of the lines in the selected classes we performed two-sample t-tests to compare the adjusted yield means of lines in each selected class. The best selection category is the one producing the highest yield mean within an early or late trial, considering that all lines came from the same original populations. Although the AYT was not part of the selection experiment, we wanted to evaluate how the top-ranked lines had been selected. Lines were selected from the PYT using rankings of yield BLUPs and advanced to the AYT. For the AYT data summary, Eq. 3 was used with genotype changed to a fixed effect. AYT lines were classified as early and late based on R8 phenotypes. Table 1 shows the estimated narrow-sense heritability and phenotypic Pearson's correlations for Yield, ACC, Yield|ACC, and R8 for the 2015 and 2016 PR. Positive correlations were observed between all traits and Yield, with the highest observed for Yield|ACC. ACC showed a low (0.01) or negative (−0.1) correlation with R8 and a negative correlation with Yield|ACC in both years. R8 and Yield|ACC were positively correlated. Narrow-sense heritability for Yield|ACC and R8 was higher than for Yield in both years. Narrow-sense heritabilities were low for ACC and Yield, but the heritability of ACC was higher than that of yield in 2017. PYT selection category performance The box plots presented in Fig. 3a show the distributions of adjusted yield values for lines in each selected class, and adjusted R8 means are summarized in Additional file 1: Table S1. For PYT early 2016 the yield mean was not significantly different among the lines from the different selected classes. For PYT late 2016 the lines selected by Yield had a statistically significantly higher mean yield, and there were no statistically significant differences in mean yield among the lines selected by ACC and Yield|ACC. The mean yield of the lines selected by ACC and Yield was not statistically significantly different in PYT late 2016 when considering yield adjusted for R8 (Fig. 3b). For PYT early and late in 2017, the mean yield among lines from different selected classes was statistically significantly different, and the lines selected by Yield had a higher mean yield. Table 2 summarizes the ten top-ranked lines in AYT 2017 and 2018. In both years, the lines were mostly selected by two selection categories. None of the ten top-ranked lines in the AYT early 2017 were selected by Yield alone at the PR stage. In the AYT late 2017 only one line was selected by Yield alone at the PR stage, in rank position ten. In AYT 2018 early and late the Yield selection category alone selected just three and two of the ten top-ranked lines, respectively. Considering both years, the number of top-ranked lines selected using only ACC and/or Yield|ACC was greater (14 lines) than the number selected by Yield alone (6 lines). Discussion The positive phenotypic correlation found in this study between yield and ACC in PR 2015 (Table 1) is in agreement with other studies [12,33,34]; however, this result was not repeated in PR 2016.
Phenotypic correlation depends on genetic and environmental correlations, thus even when no phenotypic correlation can be estimated the traits may still be correlated genetically and environmentally [1]. Considering that some studies showed a strong positive genetic correlation between ACC and yield, the lack of phenotypic correlation in PR 2016 may be the reflection of the genetic and environmental correlations acting in opposite directions between the two traits, as well as the interaction between genotype and environment [1,33,46,47]. We observed none to negative phenotypic correlations between ACC and R8 in PR 2015 and PR 2016, respectively, indicating that selection on ACC should not lead to indirect increases in maturity. In both years, ACC and Yield|ACC were negatively correlated, which is expected since adjusting yield for ACC will correct the yield data to a baseline value of ACC, thus, simplistically, yield decreases for higher ACC and increases for lower ACC. For PR 2015 and 2016 ACC heritabilities (Table 1) were lower when compared with other studies [33,47], but these studies used multiple environments of replicated data, and we observed comparatively lower yield and R8 heritabilities as well. Generally, low heritabilities in PR trials are expected given unreplicated single row plot trials leading to challenges in the estimation of the genetic parameters of the tested lines. It is generally accepted that maturity confounds yield estimates in soybeans and later maturing cultivars will generally out-yield earlier maturating cultivars. In soybean breeding, yield phenotypes are sometimes corrected for R8 to better estimate yield potential per se and avoid indirect selection for later maturity. In our study, PYT early 2016 was the best scenario to compare the selection categories due to the lack of statistically significant differences in R8 among the selected classes (Additional file 1, Fig. S1). For this trial, the mean yield among the selection categories was not significantly different (Fig. 3), indicating that indirect section for yield based on ACC or using Yield|ACC would result in the same yield gain than direct selection on yield, considering that they derived from the same base population. Using ACC as a selection criterion in early stages of soybean breeding pipelines would provide advantages not only in the reduction of the time for selection but also in the cost associated with the trait measurement. For the other three trials, PYT late 2016 and PYT 2017, there were differences in the mean R8 between at least among two of the selection categories (Additional file 1, Fig. S1). Therefore, differences in the mean yield among the selection categories may be associated with the differences in days to maturity. The yield correction for R8 changed the comparison among the selection categories Yield and ACC in PYT 2016 late, making them similarly efficient for selection (Fig. 3). Although ACC selection did not produce higher gains than Yield selection, both PYT in 2016 confirm findings from Xavier et al. [33] that assuming identical selection intensities indirect selection for yield using ACC would have a relative efficiency for selection comparable to yield direct selection. In general, the findings from PYT 2016 did not hold in 2017 trials (Fig. 3). 
Even after adjusting for R8 the lines selected by Yield had a higher performance than the lines selected by the other selection categories; however, the differences among the yield mean from lines selected by Yield and Yield|ACC was small for both early (~ 120 kg/ha) and late (~ 150 kg/ha) trials (Additional file 1: Table S1), which may indicate that Yield|ACC is a valuable trait for selection. This contrasting results in trait selection efficacy observed in 2016 and 2017 may be explained by differences in canopy coverage development in PR 2015 and PR 2016, as showed in the comparison of canopy coverage development over time of the common checks among years (Additional file 1, Fig. S2). In 2015 at around 53 days after planting (DAP) we observed an average of canopy coverage of 35% in the checks, while at the same DAP in 2016 the checks had an average of almost 80% canopy coverage. This abnormal growth in 2016 produced tall plants and increased lodging (data not shown), which has a great effect in unreplicated single row plot trials where every genotype is competing with both neighbor rows. Considering that taller and bigger plants do not result in higher yields when ranking the top BLUPs, several lines that were selected based on ACC may have had poor yield potential. In addition, the lack of correlation of yield and ACC in PR 2016 may have been a result of this unusual canopy growth. Therefore, despite the evidence that one trait can be used to indirect select for yield, the breeder needs to consider the environmental influence on the trait phenotypes at the time of selection. In our case, we could have used a threshold for ACC before doing the selections, avoiding the very high values of canopy coverage, or restricted selection dates to earlier points in development. If we consider the top 40 lines from AYT in 2017 and 2018, direct selection for yield alone selected only 6 lines from the PR trials, compared to 14 lines selected using ACC and/or Yield|ACC. Thus, despite the difference in mean performance among the selection categories in the PYT stage, we have demonstrated that ACC alone or combined with yield (Yield|ACC) are valuable secondary traits for selection in the PR stage. Yield|ACC had the best selection result in the top 10 lines for the AYT. Poor yield measurements due to harvesting errors, weather, and plot damage, lead to inaccurate representations of yield potential. Adjusting yield for early season ACC compensates for these inadequacies and is a better predictor of the real yield potential. This is in agreement with Jarquin et al. [12] results showing that early season canopy coverage increased the predictive accuracy of yield in genomic predictions models. Additionally, digital canopy coverage has a one to one relationship to LI, which in turn is an important factor for yield potential equation [32,33,48]. Therefore, up to a certain point, increases in LI, through ACC, will result in increases in yield when the other parameters in the yield equation are kept the same. In this study, we have shown that the efficiency of selecting high yielding soybean lines can be improved by taking advantage of an HTP trait. Field-based HTP using UAS is robust, simple, and cost-effective and can measure a wide range of phenotypes that can be converted into useful secondary traits [2,49]. Breeding teams need to evaluate carefully the value of these secondary traits in increasing genetic gain either in a phenotypic selection or as part of pedigree or genomic prediction schemes [2,14]. 
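The value of a secondary trait for increasing genetic gain can be made explicit with the standard correlated-response result from quantitative genetics (a textbook formula, not one given in this paper): with selection intensities i_X on the secondary trait X (here ACC) and i_Y on yield Y, square-root heritabilities h_X and h_Y, additive genetic correlation r_g, and additive standard deviation of yield σ_{A_Y},

\[
  CR_Y = i_X\, r_g\, h_X\, \sigma_{A_Y}, \qquad
  R_Y  = i_Y\, h_Y\, \sigma_{A_Y}, \qquad
  \frac{CR_Y}{R_Y} = \frac{i_X\, r_g\, h_X}{i_Y\, h_Y}
  = r_g\,\frac{h_X}{h_Y} \quad \text{when } i_X = i_Y,
\]

so a secondary trait with high heritability and a strong genetic correlation with yield, as reported for ACC by Xavier et al. [33], can approach the efficiency of direct selection for yield.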
In addition, we recommend testing different scenarios to ensure if the greater response is using the secondary trait alone or in combination with yield. However, if not in the literature, an investigation of heritability and genetic correlation to yield should be carried out to evaluate the potential of the trait. Conclusions One of the most important tasks of a plant breeder is to find among the available selection criteria a combination that can promote the desirable genetic gain for the traits of interest within their breeding program. Field HTP must be integrated into a wider context in breeding programs than trait estimation, evaluation of platforms, and genetic association studies. We examined three different ways to select soybean lines from PR trials: Yield, ACC and Yield|ACC. We compared their performance in advancing selected lines in the following generations common in a soybean breeding program. We have demonstrated that the secondary trait ACC measured using an aerial HTP platform can be used for selection, alone or in combination with yield, in early stages of soybean breeding pipelines. This method may offer even more advantages when yield is low quality or can't be phenotyped due to the high cost or extreme weather events. Further studies are needed to assess environmental effects on canopy coverage phenotypic variation in order to have optimized recommendations on the use of ACC for selecting high yielding lines in different scenarios. Supplementary information Supplementary information accompanies this paper at https ://doi. org/10.1186/s1300 7-019-0519-4. Additional file 1: Table S1. Adjusted mean and standard deviation for yield (Kg/ha) and R8 (days to maturity) by selection criteria for preliminary yield trials (PYT) early and late in 2016 and 2017. Figure S1. Box plot of adjusted R8 (days to maturity) distribution for lines selected by each selection categories (Yield, ACC and Yield|ACC) for preliminary yield trials (PYT) early and late in 2016 and 2017. Diamond indicates mean for each selection categories. The line crossing the box plots are representing the median for each class. No significative (ns); p > 0.05; *p ≤ 0.05; **p ≤ 0.01; ***p ≤ 0.001; ****p ≤ 0.0001. Figure S2. Distribution of average canopy coverage of the checks by days after planting for progeny rows 2015 and 2016.
5,972.4
2019-11-19T00:00:00.000
[ "Agricultural And Food Sciences", "Biology" ]
Grid-enhanced X-ray coded aperture microscopy with polycapillary optics Polycapillary devices focus X-rays by means of multiple reflections of X-rays in arrays of bent glass capillaries. The size of the focal spot (typically 10–100 μm) limits the resolution of scanning, absorption and phase-contrast X-ray imaging using these devices. At the expense of a moderate resolution, polycapillary elements provide high intensity and are frequently used for X-ray micro-imaging with both synchrotrons and X-ray tubes. Recent studies have shown that the internal microstructure of such an optics can be used as a coded aperture that encodes high-resolution information about objects located inside the focal spot. However, further improvements to this variant of X-ray microscopy will require the challenging fabrication of tailored devices with a well-defined capillary microstructure. Here, we show that submicron coded aperture microscopy can be realized using a periodic grid that is placed at the output surface of a polycapillary optics. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics does not rely on the specific microstructure of the optics but rather takes advantage only of its focusing properties. Hence, submicron X-ray imaging can be realized with standard polycapillary devices and existing set-ups for micro X-ray fluorescence spectroscopy. In a previous experiment, object details at a resolution of approx. 8 microns could be reconstructed using an optics with a 40 μm spot 15. However, XCAMPO cannot be directly extended to submicron imaging, i.e., to imaging at a resolution comparable to the spacing of the individual capillaries, which is always much smaller than the size of the focal spot. Very recently, to overcome this problem, a special variant of XCAMPO, namely, defect-assisted microscopy 26, was proposed. This approach takes advantage of natural point defects in polycapillary structures, such as missing, crushed or slightly larger capillaries. It was demonstrated that such intrinsic point defects, by breaking the periodicity of capillary arrays, lead directly to the formation of multiple X-ray images of an object placed inside the focal spot of a polycapillary optics. Such multiple images can be analysed using the coded aperture principle and provide a spatial resolution at the level of 0.5 μm. Defect-assisted XCAMPO is very promising, but for practical applications, it requires the challenging fabrication of tailored X-ray optics with intentionally introduced defects. In this work, we demonstrate submicron coded aperture microscopy using an external periodic grid placed at the output surface of the polycapillary optics. In contrast to previous approaches, in which the internal microstructure was used as the coded aperture, grid-enhanced X-ray coded aperture microscopy with polycapillary optics does not rely on the specific structure of a polycapillary optics. However, grid-enhanced XCAMPO still profits from the small size of the focal spot of the optics and from the high flux inside the focal spot. Therefore, submicron X-ray imaging can be realized with standard polycapillary optics or existing set-ups for micro X-ray fluorescence spectroscopy (μXRF) 27,28. In this work, we used a standard grid for transmission electron microscopy as the coding aperture. In future, the shape of the coded aperture could be optimized using existing micro- and nanofabrication technology. Results Image formation in grid-enhanced XCAMPO. The principle of grid-enhanced XCAMPO is presented in Fig. 1(a).
A periodic aperture or grid with a pitch larger than the size of the focal spot is placed at the output surface of the optics. The object to be imaged is placed inside the focal spot, and a magnified image of the grid is recorded by an X-ray camera. For a simplified description of the image formation principle in grid-enhanced XCAMPO, a slightly modified version of a formula presented in previous works can be used 15,29. For an object placed in the focal plane of the optic, the intensity I(r) of the X-rays recorded by the camera can be approximated by the formula I(r) ≈ S_0 [T_M(r) F_M(r)] ⊗ S_m(r) (1), where ⊗ denotes the convolution operation; T_M is the transmission of the object, magnified by a factor of (1 − M); and the magnification factor is given by M = (f − D)/f, where D is the detector-to-optics distance. F_M describes the spatial distribution of the radiation in the focal spot (with an approximately Gaussian shape) at a magnification of M. Note that the polycapillary optics transmits radiation in an incoherent way. Hence, to a first approximation, the shape of the focal spot is not modified by the presence of the external grid at the output surface of the optics. S_m represents the spatial distribution of the radiation behind the grid at a magnification of m. This second magnification factor is defined as m = (f − g − D)/(f − g), where g is the distance between the grid and the output surface of the optics. The prefactor S_0 is the total number of photons in the focal spot. Equation (1) means that an object inside the focal spot of the optics "distorts" the image of the grid recorded using the X-ray camera. In other words, the distorted image of the grid encodes information about the object. Proof-of-principle experiment. A proof-of-principle experiment for grid-enhanced XCAMPO was performed using a laboratory set-up. A gold transmission electron microscopy grid with a pitch of 12.5 μm was placed close to the output surface (g ≈ 0.5 mm) of a focusing polycapillary optics with an exit diameter of 1.1 mm and a focal length f ≈ 2.5 mm. The FWHM of the focal spot of the optics was Δx ≈ 11 μm. X-rays were recorded by a scintillator that was lens coupled to an sCMOS camera. The scintillator was placed at a distance of D ≈ 40 cm from the optics. Figure 1(b) shows an X-ray image of the grid that was recorded without any object between the grid and the camera. The image of the grid is blurred because of the finite divergence of the microbeams exiting from the capillaries. When a pinhole was placed inside the focal spot of the optics, the image of the grid became much sharper, as demonstrated in Fig. 1(c). A pinhole approximates an object with point-like or δ-like transmission, for which equation (1) becomes I ≈ const × S_m. Hence, in the image shown in Fig. 1(c), the grid is resolved at a resolution comparable to the size of the pinhole. This is evidenced in Fig. 1(e), which shows the Fourier transform of the grid image recorded with the pinhole present in the focal spot. One can observe peaks corresponding to the 7th harmonic of the grid period. Note that the sixfold symmetry corresponding to the capillary microstructure is not visible in Fig. 1(e). First, the resolution of the camera was slightly too poor to resolve individual capillaries. Second, the peaks corresponding to the superstructure of the capillary bundles were much weaker than the peaks induced by the presence of the grid. The grid image recorded in the presence of the pinhole was subsequently used in the decoding procedures as the coding pattern, S.
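As a rough numerical illustration of the image-formation model in equation (1), the following one-dimensional sketch convolves a toy object-times-focal-spot term with a periodic grid transmission. The slit object, the grid duty cycle and the discretization are illustrative assumptions and are not parameters of the actual set-up; only the nominal values of f, D, g, the spot FWHM and the grid pitch are taken from the text.

```python
# Minimal 1-D sketch of the image-formation model of equation (1):
# I(x) ≈ S0 * [T_M(x) F_M(x)] ⊗ S_m(x), evaluated in detector coordinates.
# The slit object, grid duty cycle and sampling are toy assumptions.
import numpy as np

def simulate_intensity(x_um, f_mm=2.5, D_mm=400.0, g_mm=0.5,
                       spot_fwhm_um=11.0, pitch_um=12.5, s0=1.0):
    """Detector intensity for a toy 2-um absorbing slit placed in the focal spot."""
    M = (f_mm - D_mm) / f_mm                   # magnification of the focal spot
    m = (f_mm - g_mm - D_mm) / (f_mm - g_mm)   # magnification of the grid
    # Object transmission T_M: detector coordinate mapped back by (1 - M)
    T = np.where(np.abs(x_um / (1.0 - M)) < 1.0, 0.1, 1.0)
    # Focal spot F_M: Gaussian with FWHM spot_fwhm_um, magnified by |M|
    sigma = abs(M) * spot_fwhm_um / 2.355
    F = np.exp(-0.5 * (x_um / sigma) ** 2)
    # Grid transmission S_m: periodic bars with the pitch magnified by |m|
    S = (np.mod(x_um / (abs(m) * pitch_um), 1.0) < 0.6).astype(float)
    dx = x_um[1] - x_um[0]
    return s0 * dx * np.convolve(T * F, S, mode="same")

x = np.arange(-5000.0, 5000.0, 5.0)  # detector coordinate in micrometres
I_distorted = simulate_intensity(x)  # "distorted" image of the grid
```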
When an object (a set of vertical slits with widths of 2 μm) was placed in the focal spot, the image of the grid became sharper and also distorted. This distorted image encodes high-resolution (not limited by the focal spot size) information about the object. Spatial resolution. To test the resolution of grid-enhanced XCAMPO, we used a JIMA RT RC-02B resolution chart consisting of a set of slits of various widths in an absorbing tungsten material with a thickness of 1 μm. Figure 2(a) shows an X-ray projection recorded without the coding grid and with the chart placed far from the focal plane (exposure of 600 s). The image was normalized with respect to data recorded without the chart. In the projection geometry, the focal spot of the optics acts as a secondary X-ray source, and the half-pitch resolution is limited by the size of the focal spot to approx. 5 μm (i.e., to ~Δx/2). Using projection imaging, slits of various widths were moved into the focal spot of the optic to record grid-enhanced XCAMPO images (exposures of 600 s). Decoding was performed through the deconvolution of equation (1). The image recorded in the presence of the pinhole was taken as an approximation of the coding pattern S. A direct deconvolution of equation (1) using the coding pattern S yields the object's transmission T, multiplied by the shape of the focal spot F. However, the focal spot F was a smooth Gaussian and its shape did not affect the details of the reconstruction. Images decoded from single grid-enhanced XCAMPO images are shown in the top row of Fig. 2(b). To improve the image quality, we performed 1D scans. Each object was scanned across the focal spot, and several grid-enhanced XCAMPO images were recorded and combined, as explained in the Methods section. The resulting data are plotted in the middle row of Fig. 2(b). The bottom row of Fig. 2(c) shows the averaged profiles of these data. These data indicate that the half-pitch spatial resolution is better than 0.9 μm. The highest resolved spatial frequency (7th harmonic) of the grid image shown in Fig. 1(e) is f_max = 7/12.5 μm⁻¹. This corresponds to a half-pitch resolution of 1/(2 f_max) ≈ 0.89 μm. This means that the decoding procedure uses all frequencies up to f_max and is robust against noise. However, please note that the spatial resolution is slightly anisotropic. For horizontal lines (data not shown), a half-pitch resolution between 1 μm and 1.5 μm was achieved. The observed anisotropy is most probably due to thermal drifts, which were much larger in the vertical direction than in the horizontal one, as directly observed during the system warm-up phase. While a polycapillary optics is a non-imaging device, the position of the optics focal spot weakly depends on the position of the X-ray source 30, which may change during long exposures.
The FWHM of the fitted Gaussian is ~ 34.9 μm, and the depth of field (which is usually evaluated at 80% of the maximum intensity 31) is ~ 19.9 μm. XCAMPO shares many properties with laminography 32. For example, the depth resolution depends on the spatial frequency of the imaged features. Hence, one can estimate that for details at the finest half-pitch resolution of 0.9 μm, the depth of field is at the level of 12 μm (e.g., scaling the measured 19.9 μm value roughly in proportion to the feature size gives 19.9 μm × 0.9/1.5 ≈ 12 μm). This value is much larger than the resolution in the lateral plane. This results from the finite (though very large) aperture of the optics. Note, however, that such a resolution is comparable to the resolution of confocal methods that require two "crossed" polycapillary lenses 33. Image quality and decoding artefacts. Figure 4(a) shows an image of a text label reading "μm" on the JIMA chart obtained via grid-enhanced XCAMPO at a submicron resolution. For comparison, Fig. 4(b) shows a standard X-ray projection image recorded with the optics but without the coding grid. The size of the object is much larger than the size of the focal spot. Therefore, to extend the field of view (FOV) of grid-enhanced XCAMPO, the object was laterally scanned in 2 μm steps in both directions (31 × 21 scan points). In Fig. 4(a), the scan step and the FOV of a single exposure are illustrated by the position and diameter, respectively, of the red circles. The step size was intentionally chosen to be much smaller than the FOV. The use of partially overlapping images has been shown to improve the data quality of XCAMPO 15. In essence, this procedure reduces the errors that arise from the imperfect Fourier sampling of individual exposures. To examine the decoding artefacts, exposure times of 15 seconds were used. At shorter exposures, the decoded images were still dominated by artefacts resulting from noise, and the weak artefacts arising from the decoding procedure were barely recognizable. The artefacts resulting from the camera noise and discrete Fourier sampling are the cause of the periodic grainy pattern observed in Fig. 4(a). This noise can be drastically reduced by using a more efficient X-ray camera (for example, a camera with fibre-optic coupling of the scintillator). The decoding artefacts are visible as weak replicas or "ghost" images of the "μm" symbol that are shifted relative to the real image. The extent of this shift is directly related to the period of the coding grid. A perfect reconstruction of XCAMPO images requires that the period of the coding grid p be larger than the focal spot size. However, this condition was not fully met in the experiment. Although the pitch of the coding grid p was greater than the FWHM of the focal spot, i.e., p > Δx, the weak intensity at the tails of the focal spot led to the formation of the visible "ghost" images. Also, note that similar artefacts can be observed in Fig. 2(b), where the spurious signals are indicated by small arrows. In future experiments, this effect can be overcome by using apertures with a perfectly matched period or non-periodic masks.
The X-ray absorption of the cantilever was less than 1%, and its width was comparable to the size of the focal spot. Hence, in the conventional X-ray projection image, the cantilever is very blurry, and the AFM tip is not visible. In the grid-enhanced XCAMPO scan, the tip can be easily localized even in the initial coarse, poor-fidelity scan shown in Fig. 5(c) (13 × 13 scan points with a 60 s acquisition time and a scan step of 3 μm), which was performed to guide the movement of the tip into the focal spot. A finer scan (5 × 5 scan points with a 600 s acquisition time and a scan step of 0.8 μm) clearly shows the tip. Moreover, the skewed pyramidal shape of the tip can be recognized in Fig. 5(d) and (e). Discussion The novelty of the proposed grid-enhanced XCAMPO technique is that it does not rely on the specific microstructure of the polycapillary optic. Previous XCAMPO approaches have used either a periodic superstructure of capillary bundles 15,19 or the intrinsic defects in polycapillary arrays 26. Tailored polycapillaries with a defined capillary distribution are difficult to fabricate. By contrast, coding grids can be routinely prepared and optimized. As demonstrated in Fig. 2(b), grid-enhanced XCAMPO can achieve a submicrometer half-pitch resolution. This resolution is somewhat coarser than that of the recently proposed defect-assisted imaging technique 26, which is limited mainly by the size of a single capillary aperture. In the present experiment, the resolution was mainly limited by the detector resolution. In fact, the lens coupling of the scintillator limited the signal to approx. f_c/4, where f_c is the Nyquist frequency of the detector. Hence, in principle, a resolution at the level of 0.5 μm could be possible with the standard lens used in this work, which was designed for micro X-ray fluorescence spectroscopy. Based on the data presented in Fig. 3 and the frequency dependence of the depth resolution in laminographic-like imaging, one can expect that for 0.5 μm features, the corresponding depth resolution could be at the level of 5 μm. It is worth noting that in an idealized grid-enhanced XCAMPO experiment, the grid should be located exactly at the exit surface of the optics. In such a case, the resolution is not limited by the size of a single capillary aperture, but only by the "sharpness" of the grid or by the highest resolved spatial frequency of the grid image. This effect is somewhat analogous to the lack of penumbra blur in contact imaging. However, such an idealized geometry would be hard to realize in practice. Precise cutting of the optics is not straightforward, and the surfaces of polycapillary devices are usually not perfectly flat. In addition, the grid cannot be infinitely thin, as it must provide enough stopping power for X-rays. Taking into account these limitations, one can estimate that grid-enhanced XCAMPO using a tailored optics with a shorter focal length and/or a more efficient detector placed at a farther distance could achieve a spatial resolution at the level of 250-300 nm. The obtained sub-micron spatial resolution is comparable to the resolution of indirect detection systems that use thin scintillators and are frequently employed in synchrotron experiments 34. However, in grid-enhanced XCAMPO, the object to be imaged is located inside the focal spot of the optics, and submicron X-ray imaging could be realized simultaneously with μXRF scans. Since the coding grid absorbs only approx.
50% of the primary X-ray photons, a polycapillary optics (with a grid in front of it) could still be used very efficiently for element-specific μXRF scans. Grid-enhanced XCAMPO could provide high-resolution transmission images of the same sample. A similar solution has previously been described as very useful for μXRF systems based on monocapillary optics 27. The main drawback of grid-enhanced XCAMPO is its small field of view, which requires scanning and recording of many overlapping images. However, scanning is inherent to μXRF, and optimization of the overlap of adjacent scan spots may shorten the data acquisition and/or improve the decoding procedure 35. Grid-enhanced XCAMPO could also be adapted for X-ray phase-contrast imaging 36,37 or dark-field imaging 38 based on coded aperture masks. Also, the general concept of structured illumination 39,40 could be very useful for obtaining multimodal images with polycapillary optics. In this work, a standard TEM grid was used as a coding aperture. The use of a grid with a pitch that is better matched to the size of the focal spot will eliminate "ghost" artefacts. Due to the robustness against noise, the use of a periodic aperture and an optics with a small focus is most probably the best solution for laboratory-based experiments. For a periodic mask, the deconvolution procedure is limited to intense harmonic peaks in reciprocal space that are characterized by a high signal-to-noise ratio. In addition, the coded aperture is especially efficient when the object is small 41, i.e., when the size of the focal spot does not greatly exceed the spatial resolution. However, grid-enhanced XCAMPO could also be realized with focusing polycapillary half-lenses at synchrotrons. In such a case, the signal-to-noise ratio is not a major concern, and it would be interesting to perform experiments with non-periodic masks 23,22,42. Non-periodic masks would permit a more uniform sampling of the object's Fourier space and enable high-resolution imaging with optics that have much larger focal spots. Methods Experimental details. The experiments were performed using a tungsten anode X-ray tube (XTG5011 Apogee, Oxford Instruments) with a 40 μm spot that was operated at 50 kV and 1 mA. The optics (microlens for X-ray fluorescence spectroscopy, IfG) consisted of an array of approx. 3 × 10⁵ capillaries with an intercapillary spacing of ~ 1.2 μm 26. It had an exit working distance f ≈ 2.5 mm and an exit aperture of 1.1 mm and produced a focal spot with a FWHM of Δx ≈ 11 μm. The total number of photons per second in the spot was ~ 3 × 10⁸. X-rays were recorded using a cooled sCMOS 2k × 2k pixel camera (Photonic Science) that was lens coupled to a Gadox (16 mg/cm²) scintillator with dimensions of 82 × 82 mm². The effective pixel size was 42 μm, but the lens-limited resolution of the system was at the level of 150 μm. Note that for large scintillators, lens coupling is inefficient, and the sensitivity can be drastically improved by using fibre-optic coupling. The scintillator was placed at a distance of D ≈ 40 cm from the optic. The optic and the test objects were placed on piezo XYZ stages (MX35 and MS30, Mechonics). For the grid, we used a gold transmission electron microscopy 2000 mesh grid (pitch of 12.5 μm, bar width of 5 μm) with a thickness of 4 μm. The grid was placed on a holder that could be moved by means of three stepper-motor-driven translation stages (Standa, 8MT173).
The grid was placed at a distance of g ≈ 0.5 mm from the lens output surface (the minimum distance allowed by the presence of the beryllium window of the optic). The pinhole used in the experiments had a diameter of 1 μm and was made of Mo absorbing material. Motion control, data acquisition and data analysis were performed using home-developed MATLAB procedures. Data analysis. The grid-enhanced XCAMPO data were decoded based on equation (1). For a single exposure, decoding was performed by means of the convolution theorem: the reconstructed image Û(ν_x, ν_y) in (ν_x, ν_y) Fourier space was obtained by dividing the Fourier transform of the recorded grid image by that of the coding pattern within a masked region of reciprocal space. Here, the hat symbol denotes the Fourier transform; I and I_0 are images of the grid recorded with and without the object present, respectively; I_p is an image of the grid recorded with the pinhole placed in the focal spot, which approximates the coding aperture S; and Π(ν_x, ν_y) is a mask that limits the effects of the decoding procedure to sharp peaks in Fourier space (cf. Fig. 1(e)). The real-space reconstruction U was obtained from the reciprocal-space image Û via the inverse fast Fourier transform. To increase the FOV, the sample was scanned in the lateral xy plane in steps of Δ, which were smaller than the lateral size of the focal spot, as in ptychography 43. The final image V was composed of partially overlapping images U according to V(x, y) = Σ_{i,j} U_ij(x′, y′) G(x′, y′), where x′ = x − iΔ, y′ = y − jΔ, x and y are the non-magnified real-space coordinates, and i and j enumerate the scan positions. G is a Gaussian function with approximately the same shape as the focal spot. It was used to suppress the reconstruction of spurious signals in the region outside the focal spot.
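A compact numerical sketch of this decoding and stitching procedure is given below; it is a simplified illustration rather than the home-developed MATLAB code. The Wiener-style regularization, the construction of the peak mask, and the assumption that each single-exposure reconstruction is already registered to the common frame are all assumptions of the sketch; the geometry check at the end uses only the nominal set-up values quoted above (f ≈ 2.5 mm, D ≈ 40 cm, 150 μm detector resolution).

```python
# Simplified sketch of grid-enhanced XCAMPO decoding (single exposure) and of
# Gaussian-weighted stitching of overlapping reconstructions. Regularization,
# mask construction and registration are illustrative assumptions.
import numpy as np

def decode_single(img, img_pinhole, peak_mask, eps=1e-3):
    """Deconvolve one coded image by the pinhole-derived coding pattern S ~ I_p."""
    I_hat = np.fft.fft2(img)
    S_hat = np.fft.fft2(img_pinhole)
    # Masked, regularized Fourier-domain division (convolution theorem)
    U_hat = peak_mask * I_hat * np.conj(S_hat) / (np.abs(S_hat) ** 2 + eps)
    return np.real(np.fft.ifft2(U_hat))   # ~ object transmission x focal spot

def stitch(reconstructions, scan_indices, shape, step_um, px_um,
           spot_fwhm_um=11.0):
    """V(x, y) = sum_ij U_ij * G, with G a Gaussian matching the focal spot.

    Assumes every U_ij has already been mapped onto the common frame `shape`.
    """
    yy, xx = np.indices(shape, dtype=float)
    sigma_px = spot_fwhm_um / 2.355 / px_um
    V = np.zeros(shape)
    for U, (i, j) in zip(reconstructions, scan_indices):
        cx, cy = i * step_um / px_um, j * step_um / px_um
        G = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2.0 * sigma_px ** 2))
        V += U * G
    return V

# Rough geometry check with the nominal values quoted in the Methods:
f_mm, D_mm, detector_res_um = 2.5, 400.0, 150.0
object_magnification = 1.0 - (f_mm - D_mm) / f_mm   # (1 - M) = D / f = 160
print(detector_res_um / object_magnification)       # ~0.94 um at the object plane
```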
5,401.2
2017-03-21T00:00:00.000
[ "Materials Science", "Physics" ]
Are News Effects Necessarily Asymmetric? Evidence from Bangladesh Stock Market The primary objective of this paper is to empirically examine the nature and statistical significance of the news effect on conditional volatility of unpredictable components of stock returns. Daily stock return data of 12 local and multinational companies on Dhaka Stock Exchange Ltd., Bangladesh, for the period 1990 to 2011 were used in this study. The likelihood of asymmetric effects of news on conditional volatility was tested using a set of diagnostics under the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) framework. The results fail to reject the null hypothesis of symmetric effects, thereby suggesting that the conditional volatility of unpredictable components of stock returns is affected equally by positive and negative news. The robustness of the results was further checked by using three widely used asymmetric models, namely exponential GARCH (EGARCH), Glosten, Jagannathan & Runkle (GJR)-GARCH, and a partially non-parametric Autoregressive Conditional Heteroscedastic (PNP-ARCH) models. Yet again, the results do not provide any evidence of significant asymmetric effects in the volatility process. In addition, the descriptive results confirm the stylized facts of unpredictable return series such as non-normal distribution, time variant conditional volatility, and persistence in return volatility. Collectively these findings, perhaps, indicate the adequacy of the GARCH (1,1) model in representing the data generating process. A number of regulatory and behavioral factors that are anticipated to be accountable for the absence of asymmetric news effects are underlined. Finally, some policy implications of the results and possible extensions of the present paper are also conveyed. JEL codes: G10, G12, G14 Introduction With the growing interest in financial markets worldwide, empirical investigation of the manifestation and statistical significance of the effects of bad and good news on conditional volatility of stock returns is of practical importance for policy makers and financial market participants. With particular reference to Bangladesh stock market, the importance of such investigation cannot be overlooked as the effective functioning of the stock market is pivotal to the country's economic growth. The inherent link between a well-functioning stock market and the country's economic growth is not only stipulated in the vision and mission statements of the Dhaka Stock Exchange Ltd. (hereafter, DSE), Bangladesh (https://www.dsebd.org/, accessed July 28, 2021) but its statistical significance is also observed by Banerjee et al. (2017). While the asymmetric effects of bad and good news were documented in many studies, disagreement on the results also prevails in the financial market literature (see the review of relevant literature below). The asymmetry is stemming from the assertion, as indicated by Zivot (2009), that negative news generates more volatility in stock returns than that of its equal-sized counterpart. Engle and Ng (1993) noted that this asymmetric phenomenon is popularly labeled as "leverage effect"-a term first coined by Black (1976) and subsequently by Christie (1982). 
The selection of the Bangladesh stock market as a case study is prompted by the country's economic potential, reflected in its highest average GDP growth record (about 6.04%, SD = 1.18) in South Asia during 2000-2020 (International Monetary Fund & World Economic Outlook Database, April 2021), which is projected to be even higher (7.1%) in 2023 by the Asian Development Bank (Bangladesh and ADB, Asian Development Bank, accessed 12 April, 2022). Furthermore, its recognition as a frontier market by the International Finance Corporation (IFC) since 1997 (Gilbert, 2019), its ambition to achieve the status of a higher middle-income country by 2031 (Bangladesh Overview: Development news, research, data, World Bank, accessed April 16, 2022), and the government's concerted efforts to make the country an investment destination for foreigners through regulatory changes and various incentives (Islam, 2014) make the present case study selection compelling. However, to the best of the authors' knowledge, there is a genuine dearth of empirical research in the country on the assessment of news effects on stock return volatility using sophisticated econometric models. A recent study by Arefin and Ahkam (2017) used asymmetric GARCH models to ascertain volatility transmission between commercial banks, non-bank financial institutions and insurance companies of Bangladesh and found asymmetric effects only in the case of non-bank financial institutions. However, the present paper differs from Arefin and Ahkam (2017) in terms of both data range and empirical approach. It considers traded stocks from various industry categories, includes firm-level data with a longer time span, and, more importantly, ensures the robustness of the empirical results by employing a set of diagnostic tools and sophisticated econometric models following Engle and Ng (1993). Furthermore, the investigation of the asymmetric relationship between returns and implied volatility for DSE has been lacking from global studies on the stock markets of emerging economies. For instance, DSE was not considered in the comparative global study of developed and emerging markets by Bekiros et al. (2017). In comparison with other studies of emerging markets (see the review of literature section below), the novelty of the present paper lies in the use of firm-level data rather than an aggregated stock index. Furthermore, in evaluating the extent and nature of news effects in emerging markets, the present paper complements the existing asymmetric volatility literature of emerging markets by adopting an alternative non-parametric modeling procedure proposed by Engle and Ng (1993) to estimate the news impact function directly from the data. The present paper has relevance from academic, policy and industry perspectives. From an academic perspective, the examination of the nature of news effects on stock return volatility involving DSE would not only fill the existing knowledge gap in the context of financial market research in the country but would also complement the existing regional and global literature, specifically that on emerging markets. The results from the empirical approach adopted in this paper would also provide a useful hint for the choice of appropriate techniques in carrying out further research on return volatility for DSE. From a policy perspective, the present paper is useful for the regulatory authority, as the sensitivity of DSE to global shocks was observed by Kumar and Dhankar (2009).
Additionally, the provision of an open investment opportunity for foreign investors (https://www.dsebd.org/, accessed July 28, 2021) necessitates frequent scrutiny of return volatility for taking appropriate actions to stabilize the market. This study also contributes to the practical realm as it provides both local and foreign traders with knowledge of the return-volatility relationship and, hence, enables them to take appropriate portfolio decisions to minimize risks. In view of the above discussion and with particular reference to the Bangladesh capital market, the present paper examines the following two null hypotheses: Hypothesis 1: The individual return series and their unpredictable component are normally distributed with time-invariant conditional variance. Hypothesis 2: The volatility of unpredictable stock returns is affected equally by positive and negative news (i.e., symmetry). Brief Market Profile The Dhaka Stock Exchange Ltd. (DSE), formerly known as the East Pakistan Stock Exchange Association Ltd. in 1954, the East Pakistan Stock Exchange Ltd. in 1962, and the Dacca Stock Exchange Ltd. in 1964, has been guided by three main undertakings, namely the adoption of technological innovation, contribution to economic growth, and introduction of corporate governance. As of July 28, 2021, there were 610 companies listed on the DSE. In 2020, the market operated for 208 days, and the highest market capitalization and total turnover were 3,395,510.63 million Taka. With reference to the structure of market capitalization, it is observed that equities represent about 85% of the total, with government treasury bonds accounting for about 14%, while mutual funds and corporate bonds play a very negligible role. Chowdhury and Rahman (2004), Basher et al. (2007), and Hassan and Chowdhury (2008) provided detailed synopses of the Bangladesh capital market and the DSE. Further details on DSE can be obtained from the DSE website (http://www.dsebd.org/). Review of Relevant Literature A number of studies, involving both developed and emerging financial markets, find evidence of asymmetric effects of news on return volatility. Examples of such studies include Pagan and Schwert (1990) and Dzieliński et al. (2018) for the US market, Engle and Ng (1993) for the Japanese stock market, Rabemananjara and Zakoian (1993) for the French stock market, Henry (1998) for the Hong Kong stock market, Koutmos (1999) for six emerging Asian stock markets, namely Korea, Malaysia, the Philippines, Singapore, Taiwan, and Thailand, Yeh and Lee (2000) for the Taiwan and Hong Kong stock markets of greater China, Hou (2013) for the Chinese stock markets, Apergis and Eleptheriou (2001) for the Athens stock market, Blasco et al. (2002) for the Spanish stock market, Alberg et al. (2008) for the Tel Aviv stock exchange, Charles (2010) for the French, German, US and Japanese stock markets, Tanha and Dempsey (2015) for the Australian stock market, and Banumathy and Azhagaiah (2015) for the Indian stock market. In examining the presence of volatility feedback and leverage effects for a set of 16 developed and developing country markets for the period January 2001 to October 2014, Jin (2017) claimed that the inverse association between stock returns and volatility is primarily caused by the leverage effect and that the association was found to be intensified during market unrest. Nevertheless, non-uniformity in the extent of asymmetric effects, and in some cases their absence, is also evident in the empirical literature.
For instance, Brooks (2007) found that the extent of asymmetry in conditional volatility vary across the set of emerging markets, with the highest average (3.817) for the Latin American markets followed by Asia (2.179), Europe (1.806), and the Middle Eastern and African markets (1.197) and with no apparent link to market size measured by market capitalization. In ascertaining volatility transmission between commercial banks, non-bank financial institutions and insurance companies in Bangladesh Arefin and Ahkam (2017) observed a lack of asymmetric effects in case of commercial banks and insurance companies. In a global case study involving 20 developed and emerging markets categorized into advanced, Asian, Latin American and Europe and South African, Bekiros et al. (2017) noted the presence of asymmetric phenomenon (excluding Australia and China) and it varied in magnitudes across regions. Bekiros et al. (2017) also observed that market reaction to negative returns differed within Asian markets and they attributed such finding to cultural dissimilarities and a lack of institutional uniformity. In examining asymmetry in conditional volatility of unexpected stock returns of the greater China stock markets, Yeh and Lee (2000) found no evidence of asymmetry for the Shanghai and Shenzhen markets. Similar findings are also reported by Bahadur (2008) for Nepal, Mun et al. (2008) for Kuala Lumpur Composite Index, Alagidede and Panagiotidis (2009) for Morocco and Zimbabwe, Alagidede (2011) for Egypt and Tunisia, Jayasuriya et al. (2009) for Brazil, Chile, Indonesia, Pakistan, and Taiwan, Charles (2010) for UK, and Oskooe and Shamsavari (2011) for Iranian stocks. In analyzing volatility of Malaysian stock market Lim and Sek (2013) found that symmetric GARCH model performed better than their asymmetric counterpart models in normal economic period, while the opposite held true for crisis periods. In examining the impact of macroeconomic news (comprising inflation, unemployment and trade deficit etc.) on exchange rates (i.e., dollar-Mark and dollar-Yen) returns-volatility relation using a high-frequency data Pearce and Solakoglu (2007) found no persuasive evidence of asymmetric news effects. Various reasons have been cited in the literature to interpret the asymmetry phenomenon. Poterba and Summers (1986), Campbell and Hentschel (1992), and Awartani and Corradi (2005) provide indication of "volatility feedback" that asserts a positive association between an anticipated volatility and equity return. Wu (2001) found that the leverage effect and volatility feedback effect both contributed to asymmetric volatility. Bekiros et al. (2017) noted "behavioral heterogeneity" of economic agents (such as irrationality, extrapolation bias etc.), Lo and MacKinlay (1988) speak of non-synchronous trading, firm size effects are noted by Cheung and Ng (1992), and market size (measured by average market capitalization) by Brooks (2007). Based on the analysis of daily stock return from 49 countries, Talpsepp and Rieger (2010) observed that the extent of economic development, the ratio of market capitalization to GDP and investors' sentiment contributed to volatility asymmetry. In case of Bangladesh stock market Chowdhury et al. (2014) found evidence of significant positive influence of investors' sentiment on portfolio returns. 
Andrei and Hasler (2015) found a positive association between investors' attention to news and stock-return volatility and mentioned that low attention to news generates low return volatility, as low attention results in slow learning that leads to slow integration of new information into market prices. Dzieliński et al. (2018) found a positive association between asymmetry in stock return volatility and investors' asymmetric attention to good and bad news and heterogeneity of opinion. In analyzing reactions to local and global macroeconomic policy announcements (comprising GDP, trade balance, inflation, unemployment, etc.) of mature (i.e., US and German) and emerging bond markets (such as Brazil, Mexico, Russia, and Turkey), Nowak et al. (2011) found close links between emerging and mature bond markets and evidence of asymmetric effects of macroeconomic announcements, both local and global, on volatility. They also pointed out that emerging markets are slow to integrate new information compared to mature markets. McKenzie (2002) documented asymmetry for exchange rates caused by the intervention of the Reserve Bank of Australia in the foreign exchange market. In addition, investors' overconfidence (Dumas et al., 2009) and their under- and over-reaction to good and bad news (Barberis et al., 1998) could also be offered as reasons for asymmetric volatility. For example, Boubaker et al. (2015) found evidence of short-term overreaction to specific events such as regional tensions and terror attacks in the case of the Egyptian stock market. The influence of investors' sentiment on the returns-volatility relation is also observed by Uygur and Taş (2014) and Bahloul and Bouri (2016). Data Data on the daily average closing price of 12 selected, relatively actively traded stocks, with trading records ranging from 172 to 245 days per annum, from the DSE covering the period January 1990 to December 2011 are used in this study. It should be noted that random selection of stocks was not possible, as the majority of stocks were not actively traded and hence would fail to generate the high-frequency data required for modeling volatility. The selected stocks comprise a varied spectrum of companies (local as well as multinational) dealing with automobiles and related parts, electrical and other industrial products, pharmaceutical products, export-import industrial products, insurance, mutual funds, medical and industrial products supply, and chemical products. The list of selected stocks (with the serial number) is as follows: Aftab Automobiles Limited (S1), Ambee Pharmaceuticals Limited (S2), Atlas Bangladesh Limited (S3), Bangladesh Export Import Company Limited (S4), Bangladesh General Insurance Company Limited (S5), Bata Shoe Company (Bangladesh) Ltd. (S6), Boc Bangladesh Limited (S7), Eastern Cables Limited (S8), Kohinoor Chemical Company (Bangladesh) Ltd. (S9), Quasem Drycells Limited (S10), Sixth ICB Mutual Fund (S11), and Usmania Glass Sheet Factory Limited (S12). The selection of stocks is constrained by the need for high-frequency data for GARCH modeling. With that in mind, the modeling exercise of the present paper set a minimum of 100 trading records per annum for stocks for the entire study period. The initial scrutiny of the data set revealed that in the first year (1990) only 26 stocks out of 109 had more than 100 observations.
Although the number of stocks in the collected data set increased over time (277 stocks in 2011) and the stocks demonstrated increasing trading activity (all but 14 having 100 or more observations in 2011), any selected stock that did not have at least 100 trading records per annum for the entire data period could not be included in the selection. These requirements are fulfilled by the 12 stocks used in the study. The sample period of 1990 to 2011 covers various economic episodes witnessed by DSE. In addition to normal period noted by existing studies (e.g., Bose & Rahman, 2015) there were two drastic price falls in 1996 and 2011 (Siddikee & Begum, 2016) and the global financial crisis in 2008 (Rastogi, 2014), covering some very significant events in the stock market. The sample period also spans over four consecutive 5-year plans in Bangladesh. It is believed that the coverage of various significant economic episodes in existing data provides a large enough sample period to allow the unfolding of any asymmetric behavior which is the subject matter of the study. However, the investigation of any potential effect of an extension of the sample period is left for future research. The return series (R t ) for each stock "i" is calculated using "average closing price" as follows: R it = 100*ln(Ρ it /Ρ it−1 ). Further characteristics of the calculated return series are presented in Table 1. Empirical Model While the ARCH model by Engle (1982) and Engle and Bollerslev (1986), and the GARCH model by Bollerslev (1986) are successful in capturing some of the empirical regularities (e.g., volatility clustering and thick-tailed characteristics) of stock returns (Bera & Higgins, 1993;Bollerslev et al., 1992) the failure of such models in capturing asymmetric feature of volatility in stock return is well documented (Engle & Ng, 1993;Glosten et al., 1993;Nelson, 1991;Pagan & Schwert, 1990). To rectify such failure various refinements are proposed that include exponential GARCH (EGARCH) by Nelson (1991), threshold GARCH (TGARCH) by Rabemananjara and Zakoian (1993) and Zakoian (1994), GJR-GARCH proposed by Glosten et al. (1993), and a partially non-parametric ARCH (PNP-ARCH) models suggested by Engle and Ng (1993), among others. In these models the impact of asymmetric or leverage effects are revealed through the use of so-called news impact curve. The conditions and specifics of estimating news impact curve under the above-mentioned asymmetric GARCH and PNP-ARCH models can be found in Engle and Ng (1993) and Zivot (2009). It should be mentioned that the performance superiority-measured by the predictive power-cannot be assigned to one particular asymmetric GARCH model as it is case specific. While superior performance of the GJR model of Glosten et al. (1993) was noted by Engle and Ng (1993) and Brailsford and Faff (1996), such superiority of the EGARCH model was observed by Pagan and Schwert (1990), Alberg et al. (2008) and Tanha and Dempsey (2015). Banumathy and Azhagaiah (2015) found TGARCH (1,1) to be the best fitted model to capture asymmetry. Apergis and Eleptheriou (2001) and Henry (1998) found support for quadratic GARCH (QGARCH) and generalized quadratic ARCH (GQARCH) models respectively. To this end, the empirical analysis adopted in this paper involves the following steps. First, a range of descriptive statistics of the return series of the selected stock is computed to examine the presence of ARCH effects. 
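As a minimal illustration of this first step, the sketch below computes the percentage log returns R_it = 100·ln(P_it/P_it−1) from a series of daily closing prices and reports a few of the descriptive statistics discussed later (skewness, kurtosis, and a normality check). The column name and the use of the Jarque-Bera statistic are illustrative assumptions, not the exact routines used in the study.

```python
# Sketch of the first step of the analysis: percentage log returns and
# descriptive statistics. The column name ("close") and the choice of the
# Jarque-Bera statistic as a normality check are illustrative assumptions.
import numpy as np
import pandas as pd
from scipy import stats

def log_returns(prices: pd.Series) -> pd.Series:
    """R_t = 100 * ln(P_t / P_{t-1}), dropping the undefined first value."""
    return (100.0 * np.log(prices / prices.shift(1))).dropna()

def describe_returns(prices: pd.Series) -> dict:
    r = log_returns(prices)
    jb_stat, jb_pvalue = stats.jarque_bera(r)
    return {
        "n_obs": len(r),
        "mean": r.mean(),
        "std": r.std(),
        "skewness": stats.skew(r),
        "kurtosis": stats.kurtosis(r, fisher=False),  # normal value is 3
        "jarque_bera_p": jb_pvalue,                   # small p => non-normal
    }

# Example usage on one stock's daily average closing prices:
# prices = pd.read_csv("s1_aftab.csv", index_col=0, parse_dates=True)["close"]
# print(describe_returns(prices))
```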
Second, the unpredictable component of the stock return series is derived by following the day-of-the-week effects and autocorrelation adjustment procedures adopted by Engle and Ng (1993). The day-of-the-week effects adjustment involves the derivation of first-stage residuals by regressing each return series on a constant and 5 day-of-the-week dummies. This step is adopted because, apart from their non-normal characteristics, daily stock returns may be influenced by day-of-the-week effects, as documented in several studies (see, for instance, Charles, 2010; Yunita & Martain, 2012; Zhang et al., 2017, to name a few) comprising both developed and emerging markets. With particular reference to DSE, Bose and Rahman (2015) investigated the day-of-the-week effects and found some evidence of such effects. On the other hand, the autocorrelation adjustment involves the removal of any potential autocorrelation by regressing the first-stage residual series on a constant and on a sufficient number of lags of the dependent variable. The residual series derived from the autocorrelation adjustment process is termed the unpredictable return series (denoted y_t) and is used to investigate the existence and significance of asymmetric effects on conditional volatility by employing a set of diagnostics comprising the sign- and size-bias tests proposed by Engle and Ng (1993) under the basic GARCH (1,1) framework (see Bollerslev (1986) for the specification, necessary restrictions and conditions to ensure positive variance and stationarity in such a formulation). Finally, the results obtained from the sign- and size-bias tests are further checked for their robustness by using three widely used models, namely the EGARCH, GJR, and PNP-ARCH models, which cater for asymmetric effects of news. Diagnostics for Asymmetry: The Sign and Size Bias Tests The sign and size bias tests proposed by Engle and Ng (1993) for the unpredictable stock return series (y_t), obtained after the day-of-the-week effects and autocorrelation adjustment procedures mentioned above, involve the following regression model, which can be estimated individually and/or jointly (see Laurent, 2009 for more details on the estimation procedure): ε_t² = α_0 + α_1 z_{t−1} + ϑ_t (3), where ε_t is the estimated residual under the GARCH (1,1) data-generating process represented by equations (1) and (2), and the test variable z_{t−1} is set to S⁻_{t−1}, S⁻_{t−1}ε_{t−1}, or S⁺_{t−1}ε_{t−1} for the sign bias test (SBT), the negative size bias test (NSBT), and the positive size bias test (PSBT), respectively. The variable S⁻_t is a binary variable that equals unity when ε_t < 0, and 0 otherwise. The sum of the variables S⁺_t and S⁻_t equals unity. The symbol ϑ_t represents the white noise error. The statistical significance of α_1 indicates the presence of asymmetric effects in the conditional variance. For a joint test, z_{t−1} includes all three effects in the regression model (3), and the statistical significance of the three estimated coefficients is examined jointly using a Lagrange Multiplier (LM) test that follows an asymptotic χ² distribution with three degrees of freedom. Further details on the diagnostic tests can be found in Engle and Ng (1993).
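A minimal sketch of these diagnostics, and of the asymmetric-model robustness check described next, is given below using the Python arch and statsmodels packages. It assumes that the day-of-the-week and autocorrelation adjustments have already produced the unpredictable return series y, and it illustrates the testing logic only; it is not the G@rch/EViews code used in the study, and regressing squared raw residuals (rather than standardized residuals) is a simplifying assumption.

```python
# Sketch of the Engle-Ng sign and size bias diagnostics and of the
# asymmetric-model robustness check. Assumes `y` is the unpredictable
# return series (a pandas Series); this is not the G@rch/EViews code
# actually used in the study.
import pandas as pd
import statsmodels.api as sm
from arch import arch_model

def sign_size_bias_tests(y: pd.Series) -> pd.DataFrame:
    """Individual Engle-Ng sign/size bias regressions on GARCH(1,1) residuals."""
    fit = arch_model(y, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
    eps = fit.resid
    s_neg = (eps < 0).astype(float)            # S_t^- = 1 if eps_t < 0
    s_pos = 1.0 - s_neg                        # S_t^+ = 1 - S_t^-
    tests = {
        "sign bias": s_neg.shift(1),
        "negative size bias": (s_neg * eps).shift(1),
        "positive size bias": (s_pos * eps).shift(1),
    }
    rows = []
    for name, z in tests.items():
        data = pd.concat([eps ** 2, z], axis=1, keys=["eps2", "z"]).dropna()
        ols = sm.OLS(data["eps2"], sm.add_constant(data["z"])).fit()
        rows.append({"test": name, "coef": ols.params["z"],
                     "p_value": ols.pvalues["z"]})
    return pd.DataFrame(rows)

def asymmetry_parameter(y: pd.Series, vol: str) -> float:
    """Fit EGARCH(1,1) or GJR-GARCH(1,1) and return the asymmetry term gamma."""
    fit = arch_model(y, mean="Zero", vol=vol, p=1, o=1, q=1).fit(disp="off")
    return fit.params["gamma[1]"]

# print(sign_size_bias_tests(y))
# print(asymmetry_parameter(y, "EGARCH"), asymmetry_parameter(y, "GARCH"))  # GJR via o=1
```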
As the sign and size bias tests are based on the residuals derived from the symmetric GARCH (1,1) process represented by equation (2), the robustness of the test results was further checked by using the following conditional variance specifications of the EGARCH (1,1), GJR (1,1) and PNP-ARCH models, represented by equations (4), (5), and (6), respectively. Following Bera and Higgins (1993), the conditional variance function of the EGARCH (1,1) model of Nelson (1991) can be expressed as ln(h_t) = μ + β ln(h_{t−1}) + α[|ε_{t−1}|/√h_{t−1} − E(|ε_{t−1}|/√h_{t−1})] + γ(ε_{t−1}/√h_{t−1}) (4), where h_t is the conditional variance, ε_t represents the unpredictable return series at time t, and μ, α, β, and γ are parameters to be estimated. For equation (4) the non-negative conditional variance is ensured by the logarithmic transformation (Pagan & Schwert, 1990). The absence of asymmetric effects in equation (4) was tested under the null hypothesis γ = 0. Zivot (2009) specified that a significant negative (positive) value of the parameter γ indicates that bad (good) news generates higher volatility than its counterpart of equal magnitude. Further characteristics of the EGARCH model can be found in Bera and Higgins (1993). The conditional variance function of the basic GJR (1,1) model can be expressed as h_t = ω + α ε²_{t−1} + γ S⁻_{t−1} ε²_{t−1} + β h_{t−1} (5) (Laurent, 2009). In the GJR (1,1) formulation represented by equation (5), the dichotomous variable S⁻_{t−1} equals "1" when ε_{t−1} < 0, and "0" otherwise. Similar to the EGARCH (1,1) specification, the absence of asymmetric effects in the model was tested under the null hypothesis γ = 0. Ling and McAleer (2002) and Rodriguez and Ruiz (2012) mentioned that to ensure positivity and stationarity of the conditional variance represented by equation (5), all estimated parameters should be positive and the condition "α + β + 0.5γ < 1" should be satisfied. Therefore, the inequality restriction was imposed during the estimation process of the GARCH (1,1) and GJR (1,1) models by restricting the upper bound of the parameter values. For all series, the upper bounds of the parameter values were decided based on the differences in parameter values observed from an initial experimentation with the unrestricted model. Following Engle and Ng (1993), the conditional variance function of the PNP-ARCH model is represented by equation (6), h_t = ω + β h_{t−1} + Σ_i θ_i P_it(ε_{t−1} − iσ) + Σ_i δ_i N_it(ε_{t−1} + iσ) (6), where h_t and σ are the conditional variance and the unconditional standard deviation, respectively, ε_{t−1} is the unpredictable return at time (t−1), and β, θ_i, and δ_i are parameters. The binary variables P_it and N_it are defined as P_it = 1 if ε_{t−1} > iσ, and "0" otherwise, and N_it = 1 if ε_{t−1} < −iσ, and "0" otherwise. It should be noted that threshold effects related to both the sign and size of shocks are incorporated in the GARCH family, such as the TGARCH model of Zakoian (1994), which is similar to the GJR model used in the present study. More importantly, the split-sample idea of the threshold model is captured through the sign-bias tests based on the sign and size effects of shocks to the innovation series. The GJR and PNP-ARCH models used in the present paper already contain the threshold property. The issue of endogeneity arises in regression analysis when contemporaneous covariates are included in the analysis. This issue does not arise in the present context, as all covariates in the conditional variance equation are exogenous in nature because they represent lagged values. The maximum likelihood estimates of the variance equations, along with the corresponding standard errors, were obtained by using the Berndt, Hall, Hall and Hausman (BHHH) and/or Marquardt algorithms in the G@rch and EViews (version 10) programs. Results and Discussion Table 1 presents a range of descriptive statistics for the daily return series of the selected stocks. All series display evidence of fat tails, as the estimate of the moment coefficient of kurtosis is considerably higher than the normal value of 3. For 8 out of 12 cases, there is evidence of negative skewness, while the remaining four demonstrate positive skewness.
For one case (S6), the estimate of the moment coefficient of skewness is close to the normal value of zero, but this series has the highest estimate of the moment coefficient of kurtosis. The Lagrange Multiplier (LM) test results reject the null hypothesis of normality for all cases at the 1% level of significance. This implies that the distribution of all daily return series exhibits a clear departure from the Gaussian distribution. The finding not only corroborates the empirical evidence provided recently by Bose and Rahman (2015) in the case of DSE, but is also consistent with the phenomenon commonly observed in global financial (Bera & Higgins, 1993; Bose, 1993; Jin, 2017, to name a few) and primary commodity markets (Baillie & Bollerslev, 1989; Bollerslev et al., 1992; Bose, 2004). Table 2 presents the results of the day-of-the-week effects adjustment process. The t-test results indicate that Sunday and Monday were significant (at the 5% level) for four (S2, S3, S8, and S10) and five (S1, S3, S5, S10, S12) cases, respectively. For two cases (S3 and S10) both Sunday and Monday were significant, and for only one stock (S3) were Sunday, Monday, and Tuesday all significant at the 5% level. The results of the F-test (a joint test) reveal the presence of day-of-the-week effects for five stocks (S1, S3, S5, S8, and S10). Therefore, the removal of the day-of-the-week effects, as proposed by Engle and Ng (1993), seems appropriate in deriving the unpredictable component of the stock return series. While the presence of day-of-the-week effects in the case of DSE corroborates the findings of other financial market studies comprising both developed and emerging economies (Engle & Ng, 1993; Yunita & Martain, 2012; Zhang et al., 2017), the pattern of the day-of-the-week effects is dissimilar to that of Bose and Rahman (2015). Also, the results revealed that, for the significant cases, the coefficient estimates were negative and greater in absolute value than those of the insignificant cases. The results of the autocorrelation adjustment are presented in Table 3. Based on the Q(k)-test (the Ljung-Box-Pierce test statistic, which has an asymptotic chi-squared distribution with k degrees of freedom) of lag order up to 12, as reported in Table 3, the null hypothesis of no autocorrelation is not rejected for any of the cases at the 5% level. This shows that the autocorrelation adjustment process did successfully remove significant autocorrelation from the unpredictable component of the stock return series. However, the Q²-test up to 12 lags rejects the null hypothesis of conditional homoscedasticity for 9 out of 12 cases, indicating the presence of ARCH effects in those nine cases (Table 3). This signals the time-varying nature of the conditional variance of the daily returns of these nine stocks, which causes volatility clustering as described by Engle and Bollerslev (1986). While the Q-test detected the presence of ARCH effects for nine cases, a high value of kurtosis was noted for the remaining cases (S1, S5, and S8). Baillie and Bollerslev (1989) indicated that a high value of kurtosis is likely to reduce the power of the Q-test used for detecting the presence of ARCH effects. This concern led to the application of the GARCH (1,1) model represented by equations (1) and (2) to examine the presence of ARCH effects in the unpredictable return series. The results of this exercise are reported in Table 4. Table 4 presents the descriptive statistics of the individual unpredictable return series.
The estimates of the third moment coefficient indicate asymmetry in the unconditional distribution of the individual unpredictable return series. The estimates of the fourth moment coefficient are considerably greater than 3, indicating that the unconditional distribution of the individual unpredictable return series is fat tailed. Jointly, these findings suggest non-normality in these series. In all cases, the GARCH (1,1) model was found to be parsimonious, as its parameter estimates satisfy the essential parameter restrictions both individually and jointly, as discussed earlier in section 4. From the results, significant ARCH and GARCH effects were evident in 11 out of 12 cases. In addition, the persistence of volatility measured by the magnitude of (α + β) ranges from 0.831 to 0.977, indicating the extent of volatility persistence. In sum, the findings of non-normality and time-variant characteristics of the return series reject the null hypothesis 1 and provide support for the application of the GARCH modeling approach to examine the behavior of the unpredictable return series in the case of DSE. Table 4 also presents the test results for asymmetry comprising the sign bias, negative size bias, positive size bias, and joint tests. They are found to be statistically insignificant for 11 out of 12 cases, thereby indicating that the null hypothesis 2 of "symmetry" cannot be rejected at the 5% level. Tables 5 and 6 present the empirical results of the EGARCH and GJR-GARCH models and of the PNP-ARCH model, respectively. It is interesting to note that the results of the EGARCH and GJR models (Table 5) are consistent with the asymmetry test results based on the GARCH (1,1) model for all cases, thereby failing to reject the null hypothesis 2 of "symmetry." In addition, the statistical insignificance of a majority of parameter estimates in the PNP-ARCH model (Table 6) signals the same conclusion of no asymmetry. This finding is not unusual, as seen earlier in the review section. Kumar and Dhankar (2009) observed a lack of asymmetric volatility in the case of emerging markets in South Asia such as India, Bangladesh, and Sri Lanka. Bahadur (2008) makes a similar observation in the case of Nepal. With reference to the asymmetric effects, the consistency in results from the models represented by equations (4), (5), and (6) seems to give credence to the results from the sign and size bias tests. The possible explanations for the absence of asymmetry in return volatility discussed below can be broadly classified into two categories: regulatory and behavioral. Understandably, these two categories are interconnected, as traders' behavior is influenced by regulatory measures and vice versa. Regulatory interventions are exercised in DSE through measures such as the circuit-breaker system, the prohibition of short sales under the short sale regulation of 2006, and communication with stakeholders through an online news bulletin containing verified price-sensitive corporate news and rumors, together with daily scrutiny of press articles under the surveillance functions of price and position monitoring (https://www.dsebd.org/, accessed July 28, 2021; Basher et al., 2007; Sochi & Swidler, 2018). These measures may have contributed to curbing the asymmetric effects of news. With reference to DSE, Basher et al. (2007) found that the imposition of the circuit-breaker measure resulted in a significant decrease in daily equity return volatility. Oskooe and Shamsavari (2011) alluded to the price limit as one of the reasons for the observed lack of asymmetric effects in the Iranian case.
A survey on various aspects of circuit-breakers can be found in Sifat and Mohamad (2020). With reference to DSE, the prohibition of short-selling, as argued by Sochi and Swidler (2018), hinders the price discovery process, and therefore, market efficiency. The prohibition also has the potential to influence overconfident investors and traders and negatively affects trading volume as envisaged by the over-confidence literature (Daniel & Hirshleifer, 2015;Glaser & Weber, 2007). Glaser and Weber (2007) found evidence in favor of this prediction for market participants who rank themselves above others. In case of DSE, the empirical finding of the presence of informational inefficiency in Sochi and Swidler (2018) and Mobarek et al. (2008), perhaps, signals the impact of such restrictions on traders' confidence. See Reed (2013) for more details on short-selling and its role in financial markets. If the news bulletin by the regulatory agency maintains the balance between the positive and negative characters of the news, then it could facilitate to level the effects of news (Dzielinski, 2012). Furthermore, irregularity in the release of such news bulletin can have a dampening effect on volatility response of traders. In a Russian case study, Nowak et al. (2011) observed investors' slow reaction to domestic news caused by the irregularity in news release that resulted in the weakest volatility response from economic agents. On the other hand, the attitude of economic agents toward news and their assessment of market environment they face could be another factor. The absence of asymmetric effects, perhaps, indicates that the DSE market environment is either treated as unambiguous by market participants or their decision-making is uninfluenced by ambiguity. Williams (2015) claimed that in an ambiguous market environment, investors' asymmetric reaction to good and bad news is caused by their pessimistic approach to decision-making. However, in the absence of such ambiguity, Williams (2015) found that good news and bad news were treated equally by investors as signaled by the statistically indifferent magnitude of estimated coefficients .6503 and .6508, respectively. Other factors such as traders' reluctancy to take short position in the market and their under-reaction to bad news may also have contributed to curb the extent of volatility response to negative news. With reference to DSE, Joarder et al. (2014) pointed out that investors are reluctant to take short positions due to knowledge deficiency or behavioral biases generated by the manifestation of information inefficiency in the market. Delay in information absorption into market and behavioral bias of traders were also mentioned as probable causes of informational inefficiency in Mobarek et al. (2008). Furthermore, Maher and Parikh (2011) found evidence of under-reaction to negative shocks in all periods excluding post-crisis period in case of medium and small size indices in Indian stock market. The research on volume-volatility relations examines news effect on conditional variance by incorporating trading volume as a proxy for the arrival of news into market (see e.g., Carroll & Kearney, 2012;Girard & Biswas, 2007;Lamoureux & Lastrapes, 1990, to name a few). This line of research assumes, as pointed out by Andersen (1996), that there exists a high degree of positive contemporaneous correlation between trading volume and return volatility. This implies that news causes change in trading volume (Scott et al., 2003). 
However, the empirical evidence is not universal. With regard to DSE, Bose and Rahman (2015) found no significant influence of trading volume (both current and lagged) to reduce ARCH and GARCH effects in daily returns. Furthermore, according to Hong and Stein (2007) and Banerjee and Kremer (2010) major disagreements among economic agents on the interpretation of news encourage generating positive link between trading volume and change in returns over time. The failure of trading volume in reducing volatility persistence as noted by Bose and Rahman (2015), perhaps, signals a lack of influence of public information on trading volume due to transactional barriers caused by the imposition of regulatory measures as discussed earlier, or a higher degree of agreements among market participants in the interpretation of the news that arrives in the market, or both. Finally, if asymmetric reactions of traders are linked to the day-of-the-week effects as argued by Chang et al. (1998) and Charles (2010), then the removal of the dayof-the-week effects from the unpredictable stock return series in the case in hand may have dampening effect on asymmetric effects. Policy Implications and Conclusions One of the important policy implications with regard to the absence of asymmetric news effects is that it, perhaps, signals the efficacy of government intervention through regulatory measures as discussed above. Shleifer (2005) indicated the insufficiency of market discipline in effectively controlling disorders and fraudulence behavior in emerging market cases. In case of DSE, symptom of inefficiency in market functioning is observed by Basher et al. (2007) and Joarder et al. (2014). This is not surprising. For DSE, Chowdhury et al. (2014) points out that investors' sentiment is probably influenced by investors' unawareness, unreliability of earnings information, inferior information, lack of expert services etc., rather than market fundamentals. In addition, the size of non-institutional investors in DSE is identified by Chowdhury and Rahman (2004) as another reason for the failure of the stock market to predict macroeconomic volatility. Furthermore, Kumar and Dhankar (2009) observed association between Dhaka stock market with Pakistan stock market and global counterparts, and sensitivity of DSE to economic and non-economic shocks of global origin. Given the empirical evidence of market inefficiency in DSE together with its connection with the regional and global stock markets, government intervention would not only be desirable to correct market functioning but would also help protect DSE from the potential influence of regional and global financial crisis by reducing the negative effects of foreign investors' reaction. The desirability of government intervention for DSE is also resonated in Chowdhury and Masuduzzaman (2010). While a case for government intervention as a means of controlling market anomalies and restoring stability can be made for DSE, the analysis of trade-off associated with such intervention should be pursued to determine efficient regulatory choices. This is because excessive intervention may generate inefficiency in the decision-making process of economic agents involved in the market and consequently upset their confidence. The finding of a lack of asymmetry in returns volatility and its potential reasons identified in the paper should be of practical relevance to decision-makers and market traders. 
If the regulatory interventions described in the paper do help dampen the asymmetric effects of news, decision makers should regularly monitor their potential negative effects on market efficiency and on the behavior of market traders. Xiong and Bharadwaj (2013) suggested that the asymmetric effects of news can also be tackled effectively at the level of individual firms, by adopting a marketing approach that amplifies the positive effect of good news and dampens the negative effect of bad news. This case study has examined the presence of significant asymmetric effects of news on the conditional volatility of stock returns using daily stock return data from the Dhaka Stock Exchange. While the empirical results confirm the stylized facts, such as the non-normal distributional properties of the daily stock return series, a time-varying conditional second moment, and the presence of volatility persistence, they fail to reject the null hypothesis of symmetry in 11 out of 12 cases. This signals, perhaps, that the data-generating process in DSE can be adequately represented by the symmetric GARCH model, a conclusion also echoed in Kumar and Dhankar (2009) with reference to DSE. Although the potential reasons for the absence of asymmetric effects of news are itemized, the present paper does not determine the extent of their influence empirically, as such an undertaking is beyond its scope. In this regard, further empirical research might prove worthwhile to examine the impact of such policy measures on market performance, the price discovery process, and traders' confidence using other econometric models such as quantile regression (see, for instance, Cappiello et al., 2014; Engle & Manganelli, 2004; Mensi et al., 2014) and threshold autoregression (see, e.g., Hansen, 2011; Hossfeld & MacDonald, 2015). Additional research on the behavioral aspects of market participants and the factors influencing their response to news would also be useful. Finally, future work can validate (or otherwise) the empirical findings of the present paper by extending the time period and/or selecting additional stocks.
9,305.8
2022-10-01T00:00:00.000
[ "Economics" ]
Development of Barenblatt’s Scaling Approaches in Solid Mechanics and Nanomechanics The main focus of the paper is on similarity methods in application to solid mechanics and author's personal development of Barenblatt's scaling approaches in solid mechanics and nanomechanics. It is argued that scaling in nanomechanics and solid mechanics should not be restricted to just the equivalence of dimensionless parameters characterizing the problem under consideration. Many of the techniques discussed were introduced by Professor G.I. Barenblatt. Since 1991 the author was incredibly lucky to have many possibilities to discuss various questions related to scaling during personal meetings with G.I. Barenblatt in Moscow, Cambridge, Berkeley and at various international conferences as well as by exchanging letters and electronic mails. Here some results of these discussions are described and various scaling techniques are demonstrated. The Barenblatt- Botvina model of damage accumulation is reformulated as a formal statistical self-similarity of arrays of discrete points and applied to describe discrete contact between uneven layers of multilayer stacks and wear of carbon-based coatings having roughness at nanoscale. Another question under consideration is mathematical fractals and scaling of fractal measures with application to fracture. Finally it is discussed the concept of parametrichomogeneity that based on the use of group of discrete coordinate dilation. The parametric-homogeneous functions include the fractal Weierstrass-Mandelbrot and smooth log-periodic functions. It is argued that the Liesegang rings are an example of a parametric-homogeneous set. INTRODUCTION Scaling methods may be applied wherever there is a need in studying a phenomenon across many scales. The rescaling techniques include dimensional analysis, renormalization groups, intermediate self-similar asymptotics, incomplete similarity, fractals, parametric-homogeneity and other techniques. Many of these techniques are described in books by Barenblatt [14]. Professor Grigory Isaakovich Barenblatt (1927Barenblatt ( 2018) was a remarkable applied mathematician who worked in many areas of mechanics, physics, and engineering. I will not list his numerous titles, awards and prices as well as his memberships in various prestigious scientific organizations because such a list would be too long. In 1994 G.I. Barenblatt told me the follow-ing story. When he was considered for the position of G.I. Taylor Professor of .luid Mechanics at the University of Cambridge, K.L. Johnson said that the position is in the area of fluid mechanics and why this famous expert in solid mechanics was under consideration? Thus, D.G. Crighton had to explain that solid mechanics is just one of the fields of Barenblatts expertise. His famous fracture criterion [5] will not be discussed here either. There are papers where this criterion is discussed in detail (see, e.g. [68]). Here I will discuss several problems related to similarity methods of solid mechanics and nanomechanics where I have got some new results and these results were influenced either by our discussions with G.I. Barenblatt In particular, I consider here the self-similarity techniques developed by Barenblatt and Botvina in application to damage accumulation (see, e.g. [911]) and scaling of mathematical fractals [1,3,1217]. I will discuss also the concept of parametric-homogeneity [1821] and its application to contact problems for smooth and fractal punches and nanomechanics. 
All the above mentioned problems will be discussed through the prism of my personal reminiscences. I started to enjoy my studies at .aculty of Mechanics and Mathematics (Mekhmat) of Moscow State University (MSU) in 1974. .irst I heard about G.I. Barenblatt from my teacher Askold Georgievich Khovanskii in 1975. Because A.G. Khovanskii is a pure mathematician [22], he could not be my official supervisor in Solid Mechanics. Hence, I asked him about researchers who could supervise my studies in this field. A.G. Khovanskii said that he heard only about Prof. Barenblatt as a researcher of very high reputation working in solid mechanics. Unfortunately Prof. Barenblatt did not collaborate with MSU that time. Later being the second year student, I found a comment by Timoshenko, that the questionable aspect of the infinite stress at the end of the crack has been removed by Barenblatt, who introduces instead large but finite stress to represent atomic cohesion [23]. Because our lectures in mechanics of continuum media were delivered by Prof. Yurii Nikolaevich Rabotnov who liked to say all of us are pupils of Timoshenko, I asked him about this comment. Prof. Rabotnov was very kind and discussed with me the comment in detail. In 1976 I continued my studies at Mechmat with narrow specialization at Theory of Plasticity Division led by Prof. Rabotnov. Thus, I heard about Prof. Barenblatt from my teachers and only much later I learnt that our beautiful sporty classmate Nadezhda Kochina was the oldest daughter of G.I. Barenblatt. I had enjoyed by several remarkable seminars delivered by G.I. Barenblatt before I was introduced to him personally in 1991. Prof. Barenblatt was a remarkable lecturer. Each of his talks in some way was structured like an Agatha Christie mystery novel. The lecture started by showing a mystery of a natural phenomenon (that could often obey a power-law behaviour at intermediate stage of its development). Then he presented a list of suspects (a list of possible approaches to the problem) and showed the investigation process. The denouement was made in the final part of his lecture, when G.I. Barenblatt revealed a proper explanation of the initially mysterious phenomenon. I had also studied some of his papers and models before our first meeting. On the other hand, I knew that in 1987 he and Prof. R.L. Salganik discussed my models of discrete contact between uneven surfaces [24] where I used the mathematical techniques and ideas of the Ba-renblattBotvina damage accumulation model [911]. During our first personal meeting, Prof. Barenblatt presented me the Russian version of his book [1] and a reprint of his remarkable paper [12]. Both presents had a great impact on some areas of my studies. In 1983 I prepared my PhD thesis [25] that I defended a year later at Mekhmat. In this thesis I developed a rather awkward self-similar model of discrete contact between uneven layers of multilayer metallic stacks. Modelling of nonlinear deformation of such stacks subjected by high pressure was important for understanding work of thick multilayer vessels of high pressure. I found that contact between layers exists just in discrete points and the number of these points increases as the compressing pressure. The problems of discrete contact is an active field of research because these problems are very important for tribology [26]. However, the problem I studied differs from the traditional ones because the number of contact spots increases us the pressure grows. 
Later I read the paper on damage accumulation by Barenblatt and Botvina [9,10] where a usual lemma of dimensional analysis (the dimension function is always a power-law monomial [1,3]) was used in a very unusual and elegant way. In fact, they showed that the growth of damage in fatigue tests is statistically self-similar. Hence, when one looks at the images of the points of damage at an initial moment 0 t and at an arbitrary moment t, we cannot distinguish them statistically. Hence, one can write where . is a function of the dimensionless time. It follows from the lemma of dimensional analysis that ( ) , I understood that this is an universal approach to statistical self-similarity of discrete sets. Indeed, data in the form of a set of points distributed in an irregular way within a planar region, arise in many disciplines. The BarenblattBotvina scaling law means that the transformation of a set of discrete points is a steadystate process and it transforms with the process time statistically in a self-similar way. In other words, if the distribution of the points of the set normalized by the average distance then the distribution is the same for dimensionless time. Let us write this in a more formal way. One of popular techniques of statistical analysis of spatial point sets is the so-called distance method or the theory of the nearest neighbour. The method considers a point as the basic sampling unit and the distances to neighbouring points are recorded, i.e. the distances to the first, second, and to the kth nearest point. This technique converts a list of point coordinates to a unique data set relevant to study of the population density. It is known that if the spatial pattern is characterized by some one-dimensional probability distribution function ( ) X f x for the distances to the nearest point then the distribution function can be completely represented by its mean , X µ i.e. its expected value E(X), and its higher central moments ( ) Using the mean, the higher central moments can be made dimensionless ( ) . n n X X µ µ Hence, the statistical properties of the point set can be characterized by a single quantity with the dimension of length, namely the average distance ( ) X l t 〈 〉 =µ between points, and by an ensemble of dimensionless statistical characteristics. I applied this statistical scaling to describe the evolution of spots of multiple contact between uneven layers in multilayer stacks and vessels loaded by external pressure [24]. Due to imperfections of the layer surfaces, there are gaps between layers. These interlayer gaps and the field of points of interlayer contacts develop statistically in a self-similar manner, and the volume of the gaps P V is described as a power-law function where P is the current pressure, 1 P is the initial pressure, and α is the self-similar exponent [24,25]. The same approach was used to describe the abrasiveness of modern hard carbon-based coatings such as diamond-like carbon or boron carbide. Indeed, S.J. Harris and his co-workers discovered a remarkably simple power-law relationship observed between the abrasion rate of an initially spherical slider by hard carbon-containing films and the number of sliding cycles n that the film has been subjected. The power-law relationship is valid up to 4 orders of magnitude in n. 
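The distance-method description above translates into a simple numerical check: normalize the nearest-neighbour distances by their mean <l(t)> and compare the dimensionless central moments of the pattern at two process times. The sketch below, written for planar point sets with NumPy and SciPy, is only an illustration of that check; the function name and the synthetic usage data are assumptions, not the original multilayer-stack or coating data.

# Hedged sketch of the nearest-neighbour ("distance method") check for
# Barenblatt-Botvina-type statistical self-similarity of a planar point set.
import numpy as np
from scipy.spatial import cKDTree

def dimensionless_moments(points, orders=(2, 3, 4)):
    """points: (N, 2) array of, e.g., contact-spot coordinates."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=2)   # k=2: the nearest neighbour of a point
    nn = dists[:, 1]                     # within its own set is the point itself
    mean = nn.mean()                     # the single dimensional scale <l(t)>
    central = np.array([np.mean((nn - mean) ** n) / mean ** n for n in orders])
    return mean, central

# Usage (synthetic): two snapshots of an evolving pattern.
rng = np.random.default_rng(0)
m0, c0 = dimensionless_moments(rng.uniform(0, 1, size=(200, 2)))
m1, c1 = dimensionless_moments(rng.uniform(0, 1, size=(800, 2)))
# Statistical self-similarity shows up as c0 ~= c1 while the mean spacings m0, m1 differ.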
We mo-deled this phenomenon by connecting it with nanoroughness of the coatings and explained the phenomenon by a statistically self-similar variation of the pattern of relatively sharp nanometer-scale asperities of the films [27,28]. Later this approach was combined with classical scaling [2933] to describe wear and abrasiveness of hard carbon-containing coatings under variation of the load [34]. The pin-on-disk tests in which the disks are coated and the counterparts (sliders) are steel balls, were analyzed. It was assumed that the dominant mechanism for slider wear by these nominally smooth coatings is mechanical abrasion of the slider by nanoscale asperities having relatively large attack angles, i.e. by sharp asperities. We propose a model in which we assume that (i) the abrasiveness of a contact is proportional to the number of asperities in the contact; (ii) the areal contact density is uniform; and (iii) the effect of increasing the load is to enlarge the initial apparent contact region between the ball and the disk. Using this model, the observed dependence of the wear rate on load follows relationships that are similar to Hertzian relationships. The average abrasion rate 〈A(n)〉 is defined as the average of the instantaneous abrasion rates i A during the first n cycles. Let M be the total volume of steel removed and d be the total distance travelled 2 . d Rn = π Here R is a pin-on-disk radius. Then we have The region of initial contact depends on the external load and its size can be found from formulae of the Hertz-type contact. Assuming self-similarity of the BarenblattBotvina type and considering as the internal time t the number of cycles t = nT, where the initial time 1 t is equal to the period of the cycle T, i.e. 1 t = T, we obtain that after each cycle, the number of sharp asperities N(1) within this initial contact region is reduced according to the power-law We will call the region of initial Hertzian contact G(1) as the central part of the slider. The ball cannot move down until the material of the central part has not been worn away, while the material outside this part is worn away by new sharp asperities. Hence, the total amount of material slider c ( ) M n removed from the central part of the slider is where | (1)| G is the area of the central part, h(n) is the thickness of the removed slice of the ball after n cycles, and m is the amount of material removed by each sharp asperity. .or an initially spherical slider of radius b , R the total volume of steel M removed during n cycles is .inally, we observe the power-law of abrasion that agrees with experimental observations. The above examples showed that scaling in nanomechanics and solid mechanics is not restricted to just the equivalence of dimensionless parameters characterizing the problem under consideration. Even self-similarity of Hertz-type contact problems [2932] cannot be described by a simple dimensional analysis. The same is related to scaling arguments in problems of nanoindentation [33]. .RACTALS My regular scientific communications with G.I. Barenblatt started in 1994. This was the last year Prof. Barenblatt worked at Department of Applied Mathematics and Theoretical Physics (DAMTP) of University of Cambridge and I arrived there by an invitation of Prof. John R. Willis who got support for this visit from the Royal Society. I would like to write several words about inspirational surrounding at the old small building of DAMTP in 19941995. 
Twice per day you could meet all famous professors at meetings at tearoom on the ground floor. Sometimes these tea-breaks were a bit noisy and I guess the noise could disturb Professor Stephen Hawking who had an access to his office directly from the tearoom. I benefited enormously by the care shown by J.R. Willis and personal discussions with him. As an example of his remarkable hospitality, I can mention the following. The main focus of my research was on contact problems and sometimes I wanted to discuss contact problems with Prof. K.L. Johnson who had retired from Department of Engineering by that time. To organize each meeting, Prof. Willis phoned to Prof. Johnson; then K.L. Johnson walked to DAMTP at Silver street and J.R. Willis walked to the DAMTP library to provide his office for our meeting. It looks incredible but it is a fact, two famous .ellows of the Royal Society of London cared about a modest young professor from Moscow like he was their adopted child. In such friendly atmosphere, I could start to write an extended paper of fractal approaches to fracture and I had a considerable progress on theory of parametric-homogeneous functions. One of the topics I have discussed with G.I. Barenblatt was fractals. .irst, I learnt about fractals in 1983 when my classmate Mikhail Ermakov shared with Dmitry Onishchenko and me his impression about seminar on pelagic animals delivered by G.I. Barenblatt at Institute for Problems in Mechanics in Moscow. I learnt that there is a new branch of mathematical analysis of highly irregular objects, however I started to study applications of fractals to mechanics seven years later. In summer 1990, A.B. Mosolov encouraged me to study contact problems for fractal bodies. I borrowed a book .ractals in Physics [35] from my classmate Irina Petrova. Three days later she was quite surprised when I returned the book saying that I intend to write a paper. In fact, I modified the CantorLiu profile [36] that was infinite to B C profile having bounded size. Sometimes B C profile is called the Cantor set model [37], and the CantorBorodich profile, structure or fractal (see, e.g. [3840]). I found two kind of contact problems that can be solved for a punch described by the B C profile. However, that time I was not experienced in fractals and Alexey Mosolov expressed these models using fractal terminology. We published two papers on this topis [41,42]. Later D.A. Onishchenko and I have introduced a multilevel hierarchical profile that allowed us to confirm our statement that fractal dimension of a rough surface alone does not characterize contact properties of the surfaces [43,44]. I define fractals as sets with non-integer fractal dimension and emphasize that we need to split the term in two: mathematical and physical fractals (see, e.g., [14,15]). Confusion of these two kinds of fractals led often to various erroneous or at least unjustified conclusions (see discussion in [15]). The former term may be fixed for sets with non-integer mathematical fractal dimension. While the latter term is related to irregular physical objects which being covered by some small units (balls, cubes, yardsticks and so on) obey the power law relation between the number of covering units N and the scale of consideration δ. The main distinction between these two meanings is that the power law of natural objects (physical or empirical fractals) is observed on a bounded region of scales only, while mathematical fractals consider limits when the scale of consideration goes to zero. 
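Since the distinction drawn above between physical and mathematical fractals rests on the bounded range of scales over which the covering power law N(delta) ~ delta^(-D) is observed, a minimal box-counting sketch for a planar point pattern may be useful. The scale range, the function name and the interpretive comments are illustrative assumptions.

# Hedged sketch: box-counting estimate of a "physical fractal" dimension for a
# planar point set, fitted only over a bounded, user-chosen range of box sizes.
import numpy as np

def box_counting_dimension(points, deltas):
    """points: (N, 2) array; deltas: box sizes in the same units as points."""
    mins = points.min(axis=0)
    counts = []
    for d in deltas:
        idx = np.floor((points - mins) / d).astype(int)   # box index of each point
        counts.append(len({tuple(i) for i in idx}))       # number of occupied boxes
    # Slope of log N versus log(1/delta) over the chosen (bounded) scale range.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(deltas)), np.log(counts), 1)
    return slope

# For points filling an area the estimate tends to 2, for points on a smooth curve
# to 1; for empirical rough surfaces the value depends on the chosen scale range,
# which is precisely the caveat about physical versus mathematical fractals above.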
Modelling of physical objects by means of mathematical fractal sets encounters various difficulties. .or example, it is not a simple task to include scaling properties of a fractal object in its mathematical model. In 1992 thinking about modelling fracture surface as a mathematical fractal surface, I formulated the following paradox: the Griffith criterion in its classical formulation leads to the conclusion that no fractal cracking is possible [13,14]. Indeed, during the crack propagation, the amount of the released elastic energy U is finite and the area of a mathematical fractal surface is infinite. Then the use of the Griffith surface energy γ leads to the above paradox. Therefore, we should introduce some new concepts to consider fractal cracks. I resolved the paradox using the following Barenblatt idea. Barenblatt (see, e.g. [1,3]) stated that the surface of the respiration organ of pelagic animals is a fractal and, therefore, the specific absorbing capacity of the organ has to be related not to its area, which is infinite, but to its Hausdorff measure. I developed the Barenblatt idea and suggested to apply this approach not only to sets with the non-integer Hausdorff dimension but also to fractal sets with other definitions of dimensions. I introduced the s-measure s m of a set that is an alternative to the Hausdorff measure. I took into account that the Hausdorff s-measure not always has a bounded positive value for s = D where D is the Hausdorff dimension of a fractal and introduced the concept of D-measurable sets, i.e. sets whose s-measure has a finite positive value D m for s equal to the fractal dimension D. .inally, I suggested to employ explicitly the scaling properties of . s m Baant and Yavari [45] suggested to name this approach as the Barenblatt Borodich idea. Note that in the general case the s-measure s m does not satisfy all restrictions of a mathematical measure. .eng and I [17] presented the concepts of upper and lower box-counting quasi-measures that should be used in models employing sets with box-counting fractal dimension. The scaling properties of the physical objects should be reflected through the scaling properties of the fractal measures. Roughly speaking it has been proposed to refer various physical quantities to a unit of the fractal measure of a fractal object. Reading [1,12] presented to me in 1991, I proposed to extend the Barenblatt idea and to introduction of the so-called specific energy absorbing capacity of a fractal surface Because the length of a fractal curve is infinite, we consider its projection on the x-axis. Let us consider a fractal crack of projected length l and its advance a. It is assumed that for some fixed value of crack advance of the projection length 0 , L the surface energy 0 ( ) Π L is bounded and we can write L where t is the thickness of a sample. Then for the surface energy of an advance a of a fractal crack, we obtain (1) If we rewrite the Griffith criterion using * ( ) D β concept, then the critical stress c σ is [14] ( 1) 2 * 0 1 E is an elastic modulus of material and 1 k is a dimensionless coefficient. We see that initially the fractal crack propagated in a perfect brittle solid is stable. Indeed, c σ grows with a. Note that the above formula is valid until the upper cutoff of the fractal crack is not reached. The term fractal was introduced by B.B. Mandelbrot who published a book concerning fractals in 1975 [48]. Surprisingly, one could not find any definition of the term in the book. 
He gave numerous examples of sets which are more irregular than sets considered in common textbooks on Euclidean geometry. I met rather often an opinion that the self-similar sets were almost forgotten and just a very small number of mathematicians knew about them. So, the interest in these sets was resurrected only in 70th. Obviously, this is not true. Starting my studies of fractal sets, I realise soon that I met these sets for the first time in 1969 reading the second edition of a popular book about sets written for school children by Vilenkin [49]. Of course, that time the main term was not yet introduced. After I studied fractals in detail, I obtained new results for both mathematical and physical fractals. However, I was rather disappointed by these techniques. Mandelbrot [50] noted that natural objects do not have pure shapes of classical mathematical objects and said that coastlines are not circles, clouds are not spheres, and mountains are not cones. D. Onishchenko added [44] roughness of real bodies is not a mathematical fractal. We argued that all these geometrical objects: spheres, cones, circles as well as fractals are only mathematical idealizations of complex shapes of real physical bodies. Usually mathematical fractals possess such properties that do not allow researchers to use classical tools of investigation. .or example, fractal surfaces are nowhere differentiable and, therefore there is no normal vector to such surfaces. Hence, even to give a rigorous formulation of the GaussOstrogradskii theorem or the divergence theorem for fractal surfaces one needs to use rather complicated mathematical tools (for details see [51]). Many statements about fractals are either not justified mathematically or simply wrong. Mathematical fractals are often not appropriate tool for studying physical phenomena, while physical fractals suffer from the lack of proper mathematical justification of the approach (see, for details [52]). Barenblatt and I had discussed this question many times because he disagreed with my opinion that Vilenkins book for children is more serious than Mandelbrots books on fractals. PARAMETRIC HOMOGENEITY In 1992 I introduced a class of parametric-homogeneous functions and started to study them. Eventually, the concept of parametric homogeneity was developed. This topic studies parametric-homogeneous (PH) and parametric quasi-homogeneous (PQH) functions, PHand PQH-sets, and corresponding transformations [18 21]. I started to develop the concept under influence of a paper on the WeierstrassMandelbrot function [53]. I thought that the concept and the theory had been developed for a long period of time. However, my guess was wrong. I found in literature only particular cases of PH-functions. In 1994 I started to discuss the concept with G.I. Barenblatt, however he noted that Zababakhin introduced a kind of such sets and he and Zeldovich discussed this case of self-similarity in a couple of review papers. PH-functions and PQH-functions are natural generalizations of concepts of homogeneous and quasi-homogeneous functions when the discrete (discontinuous) group of coordinate dilations (PH-transformation) ( ) l α = = α To avoid a non-unique definition, the least p: p > 1 is taken as the parameter. 
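A parametric-homogeneous function of degree d and parameter p satisfies, by definition, B_d(p^k x; p) = p^(kd) B_d(x; p) for integer k [18-21], i.e. the scaling holds for the discrete group of dilations with the fixed parameter p rather than for arbitrary dilation factors. The sketch below verifies this numerically for one convenient smooth log-periodic representative; the particular function and its constants are assumptions chosen only for illustration.

# Hedged sketch: a smooth log-periodic function is parametric-homogeneous (PH),
# i.e. f(p**k * x) = p**(k*d) * f(x) holds for integer k (discrete dilations with
# fixed p) but not for a generic dilation factor.
import numpy as np

def log_periodic(x, d=0.5, p=2.0, amp=0.3):
    # x**d modulated by a function that is periodic in log_p(x) with period 1.
    return x ** d * (1.0 + amp * np.cos(2.0 * np.pi * np.log(x) / np.log(p)))

x = np.linspace(1.0, 3.0, 7)
p, d = 2.0, 0.5
for k in (1, 2, 3):
    assert np.allclose(log_periodic(p ** k * x, d=d, p=p),
                       p ** (k * d) * log_periodic(x, d=d, p=p))   # holds for integer k

lam = 1.7   # a generic dilation factor, not an integer power of p
print(np.allclose(log_periodic(lam * x, d=d, p=p),
                  lam ** d * log_periodic(x, d=d, p=p)))   # False in general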
The graphs of these functions can be both continuous and discontinuous, they can also be smooth, piecewise smooth, with singular points of growth, fractal, non-fractal nowhere differentiable (see for details [20] Contrary to classical scaling when λ is arbitrary positive number, rescaling parameter p of the PH-transformation is fixed. Thus, the PH-sets and PH-scaling can arise in systems having a fixed scaling parameter p. If the fundamental domain is somehow filled, then one can obtain the whole set by applying a PH-transformation to the fundamental domain. If the filling is fractal then the whole set is also fractal. The concept of PH-tansformatios was applied to contact problems [18,21]. Borodich and Galanov [54] presented numerical simulations of contact stresses between a PH-punch and an elastic half-space in the case when the profile of the punch was described by a smooth log-periodic function. It was found that the Hertz type contact problems have some features of chaotic systems: the trend of Ph curve (the global characteristic of the solution) is independent of fine distinctions between functions describing roughness, while the stress field (the local characteristic) is sensitive to small perturbations of the indenter shape. .ractal dimension of roughness alone does not characterise the properties of the contact problems. As G.I. Barenblatt mentioned, the first physical model, where sine log-periodic functions arose, was a model of shock waves in layered systems constructed by Zababakhin in 1965 [55]. The nested self-similar model of turbulent flow constructed by Novikov [56,57] contains log-periodic modulations as well. Barenblatt and Zeldovich [58,59] considered the Zababakhin model as a very interesting development of self-similarity. Then numerous authors who studied scaling at critical points of phenomena noted that in many cases the critical behaviour is modulated: instead of observing as a leading singularity a pure power law, one finds a power law multiplied by a periodic function of the logarithm of the distance to the critical point (see, e.g., [60,61]). I believe PH-functions are very useful for studies of self-similar phenomena exhibiting threshold behaviour. Clearly, real natural phenomena do not exhibit the pure mathematical PH-properties. However, PHfeatures can be exhibited by some processes on their intermediate stage when the behaviour of the processes has ceased to depend on the details of the boundary conditions or initial conditions. The self-similar approximation theory developed by Yukalov (see, e.g. [62]) is based on the use of log-periodic functions. In particular, he described the oscillatory behaviour of the energy release in the vicinity of fracture using log-periodic models. However, he noted that he could use also other parametric homogeneous functions. The idea, that various processes possess an intermediate self-similar stage of their development, was successfully used in the study of continuous self-similarity [1,3]. We can expect that some self-organized processes possess the PH-features. Indeed, log-periodicity, which is a particular kind of parametric homogeneity, was considered in various papers. As an example, let us consider Liesegang rings. The Liesegang ring consists of diffusion of silver nitrate which is placed as a drop of aqueous solution onto a gel matrix containing potassium bichromate [63]. During the experiment there arises a fragmented pattern consisting of bands (rings) of different widths. 
The formation of the rhythmic bands (the Liesegang rings) are explained by self-organization (internal rhythms) and since the discovery of the Liesegang rhythmic pat-terns, numerous experimental studies have been done. It was found that the Liesegang ring patterns possess PH-features, namely the distances n b between the successive nth and (n + 1)th Liesegang rings increase following the law of geometric series or Jab³czyñski law Here 0 x and 0 t are some values of the arguments for a pattern point, n x and n t are the equivalent values of the arguments. This means that we obtained both the Jablczynski law and the so-called time law n x~. n t CONCLUSIONS The paper presented a very personal review of application of modern scaling techniques to solid mechnaics and nanomechanics. Using BarenblattBotvina techniques, I described scaling of increasing number of discrete points of contact between uneven layers subjected to external pressure; and scaling of decreasing number of discrete sharp nanoscale asperities in pin-ondisk wear tests for some modern carbon-based coatings. Then I discussed Barenblatt idea of attributing scalar physical quantities to fractal measure. This idea provide a proper mathematical tool for studies scaling of mathematical fractals. I developed this idea and demonstrated it in application to fracture mechanics. .inally, I described some aspects of theory of parametric-homogeneous functions. I believe that it can be useful for studying discrete self-similarity and phenomena having oscillatory behaviour on their intermediate stage. I have also discussed briefly the Liesegang rings arguing that this is a PH-phenomenon. All these ideas were described through the prism of my personal reminiscences. Currently my favorite PHYSICAL MESOMECHANICS Vol. 22 No. 1 2019 topic is nanomechanics, in particular mechanics of adhesive contact. I had many discussions with G.I. Barenblatt on nanomechanics. He agreed with my definition of nanomechanics as a scientific discipline that studies (i) application of mechanical laws and solutions of problems of mechanics to objects of nanotechnology; (ii) interactions between physical objects using mechanical equations adjusted to the specific character of the interactions at nanometer length scale; and (iii) influence of nanometer scale objects and processes on meso/macroscale phenomena. We discussed some results of B.V. Derjaguin whom I consider as one the founding fathers of nanoscience. He agreed with me that molecular adhesion is a crucial feature of nanoscale interactions. I explained him why I consider the classic JohnsonKendallRoberts (JKR) theory of adhesive contact as a very beautiful theory and told him about my extensions of the JKR theory [29]. In turn, G.I. Barenblatt argued that not only the chemical reactions take place at the Ångström length scale, but this scale has a fundamental physical meaning in nanomechanics and should be included in the list of governing parameters when the scaling laws are derived. He sent to me a draft of his paper on this topic that was later published [66]. After one of the discussions at University of California, Berkeley, in 2007, the following sentence was born: Mechanics is like a phoenix. Many times it was declared as dying. However, we see its rebirth again and again. Nanomechanics is one of its new beautiful reincarnations. G.I. Barenblatt knew that I like very much his ideas. I discussed them in many papers and I applied his ideas to various problems. I am very proud that Z.P. 
Bažant put our names together [45] in recognition of my modest contribution to the development of one of Barenblatt's brilliant ideas. Certainly, the results of G.I. Barenblatt have earned a place in the Hall of Fame of modern mechanics and applied mathematics. ACKNOWLEDGMENTS I am very grateful to Professors Lyudmila R. Botvina and Nikita F. Morozov for inviting me to contribute to this Special Issue.
7,068.6
2019-01-01T00:00:00.000
[ "Materials Science" ]
Mechanical arm teleoperation control system by dynamic hand gesture recognition based on kinect device : The research achieved to control the mechanical arm by using real-time dynamic gesture recognition based on Kinect. It uses the unmarked gesture segmentation algorithm based on the palm neighbourhood and the threshold detection algorithm based on the palmar contour to identify the operator's gestures and moving trajectories, and to convert it into the specific action of the mechanical arm. Through wireless network, this system sends control instructions to the mechanical arm to realise teleoperation control. The system also provides video feedback of mechanical arm operation site. It improves the teleoperation telepresence and interactivity and avoids operation error such as grasp nothing or fall. The results of the experiment indicate that the operation of the gesture control system is simple and easy, the response of mechanical arm is quick and accurate and the human–machine interaction is intuitive and friendly. Introduction With the rapid development of machine vision technology, mechanical arm interaction control system based on gesture recognition has become a research hotspot in the field of humancomputer interaction. In the research of human-machine interaction for gesture recognition at home and abroad, it is mainly divided into gesture recognition technology based on data glove and gesture recognition technology based on visual information. Wu Jiang qin and Li Zhen improved the gesture recognition rate of data glove by combining the algorithm of ANN and HMM, but it has not solved the problems of high cost and inconvenience of wearing [1,2]. Lu Xiao min et al used the Kinect sensor to obtain the depth data, through the HMM, SURF, and other algorithms for hand gesture recognition, used to control autonomous mobile robots, mechanical arm, and intelligent wheelchair and other equipment [3][4][5][6][7]. Wang Yi et al used the application of Kinect sensor in augmented reality, the mechanical arm's motion path planning and teaching learning are completed [8,9]. Xiong You jun and Liu Jun et al proposed that the machine, people, and target object can be operated in different space by the teleoperation technology of connecting the mechanical arm to the network [10][11][12]. The execution ability and security of the machine in the special environment of deep sea exploration, explosion, and radiation can be improved to a great extent. Tang Wei cai et al proposed a scheme that increased the video feedback information for the teleoperation of the mechanical arm, it improved the performance of the mechanical arm operation [13]. Here, we design a teleoperation control system for mechanical arm based on Kinect sensor. It uses the unmarked gesture segmentation algorithm based on the palm neighbourhood and the threshold detection algorithm based on the palmar contour to identify the operator's gestures and moving trajectories. At the same time, it uses embedded system and wireless network to build a field monitoring auxiliary system. This system can provide video feedback for mechanical arm operation to increase the accuracy. The system not only has the advantage of natural body sense interaction but also realises the integration of virtual operation and real scene, making the control process more natural and real ( Fig. 1). 
System structure This system mainly includes hand gesture segmentation and feature recognition, mechanical arm, operation instruction wireless transmission, video capture, and transmission system and so on. First, we divide the depth image of the operator's gesture and map the behaviour characteristics and movement trajectories of the gesture to the coordinate space of the mechanical arm. Then, through the wireless network, we send the upper computer operation instructions to the mechanical arm to carry out the specific operation. Finally, we use video acquisition and transmission system to collect video images from the visual angle of the outside scene and the target plane, and display and store the video image on the operator's terminal device based on the streaming media technology. Gesture segmentation At present, the OpenNI and NITE middleware provided by the third party has provided the Kinect developer with a more accurate location of the palm of the hand. Mai Jian hua et al. have proposed a depth threshold gesture segmentation algorithm [14]. Based on this algorithm, we have designed an unmarked gesture segmentation algorithm based on the neighbourhood of the palm. The implementation process of the algorithm is as follows: using NITE middleware to obtain the location of the current palm, and caching all points within the distance from the palm of the hand, traversing all points in the depth image pixel by pixel. We make the grey value of the point with a distance greater than the threshold value of the palm is 0, and the grey value of the rest points is 255. As shown in formula (1): handpoint (index) is the palmheart depth image after the segmentation, realpoint (index) is the full depth pixels of the cache. D is represents the current position of the palm, and T represents the neighbour threshold of the palm. Through a number of tests, a relatively stable threshold value of 95-105 mm was determined. The test contrast is shown in Fig. 2. Figs. 2a-d are hand-segmented images with thresholds of 50, 100, 150 and 200 mm, respectively. Experiments show that when the threshold value is 100 mm, the segmentation effect is the best, which meets the system requirements. Gesture recognition This paper designs an algorithm based on palm-contour threshold detection. The realisation of the algorithm is that when the palm is fully opened, the total width and total height of the external contour of the hand will increase -the horizontal interval between the most right and the leftmost point (the tip of the thumb and the tip of the finger) in the hand point cloud will increase, and the vertical distance between the highest and lowest hand points in the hand point cloud (the middle finger tip and the wrist root) will also increase. Conversely, when the palm is closed in a 'grasp' state, the total width and total height of the external contour will be reduced. Therefore, when the palm contour is greater than or less than the threshold value, the 'release' or 'grab' gestures can be determined. As shown in formula (2) HandVec represents the external contour of the hand. handRight, handLeft, handTop, and handBottom represent the right, left, highest, and lowest positions of the hand obtained from the hand depth image after the segmentation. Boolean () represents the decision function of the contours of the opponent. When the total width of the contour exceeds D1 and the total height exceeds D3, the 'release' gesture is determined. 
In the same way, the 'grab' gesture is determined when the total width of the external contour of the hand is less than D2, and the total height is less than D4. Through many experiments, the width threshold of the 'release' gesture is about 165-200 mm, and the height threshold is about 155-180 mm. The width of the grabbing gesture is about 55-70 mm, and the height threshold range is about 80-95 mm. Gesture tracking and coordinate transformation The mechanical arm has 3 degrees of freedom. The base moves in the X-axis plane, the big arm moves in the Y-axis plane, the clamper moves with open and close in the X-axis plane, and the clamping range is 0-100 mm. We adopt a trajectory tracking algorithm based on the recognition centre point. The algorithm takes the position of palm as the origin when the system first recognises the palm of the operator. By the relative displacement of the operator's current hand point and the position of the origin, we calculate the motion of the gestures. The line drawn from the origin to the palm of the palm is displayed on the terminal. As shown in Fig. 3. In Fig. 3, point O represents the position of the palm when the system first recognises the operator's palm -the origin point, point A represents the position of the current palm, the line OA represents the line of the origin to the hand point, D represents the straight line length. Here, the Kinect cone is drawn on the operator's screen through the third party library to provide the operator with accurate gesture tracking view. On this basis, the relative motion vector of the foremind is calculated and mapped to the relative displacement in the direction of X and Y axes of the mechanical arm, and sent to the mechanical arm. It completes the transformation between the Kinect sensor coordinate system, the screen coordinate system, and the mechanical arm coordinate system, as shown in Fig. 4: Operation instruction transmission After the upper computer completes the computation from gesture recognition and tracking to the movement command of the specific mechanical arm, the transmission and reception of operation instructions are realised through wireless network system. The transmitter and receiver adopt the connection of point to point and the pairing of one to one. It sends and receives data according to the transparent serial interface protocol. It is shown in Fig. 5. In this system, the transmitter of the wireless network is connected with the host computer, it receives the motion instruction by the serial port and sends the motion instruction to the wireless receiver based on ZigBee. The wireless receiver is connected to the mechanical arm, and the hardware composition is the same as the sending end. The difference is that the receiving end parses the received motion instructions and sends them to the mechanical arm for specific execution. Video acquisition and transmission In the position of the shoulder joint of the mechanical arm, we set a camera which provide the video information concerned with the view of the outside scene. At the end of the holder of the mechanical arm, we set a camera which provide the video information of the position and posture of the target plane and the grasping object directly. At the same time, we also build an embedded video image processing and transmission platform. Finally, we transmit video images to the operator's terminal equipment for display and storage based on streaming media technology and WiFi. It is shown in Fig. 6. 
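Before turning to the experimental tests, the three processing steps described above (palm-neighbourhood depth segmentation, contour-threshold gesture classification, and the mapping of palm displacement to arm displacement) can be summarised in a short sketch. The numerical thresholds follow the ranges reported in the text, while the depth-frame layout, the gain factors and the function names are illustrative assumptions.

# Hedged sketch of the gesture-processing pipeline described above.
import numpy as np

def segment_hand(depth_mm, palm_depth_mm, thresh_mm=100.0):
    """Binary hand mask: 255 within the palm-depth neighbourhood, 0 elsewhere
    (the ~100 mm neighbourhood threshold follows the tests reported above)."""
    return np.where(np.abs(depth_mm - palm_depth_mm) <= thresh_mm, 255, 0).astype(np.uint8)

def classify_gesture(width_mm, height_mm):
    """'release' when the hand contour is large, 'grab' when it is small;
    the bands follow the reported D1-D4 threshold ranges."""
    if width_mm > 165.0 and height_mm > 155.0:
        return "release"
    if width_mm < 70.0 and height_mm < 95.0:
        return "grab"
    return "hold"   # no state change between the two bands

def palm_to_arm(origin_xy, palm_xy, gain_x=1.0, gain_y=1.0):
    """Relative displacement of the palm from the origin point O, mapped to
    relative displacements along the arm's X (base) and Y (big arm) axes."""
    dx, dy = np.asarray(palm_xy, dtype=float) - np.asarray(origin_xy, dtype=float)
    return gain_x * dx, gain_y * dy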
Experimental test results First, gestures are used to guide the mechanical arm in four directions (up, down, left, and right) so that its movement can be observed. Then, the gripper grasps an object in response to the 'grab' and 'release' gestures. Fig. 7 shows the motion of the mechanical arm while tracking the operator's gestures. Fig. 8 shows the gesture tracking view and the field operation video displayed on the operator's terminal. As seen in Figs. 7 and 8, the system accurately identifies the operator's gestures and moving trajectories and converts them into the moving trajectory of the mechanical arm. The arm responds to gesture control commands in sequence, its movement is smooth, its actions are accurate, and no malfunction or other control anomaly occurs. This demonstrates that the host computer conveys the operation instructions to the mechanical arm through the wireless network and realises teleoperation control, and that the system offers good interactivity. Conclusion In this work, dynamic gesture information collected by the Kinect sensor is used to control a mechanical arm in real time. The unmarked gesture segmentation algorithm based on the palm neighbourhood and the threshold detection algorithm based on the palmar contour accurately identify the operator's gestures and moving trajectories and convert them into specific actions of the mechanical arm. The arm responds quickly, moves smoothly, and acts accurately, with no malfunction or other control abnormality. At the same time, this confirms that the host computer sends instructions to the mechanical arm accurately and in real time through the wireless network, realising teleoperation control. The system also provides video feedback of the mechanical arm's operation, which helps the operator to make accurate judgments and decisions about the remote scene during teleoperation. This improves the telepresence and interactivity of teleoperation and avoids operation errors such as grasping nothing or dropping the object.
2,801.4
2019-03-25T00:00:00.000
[ "Computer Science", "Engineering" ]
Electric field engineering and modulation of CuBr: a potential material for optoelectronic device applications I–VII semiconductors, well-known for their strong luminescence in the visible region of the spectrum, have become promising for solid-state optoelectronics because their light emission may be engineered/tailored by manipulating electronic bandgaps. Herein, we conclusively reveal electric-field-induced controlled engineering/modulation of the structural, electronic and optical properties of CuBr via a plane-wave basis set and pseudopotentials (pp) using the generalized gradient approximation (GGA). We observed that the electric field (E) on CuBr causes an enhancement of the electronic bandgap (from 0.58 eV at 0.0 V Å−1 to 1.58 eV at 0.05 V Å−1, 1.27 eV at −0.05 V Å−1, and 1.63 eV at 0.1 V Å−1 and −0.1 V Å−1, a 280% increase) and triggers its modulation (0.78 eV at 0.5 V Å−1), leading to a shift in behavior from semiconduction to conduction. Partial density of states (PDOS), charge density and electron localization function (ELF) analyses reveal that the electric field (E) causes a major shift, leading to Cu-1d, Br-2p, Cu-2s, Cu-3p, and Br-1s orbital contributions in the valence band and Cu-3p, Cu-2s, Br-2p, Cu-1d and Br-1s contributions in the conduction band. We observe control over chemical reactivity and electronic stability by tuning/tailoring the energy gap between the HOMO and LUMO states: an increase in the electric field from 0.0 V Å−1 → 0.05 V Å−1 → 0.1 V Å−1 increases the energy gap (0.78 eV, 0.93 eV and 0.96 eV), leading to greater electronic stability and lower chemical reactivity, and vice versa for a further increase in the electric field. The optical reflectivity, refractive index, extinction coefficient, and real and imaginary parts of the dielectric function and dielectric constants under the applied electric field confirm the controlled optoelectronic modulation. This study offers valuable insights into the fascinating photophysical properties of CuBr under an applied electric field and provides prospects for broad-ranging applications. Introduction Wide band gap semiconductors, renowned for strong luminescence in the visible region of the spectrum, produce high concentrations of excess charge carriers and have become prospective materials for optoelectronic applications. [1][2][3] The majority of our daily lighting systems are based on InGaN optical emitters, 4,5 while ZnO, 6 ZnS, 7 ZnTe 8 and ZnSe 9 , frequently used in flat panels and lasing applications, face serious challenges. For example, the large lattice mismatch with substrates such as sapphire, Si, or SiC leads to intrinsic structural defects or distortions, such as misfit dislocations, 10 stress in deposited films, 11 in-phase (IPB) and out-of-phase (OPB) grain boundaries, 12 and secondary impurity phases 13 alongside the parent phases, which hinder the light-emitting efficiency of the devices. Additionally, the excessive and continuous use of In in household and luminescent optoelectronic devices causes scarcity, necessitating alternative novel materials with properties better than, or close to, those of In-based systems. 14,15 I–VII Cu halides (CuX; X = Cl, Br, I, etc.)
are well thought-out potential materials to replace In-based optoelectronic systems because of their structural diversity, direct band gap (3.0-3.5 eV at 300 K), transparency throughout the visible region (above 420 nm), large binding, smaller bulk modulus, large ionicity, diamagnetic behavior, non-linear optics, large excitonic binding (around 100 to 110 meV) energies (in the UV/visible region), negative spin orbit and rich excitonic photophysical/ chemical optoelectronic properties (such as solid state lightening). [16][17][18] These compounds are tetrahedral-coordinated compound semiconductors that crystallize into the zinc blend lattice (space group F43m), where Cu atoms are located at (0, 0, 0) and halide (F, Cl, Br, and I) atoms are located at (1/4, 1/4, 1/4) positions transformed from zinc blend (B 3 ) to several intermediate low symmetry phases and rock salt (B 1 ) structure under application of higher pressure (usually in the region of the 10 GPa). 19 The lled d 10 -shell in addition to the s 2 p 6 rare-gas valence shell originate valence band results in a deep core level with almost no dispersion apart from the spin-orbit splitting, which is part of the uppermost valence band, resulting in signicant optoelectronic properties, as revealed recently that CuCl deposited on the Si substrate is a wideband gap (WBG) material compatible with the photoluminescence industry. 20, 21 Koch et al. 22 briey outlined the UV/vis emission spectral for copper-based halides and demonstrated that CuBr extends the probable range of blue hues in the recognized emitter wavelength range and have excellent lattice match with a substrate such as Si, resulting in fewer structural defects in the deposited CuBr lms via high vacuum [23][24][25] or chemical solutionbased deposition (CSD) process [26][27][28] and yield efficient roomtemperature free-excitonic emission. These materials have been widely produced, but no thorough theoretical investigations have been conducted on intrinsic and extrinsic electric eld modulation and controlled engineered optoelectronic, mechanical, dielectric, elastic and photocatalytic properties, which may massively affect the emission, and conduction can potentially be useful in the controlled miniaturized optoelectronic industry that requires thorough and comprehensive investigations. Herein, we selected the CuBr compound, which is the rst ever thorough report of the electric eld modulation and engineering of the bandgap of CuBr with zinc blende cubic (a = 5.69 Å, b = 5.69 Å and c = 5.69 Å) sphalerite structure exhibiting F43m space group (where the Cu 1+ is bonded to four equivalent Br 1− atoms to form corner-sharing CuBr 4 tetrahedra; the Cu-Br bond lengths are 2.47 Å, and the Br 1− is bonded to four equivalent Cu 1+ atoms to form corner-sharing BrCu 4 tetrahedra) halides via plane-wave basis set and pseudopotentials (pp) using generalized gradient approximation (GGA) [32][33][34] in QUANTUM ESPRESSO. [35][36][37] Despite wide theoretical studies on semiconductors, very few studies are available on Cu-based halides to date. We rst ever thoroughly report the external electric eld modulation and engineering of the energy band gap and its effect on the structural, electronic and optical properties of CuBr, which may lead to potentially controlled novel miniaturized future optoelectronic and photoluminescence technology. 
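As a concrete point of reference for the computational setup, the zinc blende cell described above (a = 5.69 Å, Cu at (0, 0, 0), Br at (1/4, 1/4, 1/4)) can be generated with a scriptable structure builder such as ASE. The sketch below reproduces only the geometry; it is not the QUANTUM ESPRESSO input used for the calculations, and the cutoffs, k-point mesh and applied-field settings are deliberately left out because they are not specified here.

# Hedged sketch: building the zinc blende (F-43m) CuBr cell described above with ASE.
from ase.build import bulk

cubr = bulk("CuBr", crystalstructure="zincblende", a=5.69)
print(cubr.get_chemical_formula())        # 'BrCu': one formula unit per primitive cell
print(cubr.get_all_distances(mic=True))   # nearest Cu-Br distance ~ a*sqrt(3)/4 ~ 2.46 A

# From here the structure could be exported to a plane-wave code, e.g.
#   ase.io.write("cubr.pwi", cubr, format="espresso-in", ...)
# with the pseudopotentials, cutoffs and electric-field settings added separately.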
Results and discussion The electronic band structure and density of states (DOS) provide information about the electronic properties of a material that may be modulated, controlled and engineered upon the application of an external electric field, offering a window of opportunity for potential use in micro/nanoelectronic technology. Fig. 1(a–i) shows the bandgap modulation and engineering for CuBr with and without (0.0 V Å−1) the application of an external electric field, calculated using the GGA approximation. In the absence of an applied electric field (E) (Fig. 1(a)), the valence band maximum (VBM) and the conduction band minimum (CBM) do not overlap and are both located at the Γ point of the Brillouin zone (BZ), giving a direct band gap, Eg, of 0.58 eV at 0.0 V Å−1 and validating semiconducting behavior consistent with previously reported results. 17,19,20,41 The direct band gap at 0.0 V Å−1 arises mainly from the interaction between Cu d-states and Br p-states, which pulls the valence bands down near the Fermi level (EF). The highest valence bands (VB1 and VB2) have d-state electron contributions, whereas p-state electrons mainly contribute to the lower (VB3) bands. Below the critical external field (0.7 V Å−1 and −0.5 V Å−1), the band structure comprises the lower part VB1 and the flat part VB2, which appear because of the hybridization of the Cu+ 3d t2g and Br− 3p orbitals (VB2) and the eg states of the Cu+ 3d orbital (VB2). The lower-lying VB3 band is mostly derived from the Br− 3p orbital, whereas the Br− s orbital forms the deeper VB4 band entirely. The shapes of the lower VB3 and VB4 bands are the same as in other zinc blende materials, in which the d bands lie far from the valence band region. Fig. 1(b–i) shows that the applied external electric field (V Å−1) modulates and tunes the valence and conduction bands, resulting in a shift and widening of the direct bandgap at the Γ point below the critical electric field (Eg = 1.58 eV at 0.05 V Å−1, 1.27 eV at −0.05 V Å−1 and 1.63 eV at 0.1 V Å−1). The bandgap starts to shrink at 0.5 V Å−1 and −0.1 V Å−1, shifting from 1.63 to 0.78 eV and from 1.27 to 1.13 eV; above the critical field (0.7 V Å−1 and −0.5 V Å−1), the conduction band crosses the Fermi level and overlaps the valence band, resulting in a metallic response. The computed bandgap values with and without the applied external electric field (above and below the critical field) for the CuBr system are listed in Table 1, confirming that CuBr responds sensitively and monotonically to the external applied electric field (V Å−1). The bandgap is expanded and tuned from 0.58 eV (at 0.0 V Å−1) to 1.63 eV (at 0.1 V Å−1) with the applied external electric field as a stimulus, providing a window of opportunity to develop, control, engineer and modulate future optoelectronic devices, possibly producing strong luminescence in the visible region of the spectrum because the large bandgap yields a high concentration of excess charge carriers. No previous theoretical or experimental studies have addressed electric field engineering and modulation of CuBr, and we believe this study will serve as a reference and has the potential to guide further investigation of the engineering, control and modulation of modern optoelectronic devices.
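Since the discussion above revolves around reading the direct gap at Γ off the computed band structures, a minimal post-processing sketch may be useful. It assumes only a generic array of eigenvalues and occupations (e.g. parsed from a bands/nscf output); the array names and toy numbers below are illustrative, not taken from this work.

```python
import numpy as np

def direct_gap_at_gamma(eigenvalues_eV, occupations, gamma_index=0):
    """Return (E_VBM, E_CBM, gap) at the Gamma k-point.

    eigenvalues_eV: array of shape (n_kpoints, n_bands), in eV.
    occupations:    array of the same shape; bands with occupation > 0.5
                    are treated as valence bands.
    gamma_index:    row index of the Gamma point in the k-point list.
    """
    bands = eigenvalues_eV[gamma_index]
    occ = occupations[gamma_index] > 0.5
    e_vbm = bands[occ].max()        # highest occupied state at Gamma
    e_cbm = bands[~occ].min()       # lowest unoccupied state at Gamma
    return e_vbm, e_cbm, e_cbm - e_vbm

# Toy example: two k-points, four bands.
eig = np.array([[-2.1, -0.3, 0.28, 3.5],
                [-2.5, -0.9, 0.90, 3.9]])
occ = np.array([[1.0, 1.0, 0.0, 0.0],
                [1.0, 1.0, 0.0, 0.0]])
print(direct_gap_at_gamma(eig, occ))   # gap of 0.58 eV in this toy example
```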
The total density of states (TDOS) and partial density of states (PDOS) of CuBr are shown in Fig. 2(a–i), providing in-depth information about the orbital contributions and their effect on bandgap modulation and engineering with and without an external applied electric field (E). The dashed line between the valence band (VB) and conduction band (CB) represents the Fermi energy level. In the absence of an external electric field (E = 0 V Å−1), the valence band is dominated by Cu-1d and Br-2p states, with a smaller contribution from Cu-2s and Cu-3p orbitals near the Fermi level (dotted lines in Fig. 2a). In the conduction band, the Cu-3p orbital contributes the most, Cu-2s and Br-2p contribute less, and the contribution from the Cu-1d and Br-1s orbitals is insignificant. These results confirm that Cu-d and Br-p states dominate the valence band, whereas the s and d states of CuBr are rarely involved in the conduction band (CB). The direct bandgap starts to widen under an external applied electric field: upon the application of 0.05 V Å−1 and 0.1 V Å−1, the bandgap increases from 0.58 eV to 1.58 and 1.63 eV. The increase in band gap at 0.05 V Å−1 and 0.1 V Å−1 may be caused mainly by the interactions and orientations of the electron orbital states: the Cu-1d and Br-2p states make the major contribution, the Cu-2s and Cu-3p states a smaller contribution, and Br-1s an insignificant contribution in the valence band, whereas in the conduction band the Cu-3p, Cu-2s and Br-2p orbitals contribute most and Cu-1d and Br-1s contribute less. The increase in bandgap upon application of the electric field (E) at 0.05 V Å−1 and 0.1 V Å−1 implies that more energy is required to excite electrons from the valence to the conduction band; hence, light of higher frequency and shorter wavelength would be absorbed. The external electric field (0.1 V Å−1 and 0.5 V Å−1) excites electrons occupying lower energy levels, and the main conduction-band contribution, from Cu-3p states at E = 0.0 V Å−1, shifts to Cu-3p and Cu-2s with a small contribution from Br-2p in the same energy range. This indicates that Cu-2s (the main contribution) and Br-2p play a major role in the bandgap increase; depending on the contributions of electrons with the same or different spin states, crystal-field-induced repulsion or attraction may lead to a widening of the bandgap under the external electric field. However, upon a further increase in the applied electric field (E) to 0.5 V Å−1, all the energy bands start to expand and the bandgap decreases sharply from 1.63 eV to 0.78 eV, marking the onset of the transition from semiconducting to metallic behavior. That this decrease in bandgap is generated by orbital orientations and interactions is confirmed in Fig. 2(a–i): the valence band is formed by the distinct hybridization of Cu-1d and Br-2p orbitals with smaller contributions from the Cu-2s, Cu-3p and Br-1s orbitals, whereas in the conduction band the Cu-3p, Cu-2s and Br-2p orbitals contribute most and Cu-1d and Br-1s contribute less. Throughout this electric field modulation and tuning, the bandgap of CuBr remains direct at the Γ point of the Brillouin zone.
However, on a further increase in the electric field to 0.7 V Å−1 and −0.5 V Å−1, the valence band crosses the Fermi energy level, the bandgap vanishes, and CuBr turns from a semiconductor into a metal. Interestingly, the applied electric field initially increases the bandgap from 0.58 eV to 1.63 eV, then decreases the direct bandgap at the Γ point of the Brillouin zone from 1.63 eV to 0.78 eV, and finally the conduction and valence bands overlap at E = 0.7 V Å−1 and −0.5 V Å−1. Fig. 3(a–i) depicts the effect of the applied electric field (E) on the engineering and modulation of the highest occupied molecular orbital (HOMO, bonding) and lowest unoccupied molecular orbital (LUMO, anti-bonding) energy wave functions. In CuBr, the HOMO orbitals help Cu and Br to form Cu–Br bonds and are naturally lower in energy than the LUMO orbitals, which cleave the bonds between Cu and Br. To reach lower-energy stable states, the Cu electrons interact with the Br electrons to form Cu–Br bonding. The energy difference between the HOMO and LUMO states reflects the ability of electrons to jump from occupied to unoccupied orbitals, and hence the ability of the molecular orbitals to participate in chemical reactions. The energy gap (ΔE) between the HOMO and LUMO orbitals (the difference between the HOMO and LUMO energies, ΔE = E LUMO − E HOMO) represents the chemical activity, with a smaller gap corresponding to stronger chemical activity, as illustrated in Fig. 3(a–i); the values are listed in Table 1. In the absence of an applied electric field (0.00 V Å−1), the HOMO–LUMO energy gap (ΔE) of CuBr is 0.78 eV. The energy gap increases or decreases under the applied electric field (E), demonstrating the controlled, tailored tuning of the chemical reactivity and electronic stability of CuBr. It is evident from Fig. 3(a–i) that increasing the electric field from 0.0 V Å−1 → 0.05 V Å−1 → 0.1 V Å−1 increases the energy gap (ΔE) from 0.78 eV to 0.961 eV, indicating greater electronic stability and lower chemical reactivity. The converse holds upon a further increase in the electric field to 0.5 V Å−1, at which the HOMO–LUMO gap shrinks to −0.643 eV, indicating electronic instability and high chemical reactivity and confirming a major shift in response, in good agreement with the bandgap modulation and the PDOS calculations. The LUMO and HOMO energies and the energy gap between the LUMO and HOMO levels are listed in Table 1, which summarizes the detailed values of the electric-field-induced electronic bandgap and of the engineered/modulated HOMO (bonding) and LUMO (anti-bonding) orbital energy wave function properties of CuBr. We also observed that the overall shapes of the HOMO and LUMO states change (Fig. 3(a–i)), because the electric field (E) shifts the interactions of the various orbitals, such as Cu-1d, Cu-2s, Cu-3p, Br-1s and Br-2p. The interaction between metallic Cu and halide Br in CuBr with and without an applied electric field (E), and the resulting changes in charge transfer and hybridization, are explored through electronic charge-density distribution calculations, which are widely accepted for predicting the charge density.
Figs. 4(a–g) and 5(a–g) illustrate the electric field modulation of the charge density distribution in CuBr, shown for 3D 2 × 2 × 2 extended boundary unit cells and for primitive unit cells (both calculated using the GGA scheme). We evaluated the calculated charge density under various applied electric fields (0.00 V Å−1, 0.05 V Å−1, 0.1 V Å−1, 0.5 V Å−1, −0.05 V Å−1, −0.1 V Å−1, and −0.5 V Å−1) to validate the electronic modulation and tunability. It is evident that charge accumulates and is shared between the Cu metal and halide Br atoms, indicating directional shared bonding upon the application of an electric field (E). The mutual sharing between cations and anions shows covalent character, while the charge transfer reveals an ionic bonding contribution. It is clear from Figs. 4(a–g) and 5(a–g) that below the critical external field (0.7 V Å−1 and −0.5 V Å−1) CuBr displays charge sharing and covalent bonding between the anion–anion (Br–Br) and anion–cation (Cu–Br) pairs. However, the charge density distribution changes upon the application of a higher electric field (0.5 V Å−1 and −0.5 V Å−1), validating the shift in the nature of CuBr from semiconducting to metallic. In the absence of a field (0.0 V Å−1), the charge distribution is concentrated around the Br atoms and is lower around the Cu atoms. We observed an evident shift in the charge distribution under the field; for example, at 0.05 V Å−1 the charge density starts to transfer from the Br to the Cu atoms. The charge distribution also indicates a shift in behavior from semiconducting to metallic at the higher fields of 0.5 V Å−1 and −0.5 V Å−1. To understand the role of the electric field in the bonding patterns, we performed electron localization function (ELF) calculations, which provide insight into the (local) distribution of electrons. Fig. 6(a–i) depicts the front view of the computed ELF with and without an applied electric field. The colored regions around the Cu (red) and Br (purple) atoms represent the degree (high or low) of electron localization density. In the absence of an applied electric field (0.0 V Å−1), high charge density is observed around neighboring Br atoms, which may be attributed to strong σ bonding. The applied electric field (E), however, results in charge transfer or accumulation from the Br atoms onto the Cu atoms. The Cu atoms retain a low charge density even though the electric field causes modulation, accumulation and transfer; this low charge density may be due to the weakening of the σ bonding between the Cu atoms and the neighboring Br atoms. We have investigated the optical properties of CuBr, including the optical reflectivity R(ω), refractive index n(ω), extinction coefficient k(ω), the real ε1(ω) and imaginary ε2(ω) parts of the dielectric function, and the dielectric constant ε(ω), over various energy ranges under the applied electric field using the GGA technique. Fig. 7(a–d) shows the real ε1(ω) and imaginary ε2(ω) parts of the dielectric constant for CuBr with respect to the applied external electric field, above and below the critical external electric field (EC), as a function of energy (eV); the corresponding values are listed in Table 2. It is clear from Fig. 7(a and b) that, in the absence of an applied electric field, ε1(ω) increases as the energy increases.
The magnitude of the peak of the real dielectric function decreases as the applied electric field increases (up to 0.1 V Å−1 and −0.1 V Å−1), suggesting an increase in the band gap. There is a sharp increase in ε1(ω) as the electric field increases from 0.1 V Å−1 to 0.5 V Å−1 because the bandgap decreases, demonstrating the shift from semiconducting to conducting behavior with the applied field. Fig. 7(c and d) illustrates the imaginary part of the dielectric constant ε2(ω) as a function of photon energy (eV) above and below the critical applied electric field for the CuBr system. To explore excitons, we must consider the imaginary part of the dielectric function ε2(ω) because it contains the signature of the exciton energies. ε2(ω) exhibits four major peaks in the energy ranges 3.8–4.1 eV, 4–4.4 eV, 5.7–6.0 eV, and 9.5–10 eV for the CuBr system at 0.0 V Å−1, corresponding to four absorptions. These absorption peaks decrease as the applied external field increases and reach a minimum above the critical field (0.5 V Å−1). It is noteworthy that the absorption peaks shift from lower to higher energy under the applied external field. Furthermore, the sharpness of the absorption peaks increases as the electric field increases, while a few absorption peaks broaden, which could be due to mixed transitions. The major contributions to the optical transitions at low field (0.0 V Å−1, 0.05 V Å−1, −0.05 V Å−1, 0.1 V Å−1 and −0.1 V Å−1) come from the Cu-d and Br-p states; at higher field (0.5 V Å−1, −0.5 V Å−1, 0.7 V Å−1 and −0.7 V Å−1), the major contribution shifts to the s–p–d states. The shift of the absorption edge towards higher energies under the applied external electric field (at 0.5 V Å−1, 0.7 V Å−1 and −0.5 V Å−1 and above) illustrates the reduction in bandgap, as shown in Fig. 7(a–d). The optical absorption edge occurs at the Γ point of the Brillouin zone (BZ) between the conduction and valence bands, marking the threshold for direct optical transitions. Interestingly, ε2(ω) decreases as the positive applied electric field increases and is likewise modulated under the negative applied electric field. This is because CuBr becomes polarized under the applied electric field, and the field produced by this polarization partially cancels the external field. These results indicate that the dielectric response of CuBr to an applied electric field (E) confirms its potential use in controlled optoelectronics. The maximum energy loss functions above and below the critical field, including negative fields, are shown in Fig. 7(e and f); they are confined to the energy regions where the electrons are not restricted to their lattice sites and oscillate upon light exposure. The minimum of the maximum energy loss function (ELS) corresponds to a higher value of the imaginary part of the dielectric function, depending on the energy band gap, which varies with the applied external electric field. For example, the maximum energy loss function increases as the band gap decreases under a higher field, consistent with the transition from semiconducting to metallic behavior. The complex refractive index (ñ) is a crucial optical property of a material, given by ñ(ω) = n(ω) + ik(ω), 44,45 where n represents the real refractive index and k represents the attenuation index or extinction coefficient.
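The next paragraph converts ε1(ω) and ε2(ω) into n(ω), k(ω) and R(ω). For reference, these are the standard textbook relations between the dielectric function and the optical constants, shown here as a short sketch; the input arrays are placeholders for the computed spectra, not values from this work.

```python
import numpy as np

def optical_constants(eps1, eps2):
    """Standard relations linking (eps1, eps2) to n, k and normal-incidence R.

    eps1, eps2: arrays of the real and imaginary parts of the dielectric
    function on a common photon-energy grid.
    """
    mod = np.sqrt(eps1**2 + eps2**2)                     # |eps(w)|
    n = np.sqrt((mod + eps1) / 2.0)                      # refractive index n(w)
    k = np.sqrt((mod - eps1) / 2.0)                      # extinction coefficient k(w)
    r = ((n - 1.0)**2 + k**2) / ((n + 1.0)**2 + k**2)    # reflectivity R(w)
    return n, k, r

# Illustrative single point with made-up values eps1 = 5.0, eps2 = 2.0.
n, k, r = optical_constants(np.array([5.0]), np.array([2.0]))
```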
We evaluate both n and k for CuBr from the dielectric function using the relation 44,45 n(ω) = (1/√2){[ε1(ω)2 + ε2(ω)2]1/2 + ε1(ω)}1/2, with k(ω) obtained from the same expression with +ε1(ω) replaced by −ε1(ω). The calculated values of the refractive index n(ω) and extinction coefficient k(ω) as a function of photon energy (eV), above and below the critical applied electric field, are shown in Fig. 8(a–d) and listed in Table 3 for the CuBr system. The refractive index n(ω) increases as the band gap decreases under the applied external electric field, reaching its maximum at the critical field (3.8 at 0.7 V Å−1 and 5.8 at −0.7 V Å−1) and following the trend of the real part of the dielectric function ε1(ω), which confirms the semiconducting-to-metallic band transition. Fig. 8(c) demonstrates that the optical reflectivity spectra as a function of the applied electric field at given photon energies provide basic information about the various critical transition points. We extracted the electric-field-tailored, manipulated and controlled optical reflectivity of CuBr below, above and under negative applied electric fields using the fundamental relation 46 R(ω) = [(n − 1)2 + k2]/[(n + 1)2 + k2]. The complex refractive index describes the optical response to electromagnetic waves or light in two major parts, the refractive index n(ω) and the extinction coefficient k(ω), both of which are energy- and frequency-dependent parameters. The optical reflectivity R(ω) of the CuBr system with and without an applied electric field is calculated and presented in Fig. 8(c). We observed that the optical reflectivity decreases and shifts toward lower values as the applied electric field increases; such a shift and decrease in R(ω) at higher applied field (E) confirm the transformation of CuBr from semiconducting to metallic. The maximum optical reflectivity R(ω) values at zero photon energy (0 eV) are associated with the highest absorption energy. (Table 2 lists the detailed individual and average values of the electric-field-induced real part ε1(ω), imaginary part ε2(ω) and dielectric constant ε(ω) of CuBr: for each electric field (V Å−1), the first to fifth peaks, the average imaginary and real parts, and the dielectric constant.) Conclusions In summary, we report for the first time the effect of an external applied electric field (E) on electronic bandgap engineering and modulation, and the resulting changes and shifts in the structural, electronic and optoelectronic properties of CuBr, obtained via a plane-wave basis set and pseudopotentials (pp) using the generalized gradient approximation (GGA) based on density functional theory (DFT). We observed that the applied external electric field significantly increases the bandgap, for example from 0.58 eV at 0.0 V Å−1 to 1.63 eV at 0.1 V Å−1 (an increase of about 280%). This modulation of the electronic bandgap via the applied electric field (E) results in a behavioral shift from semiconducting to metallic. The partial density of states (PDOS), charge density and electron localization function (ELF) calculations reveal that the applied electric field (E) modulates the orbital contributions, with the main contributions from the Cu-1d, Br-2p, Cu-2s, Cu-3p and Br-1s orbitals in the valence band and the Cu-3p, Cu-2s, Br-2p, Cu-1d and Br-1s orbitals in the conduction band, confirming the controlled optoelectronic properties.
Additionally, we found that the chemical reactivity and electronic stability of CuBr may be controlled by tuning and tailoring the shifts in the HOMO and LUMO states, i.e., by increasing or decreasing their gap via an applied electric field (E). Increasing the electric field from 0.0 V Å−1 → 0.05 V Å−1 → 0.1 V Å−1 increases the energy gap, leading to electronic stability and lower chemical reactivity. The converse holds upon a further increase in the electric field to 0.5 V Å−1, where the gap shrinks to 0.78 eV, leading to electronic instability and high chemical reactivity and indicating a major shift in response; this is further confirmed by the modulation observed in the bandgap and PDOS calculations. The optical properties, including the optical reflectivity R(ω), refractive index n(ω), extinction coefficient k(ω), the real ε1(ω) and imaginary ε2(ω) parts of the dielectric function, and the dielectric constant ε(ω) over various energy ranges (in eV) under the applied electric field, confirm the controlled optoelectronic response of CuBr. This work offers valuable insights and an in-depth study of the fascinating photophysical properties of CuBr films under an applied electric field, and it opens prospects for their utilization in various applications. Conflicts of interest I hereby confirm that the work reported in this manuscript is novel and involves no conflict of interest. Data availability The raw/processed data required to reproduce these findings cannot be shared at this time, as the data also form part of an ongoing study; the data may, however, be provided on request. (Table 3 lists the detailed individual and average values of the electric-field-induced refractive index n(ω), extinction coefficient k(ω) and optical reflectivity R(ω) of CuBr: for each electric field E (V Å−1), the first to fifth peaks of k(ω) and the averages kavg(ω), navg(ω) and Ravg(ω).)
6,774.6
2023-03-01T00:00:00.000
[ "Physics" ]
Topological surface states on Bi(111) based on empirical tight-binding calculations The topological order of single-crystal Bi and its surface states on the (111) surface are studied in detail based on empirical tight-binding (TB) calculations. New TB parameters are presented that are used to calculate the surface states of semi-infinite single-crystal Bi(111), which agree with the experimental angle-resolved photoelectron spectroscopy results. The influence of the crystal lattice distortion is surveyed and a topological phase transition is found that is driven by in-plane expansion. In contrast with the semi-infinite system, the surface-state dispersions on finite-thickness slabs are non-trivial irrespective of the bulk topological order. The role of the interaction between the top and bottom surfaces in the slab is systematically studied, and it is revealed that a very thick slab is required to properly obtain the bulk topological order of Bi from the (111) surface state: above 150 biatomic layers in this case. I. INTRODUCTION Topological materials classified by unconventional parity eigenvalues of three-dimensional (bulk) bands is one of the topics of high interest in solid state physics in this decade.The insulators classified in the non-trivial (topological) group are called topological insulators (TIs), and hold metallic and spin-polarized surface states that continuously disperse between the bulk valence band maximum and the conduction band minimum [1][2][3].Because these topological surface states are spin-polarized and robust against any perturbations that do not change the parities of the entire bulk band structure, these states are regarded as a promising element for future spintronic devices [4]. In the early stages of TI research, first-principles calculations based on density functional theory (DFT) achieved great success in predicting the topology of many materials and in proposing new TI candidates [5][6][7][8][9].Most of these predictions were soon proven experimentally and there was excellent agreement between the predicted topological surface states and the observations [10][11][12].However, despite these great successes, there remains an open question on the topological order of the very simple material of single-crystal Bi. Bismuth is widely used as a component of TIs, such as Bi 2 Se 3 [5,10] and TlBiSe 2 [7][8][9]12], because it is the heaviest non-radioactive element that also possesses a strong spin-orbit interaction (SOI).This is critical because the SOI plays an important role to realize the unconventional parity eigenvalues.The topological order of single-crystal Bi has also been extensively studied together with its relative alloy Bi 1−x Sb x , which is the first material experimentally detected as three-dimensional TI [13,14].According to the DFT [1,13] as well as empirical tight-binding (TB) [15,16] calculations, the topological order of Bi is trivial, and alloying with Sb causes the topological phase transition to a TI to occur at x ≈ 0.04 .However, unlike most of the other cases, this prediction is not consistent with the experimental results. 
Figures 1(a) and 1(b) show the experimental surface-band dispersion of Bi(111) obtained by angle-resolved photoelectron spectroscopy (ARPES) [17].In these dispersions, the two spin-split surface bands S1 and S2 merge into the same projected bulk valence bands (BVBs) at Γ.However, at M , these spin-split surface bands separately merge into two different projected bulk bands, with S1 merging into the bulk conduction band (BCB), while S2 merges into the BVB.The overall dispersion is also schematically depicted in Fig. 1 on this surface-state dispersion, S1 continuously connects the projected BVB and BCB between two time-reversal-invariant momenta (TRIM), which is the behavior that is expected for topologically non-trivial materials.Indeed, a recent ARPES report on Bi (1−x) Sb x (x ∼ 0.1) has demonstrated an almost identical surface-state dispersion [18], with the sole difference being that the projected BCB is above the Fermi level at M in Bi (1−x) Sb x .It should be noted that the topological classification of the band structure is not only valid for semiconductors, but also for semimetals whose finite projected bulk band gap opens in any k in the surface plane.Interestingly, the surface-band dispersions observed by ARPES are qualitatively the same in various experiments on single-crystal Bi(111) [19] as well as thin films possessing a few tens of biatomic layers (BL) grown on Si(111) [18,20].Specifically, all the observations show that S1 merges into the BCB while S2 merges into the BVB at M .The surface-state dispersions simulated by theoretical calculations depend upon the computational methods, however.The DFT calculations based on the local density approximation (LDA) with SOI, using slab geometry to mimic the crystal surface, obtain the surface bands depicted in Fig. 1(e) [20,21].In this case, both S1 and S2 merge into the BVB at M , and hence the surface band dispersion is trivial.This is consistent with the theoretical prediction mentioned above, but disagrees with the ARPES experiments.The other major method used to calculate the surface state is the so-called transfer-matrix (TM) method with the empirical TB model for bulk electronic states [16,22].Based on this method, however, the calculated surface state exhibits an additional crossing between Γ and M, as shown in Fig. 1(f).This unexpected crossing can be understood as the influence arising from an incorrect mirror Chern number via the empirical TB parameters [16].Further, another TM calculation based on a different set of TB parameters [23] has resulted in surface bands without this unrealistic surface-state crossing, and is shown in Fig. 1(g).In this case, the Dirac point lies in the projected bulk band gap at M .The result in Fig. 1(g) is also a topologically trivial surface-band dispersion because no surface band continuously connects the BVB and BCB. (d) . Based The main reason for the difficulty in calculating the electronic structure of single-crystal Bi is that its bandgap is very small.The bandgap for single-crystal Bi is ∼15 meV at L in the bulk Brillouin zone, corresponding to M on the surface Brillouin zone as shown in Fig. 
1(c). One of the well-known weaknesses of DFT is its inability to estimate the size of the bandgap accurately. Indeed, LDA overestimates the size of the bandgap at L [21]. A recent study based on the quasiparticle self-consistent GW method including SOI has improved the estimate of the bandgap [24]. Even with such state-of-the-art computational methods, however, the topological order of single-crystal Bi is still calculated to be trivial, which disagrees with the experimental results. In the TB calculation, the size of the bandgap agrees with the experiments because the TB parameters are empirically tuned to reproduce them, although the topological order is again calculated to be trivial. The tiny bulk bandgap at L leads to a "fragile" topological phase of single-crystal Bi, because various perturbations, such as strain or ultrathin film thickness, can invert the energetic order of the bulk bands at L. Indeed, a DFT calculation taking the inter-surface interaction in the slab into account showed a non-trivial surface-band dispersion, as shown in Fig. 1(d) [20]. Recently, a TB calculation using the slab model also showed surface states that disperse continuously from the BVB at Γ to the BCB at M [25]. Since the bulk topological order based on DFT and on TB with the known parameter sets is trivial, this result suggests a topological phase transition driven by the ultrathin film thickness of Bi. However, the magnitude of this finite-size effect, in other words, how thick the slab should be in order to calculate surface states of Bi that obey the bulk topological order, has not yet been studied. Structural strain has also been claimed as a source of a topological phase transition: it is claimed that in-plane structural strain in Bi(111) ultrathin films causes a topological phase transition from the trivial to the non-trivial phase [23,26], and a recent theoretical study supports these results [24]. However, there is still a discontinuity with respect to the bulk case without strain: based on the ARPES experiments, Bi without strain should be topologically non-trivial, and hence it is not clear where the topological phase transition takes place with lattice distortion. In this work, we present a new set of TB parameters for calculating the surface states of single-crystal Bi(111) that agree with the experimental ARPES results. Two sets of parameters were obtained that generate topologically trivial and non-trivial surface states for Bi, where neither exhibits any artificial crossing such as that generated previously (Fig. 1(e)) [15,16]. Based on the new TB parameters, the surface-state calculations were performed by means of both the TM method and slab geometry. In both cases, the calculated surface states agree well with previous experimental results, except for in the proximity of M.
Around M , the surface-state dispersion changes depending upon the topological order of the bulk bands, where only the topologically non-trivial case agrees with the experiments.The influence of crystal lattice distortion was surveyed and a topological phase transition was found that was driven by in-plane expansion for the non-trivial bulk bands, which was opposite to the distortion suggested in a previous report [23].In contrast with the semi-infinite crystal, the slab calculation generated non-trivial surface-state dispersions irrespective of the bulk topological order, as has been the case for previous DFT calculations using slab geometries.The role of the interaction between the top and bottom surfaces in the slab was systematically studied, and it was found that a very thick slab is required to obtain the bulk topological order of Bi from the (111) surface states properly.Specifically, the slab must be greater than 150 BL in our model. A. Tight-binding parameters for bulk states The main framework used to calculate the bulk electronic structure was the same as that in Ref. 15 , and is briefly explained herein.Single-crystal Bi has an A17 rhombohedral lattice, but is also characterized by hexagonal lattice parameters, a and c, together with an additional parameter µ (see Fig. 2).The primitive (rhombohedral) unit cell contains two atoms and the relative position of the second atom is (0, 0, 2µc), where µ = 0.2341.The hopping parameters of the sp 3 orbitals between first-, second-and third-nearest-neighbor atoms were taken, and the SOI was included via the spin-orbit coupling parameter λ = 1.5 eV.The resulting (16×16) matrix is shown in the appendix of Ref. 15 . Next, the TB parameters were modified so that the calculated surface-state dispersion between Γ and M could be reproduced without any artificial crossing such as was obtained in Ref. 16 .In addition, the new parameters were tuned to maintain the energies of the electron/hole pockets and the size of the bandgap at L at nearly the same value as the original parameter, which agrees with the experimental values.II, the major difference between TBP-1 and TBP-2 is the difference of topological order: trivial for TBP-1 but non-trivial for TBP-2. B. Transfer-matrix (TM) method The TM method is used to calculate the surface electronic states on a semi-infinite crystal from the bulk Hamiltonian and the transfer matrix T (k , E), as proposed in Ref. 22 .In this TM, k is the in-plane wavevector and E is the binding energy .The procedure reported in a previous paper [16] was followed, as described below, but the bulk TB parameters were and the Z 2 topological invariants (ν 0 ; ν 1 ν 2 ν 3 ) calculated with the tight-binding parameters shown in Table I. Because the Bi crystal can be regarded as a stack of BLs, as depicted in Fig. 2(b), the bulk electronic states of a semi-infinite Bi crystal can be written as where φ n,a is a basis of the states in the BL plane localized on the a = 1, 2 monolayer of the nth BL.In addition, each φ n,a has eight components associated with the eight atomic orbitals.The transfer matrix, T (k , E), is given by Equations C. 
Finite slab calculation Slab geometry is widely used for DFT calculations of surface electronic structures, such as used in Refs.20, 21, 23, 25 .This model can mimic the surface without breaking the three-dimensional periodicity and can therefore be easily applied toward various surface systems.However, sometimes this model generates artificial states owing to the interaction between the top and bottom surfaces.In this work, we followed the method reported in the recent paper [25], as is described below.Varying from the previous work, however, we used a new set of bulk TB parameters and assumed no surface hopping term. The total Hamiltonian of the slab H is represented as 12 H 11 H (1) 12 where H 11 is the hopping term for inter-monatomic-layer hopping (i.e., in the same monatomic layer) and H (1) 12 (H 21 ) are the intra-BL (inter-BL) hopping terms.The H 12 , H 21 , and H 11 terms are given in the bulk TB Hamiltonian [15] as the first-, second-and thirdnearest-neighbor hopping terms, respectively.The size of the matrix is therefore 16n×16n, where n is the number of the BLs in the slab. In the following, the obtained states were plotted with Gaussian broadening (F W HM = 5 meV) to mimic the ARPES intensity plots produced with a typical instrumental energy resolution. III. RESULTS AND DISCUSSION A. Surface states generated by TM method The same as (a) and (b), respectively, but calculated using the new TB parameters generating topologically trivial bulk bands (TBP-1 in Table I).(e,f) The same as (a) and (b), respectively, but calculated using the new TB parameters generating topologically non-trivial bulk bands (TBP-2 in Table I). band dispersion qualitatively agrees with the ARPES results except for the k region around M , as shown in Figs.3(c) and 3(d).The two branches of the surface bands degenerate with each other at M , and this surface-state dispersion is therefore topologically trivial, as is expected from the topological order of bulk bands.This surface-state dispersion agrees with that reported in a previous paper [23].Around Γ, the TB parameters generating nontrivial bulk bands (TBP-2 in Table I) does not significantly alter the surface-state dispersion from the trivial dispersion, as shown in Fig. 3(e).Around M ,however, the surface-state dispersion is different from those calculated with the other TB parameters.Specifically, the upper branch merges into the BCB while the lower merges into the BVB, showing a good agreement with the experimental results [17][18][19][20] (see Fig. 3(f)). These good agreements of surface-state dispersions with the previous experimental and theoretical results except for the proximity of M implies that the TB calculation can no longer be the evidence of topological order of single-crystal Bi.In order to judge such balancing two possibilities, one requires the experimental data.Based on the ARPES results, it should be topologically non-trivial, while ARPES does not provide any direct information about the parity eigenvalues of bulk bands at L. The other, bulk-sensitive and accurate experimental method would be helpful to make a firm conclution on this controversial issue, topological order of single-crystal Bi. B. Topological phase transition driven by lattice distortion In order to examine the topological phase transition driven by structural distortion based on Refs.23, 24, 26, we surveyed the topological order and surface-state dispersion around M using the TM calculation and the new TB parameters. 
Figure 4(a) shows the bulk band evolution at L when an in-plane lattice distortion is introduced into the calculations using the two TB parameter sets in Table I, TBP-1 and TBP-2. Irrespective of the topological order generated at zero lattice distortion, a topological phase transition occurs in both cases. The only difference is whether the topological phase transition occurs under lattice strain (TBP-1, trivial at zero lattice distortion) or lattice expansion (TBP-2, non-trivial at zero lattice distortion). Figures 4(b-e) show the surface-state dispersions around M obtained using the TM calculation, as in Fig. 3. Note that the M position of each plot changes according to the in-plane lattice constant. The plots in Figs. 4(b) and 4(d) show that the surface-state dispersion is topologically non-trivial under lattice strain (−5 %) for both TB parameter sets. In addition, with an in-plane lattice expansion (+5 %), the surface states for both TB parameter sets exhibit a trivial dispersion, as seen in Figs. 4(c) and 4(e), forming a Kramers-degeneracy point at M in the projected bulk bandgap. It should be noted that the non-trivial surface-state dispersion observed by ARPES experiments in the presence of in-plane lattice strain [23,26] is not in conflict with the results calculated from either of the new TB parameter sets, TBP-1 and TBP-2. Based on these results, we propose tracing the bulk bandgap of single-crystal Bi as a function of in-plane lattice distortion in order to determine the topological order of Bi. If the bandgap closes under in-plane lattice expansion (strain), it would be smoking-gun evidence of non-trivial (trivial) topological order. C. Surface states on a finite slab Figure 5 shows the electronic structure calculated with the slab geometry, where the slab thickness is 20 BL and the dashed lines represent the edge of the projected bulk bands. The surface-state bands dispersing in the projected bulk bandgap are obtained together with the discrete quantum well (QW) states corresponding to the bulk bands in the projected bulk-band region. Around Γ, the surface-state dispersions are almost identical to those calculated by the TM method (see Figs. 3(c) and 3(e)). Note that no additional surface hopping terms are needed to obtain these surface bands, in contrast with a previous study [25]. The fact that no additional hopping terms are needed is possibly owing to the different TB parameter set used in this study; however, the surface-state dispersions are quite different around M. Even when using the TB parameters that generate a trivial topology of the bulk bands (TBP-1 in Table I), the surface-state dispersion obtained with the slab geometry suggests a non-trivial topological order wherein the upper branch merges into the BCB while the lower merges into the BVB. Such behavior is the same as that reported in previous studies [20,25], where it was explained as the influence of the interaction between the top and bottom surfaces owing to the finite slab thickness. Even with the trivial TB parameter set (TBP-1 in Table I), the inter-surface interaction would re-invert the bulk bands at L, where the bulk bandgap is smallest, and thus cause the topological phase transition. Similar topological phase transitions driven by film thickness have been observed in the QWs of HgTe [27,28].
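The slab construction described above (16 orbitals per biatomic layer, hence a 16n × 16n matrix for an n-BL slab) lends itself to a simple block-tridiagonal assembly. The sketch below is a generic illustration with one intra-BL block and one inter-BL coupling block at fixed k∥; the actual blocks would come from the bulk TB Hamiltonian of Ref. 15, and the random matrices used here are placeholders only.

```python
import numpy as np

def slab_hamiltonian(h_bl, v_inter, n_bl):
    """Assemble a block-tridiagonal slab Hamiltonian from bulk TB blocks.

    h_bl:    (16, 16) Hermitian block for one biatomic layer (BL) at fixed k||.
    v_inter: (16, 16) coupling block between adjacent BLs at the same k||.
    n_bl:    number of BLs in the slab; the result is (16*n_bl, 16*n_bl).
    """
    dim = h_bl.shape[0]
    h = np.zeros((dim * n_bl, dim * n_bl), dtype=complex)
    for i in range(n_bl):
        h[i*dim:(i+1)*dim, i*dim:(i+1)*dim] = h_bl
        if i + 1 < n_bl:
            h[i*dim:(i+1)*dim, (i+1)*dim:(i+2)*dim] = v_inter
            h[(i+1)*dim:(i+2)*dim, i*dim:(i+1)*dim] = v_inter.conj().T
    return h

# Toy usage with random Hermitian blocks standing in for the real TB blocks.
rng = np.random.default_rng(0)
a = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))
h_bl = (a + a.conj().T) / 2
v = 0.1 * (rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16)))
bands = np.linalg.eigvalsh(slab_hamiltonian(h_bl, v, n_bl=20))  # 20-BL slab at one k||
```

Surface character can then be judged, as in the figure intensities described above, from the eigenvector weight residing in the first (topmost-BL) block.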
To explore the finite-thickness effect in more detail, we calculated the electronic states around M at different thicknesses.Figures 6(a) and 6(b) plot the electronic structure around M in a 200 BL slab calculated with the TB parameters TBP-1 and TBP-2, respectively, which are given in Talbe I.The number of the QW states in these plots are much larger than those of the 20-BL slab (cf. Figure 5), reflecting the increased slab thickness.In the proximity of M , however, one can find the difference between Figs. 6(a) and 6(b).The two surface-state branches dispersing out of the projected bulk bands in Fig. 6(b) enter the projected bulk bands and become QW states.However, in Fig. 6(a), these branches do not enter the projected bulk bands but remain in the bulk bandgap at M , as is the case for the semi-infinite TM calculation (cf.Fig. 3(d)).Figure 6(c) plots the energy evolution of these two surface states at M , which are connected to the surface states away from M , together with the energy positions of the projected BVB and BCB.As shown in Fig. 6(c), the surface states calculated with TBP-1 (trivial bulk bands) disperse out of the projected bulk bands at a slab thickness greater than ∼150 BL.In contrast, with TBP-2 (topologically non-trivial bulk bands), the surface states never appear out of the projected bulk bands at M .This result suggests that a much thicker slab is required to accurately measure the topological order of single-crystal Bi than has been used in previous studies [23,26].With a slab that is insufficiently thick, the surface-state dispersion always exhibits the topologically non-trivial behavior irrespective of the topological order of the bulk Bi. Recently, two groups have reported ARPES experimental results of the Bi(111) surface states using a thickness above 100 BL [29,30].In both cases, the surface-state bands disperse as that seen in Fig. 6(b), and hence these experimental results indicate the nontrivial topological order of Bi, which is in good agreement with the bulk single-crystal case [17]. IV. SUMMARY In summary, new TB parameters are presented with which to calculate the surface states of single-crystal Bi(111) that agree with the experimental ARPES results.Two sets of TB parameters were obtained that make Bi topologically trivial or non-trivial, wherein neither parameter set exhibits any artificial crossings such as those generated by previous TB parameters (Fig. 
1(e)) [15,16].Based on the new TB parameters, surface-state calculations were performed using both the transfer-matrix method and slab geometry.In both cases, the calculated surface states agree well with the previous experimental results, except for in the proximity of M .Around M , the surface-state dispersion changes depending upon the topological order of the bulk bands, wherein only the topologically non-trivial case agrees with the experiments.We surveyed the influence of crystal lattice distortions and found a topological phase transition driven by in-plane expansion for the non-trivial bulk bands, which is opposite to the distortion found in a previous report [23].In contrast with a semi-infinite crystal, the slab calculation generated non-trivial surface-state dispersions irrespective of the bulk topological order, as is the case suggested in previous DFT calculations using slab geometries.The role played by the interaction between the top and bottom surfaces in the slab was systematically studied and it was revealed that a very thick slab (i.e., greater than 150 BL in our case) is required to accurately obtain the bulk topological order of Bi from the (111) surface states.These detailed calculations of the electronic structure and topological order of the simple, well-known material of single-crystal Bi will be helpful in the further research of topological materials. FIG. 1 : FIG. 1: (a,b) Angle-resolved photoelectron spectroscopy intensity plots along Γ-M taken at 7.5 K. Dashed lines indicate the edge of projected bulk bands.These figures are reproduced from Ref. 17. (c) Schematic drawing of the three-dimensional Brillouin zone (solid line) of the Bi single crystal and its projection onto the (111) surface Brillouin zone (dashed line).(d-g) Schematic drawings of the surface-state dispersion and projected bulk bands along Γ-M , obtained by these various methods: (d,e) Based on local density approximation with slab geometry including (c) and excluding (d) the interaction between the top and bottom surface.(f) Based on empirical tightbinding (TB) parameters and transfer-matrix method.(g) The same as (e) but using different TB parameters. FIG. 2: Crystal structure of Bi.The lower contrast circles (light blue) represent the second atoms in the primitive unit cell (see text) in the (a) top and (b) side view of Bi(111). (3.1) to(3.3) in Ref.16 , together with the appendix in Ref.15 .Any bulk states are the eigenstates of the 16×16 TM with unimodular eigenvalues.For each E in the projected bulk bandgap, T (k , E) has eight eigenvalues with moduli larger than 1, which correspond to the electronic states whose amplitude decays in the −z direction.With the boundary condition φ 0,1 = 0, the surface states should also decay outside the crystal.These surface states are determined by forming an 8×8 matrix M(k , E) composed of the eight components of φ 0,1 for each of the eight decaying states.The detailed procedure to generate M(k , E) is shown in Ref.22.Finally, the surface-state band dispersion (E(k )) is determined by solving det[M(k , E)] =0. Figure 3 FIG. 3 : Figure 3 plots the surface states and the edge of the projected bulk bands generated using the TM method with three different sets of bulk TB parameters.To plot the surface-state dispersion, we plotted ln(1/det[M(k , E)]), so that the locations (k , E) where surface states lie possess much smaller intensity values (i.e., darker) than the others .The surface bands calculated with the TB parameters given in Ref. 
15 (Figs.3(a) and 3(b)) agree with those given in the previous paper exhibiting an artificial crossing of the surface bands between Γ and M owing to an incorrect mirror Chern number [16].This artificial surface-band crossing disappears when the new TB parameters are used.Using the parameters that generate topologically trivial bulk bands (TBP-1 in TableI), the surface- FIG. 4 : FIG. 4: (a) Band evolution at L. Solid (dashed) lines are the energies of bulk bands just above and below the bandgap at L calculated using the tight-binding (TB) parameter set TBP-1 (TBP-2) in Table I that give topologically non-trivial (trivial) topological order .Vertical lines indicate the position where the topological phase transition occurs, corresponding with each TB parameter set.(b-e) Surface-state band dispersions calculated by the transfer-matrix method with in-plane lattice distortions of (b,d) −5 % lattice strain and (c,e) +5 % lattice expansion.The dispersions are obtained using the TB parameter set (b,c) TBP-1 and (d,e) TBP-2, given in TableI. FIG. 5 : FIG. 5: Band dispersions calculated for a 20 biatomic-layer (BL) slab using the new tight-binding parameters that generate the (a) trivial (TBP-1 in TableI) and (b) non-trivial (TBP-2 in TableI) bulk topological order.Dashed lines represent the edge of the projected bulk bands.Intensities are obtained from the eigenfunction amplitude localized in the topmost surface BL. FIG. 6 : FIG. 6: The electronic structure around M in a 200 biatomic-layer Bi slab using the new tightbinding parameters that give (a) trivial (TBP-1 in Table I) and (b) non-trivial (TBP-2 in Table I) bulk topological order (cf.Figs.5(a) and 5(b), respectively).(c) Energy evolutions of the quantum-well-like states at M connected to the surface-state bands in the projected bulk band gap.Thick horizontal lines indicate the energies of the bottom of the projected bulk conduction band and the top of the projected bulk valence band. Table I represents two sets of the new TB parameters, labeled as TBP-1 and TBP-2 in the following, obtained by the TABLE I : The new sets of tight-binding (TB) parameters for single-crystal Bi tuned so that the they could reproduce the surface states which agree with the experimental results.The definitions of each parameter are the same as in Ref.15. methods above.TableIIrepresents the parity invariants at each TRIM in the Brillouin zone of the bulk Bi crystal, calculated with the parameters given in TableI.As shown the ν 0 values in Table
6,015.8
2016-08-08T00:00:00.000
[ "Physics", "Materials Science" ]
Computational fragment-based drug design of potential Glo-I inhibitors Abstract In this study, a fragment-based drug design approach, particularly de novo drug design, was implemented utilising three different crystal structures in order to discover new privileged scaffolds against glyoxalase-I enzyme as anticancer agents. The fragments were evoluted to indicate potential inhibitors with high receptor affinities. The resulting compounds were served as a benchmark for choosing similar compounds from the ASINEX® database by applying different computational ligand-based drug design techniques. Afterwards, the selection of potential hits was further aided by various structure-based approaches. Then, 14 compounds were purchased, and tested in vitro against Glo-I enzyme. Of the tested 14 hits, the biological screening results showed humble activities where the percentage of Glo-I inhibition ranged from 0–18.70 %. Compound 19 and compound 28, whose percentage of inhibitions are 18.70 and 15.80%, respectively, can be considered as hits that need further optimisation in order to be converted into lead-like compounds. Introduction Cancer is a renegade growth system caused by the accumulation of genetic or epigenetic alterations in human body cells 1,2 .Despite the fierce battle being waged against cancer, it is still a significant health issue throughout the world 2,3 .According to World Health Organisation (WHO) statistics, cancer is the leading cause of death worldwide, accounting for 8.2 million deaths in 2012, and its prevalence is expected to grow to 13.0 million by 2030.Due to limited selectivity and lack of sufficient efficiency, the classical chemotherapeutic drugs have a substantial number of undesirable effects 1 .As a result, tremendous efforts have been conducted to discover the effective cure for cancer therapy, notably through identifying new reliable targets 4 .Recently, the glyoxalase system has drawn the attention of researchers as a potential target for the development of novel anticancer drugs 5 . The glyoxalase system is an enzyme-based detoxification system, consisting of two GSH-dependent enzymes; Glyoxalase-I and Glyoxalase-II that were discovered by Dakin and Dudley in 19136, 7 .It plays a crucial function in the cellular metabolic pathways by converting the highly reactive, cytotoxic methylglyoxal into non-toxic lactic acid 7,8 .The methylglyoxal (MG) is a byproduct of non-enzymatic reactions with glyceraldehyde 3-phosphate (G3P) or dihydroxyacetone phosphate (DHAP) 9,10 .It is also a byproduct resulting from the metabolism of proteins, fatty acids, and glucose 9 .MG contributes to an increase in the oxidative stress due to inhibition of glutathione reductase 9 .Moreover, its cytotoxicity is attributed to its capacity to form proteins, lipids, and nucleic acids adducts, known as advanced glycation end-products (AGEs) 9,11 .These adducts are linked to a wide range of diseases, such as cardiovascular diseases 9 , Alzheimer's disease 12 , depression 13 , anxiety 13 , aging 14 , diabetes 15 , obesity 9 , and cancer 9, 16,17 .Hemithioacetal, which is produced non-enzymatically when methylglyoxal reacts with glutathione (GSH), is transformed by Glo-I enzyme into S, D-lactoylglutathione (SLG), which is then hydrolysed by Glo-II enzyme to produce D-lactic acid regenerating glutathione (Figure 1) 7 . 
Malignant cells are known for their high metabolic rate and high glycolytic activity as observed by Warburgin in the 1920s19.The primary defence strategy to counteract toxic effects, implemented through increased glycolytic activity, is activation of detoxification systems 18 .Numerous cancer forms, including lung 19 , breast 11 , bladder 20 , and prostate 21 exhibit overexpression of numerous detoxifying enzyme systems, particularly Glo-I. Moreover, Glo-I overexpression has also been connected to the multidrug resistance of many cancers, including prostate carcinoma 22 , lung carcinoma 19 , monocytic leukemia, 23 and erythro leukaemia 23 .Consequently, it attracted particular interest in drug discovery as a potential target for combating cancer 23 .Therefore, utilising the discovery of Glo-I enzyme inhibitors as a therapeutic strategy may be beneficial in combating cancer-related pathological processes and reversing resistance to anticancer drugs caused by apoptosis.The development of approved Glo-I enzyme inhibitors over the past few decades has resulted in a variety of strengths, ranging from a high micromolar value to a low nanomolar IC 50 or ki value 24 .These inhibitors are divided into two fundamental categories: GSH-based and non-GSH-based inhibitors. Glo-I is a homodimer, zinc metalloenzyme in which substrates and most inhibitors bind to the active site.Each monomer of Glo-I is made of 183 amino acids with molecular weight roughly 42 kDa.Structurally, Glo-I active site is located at the intersection of the two polypeptide chains and consists of three main regions for drug design: hydrophobic pocket, zinc ion region, and positive charge mouth.The Zn ion region, which coordinates with the amino acids Glu99, Gln33, Glu172, and His126 as well as one or two water molecules, mediates the catalytic transformation of Hemithioacetal.The positively charged mouth contains amino acids Arg37, Arg122, Lys150, and Lys156 making the entrance to the Glo-I active site very polar in nature.The water-impermeable hydrophobic pocket, which can hold up to two aromatic rings, is present deep inside the active site of Glo-I and contains amino acids Phe71, Phe62, Leu92, Leu69, Leu160, and Met179 (Figure 2) 24 . 
In the present study, an extensive fragment-based drug design (FBDD) approach was utilised to unravel potential Glo-I inhibitors with promising activity as potential anticancer agents using three different 3D crystal structures of the human Glo-I enzyme available in the Protein Data Bank (PDB).This discipline deciphers the binding pockets inside the active site by measuring their potential energy to bind certain areas, then the top fragments observed bound in its target binding site are identified as anchor hits, which can then subsequently be advanced into a lead compound using a variety of optimisation approaches, which is referred to as fragment evolution.Various DS methods such as multiple copy simultaneous search (MCSS), CDocker, LibDock, ligand efficiency, and binding energy calculation were used to screen the designed ASINEXV R fragment library in order to choose the core fragments that will eventually be evoluted into lead-like compounds.The evolved compounds were used as a guide to select similar compounds from the ASINEX V R commercial database.The similarity was based on both the 2D, and the 3D descriptors followed by performing extensive docking and energy binding calculations.Finally, the resulting compounds were purchased and evaluated biologically for their inhibitory activities against human Glo-I enzyme by in vitro assay. Preparation of the Glyoxalase-I enzyme Seven 3D solved crystal structures of the human Glo-I enzyme have been deposited in the PDB.The selection of appropriate Glo-I proteins was based on the following criteria: the nature of the co-crystallized ligand in terms of chemical structure, the resolution of the crystal, and the source of the papers that were published.The goal of the multiple selection was to investigate if the crystal structures have a high degree of similarity, and whether the resulting structures will also be roughly similar.Conversely, if the crystal structures are dissimilar, this will produce diverse compounds that cover a range of enzyme conformations.Therefore, three distinct crystal structures were chosen and employed as references to construct potential inhibitors, which are 7WT2, 3VW9, and 1QIP.The three crystal structures were prepared, solvated and energy minimised for further processing in three consecutive steps to gradually relax the protein models and remove any potential artefacts that could result from crystal packing as described in the method section. Structurally, the Glo-I active site is divided into three main regions as mentioned previously.Regarding the FBDD approach that we applied, only two regions of the active site will be used which are the zinc ion region and a deep hydrophobic pocket.These two regions were defined into a single Zn-hydrophobic region because the zinc atom is remarkably close to the hydrophobic pocket.The binding site for each crystal structure of 7WT2 and 3VW9 was defined by a sphere of 7.0 Å radius, while the 1QIP crystal structure was defined by a sphere of 7.3 Å radius to ensure inclusion all crucial important amino acids of the two desired regions of the enzyme (Figure 3 and Scheme 1(A)). 
Fragment library design

The commercially available ASINEX® fragment library was downloaded and its 2D descriptors were calculated; the library was then filtered and employed in the subsequent FBDD steps (a minimal filtering sketch is given after this section). Filtration was performed on the basis of four criteria referred to as the "Rule of Three": molecular weight less than or equal to 300 Da, ALogP less than or equal to 3, hydrogen bond donors less than or equal to 3, and rotatable bonds less than or equal to 3. Although the "Rule of Three" is widely acknowledged, different research teams have interpreted it in slightly different ways 25. In this study, the criterion for hydrogen bond acceptors was modified to less than or equal to 6. This filtering step retained 10,897 different fragments. The retained library was then converted into a 3D database, ready for the fragment screening step.

Primary screening of the fragment library

A virtual screening step on the 3D database was conducted prior to fragment docking, in order to speed up fragment docking and to concentrate on fragments likely to form favourable interactions with the zinc-hydrophobic region. A search query comprises 3D structural features that are to be detected in the fragment database. In this study, a customised 3D pharmacophore zinc-binding feature was used as the search query to identify fragments that possess zinc-binding groups. This search retained 1,926 fragment conformation hits, which were then docked into the zinc-hydrophobic region.

Fragment docking and scoring

In general, the computational docking algorithms for FBDD within DS fall into two major groups: De Novo-based protocols 26 and the CHARMm-based MCSS protocol 27. The nature of the active site of the protein target, the nature of the potential ligand, the type of interactions expected to form with that ligand, and the specific goals of the study can all guide the selection of the most appropriate approach for an FBDD study. At this stage, the focus on the Glo-I enzyme was on the central zinc ion, the hydrophobic pocket, and the crucial metal-acceptor interaction that a fragment should establish within the active site in order to show high binding affinity. The hits retained from the virtual screening step were therefore docked into the Zn-hydrophobic sphere using the MCSS protocol, as it concentrates on calculating electrostatic interactions rather than the hydrogen bonds and hydrophobic interactions emphasised by the Ludi-based protocol. The MCSS docking protocol searches multiple copies of a fragment simultaneously, each with a different orientation and conformation, against the target protein, allowing a more efficient exploration of conformational space and a better analysis of the binding pocket. CHARMm force field minimisation is then performed to define energetically favourable fragment positions, and the placed fragments are ranked by the MCSS score. The MCSS score is the negative of the total energy, comprising the fragment-protein interaction energy and the internal fragment energy.

Scheme 1. A flowchart summarising the computational results obtained regarding the identification of Glo-I inhibitors.
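As a minimal illustration of the modified "Rule of Three" filter described above, the sketch below applies the same cut-offs with RDKit descriptors. The input file name is a placeholder, and RDKit's MolLogP is used as a stand-in for the ALogP descriptor calculated in DS.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_three(mol, hba_limit=6):
    """Modified Rule of Three used here: MW <= 300, logP <= 3, HBD <= 3,
    rotatable bonds <= 3, and (the modification) HBA <= 6."""
    return (
        Descriptors.MolWt(mol) <= 300.0
        and Descriptors.MolLogP(mol) <= 3.0
        and Lipinski.NumHDonors(mol) <= 3
        and Lipinski.NumHAcceptors(mol) <= hba_limit
        and Descriptors.NumRotatableBonds(mol) <= 3
    )

fragments = [m for m in Chem.SDMolSupplier("asinex_fragments.sdf") if m is not None]
filtered = [m for m in fragments if passes_rule_of_three(m)]
print(f"{len(filtered)} of {len(fragments)} fragments retained")
```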
Applying a threshold of 80 or greater to the MCSS scores of the 1,926 retained fragments identified 1,462, 718, and 873 different conformations in the crystal structures of 7WT2, 3VW9, and 1QIP, respectively, for further investigation through calculation of their free binding energies and ligand efficiencies. The coordination geometries of the zinc atoms, which are square pyramidal in both the 7WT2 and 1QIP structures and tetrahedral in the 3VW9 structure, may explain the differing results between the protein crystal structures. In addition, the properties of the amino acids located within the defined sphere of each structure and the nature of the co-crystallised ligand within the active site could also play a key role.

Calculation of the total binding energy and ligand efficiency of the filtered docked fragments

The total binding energies of the filtered fragments retained from docking into the Zn-hydrophobic region were calculated using the In Situ Ligand Minimisation and Calculate Binding Energies protocols. The In Situ minimisation step is performed prior to the binding energy calculation to optimise the ligand in the binding pocket. The binding energy calculation takes into account the ΔG of binding, including solvation/desolvation energy and entropy 28. It estimates the binding free energy using CHARMm implicit solvation models, where the free binding energy is calculated from the free energies of the complex, the receptor, and the ligand as ΔG(bind) = G(complex) − G(receptor) − G(ligand). The Poisson-Boltzmann with non-polar surface area (MM-PBSA) model was selected because it is the most rigorous implicit solvent model 29, and the ligand conformational entropy option was set to true so that the total binding energy could be calculated; this is the sum of the binding energy and the ligand conformational energy, the latter representing the energy difference of the ligand from its lowest-energy conformation.

Moreover, the ligand efficiency of the docked fragments was also calculated; this metric is used to evaluate the binding affinity of small fragments to a target protein. It attempts to normalise a fragment's activity by its molecular size and therefore serves as an indirect indicator of how many of its constituent atoms interact with the target protein 30. A negative LE value is advantageous, since favourable estimated binding energies of docked fragments are negative. It is calculated from the binding affinity of the molecule and its molecular weight.

The total free binding energies of the filtered fragments ranged from −58.1908 to 36.4152 kcal/mol, −40.9643 to 52.5446 kcal/mol, and −25.1623 to 35.3625 kcal/mol in the crystal structures of 7WT2, 3VW9, and 1QIP, respectively, and their ligand efficiency values ranged from −0.25014 to 0.20894, −0.209834 to 0.26287, and −0.15122 to 0.17184 in the same order of crystal structures.
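The bookkeeping behind these numbers can be summarised as below. The component free energies would come from the MM-PBSA calculation in DS; the ligand-efficiency normalisation per heavy atom is the common convention and is an assumption here, since the protocol's exact formula is not reproduced in the text. All numbers in the example are illustrative, not values from this study.

```python
def binding_energy(g_complex: float, g_receptor: float, g_ligand: float) -> float:
    """dG(bind) = G(complex) - G(receptor) - G(ligand), all in kcal/mol."""
    return g_complex - (g_receptor + g_ligand)

def ligand_efficiency(dg_bind: float, n_heavy_atoms: int) -> float:
    """LE as binding energy per heavy atom (more negative = more efficient)."""
    return dg_bind / n_heavy_atoms

dg = binding_energy(g_complex=-5400.0, g_receptor=-5350.0, g_ligand=-30.0)
print(round(dg, 1), round(ligand_efficiency(dg, n_heavy_atoms=15), 3))
```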
Selection of the core fragments

Further inspection and filtration were conducted to select the most suitable fragments to be used to construct the final lead compounds (a toy sketch combining the score-based criteria is given after this section). The alignment of the docked poses was visually inspected, and fragments were then chosen on the basis of several factors, such as the coexistence of the desired hydrophobic functionalities and zinc-coordinating groups, the formation of favourable interactions by the fragments, the MCSS docking score, the binding interaction energy, and the ligand efficiency. Favourable poses included, for instance, those in which the Zn2+ ion interacts with a negatively ionisable functional group, particularly a carboxylic acid, through a coordination interaction, or in which hydrophobic functionalities occupy the hydrophobic pocket (Figure 4). Based on these criteria, nine core fragments fitting the Zn-hydrophobic region were selected in the 7WT2 structure, six in the 3VW9 structure, and seven in the 1QIP structure. The selected fragments were essentially the same across the three systems; three fragments in the 3VW9 structure and two in the 1QIP structure had been excluded earlier because their MCSS docking scores fell below the cut-off threshold (Table 1 and Scheme 1B).

Fragment evolution

The selected core fragments were evolved towards the final compounds using the De Novo Evolution protocol. This protocol generates full compounds in the binding site of the receptor starting from an initial fragment. A collection of higher-scoring compounds is created by covalently fusing fitted fragments to the core fragment in a manner complementary to the active site, using the Ludi program 31. How fragments are selected and new compounds are constructed depends on the evolution mode, with three distinct modes available in DS: full, quick, and combinatorial evolution.

In this study, the evolution step was performed using two modes, full evolution and combinatorial evolution, in the three complex systems. In full evolution mode, fragments are fused to the core fragment in an evolutionary fashion, and the survivors are selected from generations of compounds ranked by a scoring function for the next iteration of fusing. In combinatorial evolution mode, all fitted fragments are enumerated at the specified link sites on the core fragment. The complex of Glo-I with the TLSC702 inhibitor (5ZO) was used to define the binding site of the 7WT2 crystal structure; the complex with the N-hydroxypyridone derivative inhibitor (HPJ) to define the binding site of the 3VW9 crystal structure; and the complex with the S-p-nitrobenzyloxycarbonyl glutathione inhibitor (GNB) to define the binding site of the 1QIP crystal structure. Full evolution mode was then applied to the selected core fragments in each of the three complex systems.

Compounds containing toxic functional groups or an excessive number of ionisable groups were removed from the final compounds obtained from both evolution modes in each of the three systems. Subsequently, duplicated compounds were removed across the three complex systems and the two evolution modes, resulting in the identification of 145 unique hits, which were taken forward to molecular docking (Scheme 1C). As a consequence of the significant degree of similarity between the crystal structures, 20-25% of the resulting structures were identical, while the remainder were largely comparable.
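The score-based part of this triage can be pictured as a simple table filter, as sketched below. The MCSS cut-off of 80 follows the text; the other thresholds and all numbers are placeholders, and the real selection additionally relied on visual inspection of the poses.

```python
import pandas as pd

poses = pd.DataFrame({
    "fragment":   ["A", "B", "C", "D"],
    "mcss_score": [95.0, 88.0, 72.0, 101.0],
    "tbe_kcal":   [-32.1, -18.4, -25.0, 5.2],   # total binding energy
    "le":         [-0.21, -0.12, -0.18, 0.04],  # ligand efficiency
})

selected = (
    poses[(poses.mcss_score >= 80) & (poses.tbe_kcal < 0) & (poses.le < 0)]
    .sort_values(["tbe_kcal", "le"])
)
print(selected)
```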
Molecular docking of evolved compounds and scoring

To assess the efficiency and accuracy of the docking algorithm in pose prediction, the co-crystallised ligand of each of the three complexes was retrieved from the complex and redocked into its defined Glo-I active site. Once the co-crystallised and redocked ligand poses were shown to align well, the docking algorithm could proceed to the evolved compounds. Heavy-atom RMSD values were calculated between the native co-crystallised ligands (5ZO, HPJ, and GNB) and the corresponding ligands redocked using CDocker (a minimal RMSD sketch is given after this section). In the crystal structures of 7WT2, 3VW9, and 1QIP, the RMSD values for the ligands were 0.4694, 1.6248, and 1.2690 Å, respectively. Similarly, when employing LibDock, the RMSD values for the ligands were 0.3099, 0.5318, and 1.5257 Å in the same order as above.

Following the validation step, the prepared 145 hits were docked into the defined active site using the CDocker and LibDock algorithms, as described in the Methods section. CDocker, a grid-based molecular dynamics docking method in DS, was employed to dock the retained hits. This approach uses elevated-temperature molecular dynamics followed by random rotations to account for full ligand flexibility while treating the protein as a rigid molecule. To further refine the docked poses, the algorithm then executes an additional phase of simulated annealing or minimisation. The generated poses were then ranked according to the CDocker energy, which is determined from the internal ligand strain energy plus the receptor-ligand interaction energy, and the CDocker interaction energy, which is determined from the non-bonded interaction energy between the protein and the ligand only. For both scores, larger −CDocker interaction energy and −CDocker energy values indicate a more favourable interaction between the protein and the ligand 32. The calculated −CDocker interaction energy scores of the top-scoring poses of the docked evolved compounds ranged from 39.6745 to 61.4423 kcal/mol, 35.6736 to 62.2188 kcal/mol, and 42.5957 to 76.0942 kcal/mol in the crystal structures of 7WT2, 3VW9, and 1QIP, respectively. LibDock, by contrast, is a site-feature docking algorithm developed by Diller and Merz that docks ligands into an active site under the guidance of binding hotspots; it considers the flexibility of the ligand while treating the receptor as a rigid molecule 33. The calculated LibDock scores of the top-scoring poses of the docked evolved compounds ranged from 84.933 to 151.946, 80.274 to 141.137, and 86.687 to 148.355 in the crystal structures of 7WT2, 3VW9, and 1QIP, respectively.

Moreover, a more robust computational method available in DS was used to further estimate the free binding energies of the evolved compounds for both the CDocker- and LibDock-docked ligands, using the Calculate Binding Energies protocol with the MM-PBSA implicit solvent model. The calculated total binding energy scores of the best-scoring poses of the CDocker-docked compounds ranged from 30.7203 to −41.3385 kcal/mol, 18.7497 to −65.4105 kcal/mol, and 12.6023 to −73.4002 kcal/mol in the crystal structures of 7WT2, 3VW9, and 1QIP, respectively, and from 16.8233 to −54.8923 kcal/mol, −6.3235 to −66.7446 kcal/mol, and −5.7062 to −89.0642 kcal/mol for the best-scoring poses of the LibDock-docked compounds in the same order of crystal structures (Scheme 1D).
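A minimal version of the pose-validation step, computing the heavy-atom RMSD between a co-crystallised ligand and its redocked pose in the same coordinate frame, is sketched below with RDKit. The file names are placeholders, and identical atom ordering in the two files is assumed (the DS tools handle atom matching internally).

```python
from rdkit import Chem
import numpy as np

def heavy_atom_rmsd(ref_mol, probe_mol):
    """In-place RMSD over heavy atoms; assumes matching atom order."""
    ref = ref_mol.GetConformer().GetPositions()
    probe = probe_mol.GetConformer().GetPositions()
    heavy = [atom.GetAtomicNum() > 1 for atom in ref_mol.GetAtoms()]
    diff = ref[heavy] - probe[heavy]
    return float(np.sqrt((diff ** 2).sum() / heavy.count(True)))

native = Chem.MolFromMolFile("5ZO_native.sdf", removeHs=False)
redocked = Chem.MolFromMolFile("5ZO_redocked.sdf", removeHs=False)
print(f"heavy-atom RMSD = {heavy_atom_rmsd(native, redocked):.3f} A")
```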
Selection of the evolved compounds

Of the 145 docked hits, 15 were selected based on all of the scored energy parameters, favourable 2D interactions, and polarity. In full evolution mode, fragment B was evolved into the final compounds 4, 9, and 10; fragment C generated the final compounds 12 and 13; fragment D generated the final compound 15; and fragment E generated the final compounds 1 (compound 1 in Figure 5A), 2, 3, and 8. In combinatorial evolution mode, fragment D was evolved to generate the final compound 11, fragment F generated the final compound 14, fragment G generated the final compound 7, and fragment I generated the final compounds 5 (compound 5 in Figure 5B) and 6. The compounds that emerged from fragments A and H were excluded because they were highly polar, an unfavourable property for this work. These 15 final hits were chosen as potential inhibitors of the Glo-I enzyme (Table 2 and Scheme 1E).

Virtual screening of a commercial database

Because the evolution approach produces sophisticated, optimised compounds bearing unusual functionalities, exploring commercially available analogues before embarking on a synthetic endeavour is a practical way to assess whether the evolved compounds are worth synthesising. Since structurally similar compounds tend to share similar biological activity and properties, such a search provides valuable insight into the potential benefits and drawbacks of synthesising the evolved compounds. This strategy promotes efficiency and helps ensure that resources are allocated to the compounds with the greatest potential for scientific advancement and usability.

The ASINEX® screening collections database, which contains more than 500,000 compounds with promising diversity metrics, was screened for potential hits. Several ligand-based virtual screening approaches were used to search for similar compounds, including pharmacophore-based and fingerprint-based similarity searches. Both rely on the similarity between the 2D or 3D substructures of known ligands and a reference molecule, and search a database for compounds that fit the query 34. The final evolved compounds were used as the guide for selecting similar compounds.

Firstly, a customised 3D pharmacophore zinc-binding feature assigned only to a carboxylic acid group was used as a search query, in order to concentrate on compounds likely to form favourable interactions with the zinc region and to identify compounds that, like the evolved compounds, possess a zinc-binding group. This search retained 14,839 conformation hits. Subsequently, a guided ligand-based pharmacophore model was developed from each of the evolved compounds, using the Ligand Pharmacophore Mapping protocol, to obtain compounds as similar as possible in terms of shape, feature positions, and ligand-receptor interactions (Figure 6 and Scheme 1F) (the different guided pharmacophore models and their validation are presented in Supplementary 1). The compounds mapped by each evolved compound's model were then used as input ligands in the Find Similar Molecules by Fingerprints protocol. These approaches resulted in the selection of 269 compounds across all reference ligands (Scheme 1G).
Selection of the final compounds

Molecular docking of the 269 selected hits was performed using the CDocker and LibDock protocols in each of the three systems. Further visual inspection and filtration were conducted to select the most suitable similar compounds based on several factors, such as the alignment of the docked poses, the coexistence of the desired hydrophobic functionalities, the formation of favourable interactions by the compounds, and the docking scores. This filtration led to the identification of 131 compounds, and removal of duplicates then left 73 different hits (Scheme 1H and 1I).

To identify the most promising potential inhibitors of the Glo-I enzyme, additional investigation and filtering were conducted. The optimal compounds were selected by considering numerous factors, including docking scores, binding energy, total binding energy, polarity, molecular weight, and the formation of good binding interactions with the binding site, especially with the zinc ion and the hydrophobic pocket. Moreover, the Cluster Ligands protocol was used to assist the final selection and ensure the diversity of the chosen compounds. These criteria allowed the identification of 14 compounds, which were purchased and evaluated for activity using the biological assay developed in our research lab (Figure 7 and Table 3). A summary of the entire set of computational results obtained regarding the identification of Glo-I inhibitors in this project is illustrated in Scheme 1.

Biological evaluation

The in vitro inhibitory activities of the 14 selected hits were determined using the human recombinant Glo-I enzyme (rh Glo-I) as previously described. The formation of the Glo-I product, S-D-lactoylglutathione, upon reaction with the substrate was measured spectrophotometrically using a Synergy 2 UV microplate reader by monitoring the increase in absorbance at 240 nm over 200 s at 25 °C. The percentage of Glo-I inhibition by the tested compounds was determined by first measuring the absorbance of each well at 0 and 200 s, and then subtracting the average absorbance of the blank wells from the average absorbance values of both the enzyme and compound wells at 0 and 200 s. Afterwards, the percent inhibition was calculated using the following formula:

% inhibition = [1 − (abs. of the compound at 200 s − abs. of the compound at 0 s) / (abs. of the enzyme at 200 s − abs. of the enzyme at 0 s)] × 100%

The absorbance of the negative (uninhibited enzyme) control corresponds to the activity of the enzyme without interference from any inhibitor. By serial dilution of the hit concentrations from 50 to 0.195 µM, the percent inhibition values for the 14 hits, in comparison with the positive control myricetin, were determined in three independent experiments, each performed in triplicate. Compounds with more than 60% Glo-I inhibition were regarded as active, whereas those with less than 60% were regarded as weakly active or inactive; the IC50 values of the latter were therefore not explored.

However, none of the investigated compounds can be regarded as a lead, as the highest percentage of inhibition obtained was about 18.70% at 50 µM. Compound 19 and compound 28, whose inhibition percentages are 18.70% and 15.80%, respectively, can be considered hits that need further optimisation in order to be converted into lead-like compounds (Table 4).
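The percent-inhibition arithmetic described above reduces to comparing blank-corrected absorbance changes over the 200 s read, as in the short sketch below; the numbers shown are illustrative only.

```python
def percent_inhibition(abs_cmpd_0, abs_cmpd_200, abs_enz_0, abs_enz_200):
    """% inhibition = [1 - dA(compound) / dA(enzyme)] x 100."""
    d_compound = abs_cmpd_200 - abs_cmpd_0
    d_enzyme = abs_enz_200 - abs_enz_0
    return (1.0 - d_compound / d_enzyme) * 100.0

# Illustrative, blank-corrected readings (not data from the study):
print(round(percent_inhibition(0.205, 0.635, 0.200, 0.729), 1))  # about 18.7
```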
The inactivity of similar compounds whose binding modes were predicted using a structure-based drug design approach may be explained by several potential factors, such as inaccuracies and misinterpretations in predicting binding modes; an insufficient understanding of the target protein structure or dynamics, since the available structural information may fail to capture critical aspects of the protein's conformational changes or binding interactions; and the inherent limitations and approximations of computational algorithms and scoring functions for organometallic complexes. It is postulated that the limited ability of the carboxylic acid to chelate the zinc atom is the major contributing factor to the compromised activity of compounds 17, 20, 25, and 27. The absence of constructive contacts or favourable hydrogen bonds, particularly with amino acids such as Arg37, Arg122, Lys150, and Lys156 at the mouth of the active site, provides a further plausible explanation for the inactivity of our tested compounds, exemplified by compounds 17, 24, and 28. Furthermore, the inactivity observed for compounds 18 and 25 can be attributed to the modest role played by a small hydrophobic group that occupies the hydrophobic pocket and produces only weak hydrophobic interactions. The complexes of the 14 assessed compounds are provided in Supplementary 2. The percentage inhibition obtained with compound 19 could be attributed to the ability of the carboxylic acid, ionised at physiological pH, to form a bidentate coordination with the zinc metal, and of the phenyl ring to occupy the hydrophobic pocket (Figure 7A). Despite being relatively small and showing only modest % inhibition, this compound suggests that additional space in the binding site remains to be exploited, providing a starting point for deriving more potent Glo-I inhibitors.

Flexible docking of the most active compounds

To obtain more information about the binding pattern of the two most active compounds, compounds 19 and 28 were further docked using the computationally intensive Flexible Docking protocol. This protocol involves the following sequential steps: generation of receptor conformations using ChiFlex; creation of ligand conformations; docking of the ligand into each active protein conformation using LibDock; clustering to eliminate similar ligand poses; refinement of selected protein side chains in the presence of the rigid ligand using ChiRotor; and a final ligand refinement using CDocker.

With a view to achieving the desired activity, varying the distance between the carboxylic acid and the phenyl ring would give the carboxylic acid the opportunity to manoeuvre more freely within the active site, in the hope of adopting a better orientation while chelating the zinc atom. Another modification of compound 19 involves replacing the relatively weak hydrogen bond acceptor, fluorine, with a better hydrogen bond acceptor such as a hydroxy group, which would be expected to form an ion-dipole interaction with Arg37. Additionally, in compound 28, the incorporation of a strong hydrogen bond acceptor into the phenyl ring of the 2-isopropyl-4-methylphenoxy group aims to foster stronger interactions with the positively charged mouth of the Glo-I active site (Figure 8).

Computational materials

Sketching of the fragments and of the evolved and purchased compounds was performed using ChemBioDraw Professional 15.0 (PerkinElmer Inc., MA).
Preparation of the Glo-I enzyme was performed using Discovery Studio (DS) 2022 from BIOVIA® (formerly Accelrys®) Software Inc. (San Diego, CA, USA). The ASINEX® fragment library was used in an FBDD strategy to develop novel potential hits, and the ASINEX® screening collections database was virtually screened for potential hits. Presentation-quality images were generated using DS. GraphPad Prism 8 (GraphPad Software Inc., CA) was used to calculate and plot the % enzyme inhibition values.

Experimental materials

The human recombinant Glo-I (rh Glo-I) enzyme (R&D Systems® Corporation, USA) was used to assess inhibitory activity against Glo-I in an in vitro assay. The final selected compounds obtained from the virtual screening of the ASINEX® database were purchased from ASINEX® (Amsterdam, The Netherlands). All solvents and reagents, including monobasic and dibasic phosphate buffer salts, deionised water, DMSO, glutathione, and methylglyoxal, were obtained from Acros (Thermo Fisher Scientific, NJ, USA) and Sigma-Aldrich Co. at the highest available purity via local vendors. The biological activities of the purchased compounds were determined in vitro using a Synergy 2 UV microplate reader.

Computational methods

A successful computational FBDD approach customarily involves three basic steps: designing a good, diverse fragment library as the first step; computational docking or screening of the designed library as the second step; and computational fragment optimisation, by growing, linking, or directly connecting (joining or merging) the fragments to generate potential lead compounds, as the third step.

Preparation of the Glyoxalase-I enzyme

The three different crystal structure models of the Glo-I enzyme were constructed for further preprocessing using Discovery Studio (DS) 2022 from BIOVIA® Software Inc., with the starting coordinates for the investigated systems retrieved from the PDB. Seven Glo-I X-ray crystal structures were available in the Protein Data Bank, with entry codes 7WT2, 3VW9, 3W0T, 3W0U, 1QIN, 1QIP, and 1FRO. In this study, three crystal structures were used as working models: 7WT2, corresponding to human Glo-I in complex with the TLSC702 inhibitor (5ZO) at 2.0 Å resolution; 3VW9, corresponding to human Glo-I in complex with an N-hydroxypyridone derivative inhibitor (HPJ) at 1.47 Å resolution; and 1QIP, corresponding to human Glo-I in complex with the S-p-nitrobenzyloxycarbonyl glutathione inhibitor (GNB) at 1.72 Å resolution. The PDB files were checked for missing loops, incomplete residues, and alternative conformations using the Protein Report tool within DS. The Glo-I crystal structures were then prepared using the Prepare Protein protocol to protonate the residues at pH 7.4, standardise atom names, deal with missing residues, remove any additional conformations, and fix the connectivity and bond orders. The prepared complexes were first solvated by immersion in a cubic box of pre-equilibrated explicit water, with a minimum distance of 7.0 Å from the boundary, using the Solvate protocol. The solvated systems were then minimised in three sequential steps by applying the CHARMm force field with the Smart Minimiser algorithm using the Minimisation protocol. In the first minimisation step, the heavy atoms of the backbone and side chains were subjected to a harmonic restraint. Subsequently, only the backbone atoms were restrained, and finally the entire system was allowed to
move and adjust its position.

Fragment library design

In the interest of designing a good fragment library, the ASINEX fragment library, which comprises 20,061 structurally diverse fragments, was used in this research. The library was downloaded in SDF file format, imported into the DS software, and then filtered to aid the selection of the fragments comprising the library on the basis of a set of rules known as the "Rule of Three", with the exception of one criterion, the hydrogen bond acceptor count. The filtration was performed by calculating the 2D descriptors of the fragments using the Calculate Molecular Properties protocol within DS. The selected fragments were then converted into a 3D database using the Build 3D Database protocol, which uses the Catalyst algorithm 35. The created 3D database is automatically indexed with substructure, pharmacophore feature, and shape information to facilitate fast database screening. The protocol was used with default parameters except for the Conformation Method, which was set to Best.

Primary screening of the fragment library

Once the fragment library had been built, a virtual screening step was carried out to identify potential hit fragments using a pharmacophore-based screen prior to fragment docking. A customised pharmacophoric zinc-binding feature (ZBF) was used as the search query in the Search 3D Database protocol, at default parameters except for the Search Method, which was set to Best, to identify fragments that possess zinc-binding groups. The customised ZBF was generated using the Customise Pharmacophore Features tool within DS, followed by Add Features from the Dictionary. The mapped fragments were then collected for molecular docking.

Fragment docking and scoring

Prior to fragment docking, the Glo-I active site of each of the three distinct crystal structures was defined as the zinc-hydrophobic pocket region using the Define and Edit Binding Site tool. The zinc-hydrophobic pocket region was defined with a 7.0 Å sphere radius in the 7WT2 and 3VW9 crystal structures, while a larger 7.3 Å radius sphere was defined in the 1QIP crystal structure. The CHARMm-based MCSS docking protocol was then employed for the retained fragments in each of the three investigated systems, with default parameters except for the Iteration Profile, which was set to Conjugate 27.

Calculation of the total binding energy and ligand efficiency of docked fragments

The total binding energy (TBE) of the top-ranked fragments docked in the Zn-hydrophobic sphere, as determined by a threshold cut-off, was calculated using the In Situ Ligand Minimisation and then the Calculate Binding Energies protocols 28. The In Situ Ligand Minimisation protocol was run using the Adopted Basis Newton-Raphson (ABNR) algorithm. The Calculate Binding Energies protocol was performed at default parameters, with the exception of the Ligand Conformation Entropy, which was set to True, and the implicit solvent model, for which the Poisson-Boltzmann with non-polar surface area (MM-PBSA) model was chosen 29. Then, using the Calculate Ligand Efficiency protocol with the fragments and their binding energies as input, the ligand efficiency was calculated for the fragments top-ranked in terms of binding energy.
Fragment evolution

There are different techniques for designing larger, potent lead compounds from small initial fragment hits that have sufficiently high ligand efficiency but low binding affinity, including growing, linking, replacing, and merging/joining fragments. In this research, the selected fragments were evolved in the three complex systems using the De Novo Evolution protocol, which is a fragment-growing strategy, in both Full and Combinatorial Evolution Modes. The De Novo Library Generation protocol, specifically the link library type, was employed to create a fragment library in text or binary format for later use in the de novo protocols.

Before Full Evolution Mode was applied to the selected fragments, the Define and Edit Binding Site tool was used to define the Glo-I binding site for each complex system using a sphere that covered all significant amino acid residues. The sphere enclosed the cavity that houses the bound ligand, and all residues in the binding site that might be relevant to ligand binding were included by expanding the sphere to 10 Å in each of the three Glo-I systems. The De Novo Evolution protocol was then applied to the selected fragments with a Maximum Population Size of 20 per fragment, a Maximum Number of Generations of one, the Best Search Method, and rejection of carboxylic acid groups within the Fragment Filters, while all other parameters were kept at their defaults. The De Novo Evolution protocol was applied in Combinatorial Evolution Mode to the selected fragments with a 7.0 Å sphere model created at each link point, a Maximum Alignment Angle of 20° at the link points, and the Best Search Method, while all other parameters were kept at their defaults.

Molecular docking and calculation of the TBE of evolved compounds

Prior to molecular docking, the evolved compounds were prepared using the Prepare Ligands protocol with default parameter values, with the exception of the Ionisation Method, which was set to Rule Based; the Generate Tautomers option, for which Enumerate What was set to Canonical; and the Generate Isomers option, which was set to False.

Two molecular docking algorithms within DS, CDocker and LibDock, were used to dock the retained evolved compounds acquired from each complex system into its defined Glo-I active site, in order to assess active-site binding in the same way, and with the same essential geometry, as observed for each of the three complex systems. The same sphere model generated in the Glo-I enzyme structure for the De Novo Evolution protocol was used for docking by both the CDocker and LibDock protocols. Furthermore, the full set of evolved compounds collected from all complex systems was docked into each individual complex system.

The CDocker protocol was run using default parameter values except for Using Full Potential, which was set to True. The LibDock protocol was run using default parameters with the exception of the Conformation Method, which was set to Best; the Energy Threshold of the Conformation Method, which was reduced to 10 kcal/mol; and the Minimisation Algorithm, which was set to Smart Minimiser. Afterwards, in the same way as for the docked fragments, the output docked poses of the evolved compounds from both the CDocker and LibDock protocols were subjected to energy minimisation using the In Situ Minimisation protocol, and the binding energies of the minimised ligands were then calculated using the Calculate Binding Energies protocol.
Virtual screening of commercial database

The ASINEX® screening collections database was virtually screened using several strategies to identify compounds similar to the potential evolved hits, including ligand 3D pharmacophore mapping for compounds containing a carboxylic acid group as the zinc-binding feature; a guided pharmacophore model for each potential evolved hit; ligand 3D pharmacophore mapping of the compounds obtained from each guided pharmacophore model; and finally 2D similarity searching using 2D structural fingerprints.

From the outset, a carboxylic acid group was the only 3D pharmacophore zinc-binding feature specified as the query for the Ligand Pharmacophore Mapping protocol. The protocol was applied at default parameters with the Best conformation method. A guided ligand-based pharmacophore model was then built for each evolved potential hit using its structural information, and used as a query in the virtual screening of the carboxylic acid-containing compounds with the Ligand Pharmacophore Mapping protocol, applied at default parameters with the Best conformation and Flexible fitting methods. Finally, 2D descriptors were used to encode the presence of 2D substructural fingerprints in a compound in order to find compounds similar to the reference ligands, i.e., the compounds obtained from each guided pharmacophore model. This was performed using the Find Similar Molecules by Fingerprints protocol with the Tanimoto similarity coefficient and FCFP_4 as the predefined fingerprint set. Throughout, the selection of similar compounds was guided by the final evolved compounds.

Molecular docking and scoring of similar compounds

The similar compounds were prepared, docked, and scored in a manner analogous to that used for the evolved compounds. Further inspection and filtration were conducted to select the most suitable similar compounds, which were then carried forward to the binding energy calculation.

Selection of the final compounds

To identify the most promising potential inhibitors of the Glo-I enzyme, further visual inspection and filtration were performed. The retained hits were then subjected to the Cluster Ligands protocol to select the final 14 compounds for experimental evaluation of their inhibitory activities. The protocol was applied at default parameters with the Number of Clusters set to 14 and FCFP_4 as the predefined property set. Synthetic strategies for the two most active compounds can be found in Supplementary 3.
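The fingerprint-based step can be approximated as below using RDKit, where feature-based Morgan fingerprints of radius 2 stand in for the FCFP_4 fingerprints used in DS (the two implementations are related but not identical) and the SMILES strings are placeholders.

```python
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import DataStructs

def fcfp4_like(mol):
    # Radius 2 corresponds to the "4" in FCFP_4; useFeatures=True gives the
    # functional-class (pharmacophoric) variant of the Morgan fingerprint.
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048, useFeatures=True)

reference = Chem.MolFromSmiles("OC(=O)Cc1ccccc1")  # placeholder for an evolved hit
library = [Chem.MolFromSmiles(s) for s in ("OC(=O)Cc1ccc(F)cc1", "c1ccccc1O")]

ref_fp = fcfp4_like(reference)
for mol in library:
    similarity = DataStructs.TanimotoSimilarity(ref_fp, fcfp4_like(mol))
    print(Chem.MolToSmiles(mol), round(similarity, 2))
```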
In vitro enzyme assay

Following the manufacturer's procedure (R&D Systems®, Inc., Minneapolis, MN, USA), the in vitro biological activities of the selected compounds were determined by measuring their inhibitory activities against human recombinant Glo-I. In our laboratory, the enzyme was reconstituted by dissolving it to a concentration of 0.5 mg/mL in sterile deionised water, freezing at −79 °C, and thawing it on the day of testing. The test compounds were dissolved in dimethyl sulfoxide to make 10 mM stock solutions, and absorbances were measured using a double-beam UV-Vis spectrophotometer at a wavelength of 240 nm for 200 s at 25 °C. The buffer solution was prepared from 0.5 M sodium phosphate dibasic and 0.5 M sodium phosphate monobasic stock solutions. The substrate solution, containing 706 µL of freshly prepared glutathione and 706 µL of freshly prepared MG added to 25 mL of assay buffer, was incubated in a water bath at 37 °C for 15 min. The enzyme solution was prepared by mixing 17.75 µL of enzyme with 10 mL of the incubated buffer; 49 µL of this solution was used in each enzyme-activity well, together with 150 µL of buffer and 1 µL of DMSO. For the blank, 10 mL of the incubated buffer alone was used; 49 µL was added to each blank well, together with 150 µL of buffer and 1 µL of DMSO. The test compounds were serially diluted in DMSO, and 1 µL of each dilution was added to the compound wells together with 150 µL of buffer and 49 µL of enzyme solution (a quick arithmetic check of the resulting well concentrations is sketched after this section). All tests were performed as three independent experiments, each conducted in triplicate. The inhibitory activities were then followed by determination of IC50 values using GraphPad Prism 8. Myricetin was used as the positive control, with an IC50 of 3.38 ± 0.41 µM.

Flexible docking of the most active compounds

The same docking sphere used previously in this work, generated in the Glo-I enzyme structure (3VW9), was used for docking with the Flexible Docking protocol within DS. All parameter values in the protocol were set to their defaults, with the exception of the Conformation Method in Generate Ligand Conformations, which was set to Best. The flexible groups permitted to move during the docking process were defined as the amino acids corresponding to the Glo-I active site: Gln33B, Glu172A, Glu99B, His126A, Phe162A, Met65B, Met183A, Met179A, Met157A, Leu69B, and Cys60B.
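As a quick sanity check of the concentrations implied by this protocol, 1 µL of a 10 mM DMSO stock in a roughly 200 µL well corresponds to a top concentration of about 50 µM, and nine two-fold dilutions reach about 0.195 µM; the sketch below just reproduces that arithmetic.

```python
stock_mM = 10.0
well_volume_uL = 1.0 + 150.0 + 49.0                 # compound + buffer + enzyme solution
top_uM = stock_mM * 1000.0 * 1.0 / well_volume_uL   # 1 uL of stock per well
series_uM = [top_uM / (2 ** i) for i in range(9)]   # two-fold serial dilution
print(round(top_uM, 1), [round(c, 3) for c in series_uM])
```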
Conclusion

A fragment-based drug design strategy was implemented to identify potential Glo-I inhibitors. The resulting compounds served as a benchmark for choosing similar compounds from the ASINEX® commercial database. Because an FBDD technique is highly dependent on the quality of the protein structure, three crystal structures of Glo-I were used. Subsequently, 14 compounds were selected as potential candidates for biological evaluation. The screened compounds displayed weak activity, with Glo-I inhibition percentages ranging from 0 to 18.70%. Nevertheless, compounds 19 and 28 were identified as promising hits that, upon further optimisation, could be converted into lead-like compounds. The identified hits thus serve as a starting point for further optimisation in the search for potent Glo-I inhibitors. Collectively, the application of this method allows compounds with a higher likelihood of success and impact to be identified, while providing important insight into the potential advantages and disadvantages of synthesising the evolved compounds. It also allows strategies for modifying the structures of the evolved compounds to be explored, to improve the likelihood of achieving the desired activities and to support the efficiency of the FBDD approach. Additionally, the identified similar hits could be optimised in an effort to boost their prospective activities and produce a lead-like compound with potent inhibitory activity.

Figure 2. Cartoon representation of a 3D crystal structure of human Glo-I (PDB code 3W0T). (A) Surface representation, in which monomer A is coloured beige and monomer B light blue; the co-crystallised inhibitors are shown in stick representation. (B) The protein represented as a solid ribbon, with α-helices in red, β-sheets in cyan, turns in green, and loops in white; zinc metals are shown as dark grey spheres. (C) The three key areas of the Glo-I active site: hydrophobic pocket (deep brown), positively ionised mouth (blue), and zinc ion region (the zinc atom is shown as a dark grey sphere).

Figure 3. Glo-I enzyme preparation, PDB codes (A) 7WT2, (B) 3VW9, and (C) 1QIP. The proteins are represented as solid ribbons and the zinc metals as dark red spheres. The active sites are defined with transparent spheres of (A) 7 Å, (B) 7 Å, and (C) 7.3 Å radius.

Figure 4. The docked poses of fragments binding the Zn-hydrophobic region, (A) fragment A and (B) fragment B, in the active site of Glo-I, PDB codes (A) 7WT2 and (B) 3VW9, along with a 2D interaction map, using MCSS. The binding site is represented as a hydrophobic surface and the zinc metal as a grey sphere.

Figure 5. (A) The docked pose of full evolution mode compound 1 in the active site of Glo-I (PDB code 7WT2) along with a 2D interaction map, using CDocker. (B) The docked pose of combinatorial evolution mode compound 5 in the active site of Glo-I (PDB code 3VW9) along with a 2D interaction map, using CDocker. The binding site is represented as a hydrophobic surface and the zinc metal as a grey sphere.

Figure 6. (A) A guided pharmacophore model generated from compound 1. (B) The same model with the compound hidden for clarity. The feature types in this pharmacophore are: HBA (green), NI (dark blue), HY (sky blue), and RA (brown).
Figure 7. The docked poses of the final compounds (A) compound 19, (B) compound 24, and (C) compound 28, in the active site of Glo-I, PDB codes (A) 7WT2, (B) 3VW9, and (C) 1QIP, along with a 2D interaction map, using CDocker. The binding site is represented as a hydrophobic surface and the zinc metal as a grey sphere.

Figure 8. The docked poses of compound 19 and compound 28 in the active site of Glo-I, PDB code 7WT2, along with a 2D interaction map, using the Flexible Docking protocol. The binding site is represented as a hydrophobic surface and the zinc metal as a grey sphere.

Table 1. The selected core fragments binding the Zn-hydrophobic region, with their scores in the three crystal structures. a TBE: total binding energy in kcal/mol; b LE: ligand efficiency. * Indicates that the MCSS score was below the cut-off value.

Table 2. The 15 selected evolved final compounds with their top scores in the 7WT2, 3VW9, and 1QIP crystal structures.

Table 3. The 14 selected final similar compounds with their top scores in the 7WT2, 3VW9, and 1QIP crystal structures.

Table 4. Biological screening results of compounds 19 and 28 at 50 µM.
11,012.4
2024-01-22T00:00:00.000
[ "Medicine", "Chemistry" ]
A Focused Review of Ras Guanine Nucleotide-Releasing Protein 1 in Immune Cells and Cancer

Four Ras guanine nucleotide-releasing proteins (RasGRP1 through 4) belong to the family of guanine nucleotide exchange factors (GEFs). RasGRPs catalyze the release of GDP from the small GTPases Ras and Rap and facilitate their transition from an inactive GDP-bound to an active GTP-bound state. Thus, they regulate critical cellular responses via many downstream GTPase effectors. Similar to other RasGRPs, the catalytic module of RasGRP1 is composed of the Ras exchange motif (REM) and Cdc25 domain, and the EF hands and C1 domain contribute to its cellular localization and regulation. RasGRP1 can be activated by diacylglycerol (DAG)-mediated membrane recruitment and protein kinase C (PKC)-mediated phosphorylation. RasGRP1 acts downstream of the T cell receptor (TCR), B cell receptor (BCR), and pre-TCR, and plays an important role in thymocyte maturation and in the function of peripheral T cells, B cells, NK cells, mast cells, and neutrophils. The dysregulation of RasGRP1 is known to contribute to numerous disorders that range from autoimmune and inflammatory diseases and schizophrenia to neoplasia. Given its position at the crossroad of cell development, inflammation, and cancer, RASGRP1 has garnered interest from numerous disciplines. In this review, we outline the structure, function, and regulation of RasGRP1 and focus on the existing knowledge of the role of RasGRP1 in leukemia and other cancers.

Ras Guanine Nucleotide Exchange Factors: Introduction

Ras guanine nucleotide exchange factors (RasGEFs) comprise three families of proteins: Ras guanine nucleotide-releasing proteins (RasGRPs), Son of Sevenless (SOS), and Ras guanine nucleotide-releasing factors (RasGRFs). The RasGRP family consists of four members, RasGRP1, RasGRP2, RasGRP3, and RasGRP4; the SOS family is composed of two members, SOS1 and SOS2; and the RasGRF family is also composed of two members, RasGRF1 and RasGRF2. Their commonality is that they catalyze the removal of GDP from GTPases, such as Ras and Rap, and allow for its replacement [1] (Figure 1). While Ras itself possesses intrinsic GTPase and guanine nucleotide exchange activities, the basal activity is low. The activation of the canonical Ras pathway is characterized by the phosphorylation of Raf, Mek, and Erk. The active GTP-bound Ras has a wide range of downstream effects at the cellular level, such as proliferation, differentiation, and apoptosis. Given these fundamental roles, numerous disease processes have been attributed to the dysregulation of Ras and RasGEFs, ranging from autoimmune and inflammatory diseases to neoplasia. A broad review of all RasGEFs in various cell types is beyond the scope of this focused review of RasGRP1 in cancer; however, we direct the reader to previous reviews [2-6].

RasGRP1: Structure and Function

The catalytic module of RasGRP1 is composed of the Ras exchange motif (REM) followed by the CDC25 subunit (Figure 2). Upon the binding of Ras to the catalytic module of RasGRP1, the helical hairpin of CDC25 removes the GDP from the GTPase. As the cellular concentration of GTP is about 10-fold higher than that of GDP, GTP occupies the free nucleotide-binding pocket of the enzyme. This hairpin is highly conserved in all GEFs [7]. A single amino acid change within this structure has been shown to abrogate its catalytic effect [8,9].
Despite the ability of the helical hairpin alone to convey the nucleotide-dissociating ability of RasGEFs, the nucleotide exchange step is mediated by other regions of the catalytic domain [10].

Adjacent to the catalytic module of RasGRP1 is a pair of EF hands (Figure 2) with calcium-binding capacity in vitro [11]. The evidence is conflicting on the importance of the EF hands and calcium for the activation of RasGRP1. Some studies found the EF hands and calcium to be dispensable [12-14], while others found them to be necessary [15]. The current weight of evidence supports the idea that the EF hands do indeed bind calcium, induce conformational changes, and activate RasGRP1 [16,17]. The calcium source is the endoplasmic reticulum stores, and its release is mediated by phospholipase C-γ-generated inositol-1,4,5-trisphosphate (IP3) [18].

The C1 domain of RasGRP1 binds DAG (Figure 2), generated by PLCγ, and its synthetic analogs phorbol myristate acetate (PMA) and 12-O-tetradecanoylphorbol-13-acetate (TPA) [11,19-21]. Upon binding of the C1 domain to DAG, RasGRP1 becomes anchored to the plasma membrane [11,13,22-26], where its substrate, GDP-loaded GTPase, is present. Alternatively, RasGRP1 can also be trafficked to the endoplasmic reticulum (ER) and Golgi apparatus [23,24,27-29]. While the ability of the C1 domain to bind DAG is well known, it alone is insufficient for membrane targeting and requires other domains of RasGRP1, which are discussed below.

The tail region of RasGRP1 possesses an approximately 140-residue-long coiled-coil (CC), later renamed the plasma membrane-targeting (PT) domain, and the suppressor of PT (SuPT) domain [16,23] (Figure 2). In the inactive state, the SuPT domain of RasGRP1 attenuates the plasma membrane-targeting activity of the PT domain [23]. Upon binding of the C1 domain to DAG, it also counteracts the SuPT domain and enables the PT domain to target RasGRP1 to the plasma membrane [23,30]. At the plasma membrane, the hydrophobic residues of the PT domain bind phospholipid vesicles containing phosphoinositides.
The deletion of the hydrophobic residues prevents the PI3K-dependent plasma membrane targeting of RasGRP1 [30], and the deletion of the tail region entirely leads to T cell dysregulation [31]. The PT domain additionally facilitates the dimerization of RasGRP1 in the inactive state [16]. While the C1 domain is well recognized for its role in mediating the membrane-targeting capacity and activation of RasGRP1, it is now accepted that these effects also depend on the tail domain of RasGRP1.

Ras Guanine Nucleotide-Releasing Protein 1: Regulation

The membrane translocation and activation of RasGRP1 rely on the binding of the C1 domain to DAG (Figure 3). Logically, the conversion of DAG to phosphatidic acid (PA) by diacylglycerol kinases (DGKs) should terminate RasGRP1 signaling. Indeed, DGKα is recruited to the plasma membrane after TCR stimulation [32] and results in suppressed RasGRP1 activity and Ras signaling [33,34]. This mode of RasGRP1 regulation has been proposed to be a mechanism of T cell anergy [35,36]. Other DGK isoforms also regulate RasGRP1 activity, specifically DGKζ. Studies have found that the overexpression of kinase-dead DGKζ in Jurkat cells prolonged Ras activation, and the overexpression of wild-type DGKζ suppressed ERK phosphorylation following TCR ligation [37,38]. For in-depth coverage of the DGKs, we direct the reader to previous reviews [39,40].

The activity of RasGRP1 is regulated by multiple mechanisms in addition to endomembrane versus plasma membrane localization. RasGRP1 is also regulated by phosphorylation (Figure 3), more specifically of threonine 184 (T184) by protein kinase C α (PKCα), after TCR engagement or PMA stimulation [41]. While the phosphorylation of T184 enhances the activity of RasGRP1, it is not strictly required: a RasGRP1 Thr184Ala mutant did not exhibit a significant signaling defect [42]. DGKζ not only indirectly regulates RasGRP1 activity via DAG, but also physically associates with PKCα and inhibits the phosphorylation of RasGRP1 [43].
In unstimulated cells, RasGRP1 is believed to exist in an autoinhibited dimeric form, in which the EF domains of each monomer block the DAG-binding sites on the C1 domain of the partner. It has also been suggested that an invariant His 212 in RasGRP1, 2, and 3 functions as a pH sensor: lymphocyte receptor stimulation causes an increase in intracellular pH and thus the deprotonation of His 212 [44]. The latter causes a structural rearrangement of the linker between the CDC25 and EF domains and the destabilization of the autoinhibition [44]. We refer the reader to the review by Griner and Kazanietz for additional details on PKC and other DAG effectors [45].

Ding and colleagues identified RasGRP1 as a client protein of the chaperone heat shock protein 90 (HSP90) (Figure 3). Additionally, the degradation of RasGRP1 can be mediated by HSP90 acetylation [46]. There is emerging evidence that microRNAs also play a role in RasGRP1 expression; specifically, miR-21 was shown to suppress the expression of RasGRP1 [47,48]. Conversely, the downregulation of miR-21 increased RasGRP1 expression in vitro [49].

Immature Thymocytes

Maturing thymocytes pass through four double-negative (CD4−, CD8−) stages (DN1-4), an immature single-positive stage (ISP; CD8+ in mice and CD4+ in humans), and a double-positive (DP; CD4+, CD8+) stage (Figure 4). During this process, immature thymocytes undergo two selection checkpoints.
The first of these is a process termed "β-selection", which takes place in the DN3-DN4 transition stage, where the pre-T cell receptor (pre-TCR), composed of a somatically rearranged TCRβ chain and an invariant pre-TCRα chain, signals a first intersection with the Ras pathway. This signal is necessary for the αβ T cell precursors to eventually become mature CD4+ or CD8+ T cells. The activation of Ras is critical in β-selection; in fact, activated Ras can replace pre-TCR expression and generate DP thymocytes in Rag −/− mice [50]. At the β-selection step, RasGRP1 is dispensable and acts as a backup for SOS1 [51][52][53]. Immature thymocytes that pass β-selection undergo a proliferative burst and initiate CD4 and CD8 expression to become DP thymocytes. DP thymocytes that express TCRαβ undergo subsequent checkpoints termed "positive" and "negative" selection, where the TCRαβ signal quality and strength are interrogated. RasGRP1 plays an essential role in positive selection, and SOS1 acts as a backup in negative selection [52,53]. The knockout of RasGRP1 arrests the progression of DP thymocytes through positive selection [51,54], whereas the double knockout of RasGRP1 and SOS1 is needed to arrest negative selection [53]. The compartmentalization of Ras signaling underlies the digital output (positive versus negative) seen in the pre-TCR and TCR selection checkpoints [55][56][57] (Figure 5).
Negative selection signaling is molecularly characterized by the plasma membrane recruitment of the RasGRP1 and Grb2 (growth factor receptor-bound protein 2)-SOS1 complex from the cytosol, and the resultant activation of the Ras pathway [55][56][57]. The Grb2-SOS1 complex binds phosphorylated LAT at the plasma membrane and serves as another RasGEF. However, positive selection signaling is characterized by the recruitment of RasGRP1 to the Golgi apparatus, with no involvement of the Grb2-SOS1 complex [55,56].

T Cells

Despite the maturation arrest of thymocytes and the loss of mature single-positive thymocytes in RasGRP1 knockout mice, this arrest is not complete. RasGRP1-deficient CD4+ and CD8+ T cells do exist [58]; however, they are defective in their capacity to become activated and proliferate after anti-CD3 and anti-CD28 antibody stimulation [58]. Interestingly, humans deficient in RasGRP1 have increased numbers of TCRγδ+ CD8+ T cells [58]. RasGRP1-deficient mice develop splenomegaly and autoantibodies as a result of T cell dysregulation, characterized by elevated interleukin (IL)-4 secretion [31,[59][60][61]. This elevation in IL-4 drives B cell proliferation and the production of autoantibodies [60]. Furthermore, one study found that the binding of RUNX1 to a putative autoimmunity-associated enhancer 1 upstream of Rasgrp1 mediates the RasGRP1 deficiency-mediated autoimmune disease [61].

Figure 5. Model of RasGRP1 membrane translocation. RasGRP1 activation is characterized by phosphorylation of threonine 184, binding of calcium by the EF hands, and translocation to the plasma membrane or endomembranes (Golgi apparatus and endoplasmic reticulum). Deletion of the PT domain renders RasGRP1 unable to translocate to the plasma membrane. In negative selection of αβTCR-expressing thymocytes, RasGRP1 is recruited to the plasma membrane along with the SOS1-Grb2 complex. The Grb2-SOS1 complex binds phospho-LAT and RasGTP at the plasma membrane and acts as another RasGEF. PLCγ is also recruited to phospho-LAT at the plasma membrane during RasGRP1 activation and generates DAG and IP3.
IP3 subsequently induces the release of intracellular stores of calcium. Created with BioRender.com.

B-Cells

B cells express RasGRP1 and RasGRP3. While both are involved in B cell receptor (BCR)-mediated Ras signaling, RasGRP3 plays the central role [59,62]. One study found that BCR-mediated proliferation was suppressed more by the knockout of RasGRP3 than of RasGRP1 and was absent in double knockouts [59]. The defect in B cell proliferation due to the RasGRP1 knockout was confirmed by a later study [58]. Unlike in T cells, the knockout of both RasGRP1 and RasGRP3 did not disrupt the development of B cells [59]. However, B cells that express a dominant negative Ras mutant have severe developmental defects at the pre-pro B cell stage [63,64].

NK Cells

NK cells exert their cytotoxic effect and produce cytokines and chemokines subsequent to the activation of various cell surface receptors [65]. Briefly, this signal cascade is dependent on RasGRP1, and the knockdown of RasGRP1 in NK cells results in markedly decreased cytokine production and cytotoxicity [58,66]. In humans, this defect has been attributed to the protein-protein interaction between RasGRP1 and the dynein light chain (Dynll1) [58].

Lymphoma and Leukemia

While loss-of-function RasGRP1 mutants have been described in humans [75][76][77], no oncogenic mutant of RasGRP1 has been identified. These loss-of-function RasGRP1 mutants lead to the development of autoimmune lymphoproliferative syndrome (ALPS), CD4+ T cell lymphopenia, recurrent infections, hepatosplenomegaly, and lymphadenopathy [75][76][77]. It is important to note that some patients with loss-of-function RasGRP1 mutants develop Epstein-Barr virus (EBV)-induced B cell lymphoma. However, studies have found RasGRP1 to be overexpressed in nearly half of all T cell acute lymphoblastic leukemias (T-ALL) [78,79]. Retroviral insertion studies in mice have also identified wild-type RasGRP1 as a leukemogenic oncogene [80][81][82]. Furthermore, the dysregulation of RasGRP1 in mice and cell lines has been shown to lead to the development of thymic lymphomas and T cell leukemias [79,83,84]. Interestingly, cell lines with a high RasGRP1 expression required a cocktail of IL-2, -7, and -9 for proliferation [79,84]. Additionally, leukemias driven by the overexpression of RasGRP1 and by K-Ras G12D are mutually exclusive and represent distinct mechanisms of leukemogenesis [79]. This is consistent with the finding from a later study that identified RasGRP1 as a negative regulator of Ras signaling in Kras −/− Nras Q61R/+ -driven leukemia [85]. Various studies have shown that the dysregulation of RasGRP1 itself is insufficient for leukemogenesis [79,86]; however, it does bestow a proliferative advantage on bone marrow progenitors over wild-type cells [86]. Consistent with Knudson's "two-hit" theory that was proposed over 50 years ago [87], the dysregulation of RasGRP1 requires a second cooperating oncogene or cytokine stimulation for transformation [78,79,84]. The knockout of RasGRP1 negative regulators has also been shown to be oncogenic; specifically, DGKα −/− DGKζ −/− double knockout mice develop thymic lymphoma due to the failure to prevent the overactivation of RasGRP1 and Ras [88]. Beyond the role of RasGRP1 as an oncogene, its overexpression has been documented to be a mechanism of resistance to MEK inhibitors [89]. No RasGRP1-specific small molecule inhibitors currently exist.
Since the overexpression of RasGRP1 renders T-ALL cells responsive to pro-tumorigenic cytokines [84], PI3K inhibitors have been tested as a monotherapy in mice, but with no success [90]. Others have tried to target the RasGRP1/Ras/Erk pathway in T cell lymphoblastic lymphomas (T-LBL), which are morphologically and immunophenotypically identical to T-ALL [91]. Bromodomain-containing protein 2 (BRD2) binds to the promoter region of Rasgrp1 and conveys doxorubicin resistance in some T-LBL patients [92]. The targeting of BRD2 via a bromodomain and extra-terminal (BET) inhibitor improved the therapeutic efficacy in vitro and in a patient-derived xenograft mouse model [92]. DAG and its analogues have long been known to activate RasGRP1 in T and B cells [93,94], and the treatment of B cell lymphoma-derived cell lines with DAG analogues promoted apoptosis [94,95]. This proapoptotic pathway induced by DAG analogues is mediated by the PKC/RasGRP1/Erk pathway [94,95].

Squamous Cell Carcinoma

While studying the role of RasGRP1 in skin tumors, one group found that the overexpression of RasGRP1 in keratinocytes, driven by a K5 promoter, resulted in the development of spontaneous skin tumors [96,97]. These tumors were mostly benign papillomas, with smaller numbers of squamous cell carcinomas. Due to the observation that the incidence of tumor development was higher in co-housed animals, it was hypothesized that wounding contributed to tumor development. Indeed, when RasGRP1-K5 transgenic mice were subjected to full-thickness incision wounding, 50% of them developed skin tumors [97]. The proposed mechanism is that the act of wounding caused the release of granulocyte colony-stimulating factor (G-CSF) by keratinocytes [96,97], and G-CSF acted in an autocrine and paracrine fashion to cooperate with RasGRP1 in the development of skin tumors [98]. When the same RasGRP1-K5 transgenic mice were subjected to a multistage skin carcinogenesis protocol, with 7,12-dimethylbenz(a)anthracene (DMBA) as the carcinogen and 12-O-tetradecanoylphorbol-13-acetate (TPA) as the tumor promoter, it was found that the squamous cell carcinomas that developed in the transgenic mice were larger, less differentiated, and more invasive [99]. Additionally, the overexpression of RasGRP1 was found to partially replace the DMBA induction [99]. Conversely, RasGRP1 knockout mice have impaired skin tumorigenesis, evidenced by a reduced epidermal hyperplasia induced by TPA [100,101]. To study other cooperating mechanisms of oncogenesis in keratinocytes, one group transduced keratinocytes derived from a Li-Fraumeni patient with RasGRP1 and found that the keratinocytes acquired morphologic changes that are associated with transformation [102]. This result supports the idea that RasGRP1 cooperates with other genes, because patients with Li-Fraumeni syndrome are deficient in p53, a well-known tumor suppressor gene.

Colorectal Cancer

Surprisingly, RasGRP1 acts as a tumor suppressor in colonic epithelium; furthermore, RasGRP1 can be used as a biomarker for predicting the efficacy of anti-epidermal growth factor receptor (EGFR) therapy for colorectal cancer (CRC) patients [103,104]. RasGRP1 expression levels decrease with the progression of CRC and predict a poor clinical outcome for patients [104]. Mechanistically, the same group found that RasGRP1 suppresses the proliferation of the KRas mutant and negatively regulates EGFR/SOS1/Ras signaling in CRC cells [104].
This mechanism may explain its tumor suppressor activity in colorectal cancer, in contrast to its oncogenic activity in most other neoplasias.

Hepatocellular Carcinoma

RasGRP1 has been found to be upregulated in hepatocellular carcinomas (HCC) [105]; furthermore, a high RasGRP1 expression is associated with tumor size, tumor-node-metastasis (TNM) stage, and Barcelona Clinic Liver Cancer stage [105]. At the cellular level, in Huh7 and PLC cells, the downregulation of RasGRP1 inhibited cell proliferation, whereas the overexpression of RasGRP1 promoted cell proliferation [105]. Specific protein 1 (Sp1) was identified to bind the Rasgrp1 promoter and to act as a positive regulator [105]. For a review of the Ras pathways in HCC, we refer the reader to the work by Moon and colleagues [106].

Breast Cancer

The role of RasGRP1 in breast cancer has only recently been studied. Specifically, it was found that the upregulation of Rasgrp1 was associated with an improved overall survival in breast cancer [107], as well as overall survival and disease-free survival in the triple-negative breast cancer subtype [107,108]. The molecular mechanism that underlies these observations is unknown.

Conclusions

Given that approximately 46% of cancers exhibit alterations in the Ras pathway [109], the pathway has been extensively studied over the past decades. As a RasGEF, RasGRP1 too has received much attention. Through this endeavor, the structure, function, regulation, and developmental role of RasGRP1 have been described at the molecular level. This has identified RasGRP1 and its regulators as promising targets in leukemia and other cancers. Most of the domains of RasGRP1 are well characterized. The REM and CDC25 domains facilitate the Ras cycle between the GDP-bound inactive form and the GTP-bound active form. The EF hands bind calcium and induce an activation-associated conformational change [16,17]. The C1 domain binds DAG at the plasma membrane or endomembrane. The PT domain facilitates dimerization and phosphoinositide-mediated plasma membrane targeting [30]. For the regulation of RasGRP1, it is known that signal termination can be mediated by DGKα and DGKζ via the conversion of DAG to PA. For the activation, RasGRP1 can be phosphorylated at T184 by PKCα. Other less-well characterized mechanisms include HSP90- [46] and miR-21-mediated degradation [47,48]. In normal physiology, RasGRP1 plays an important role in the maturation of thymocytes. Specifically, it is necessary for positive selection of the rearranged αβTCR [52,53]. The compartmentalization of Ras signaling to the plasma membrane or the endomembrane at the selection checkpoints adds an extra layer of complexity [55][56][57]. The dysregulation of RasGRP1 in peripheral T cells, B cells, NK cells, neutrophils, and mast cells is known to cause developmental and/or functional defects. One of the most surprising defects revealed in knockout mice is that RasGRP1 normally interacts with the dynein light chain in NK cells [58], and this indicates that RasGRP1 has additional functions besides acting as a RasGEF. Given the importance of RasGRP1 in cell development, it is unsurprising that it is expressed in numerous cancers and plays a role in oncogenesis. The overexpression of RasGRP1 alone is insufficient for lymphomagenesis or leukemogenesis [79,86]. The transformation of thymocytes requires the overexpression of RasGRP1 and a cooperating oncogene or knockout of a tumor suppressor.
Since no Ras- or RasGRP-specific small molecule inhibitors have been identified, efforts have been made to target regulatory pathways through the use of BET inhibitors [92], DAG analogs [94,95], and HDAC inhibitors [46]. Much of the work done on RasGRP1 within the realms of immunology and cancer research in the last 5 years has focused on three areas. The first area is its role in lymphocyte homeostasis, which can be summarized by the identification of loss-of-function RasGRP1 mutants in two patients with ALPS [75], one patient with immunodeficiency, and three patients with EBV-associated lymphoproliferative disease [76,77,110]. A second area is the clinical behavior of tumors relative to the expression of RasGRP1 in various cancers, such as CRC [103], HCC [105], and breast cancer [107,108]. The third area is the mechanism by which RasGRP1 serves as a tumor suppressor in certain cancer models [85,111]. These last two emerging areas point to the idea that RasGRP1 cannot simply be described as an "oncogene" or its overexpression as a negative indicator, but rather that its role is cancer- and model-dependent. While not emphasized in this focused review, progress in RasGRP1 research is also being made in the areas of schizophrenia [112], neuro-inflammation [113], systemic lupus erythematosus [114], Parkinson's disease [115], and angiogenesis [116]. It is evident that the relevance of RasGRP1 reaches beyond the development and function of immune cells, homeostasis, and cancer. Despite this progress, there is still much to understand about RasGRP1. First, a concise explanation for the conflicting role of calcium, or lack thereof, in the function of RasGRP1 has yet to be articulated. Second, since RasGRP1 is involved in the degranulation of NK cells and mast cells and the development of neutrophils, it is interesting to speculate on its potential developmental and functional role in other granulocytes. It is clear that RasGRP1 plays a role in T cell leukemogenesis, although it must cooperate with other oncogenes for transformation. It is likely that the array of cooperating oncogenes has yet to be fully elucidated. Lastly, only in recent years was RasGRP1 identified as a differentially expressed gene correlated with overall and disease-free survival in breast cancer. It will be important to determine the molecular basis for this counterintuitive correlation.
7,091
2023-01-01T00:00:00.000
[ "Medicine", "Biology" ]
How numerals support new cognitive capacities

Mathematical cognition has become an interesting case study for wider theories of cognition. Menary (Open MIND 25(T):1–20, 2015) argues that arithmetical cognition not only shows that internalist theories of cognition are wrong, but that it also shows that the Hypothesis of Extended Cognition is right. I examine this argument in more detail, to see if arithmetical cognition can support such conclusions. Specifically, I look at how the use of numerals extends our arithmetical abilities from quantity-related innate systems to systems that can deal with exact numbers of arbitrary size. I then argue that the system underlying our grasp of small numbers is an unhelpful case study for Menary; it doesn't support an argument for externalism over internalism. The system for large numbers, on the other hand, clearly displays important interactions between public numeral systems and our cognitive processes. I argue that the large number system supports an argument against internalist theories of arithmetical cognition, but that one cannot conclude that the Hypothesis of Extended Cognition is correct. In other words, the large number case doesn't decide (on the basis of an inference to the best explanation) between the Hypothesis of Extended Cognition and the Hypothesis of EMbedded Cognition.

Introduction

Our mathematical capacities are impressive. Somehow, without clear evolutionary precedents, we are able to theorize about numbers, sets, and other mathematical objects. Mathematics also makes extensive use of symbols, and in such a way that, for example, Dutilh Novaes (2013) has argued that symbol use is in some ways constitutive of doing mathematics. It thus comes as no surprise that accounts of cognition which stress the interaction between the brain and the environment have seen mathematics as an important case study (De Cruz 2008; Menary and Kirchhoff 2014; Menary 2015). More specifically, our arithmetical capacities-extensively studied by cognitive scientists-are an important example for such accounts of cognition. Typically, philosophers pointing to our arithmetical capacities draw a distinction between our innate quantity-related capacities (shared with a large number of other animals) and our learned, culturally influenced, arithmetical capacities-which I will here discuss primarily from the Western context, since most of the empirical studies on the subject used Western participants. Our innate quantity-related capacities are usually divided into two systems, which are functionally different and located in different parts of the brain (Feigenson et al. 2004). First of all, we possess what is known as the parallel individuation system, which keeps track of up to three or four objects at the same time. This system does not explicitly represent number, but it does allow infants to exhibit surprise (in the form of longer looking times) when one object unexpectedly disappears from a collection of two objects (Wynn 1992; Feigenson et al. 2002). The parallel individuation system is limited to these very small numbers. For example, infants don't distinguish between 2 and 4 crackers. When presented with two boxes, one containing two crackers, and the other four crackers, infants will crawl to either box at chance.
If the comparison is, instead, between 2 and 3 crackers they will reliably crawl towards the one with more crackers (Feigenson et al. 2002). A second innate system, known as the Approximate Number System (ANS), allows us to distinguish between larger collections. The ANS allows us to distinguish between collections with, for example, 4 and 8 items (Xu 2003). Not all collections can be distinguished, however: adults are not able to reliably choose between collections with 21 and 24 items. Whether or not we are able to distinguish between collections reliably (and how well we are able to do so when we meet a given reliability threshold) depends on the ratio between the number of items: when one collection has twice as many items it is easier than when it has only one and a half times as many items (Barth et al. 2003). The lack of exact number representations in the ANS and the parallel individuation system means that there is a big discrepancy between our innate abilities and the ones we end up with after years of practice with counting and arithmetical operations. Those years of practice help us to acquire what Menary (2015) calls the Discrete Number System (DNS), though this may not be a system in the more specific sense above (there may be different cognitive processes for processing different numbers, whereas the use of 'system' above applies to a unified cognitive process that has been localized in the brain). The DNS encompasses our familiar arithmetical abilities, such as the ability to distinguish between collections of arbitrary sizes, paradigmatically by counting. That ability is exact, in the sense that people who have it can (in principle) distinguish between any two numbers. The DNS also includes representations for all of these different sizes, i.e. explicit representations of the natural numbers. These representations are not only helpful in designating numbers, they also play a role in algorithms for arithmetical operations (Menary 2015, p. 14). The DNS, and its relation to numerals, has been discussed in a few places already. The work of Schlimm is notable for its focus on numerals (Schlimm 2018; Schlimm and Neth 2008), as is the study of neuronal recycling by Dehaene and Cohen (2007). On the basis of these discussions Menary (2015) has claimed that a proper understanding of the DNS implies that Cognitive Integration is true, i.e. that the Hypothesis of Extended Cognition (HEC) holds instead of the weaker Hypothesis of Embedded Cognition (HEMC). The first, HEC, was first developed by Clark and Chalmers (1998), Hutchins (1995) and Zhang and Norman (1995). It states that not all cognitive processes are wholly located in the brain. In other words, "mental and cognitive processes and states are integrated with states and processes found in the environment" (Menary 2010, p. 562). The second, HEMC, instead claims that the environment plays an important role but that cognitive processes are not (partly) constituted by the environment. So, "mental and cognitive processes and states are scaffolded by/depend upon the environment" (Menary 2010, p. 562). Menary places, for example, Sterelny (2010) in this camp. Furthermore, both of these positions are externalist positions, where externalism is characterized by the claim that "(EXT) mental processes depend intimately on environmental resources, and should be studied within the context of those resources" (Sprevak 2010, p. 362).
This contrasts with internalist views of cognition, which defend "(INT) mental processes are largely self-sufficient, and can be studied largely in isolation from environmental props" (Sprevak 2010, p. 361). According to Menary (2015) a case study of the DNS can help decide that HEC is a better hypothesis than HEMC, an issue which has been debated in the literature (Baumgartner and Wilutzky 2017; Pöyhönen 2014; Sprevak 2010). The goal of this paper is therefore to look at the DNS case study in more detail, combining literature from cognitive science on the functioning of the brain with more theoretical studies of numeral systems [such as Zhang and Norman (1995) and Schlimm and Neth (2008)]. This has not yet been done in the context where the DNS is used in an argument for HEC over HEMC, though details resulting from this combination of approaches are relevant: the small number DNS is, I argue, not a good basis for an argument for externalism, whereas the large number DNS at least decides in favour of EXT. I, however, also argue that the large number DNS does not decide between HEC and HEMC (on the basis of an inference to the best explanation).

The small number DNS

There is an important difference between the small and large number DNS. The properties of the numeral system are not equally important for the cognitive mechanisms that underlie the two parts of the DNS. The distinction is based on the fact that there are different cognitive mechanisms underlying the small and large number DNS, where the small number DNS functions irrespective of the internal structure of numerals (the way they are composed out of digits) but the large number DNS is heavily influenced by the structure of numerals. However, we do not yet know enough about the relevant cognitive mechanisms to pinpoint the border between the small and large number DNS. 'Three', for example, clearly falls under the small number DNS. The parallel individuation system is probably the underlying cognitive mechanism in most cases, as e.g. numerical comparisons for small numbers are supported by the parallel individuation system (Cheung and Le Corre, forthcoming). One could thus draw the border at the limit of that system, so around four, which Zhou and Bowern (2015) also suggest as a natural 'low-limit' case, based on a study of Australian languages. A border around four doesn't quite capture my characterization of the small number DNS as the part of the DNS which functions irrespective of the internal structure of numerals. 'Six' also lacks internal structure, and the DNS might rely on the (sufficiently exact) ANS for those numbers (Huber et al. 2017; Sullivan and Barner 2010). 'Twenty-three' and 'one hundred', on the other hand, have an internal structure, which influences the underlying cognitive processes that are part of the DNS. For that reason the border between small and large may be higher than four. The location of the border could also depend partly on culture, specifically on the point where numerals first have an internal structure. Note, however, that this only changes the range and not the nature of the small number DNS. In any case, the small number DNS would (if a border higher than four or five is the right one) be made up of disparate cognitive mechanisms. The system is, instead, unified by a feature relevant to the current case study: an indifference to the specifics of the notation.
Regardless of where exactly one draws the border, a good starting point for a discussion of the small number DNS is the way in which we acquire the relevant number concepts as children. As described above, initially we are able to distinguish collections with one item from those with two and three items, but not from collections with four or more items. When children are about 22 months old they start to succeed at tasks where they have to distinguish collections with one item from those with more than three items (Sarnecka and Lee 2009). Before then, comparisons of one item with any number of items above three results in behaviour that is at chance. The crucial switch at 22 months is that children acquire the ability to perform above chance at tasks where they have to choose between a collection with one item and a collection with more than three items . While it is not yet known what cognitive mechanism underpins success at these tasks, it does seem to be a fairly general mechanism. It also underlies the recognition of a grammatically marked singular/plural distinction in language (Li et al. 2009). In fact, the above ability is developed faster when the singular/plural distinction is explicitly marked in the grammar (Sarnecka et al. 2007). My reason for bringing up this ability is that I think that the acquisition of number concepts relies heavily on it. As I have defended in Buijsman (2017), it is possible to acquire the number concept one on the basis of the ability to distinguish collections with one item from those with more than one item. In effect, this means that children are able to determine that a collection has exactly one item. More precisely, as they have this ability before they have acquired the concept one, this can be interpreted as the ability to recognize when there is an F (e.g. a cookie) and there are no other Fs (no other cookies). The concept one can then be acquired based on that ability, as one applies in precisely those scenarios where there is an F and there are no other Fs. I think that the same mechanism drives the acquisition process of the following number concepts. Carey (2009) has a similar account that only applies to the numbers one through four, but I see no reason why the following idea can't be applied to five, six, and so on. The idea is that the ability to distinguish between collections of one and those with more than one item can support the acquisition of number concepts through iterated use. The concept two can be acquired on the basis of situations where a collection with exactly one item is added to another collection with exactly one item. That is a slightly different proposal from that of Carey (2009), who argues that children form mental models based on the parallel individuation system and then establish one-to-one correspondences with these mental models. I am not suggesting that children construct such mental models. Instead, I suggest that they recognize the common factor to situations where the word 'two' is correctly applied purely on the basis of an iterated application of this ability to distinguish between one item and more than one item [see Buijsman (2017) for more details on this account of number concept acquisition]. How does our acquisition of number concepts relate to the role of the DNS as a case study for theories of cognition? Menary (2015) claims that studying the DNS will decide between different accounts of cognition. 
Therefore, either the way in which we acquire the DNS or the normal functioning of the DNS should form the basis of an inference to the best explanation with the conclusion that HEC is true. If the DNS is best explained only when one accepts that cultural artefacts are partly constitutive of cognitive processes, then one should require that claim either to explain the acquisition of the DNS or the normal functioning of the DNS. Numerals, however, do not seem to play a large role in the above account of the acquisition of the DNS. Numerals do have to be mentioned, because studies with small cultures that seem to lack exact number words (Gordon 2004;Frank et al. 2008;Pica et al. 2004) suggest that the acquisition of exact number concepts is prompted by the availability of exact number words. Based on these studies some have argued that the presence of number words is a necessary condition (Núñez 2017) for the acquisition of number concepts, though this is contested (Butterworth et al. 2008). I will assume that we require number words to acquire number concepts, as that would be the most favourable outcome for Menary's arguments. Though number words may be a necessary precondition to acquire number concepts, the ability to distinguish between collections with one item and those with more than one item does not seem to depend on the presence of number words in the language. We, unfortunately, do not know enough about this ability to say so with certainty. Still, there is an indication that children learn to distinguish these collections regardless of any knowledge they have about numbers (Barner et al. 2004). In other words, there is reason to think that the ability which I think underlies our acquisition of number concepts is developed without any help from the presence of a numeral system. This would have to be tested in more detail though, for example by asking people from these cultures to distinguish collections with one item from those with more than one item in experimental designs similar to those presented to Western children. If the ability to recognize situations with one F and no other Fs is independent from the presence of number words, then the small number DNS is not a good basis for an argument for HEC. Number words only act as prompters to attend to numerical aspects of collections of items, but do not figure in the further explanation of the underlying cognitive mechanisms that are developed as a result. The crucial ability that allows us to acquire these number concepts is developed independently from number words and is only helped along by (and so not dependent on) a grammatically marked singular/plural distinction. Whether it is developed independently from other cultural factors is, again, hard to say. In any case, the situation is not the clear example that could be hoped for by the proponents of HEC. Note that this point doesn't depend on the particular account of the data that I have summarized above. The account presented in Carey (2009) holds that the acquisition of the number concepts two, three and four is based on the parallel individuation system. As I mentioned, she claims that children construct mental models of collections with the relevant number of items on the basis of the parallel individuation system. These mental models are then the basis for our acquisition of the number concepts. 
Again, numerals do not figure in the explanation of the underlying cognitive mechanisms-in this case the mental models-except as prompters to initiate the development of the small number DNS. The absence of these cultural tools in the explanations of the inner workings of the cognitive mechanisms underlying the small number DNS is not unexpected. One of the results that Menary (2015) discusses as proof of the influence of language on the underlying cognitive mechanisms is a study by Dehaene et al. (1999). They found that response times to arithmetical problems are influenced by the language in which they are presented to Russian-English bilingual speakers. Participants responded faster to exact arithmetic problems in the language in which they were given at the beginning of the experiment. If they had to switch to the other language it took them longer to solve the same arithmetic problem. No such differences were found with approximate (ANS-based) arithmetic problems, which involve arrays of dots. This result, however, doesn't seem to hold for small numbers. Spelke (2003) reports a follow-up study that found no difference in reaction times between the initial and switched language for small exact numbers, but did replicate the difference for large exact numbers. In short, it seems that the cognitive mechanisms underlying small number DNS can be explained without reference to numerals (except as prompts for the initial development). Consequently, the small number DNS does not make for a good case study in support of EXT. Instead, the hope for a defender of HEC should rest on the large number DNS. As for the development of the large number DNS, it cannot be based on the parallel individuation system, because the number of items is too large. Feigenson (2011) does suggest that the parallel individuation system can be extended beyond four, but it still would not extend to a number such as 10,000. Nor is the ANS a plausible basis, as the link of the ANS with the DNS is weak for very large numbers (Huber et al. 2017; Sullivan and Barner 2010). Furthermore, Carey et al. (2017) provide evidence that children do not learn numbers between four and ten through word-to-ANS-value mappings. Finally, while the ANS is coupled to larger number concepts, there seems to be a lag between the time when the number concept is acquired and when the connection with the ANS is established (Lipton and Spelke 2005). So, there is good reason to expect that the acquisition of the large number DNS will be different, though we do not know enough to say how it happens exactly. Fortunately there is more to say about how we process larger numbers once we have acquired all the relevant number concepts. A good overview of that work with respect to Hindu-Arabic numerals is Nuerk et al. (2015), and I discuss some of the results reviewed in that paper. I also use some of the data on numeral systems in Oceanic languages, for which Bender and Beller (2017) is a good overview. One final issue needs to be mentioned before I discuss the empirical results. I switch here from spoken numerals (discussed for the small number DNS) to written Hindu-Arabic numerals. Based on the study by Zhang and Wang (2005) one may think that this difference is important for the findings discussed below. They argue that the cognitive processes may be different when comparing a written Hindu-Arabic numeral to a remembered number (earlier presented as a Hindu-Arabic numeral), as opposed to a scenario where two written Hindu-Arabic numerals are compared.
Their findings are, however, contested by further empirical tests that failed to find a difference (Moeller et al. 2009, 2013). The underlying cognitive processes seem to be the same, even if the numbers are represented differently (both internally and externally). Unless new evidence surfaces to the contrary, then, the switch between number words and Hindu-Arabic numerals shouldn't matter for the discussion of cognitive mechanisms that follows. Until recently there were two opinions on the nature of the cognitive mechanisms thanks to which we process multi-digit numerals. One option was that we process numerals as a whole: when reading a numeral such as 143 we don't break it up into its component parts (1, 4 and 3). The other option was that we process multi-digit numbers by breaking them up into parts and then work with the individual digits in parallel, eventually composing these parts to figure out which number is represented. On the first option it doesn't seem to matter much what kind of numeral system one uses, since the cognitive processes would not need to break up numerals in accordance with the structure of the numeral system. In the decomposed case the way the numeral system is made up, which of course depends on one's culture, plays an important role in explaining how we cognitively process these numbers. It is good news for HEC and HEMC, therefore, that the evidence now strongly favours the hypothesis that numerals are processed in a decomposed fashion (García-Orza and Damas 2011; Moeller et al. 2011; Nuerk et al. 2015). Furthermore, the fact that we process them in this way has clear effects on our performance at different arithmetical tasks (these effects are the evidence that has been put forward in support of the hypothesis that processing is decomposed). One of these effects is known as the unit-decade compatibility effect (for numerals consisting of two digits). When participants are asked to decide which of two numbers is larger they respond faster for 42 < 57 than for 47 < 62. In the former, but not the latter, case there is no conflict between the overall outcome and the outcome for the individual digits. Both 4 < 5 and 2 < 7 for 42 < 57, whereas 4 < 6 but 7 > 2 for 47 < 62. This incompatibility in the second case leads to longer reaction times (Nuerk et al. 2001; Verguts and De Moor 2005). There is even an additional effect, where processing is faster if there is also a compatibility for the within-number comparison: 34 < 79 is easier than 32 < 76, because in the first case the irrelevant 3 < 4 and 7 < 9 are congruent with the relevant 3 < 7. In the second case the irrelevant 3 > 2 and 7 > 6 conflict with the relevant 3 < 7 (Wood et al. 2005). Finally, the unit-decade compatibility effect isn't just relevant for comparison tasks. Guillaume et al. (2012) found that the compatible pairs of numbers are also added more easily and faster than incompatible pairs (25 + 48 is easier than 28 + 45). Furthermore, there is a difference in strategy execution. With compatible pairs of numbers (25 + 48) participants more often started by adding from 48 (to 68, then 73 instead of 65, 73 when starting from 25). In the case of incompatible pairs participants showed no such preference for starting with the larger number. Instead they tended to choose the number to the left of the addition sign (Guillaume et al. 2012). This is far from the only influence of the numeral system on arithmetic performance.
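Before turning to those further influences, the unit-decade compatibility classification described above can be made concrete. The sketch below is purely illustrative: the function name and the restriction to pairs that differ in their decade digit are my own choices, not taken from Nuerk et al. (2001) or Guillaume et al. (2012); only the example pairs come from the text above.

```python
# Classify two-digit comparison pairs as unit-decade compatible or incompatible.
# A pair is compatible when the decade and unit comparisons point in the same
# direction (e.g. 42 vs 57: 4 < 5 and 2 < 7), assuming the pair differs in its
# decade digit, as in the stimuli described above.

def is_compatible(a: int, b: int) -> bool:
    lo, hi = sorted((a, b))
    decades_agree = lo // 10 < hi // 10   # decade comparison
    units_agree = lo % 10 < hi % 10       # unit comparison
    return decades_agree and units_agree

pairs = [(42, 57), (47, 62), (25, 48), (28, 45)]
for a, b in pairs:
    label = "compatible" if is_compatible(a, b) else "incompatible"
    print(f"{a} vs {b}: {label}")
# Expected output: 42 vs 57 and 25 vs 48 are compatible; 47 vs 62 and 28 vs 45 are not.
```

Nothing in this sketch models reaction times; it only makes explicit which pairs the reviewed studies predict to be slower (the ones labelled incompatible).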
As is to be expected, cases of addition where one needs to perform a carry, because the units add up to 10 or more (e.g. 25 + 47), take more time and are more prone to errors than instances where this is not necessary (Ashcraft and Stazyk 1981). Similarly, subtraction takes longer and is more likely to lead to errors when one needs to borrow (e.g. for 43 − 18) because the subtraction 3 − 8 goes below zero (Sandrini et al. 2003). Multiplication errors where the decade digit is correct are also more likely than errors where the decade digit is wrong. So, for 7 × 3 the correct outcome is 21, and 24 is a more likely mistake than 18 (Domahs et al. 2006; Verguts and Fias 2005). It seems that possible solutions are represented in a decomposed format: decade and unit digits are processed separately, so errors on only the unit digit are more likely than errors on both digits. One can also consider the algorithms for multiplication that Menary (2015, p. 14) discusses from this, more cognitive, perspective. He mentions two ways in which one may calculate what 23 × 11 is. A first option is to start from the right, multiplying with the 1 and then with the 10, and adding the partial products (23 + 230 = 253). Another option is to proceed instead from the 10 and do the multiplication even more explicitly on a digit-by-digit basis, multiplying each digit of 11 with each digit of 23 before summing the results. Menary concludes that these algorithms display the usefulness of spatially arranging the numerals in a certain way. The cognitive science results from above suggest that a stronger conclusion may be drawn from these examples. Both algorithms, because they separate the multiplication problem into subproblems about the separate digits, fit in perfectly with the way in which we process multi-digit numerals. So, one of the reasons why we may have ended up using these algorithms and not some other algorithms (e.g. one where we repeatedly add 23 to the first number and subtract 1 from the second number) is that these are in line with the way the brain processes Hindu-Arabic numerals. Since numeral processing is decomposed the brain already has the individual digits at hand when one parses the numbers involved in the multiplication problem. We moreover generate the possible answers to the multiplication separately for the separate digits. These written algorithms make that cognitive strategy more explicit, reducing errors along the way. It is not just the spatial organisation that is relevant here; the close resemblance with the way our brain parses multi-digit numerals also plays an important role. Our cognitive processes influence our cultural strategies for performing arithmetical calculations. These cognitive processes, in turn, are clearly influenced by cultural factors. Which numeral system is used has to make an important difference for how the brain processes numerals, and so how the brain works with larger numbers. For one thing it matters what base a numeral system has, as this determines (at least in a place-value system) which numerals are multi-digit numerals and which are not: a cultural factor may help determine the range of the small number DNS. For example, in our base-10 numeral system the cut-off point is at 10, though a fair number of languages have number words such as 'eleven' that do not reflect this. Other numeral systems, however, have different bases and so different numerals that count as multi-digit, though low bases are typical.
The Babylonian numeral system, an example that may come to mind of a system with a high base, actually has a 10-6 cycle to reach base 60: there are different signs until 10 and then a repetition of those signs until you return to the symbol for one when you reach 60 (Høyrup 2001). More importantly, not all numeral systems are place-value systems. The Roman numeral system, for example, is a sign-value system where the value of a digit is determined by the kind of sign it is [and so not by its place-at least not in the original numeral system where e.g. 4 was IIII instead of the Medieval IV (Schlimm and Neth 2008)]. In this case numerals are probably interpreted decomposed when there are several digits, but the process of arriving at the final value will be quite different. Since place-value is not important it may well be that effects based on place-value, such as the unit-decade compatibility effect, disappear. In fact, in a series of experiments Krajsci and Szabó (2012) found that when taught a completely new sign-value system and a new place-value system performance on simple addition and comparison tasks is better for the sign-value system. In other words, sign-value systems are easier to learn and work with for those kinds of tasks. The same seems to be true for other early numeral systems, such as those from Ancient Egypt and the Mayas (Nickerson 1988). As Schlimm and Neth (2008) also found, one needs to remember fewer addition facts for calculations with Roman numerals. On the other hand, such calculations require more basic perceptual-motor interactions than calculations with Hindu-Arabic numerals. So, cognitive performance and the underlying cognitive processes are different for different numeral systems. Similar differences in performance have been observed for other numeral systems. The Oceanic language Mangarevan contains two numeral systems: a decimal system and a system that mixes decimal and binary patterns (Bender and Beller 2017). The mixed system reduces the number of addition facts that have to be remembered for calculation, and is favoured by its users over the regular decimal system (Bender and Beller 2014). One reason is that the mixed system outperforms the regular decimal system found in English in terms of compactness and regularity (Bender et al. 2015). Again, these different features of the numeral system influence cognition [see also Zhang and Norman (1995) for another comparison of cognitive differences resulting from use of different numeral systems]: they reduce the load on working memory because users need to keep track of fewer symbols and addition facts. This reduction of cognitive load may also have been an important feature of early number use, especially their use of tokens, notched tallies, etc., as Overmann (2016) argues in a review of archeological evidence. Unfortunately little more is known about the cognitive processes that are used when working with any of these other numeral systems. For that reason, I focus primarily on what we know about Hindu-Arabic numerals in the rest of the paper. There is clear evidence that the numeral system influences the way in which we process larger numerals. The acquisition and functioning of the large number DNS thus depends on culture. The exact influences will be clearer once more extensive studies have been conducted with other numeral systems, but that there are such influences is not in question. 
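As an aside, the point that the numeral system fixes the units over which decomposed processing can operate can be illustrated with a short sketch. It is purely illustrative (the function names and the example number are my own, and the Roman mapping is just the standard additive/subtractive notation); it does not model the cognitive processing itself, only how differently the same quantity is packaged by a place-value and a sign-value system, and how the base changes which digits exist.

```python
# The same number, decomposed as a place-value system presents it (digits whose
# meaning depends on position and on the base) versus as a sign-value system
# presents it (value-carrying signs, as in Roman notation).

ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def place_value_digits(n: int, base: int = 10) -> list[int]:
    """Digits of n in a place-value system, most significant first."""
    digits = []
    while n > 0:
        digits.append(n % base)
        n //= base
    return list(reversed(digits)) or [0]

def roman(n: int) -> str:
    """n in sign-value (Roman) notation, built by greedily consuming values."""
    out = []
    for value, sign in ROMAN:
        while n >= value:
            out.append(sign)
            n -= value
    return "".join(out)

print(place_value_digits(1987))           # [1, 9, 8, 7]
print(place_value_digits(1987, base=6))   # [1, 3, 1, 1, 1]  (1296 + 3*216 + 36 + 6 + 1)
print(roman(1987))                        # MCMLXXXVII
```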
To return to the idea of the DNS as a case study for our theories of cognition, there is a clear difference between the small and large number DNS. The small number case, as I argued, offers little support for the claim that cultural factors are important. The large number case on the other hand is an excellent illustration of the way in which cultural factors influence, and are influenced by, cognitive processes in the brain. Numerals are important to prompt the acquisition of the small number DNS, but we do not need to reference properties of numerals in an explanation of the cognitive mechanisms underlying the small number DNS. On the other hand, the properties of numerals are extremely important for the cognitive mechanisms underlying the large number DNS, as these mechanisms function on the template provided by the numeral system which is learned. It is the large number part of the DNS that is important for the question whether Menary (2015) is right in claiming that it can help decide in favour of HEC.

Empirical evidence for enculturation?

Philosophers have already debated the claim that empirical evidence supports the hypothesis of extended cognition, as I mentioned in the introduction. From that debate it is clear that the issue can be divided into two parts: whether there is empirical evidence that decides between internalism (INT) and externalism (EXT) about cognition, and whether there is empirical evidence that decides between HEC and HEMC. I will not tackle these questions in full generality, as all I aim to do here is to determine whether the case study of the DNS helps on either of these issues. In my view it is of some help, namely that it supports EXT over INT, but fails to decide between HEC and HEMC. One last question is how general we take these claims to be. Could we say, for example, that INT holds for the cognitive processes underpinning our grasp of small numbers and EXT for those processes underlying our grasp of large numbers? It at least seems to be an option. Since the two cognitive processes may be distinct, e.g. in terms of how numerical comparisons are evaluated, they may also be differently constituted. The philosophical debate tends to interpret these questions more generally, so I will also stick to a more general interpretation. Either INT holds for arithmetical cognition or EXT holds, which means that the large number DNS could be sufficient as a case study to argue that EXT and HEC are true for arithmetical cognition.

Internalism versus externalism

Why does the above study of the DNS support EXT over INT for arithmetical cognition? The important part of the case study is, as I indicated, the large number DNS. The explanations of those cognitive processes clearly involve environmental props. One needs to appeal to features of the numeral system to explain why the brain does certain things, such as attend to the position of each decomposed digit. Furthermore, the computation of place-value/sign-value is specific to the numeral system that is in use. As I also explained, the type of numeral system that is in use impacts performance, strategy choice and strategy execution. Strategy execution is influenced by a kind of unit-decade compatibility effect, and strategy choice depends more generally on the numeral system one is used to: different strategies will be relevant for Roman numerals than for Hindu-Arabic numerals (Schlimm and Neth 2008; Zhang and Norman 1995).
One such difference is the load on working memory, for example in terms of how many basic addition facts need to be remembered. The differences in performance summarized in the previous section show that environmental/cultural props involved in arithmetical cognition play an important role in explaining how that part of cognition works. A lot of our performance at tasks with larger numbers is bound up with contingent features of the numeral system we are using. As a result we need to pay close attention to environmental resources when studying these cognitive processes. In contrast to small numbers, where it is possible to study the cognitive processes (the parallel individuation system, the way we distinguish one item from more than one item, etc.) in isolation from features of the numeral system, we cannot study the large number DNS without taking into account features of the relevant numeral system. On the formulation of INT and EXT from the introduction this means that EXT is true for arithmetical cognition: arithmetical cognition depends intimately on environmental resources and so should be studied within the context of those resources. Consequently, an externalist account like HEC or HEMC is true of arithmetical cognition. However, the case study does not decide between these two alternatives, as I argue in the next subsection.

HEC versus HEMC

The central issue on which HEC and HEMC differ is whether the cultural practices that our cognitive processes depend on partly constitute our cognitive systems or are mere causal factors influencing the (internal) cognitive processes. The way in which a case study is supposed to decide between these two positions is that the explanation of the cognitive processes is supposed to be better, one way or the other. The argument would be an inference to the best explanation, where either HEC or HEMC offers the best explanation of (that part of) cognition (Sprevak 2010). Menary (2015) is aware of the criticism of such an argument for HEC, but maintains that HEC has the upper hand specifically because it can answer the question "Assuming that cognitive processing criss-crosses between neural space and public space, how does it do this?" (Menary 2015, p. 9). More generally, Menary sees the argument for HEC, which he calls CI, as an inference to the best explanation based on three features: the novelty and uniqueness of mathematical cognition, as well as the interactions between mathematical cognitive processes and the public numeral systems. In more detail, Menary focusses his arguments around three findings. The first is by Lyons et al. (2012), of so-called 'symbolic estrangement'. Symbolic estrangement is the finding that comparisons across formats (so with one number presented by a numeral and the other by an array of dots) are harder than within formats, even when the two numerals are from different numeral systems. That is in line with the studies by Dehaene et al. (1999) and Spelke (2003) that I commented on at the end of Sect. 2. As the reader may recall, it seems that this holds for large numbers but not for small numbers. Furthermore, these studies primarily support the novelty aspect: they show that the DNS extends our cognitive capacities and doesn't just build upon the ANS. The second finding is a study by Landy and Goldstone (2007) with college-level algebraists. They found that by altering the spacing between addition and multiplication symbols it is possible to induce errors regarding which operation takes precedence.
In other words, Landy and Goldstone (2007) found that the spatial distribution of mathematical symbols has effects on performance. Again, this is in line with other findings I have discussed, such as the effect of the type of place-value system on performance effects like the unit-decade compatibility effect and the difference in performance for different numeral systems. So, this is the second part on which Menary builds his arguments. It shows some of the interactions between our cognitive processes and cultural tools. The third and last piece of support is the idea that the acquisition of the DNS involves changes to the brain (Dehaene and Cohen 2007). The DNS goes beyond our initial capacities, so Menary argues, because it requires changes to the brain in order to acquire it. The novelty aspect of the DNS is supported by these findings. Furthermore, the uniqueness of the DNS is supported, since only humans seem to be able to go through the changes to the brain that are necessary to acquire the DNS. Menary uses these three supports to argue for HEC over HEMC in two main passages. The first passage is rather multi-faceted: Our cognitive capacities cannot cope with long sequences of complex symbols and operations on them. This is why we must learn strategies and methods for writing out proofs. Symbol manipulation makes a unique difference to our ability to complete mathematical tasks, and we cannot simply ignore their role. If we take the approach of CI, then mathematical cognition is constituted by these bouts of symbol manipulation, and we cannot simply shrink the system back to the brain. The case for a strongly embedded approach to mathematical cognition depends upon the novelty and uniqueness of mathematical practices and dual component transformations. (Menary 2015, p. 16) Numerals are needed to acquire the DNS and (for large numbers) need to be mentioned in an explanation of the underlying cognitive processes. As Menary points out, we cannot ignore the role of numerals-which is why I agree that the case study supports externalism. However, I disagree that the role symbols play implies that the cognitive process cannot be shrunk back to the brain. Novelty and uniqueness hold just as much for the small number part of the DNS as for the large number part of the DNS. Without numerals, it seems, we would not develop either. The small number part of the DNS is thus also new, as it extends our cognitive capabilities beyond our innate capacities. We need to learn to work with small numbers, just as we need to learn to read. Yet the novelty and uniqueness of the small number DNS is not sufficient to necessitate acceptance of HEC. The cognitive processes underlying the small number DNS can be understood in terms that are independent from properties of public symbol systems. As discussed in Sect. 2, the parallel individuation system, the ability to distinguish between one and more than one item and the mental models based on the parallel individuation system can be understood without reference to properties of numerals. While numerals are important to prompt the development of the small number DNS, they are not needed in an explanation of the underlying cognitive processes of the small number DNS. Therefore, novelty and uniqueness alone do not support HEC. The dual component interactions (i.e. 
interactions going back and forth between the cognitive processes and the public numeral systems) have to provide the support for HEC instead, since of the three features Menary discusses that is the only one that is exclusive to the large number DNS. I discussed some dual component interactions in Sect. 3, namely some of the effects on performance of the numeral system and some effects of our cognitive systems on algorithm choice. The question Menary raises is how one explains this criss-crossing without accepting the constitutivity claim. In other words, the argument is that if these cultural practices (such as algorithm choice) are not constitutive of our cognitive processes, then one cannot offer an equally good explanation of these interactions. I think that that challenge can be met. In Sect. 3 I offered an explanation of the influence of our cognitive processes on algorithm choice: our cognitive mechanisms process numerals in a decomposed fashion. That is how our cognitive mechanisms engage with public symbols. Hence, the effect of our cognitive processes on these cultural practices is that they are brought in line with the functioning of the cognitive mechanisms. Multiplication algorithms that also work in this decomposed fashion are preferred over other algorithms, because of the way the brain processes Hindu-Arabic numerals. This explanation is purely causal, yet does account for the influences going back and forth between the numeral system and our cognitive processes. In line with the general argument pushed by Sterelny (2010), one can give an equally good causal explanation of the features that are supposed to be best explained by the constitutivity claim. There are a lot of other dual component interactions that would need to be explained. For example, one needs to explain the interactions between the features of the numeral system and our cognitive mechanisms. The choice of base determines which comparisons and calculations run into the unit-decade compatibility effect. Furthermore, the choice of the kind of numeral system (place-value or sign-value, for example) has effects on performance. I think that these interactions can be explained without reference to the constitutivity claim. The effect of the choice of base has to do with the kind of numerals that feed into the decomposed processing. If the system has base 10, then those are the numerals 0-9. If the system has base 6, the numerals are 0-5, and so on. It is a matter of fine-tuning the decomposed processing mechanism, and so it seems that at least these interactions can be explained in purely causal terms. I am for that reason optimistic that dual component interactions can be explained without accepting HEC. Menary also claims that symbol manipulation makes a unique difference to our ability to complete mathematical tasks [cf. Landy and Goldstone (2007)], and that this is a reason to accept HEC over HEMC. It is certainly true that these studies show that the symbols we work with have an impact on our performance. However, as one can also see in the discussion section of Landy and Goldstone (2007), this doesn't mean that the symbols themselves need to be counted as part of the cognitive processes. Their suggestion is that the brain mechanisms for evaluating mathematical expressions interact more closely with visual processes than previously thought. Consequently, incorporating the visual processes, and not the symbols, is at the moment an equally good explanation-and one put forward by the researchers in question. 
The first argument, with all of its facets, does not yet decide between HEC and HEMC. The second passage in which Menary argues for HEC is more straightforward: symbols are not simply impermanent scaffolds, they are permanent scaffolds. They become part of the architecture of cognition (and not simply through internalisation). Mastery of symbol systems results in changes to cortical circuitry, altering function and sensitivity to a new, public, representational system. However, it also results in new sensori-motor capacities for manipulating symbols in public space. (Menary 2015, pp. 16-17) Menary's main claim of interest is that symbols do not simply become part of the architecture of cognition through internalisation. Of course, we manipulate symbols in public space frequently and it seems reasonable to say that performance is better with than without such symbol manipulation. That is also what one might expect if HEMC is true: without the scaffolding it is harder than with the scaffolding in place-once the DNS is acquired. But it is possible without public symbol manipulation, meaning that one can view the process as one of internalisation. The findings of Landy and Goldstone (2007) can be interpreted, as I argued above, in such a way that they only show something about the internal organization of the brain. Similarly, the cognitive mechanisms for large numbers described in Sect. 3 have all been described in an internal fashion. We may often use public symbols because it is easier (and hence performance is better), but that does not yet show that we cannot (in principle) do without these external cultural practices once we have acquired the DNS. Finally, one may wonder whether the other cultural influences discussed in Sect. 3 [i.e. the studies reviewed in Bender and Beller (2017)] can support an inference to the best explanation for HEC. First of all, there is an issue with such an argument: while we know that there are differences in performance and strategy execution depending on the numeral system, we do not know if there are also differences in the underlying cognitive mechanisms. Of course, the cognitive mechanisms are employed differently-the theoretical approach of e.g. Schlimm and Neth (2008) shows that the balance between remembered facts and perceptual-motor interactions is different for different numeral systems. This balance may also be different for people using the same numeral system, but coming from a different cultural background. Tang et al. (2006) argue that a difference in brain activation between English and Chinese speakers while solving arithmetical problems could be due to a stronger reliance on short-term memory and perceptual-motor interactions in the case of Chinese speakers. However, that doesn't mean that the components that make up the DNS are fundamentally different. One reason to think that the components underlying the DNS are the same is that participants that had to learn new place-value and sign-value numeral systems had little difficulty doing so. They also did not experience difficulties when they had to calculate with these new systems (Krajsci and Szabó 2012). If so, then there is nothing more to the cultural differences than the dual-component interactions discussed above-which I argued one can interpret in a merely causal manner. Suppose, though, that there are differences; Hindu-Arabic numerals are processed in a decomposed fashion but Roman numerals are processed holistically, say. 
Even in that scenario those differences can, probably, be explained in causal terms rather than constitutive ones. Zhang and Wang (2005), who thought they had found such a difference in processing, give an explanation in terms of interactions between perceptual mechanisms and the brain mechanisms for evaluating mathematical expressions-similar to the explanation Landy and Goldstone (2007) propose based on their findings. So, rather than accepting HEC one could instead propose an explanation where the differences in numeral systems causally interact with perceptual mechanisms, leading to a change in the cognitive processes that underlie the DNS. Conclusion Is the DNS a good basis for an inference to the best explanation with HEC as its conclusion? In the case of the small number DNS the answer is a clear no: in that case numerals help us (and may be required to) acquire the relevant concepts, but they do not seem to figure in an explanation of the underlying cognitive mechanisms. The large number DNS, on the other hand, functions in such a way that one needs to mention numerals in an explanation of the underlying cognitive mechanisms. The cognitive processes that make up the large number DNS are structured in a way that builds on the (internal) structure of the numeral system we use. Performance is clearly influenced by contingent features of the numeral system, and it seems that the underlying cognitive processes are at least combined in different ways depending on the kind of numeral system one uses. This has some importance for the way in which we think about arithmetical cognition. As I have argued, the DNS case study gives us a good reason to think that internalism about arithmetical cognition in the sense of INT is false, so some version of externalism is true. The case study on its own, however, will not decide which version that is. Whether HEC or HEMC is true about arithmetical cognition does not seem to be decided by the empirical data, because the mechanisms underlying the DNS can be explained equally well in causal as in constitutive terms-as Sprevak (2010) has argued for the general case. We can make good sense of what is going on with either framework, so something else (such as a mark of the cognitive) will have to decide between these hypotheses.
11,599
2018-10-26T00:00:00.000
[ "Mathematics", "Philosophy", "Psychology" ]
EEG-based BCI Dataset of Semantic Concepts for Imagination and Perception Tasks Electroencephalography (EEG) is a widely-used neuroimaging technique in Brain Computer Interfaces (BCIs) due to its non-invasive nature, accessibility and high temporal resolution. A range of input representations has been explored for BCIs. The same semantic meaning can be conveyed in different representations, such as visual (orthographic and pictorial) and auditory (spoken words). These stimuli representations can be either imagined or perceived by the BCI user. In particular, there is a scarcity of existing open source EEG datasets for imagined visual content, and to our knowledge there are no open source EEG datasets for semantics captured through multiple sensory modalities for both perceived and imagined content. Here we present an open source multisensory imagination and perception dataset, with twelve participants, acquired with a 124 EEG channel system. The aim is for the dataset to be open for purposes such as BCI related decoding and for better understanding the neural mechanisms behind perception, imagination and across the sensory modalities when the semantic category is held constant. Background & Summary Brain computer interfacing and cognitive neuroscience are fields which rely on high quality brain activity based datasets. Surface electroencephalography (EEG) is a popular choice of neuroimaging technique for BCIs due to its accessibility in terms of cost and mobility, its high temporal resolution and non-invasiveness. Although EEG datasets can be time consuming and expensive to obtain, they are extremely valuable. A single open source dataset can form the basis of many varied research projects, and thus can more rapidly advance scientific progress. For example, EEG datasets for inner speech commands 1 and for object recognition 2 were recently created and shared to address a lack of publicly available datasets in these areas. These datasets enable the development of sophisticated techniques for analysis and decoding, which can be used to investigate neural representation mechanisms and improve decoding performance for EEG based BCIs. Different paradigms have been used for EEG based BCIs such as Event Related Potential (ERP) BCIs for decoding inner speech 1,3 , Steady-State Visual Evoked Potentials (SSVEPs) 4 and motor imagery 5 , and oscillatory activity driven BCIs for tasks such as drowsiness detection 6 . Recently, there has been growing interest in decoding alternative information forms such as auditory and visual, perception and imagination 7 , and semantic information 8 . However, the lack of open source EEG datasets for decoding imagined and perceived semantic level information is hindering progress towards this research goal. Visual decoding involves decoding simple low level visual components such as colour and shape, or complex naturalistic images of objects, scenes and faces. In contrast, semantic decoding extracts conceptual information such as object types or classes. For example, was the object in an image shown to an observer a flower or a guitar? The low level visual and auditory sensory details of the semantic concept, such as whether the flower is yellow or purple, are ignored with a focus on the high level meaning of 'flower' . The advantage of decoding semantic information, as opposed to sensory based information such as visual details, is that semantic representation is partially invariant across modalities [9][10][11][12][13] . 
Invariance to low level sensory detail can be considered a desirable quality in BCI systems in which within class generalisability is a key goal. This can help increase robustness to real world data heterogeneity. Growing evidence of neural overlap between perception and imagination 14,15 may also facilitate generalisability. This task invariance has enabled cross-decoding between perception and imagination 16,17 . Efforts are being made to determine the spatiotemporal extent of these shared neural representations [18][19][20] , which may be most invariant in brain regions and time points associated with latent representations; i.e. closer to semantic level information. For example, the differences between imagery and vision appear to be most pronounced in the early visual cortex, with greater overlap occurring higher up in the visual hierarchy 21 and at time points linked to high level perceptual processing 14 . To drive decoding of a class based on semantic information, stimuli must vary in their low level sensory details. This approach was employed in a recent open source dataset that captured EEG measurements for object recognition using a rapid serial visual presentation paradigm 2 . The dataset includes 22,248 images related to 1,854 concepts. While there are impressive semantic decoding results emerging using fMRI [22][23][24][25] and EEG [26][27][28] which demonstrate feasibility, the field lacks an open source EEG dataset for researchers to investigate semantic representation across several sensory modalities, as well as both perception and imagination. In this paper, we introduce a novel dataset, as well as the code for pre-processing and analysis, designed for investigating and decoding semantic representation of imagined and perceived visual and auditory information. We also present an initial analysis to demonstrate this dataset's utility. To capture semantic representation, we drive high variance within each class (or rather semantic category). Specifically, we use three semantic concepts-penguin, guitar and flower-that participants perceived and subsequently imagined in auditory, visual orthographic, and visual pictorial forms. Furthermore, we provide a vividness of imagination metric for each participant for both the visual and auditory modalities. Individual differences in imagination capacity are shown to impact neural correlates 29,30 and therefore may affect the decodability of, or the decoding strategy used for, each individual. Some proposed uses of this dataset for both BCI and cognitive neuroscience oriented research questions include: 1. Decoding between sensory modalities such as auditory, visual orthographic and visual pictorial. 2. Decoding task type, specifically between perception and imagination. 3. Decoding the semantic category regardless of the sensory modality presentation or task. Methods Participants. Ethics approval was obtained from the Psychology Research Ethics Committee at the University of Bath (Ethics code: 19-302). Participants gave informed consent to take part in this study and for their data to be shared. Eight participants were recruited for a data collection pilot to ensure the quality of the dataset. This allowed us to identify and address any syncing issues with the Lab Streaming Layer network, as well as unexpected environmental noise at around 27 Hz in two of the sessions. The final version of the experiment was completed by twelve participants, most of whom were students at the University of Bath. 
Initially, selection criteria included normal or corrected vision and hearing, and excluded individuals with epilepsy. However, we later expanded the criteria to include individuals with visual and hearing impairments, to enable our dataset to support a wider range of research questions. One participant with visual and hearing impairment was included in the final sample. Participants were reimbursed £20 for their time in exchange for participating in an approximately two hour session. Experimental procedure. Participants were offered the opportunity to participate in a second data gathering session, in order to increase the number of trials for each participant. Of the twelve participants, nine completed one session and three returned for a second session. The experiment was conducted in a soundproof and lightproof room. It was not electrically shielded but all mains outlets other than the acquisition laptop charge point were turned off. The EEG setup, including cap fitting and gel application to the electrodes, took approximately 40 to 60 minutes. During the first session, participants completed two questionnaires while the gel was being applied: the vividness subscale of the Bucknell Auditory Imagery Scale (BAIS-V) 31 and the Vividness of Visual Imagery Questionnaire (VVIQ) 32 . Subsequent to this, participants performed a practice version of the experimental tasks with a chance to ask questions around any uncertainties. After the setup was complete, the light was turned off, the experimenters left the testing room and went into an adjacent room, and the participant began the study when ready by pressing the computer keyboard's space bar. For a schematic of the main task flow, see Fig. 1. The experiment was designed using Psychopy Version 3 33 , and presented on a 1920 × 1080 resolution screen. The Psychopy files are made available as described in the Usage Notes section. The ANT Neuro acquisition software 'eego' was used to record the EEG data. A Lab Streaming Layer (LSL) network sent the triggers from the presentation PC to the acquisition software to time-stamp the stimuli and task relevant information. There were ten blocks in total, though the majority of participants did not complete all the blocks due to fatigue or reporting reduced concentration. See Table 4 for the number of trials completed for each condition for each participant. The participants were encouraged to take breaks between each block and call the experimenter if they required water or had any concerns. Questionnaire. The VVIQ and BAIS-V are self-report measures of mental imagery ability. The BAIS-V is a subscale of BAIS which captures the subjective clarity or vividness of an imagined sound, such as a trumpet playing happy birthday, on a scale of 1-7. A score of 7 is as vivid as the actual sound, whereas 1 indicates there was no sound at all. The VVIQ measures the subjective vividness of an imagined scenario such as a sunset, on a scale of 1-5, with 5 being the most vivid and 1 meaning no image at all. For VVIQ and BAIS-V results, see Table 1. The mean VVIQ score was 3.75 (std = 0.55) and average BAIS-V was 4.76 (std = 0.85). VVIQ and BAIS scores are significantly correlated as calculated using Spearman's rank with r = 0.79 and p = 0.007. Data acquisition. A 128 channel ANT Neuro eego Mylab measuring system (ANT Neuro B.V., Hengelo, Netherlands) was used, with 124 EEG electrodes. 
The gel-based waveguard cap has active shielding which protects the signal from 50/60 Hz environmental noise. The sampling rate was 1024 Hz, with a 24-bit resolution. The montage, with pre-fixed electrode positions, is laid out according to the five percent electrode system 34 , which is an extension from the standard 10/20 layout for higher resolution EEG systems. The EEG cap size was selected based on the participant's head circumference in cm. Large is 56-61 cm, medium is 51-56 cm and small is 47-51 cm. Once the cap was fitted to the participant's head, OneStep Cleargel conductive gel was applied to the electrodes with CPz as reference, and the ground fixed to the left mastoid with Ten20 paste. Impedance of below 50 was sought, but due to variables such as hair thickness and other factors, there were often up to ten electrodes that had higher impedance. After the experiment was finished, the recording was stopped and the EEG data were stored as.cnt files, and the events as.evt files in ANT Neuro native format. the paradigms. This study involved six paradigm variations, consisting of two tasks: imagination or perception, and three sensory modalities: visual pictorial, visual orthographic and auditory comprehension. The semantic categories used were flower, penguin and guitar. These three categories were selected based on semantic distance and syllable length. Semantic distance was determined by computing a Word2Vec latent space 35 , where each word is represented as a vector and the distance between vector pairs signifies the semantic similarity of two words. The distance between each of the pairs was calculated to ensure all pair-distances were < 0.2. A visual plot was then created using a t-distributed Stochastic Neighbour Embedding (t-SNE) which enables high dimensional data to be visualised in a 2D space (see Fig. 2). While common daily objects may be preferred as stimuli for BCI purposes, we selected more obscure objects which are unlikely to be used in the same contexts. This decision was driven by two main factors. First, using objects that people encounter on a daily basis can introduce unpredictable semantic associations and relations from their daily routines. Secondly, objects we have expertise in processing, such as faces, may result in spatially clustered selectivity or brain modularity 36 . This can restrict the generalisability of findings to non-expertise categories and thereby reduce the overall scope of application. Another constraint in selecting the semantic categories was that they all have two syllables. It is crucial to keep syllable length constant in the auditory comprehension paradigm to ensure that decoding is based on semantic properties rather than the syllable number associated with different words. Visual pictorial. The visual pictorial paradigm involves perception and imagination of images belonging to the three semantic categories: flower, penguin and guitar. The visual pictorial stimuli consisted of coloured images with a resolution of 1200 × 1200 pixels against a black background (see Figs. 3,4). In the context of object representation, incorporating objects within a consistent scene can enhance their semantic relations and aid in their recognition 37 . However, to maintain the purity of our study's semantic concepts we opted to exclude any contextual scene information. This was because the addition of contextual information could potentially introduce unexpected semantic associations, thus introducing semantic noise. 
Furthermore, including contextual scenes would have added complexity, making the imagination task more challenging and potentially leading to increased participant fatigue. Therefore, we chose to focus solely on the objects themselves, without any accompanying scene context. Visual orthographic. Visual orthographic is the image of the word form of the semantic categories. The stimuli consisted of a 1200 × 1200 pixel white background with writing of either 'penguin' , 'flower' or 'guitar' overlaid (see Figs. 3,5). There were 30 exemplars for each category, with five different colours used (black, blue, red, green, purple) and six different font styles. Auditory comprehension. Auditory comprehension consists of the speech version of the three semantic categories 'penguin' , 'guitar' and 'flower' . Recordings of these words were obtained from different speakers who did not participate in the EEG experiment. In the perception task, participants passively listened to these recordings which were processed using Audacity to remove background noise. Each clip was two seconds long. The words were spoken in either a normal, low or high voice. During the imagination task, participants were asked to imagine the spoken words that they had heard, using the same voice of the speaker rather than their own inner voice. To view an example of an audio trial, refer to Fig. 6. Data processing. Bad channels. To rigorously adjust for bad channels, a combination of manual and automatic bad channel detection was used. Bad channels identified from visual inspection of the plotted raw data in the ANT Neuro eego software were recorded in the meta_extended.csv file, discussed in the Data Records section, for each participant and session. Automatic bad channel detection was computed using PyPrep PrepPipeline https://pyprep.readthedocs.io/en/latest/generated/pyprep.PrepPipeline.html. This method utilises several bad channel detection methods, including identifying channels that do not correlate with other channels, channels with abnormally low or high amplitudes, or high quantities of high frequency noise, and channels with flat signals. Channels were re-referenced before interpolation was applied to correct for bad channels. Re-referencing. During acquisition, electrodes were referenced to CPz. Re-referencing was conducted after all steps that offset the statistical trend of the overall data. Re-referencing was applied before and then after bad channel interpolation using common average referencing in MNE. A third re-referencing step was applied after filtering to remove low frequency drifts. Fig. 4 An example of a pictorial trial. After the cue, 5 trials occur with a different picture used in each. The picture is bounded in a white box, which reappears to frame the mental image for the imagination trial. Fig. 5 Example of an orthographic trial. After the cue, 5 trials occur with a different orthographic representation used in each. The written word appears against a white background, which reappears in the imagination trial to ensure similar scaling between imagination and perception. Filtering. Data were filtered to remove power-line noise via notch filtering. Powerline noise in the UK where this dataset was recorded is at 50 Hz, therefore we filter for 50 Hz and its harmonics: 100 and 150 Hz. 
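To make the filtering step concrete, here is a rough sketch of how the notch filtering (and the 2 Hz high-pass discussed next) might look in MNE; this is not the authors' released script, the file name is hypothetical, and the exact parameters used for the dataset may differ.

```python
import mne

# Hypothetical path to one converted recording (.set/.fdt, see the Data Records section)
raw = mne.io.read_raw_eeglab("sub-01_ses-01_eeg.set", preload=True)

# Remove UK power-line noise at 50 Hz and its harmonics at 100 Hz and 150 Hz
raw.notch_filter(freqs=[50, 100, 150])

# High-pass at 2 Hz to suppress slow drifts before ICA (as discussed below)
raw.filter(l_freq=2.0, h_freq=None)
```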
We also remove low frequency drifts which arise from movements of the head, scalp perspiration and wires. Filtering out frequencies below 2 Hz, via high pass filters, is recommended for high quality ICA decompositions 38 . Artefact removal. Artefacts include eye movements such as blinks and horizontal eye movements as well as muscle activity. Independent Component Analysis (ICA) was applied to the raw pre-processed data rather than epoched data. The FastICA algorithm was used, and 50 components selected. To identify eye components, we used an MNE implementation to generate epochs around electrooculogram (EOG) artefact events. These were estimated from channels close to the eyes, 'Fp1' and 'Fp2'. By estimating these artefacts, the components can then be rejected from the ICA components. The resulting data after ICA retains all 124 original dimensions. Epoching. Event labels for each condition were used to identify the beginning of each epoch. As the mne.Epochs() method to extract epochs from the raw data expects a consistent duration, we initially set tmin = 0 and tmax = 4. Subsequently, we use the known duration of each condition (see Table 2) to find the end points to properly epoch the data for the technical validation steps. We retain just the data relevant to perception and imagination, and keep the additional data related to the prior visual or auditory noise/mask only for the average event related potential analyses (see subsection Average Event Related Potentials). Data records. The full dataset 39 can be accessed at the OpenNeuro repository (https://openneuro.org/datasets/ds004306/snapshot). The file structure and naming follow the Brain Imaging Data Structure (BIDS) format (https://bids-specification.readthedocs.io/en/stable/). See Fig. 7. The participant with visual and hearing impairments is noted in the repository. Raw data. The original data produced in the ANT Neuro eego software are in .cnt and .evt format. They were converted in Matlab into .set and .fdt files to be in a format usable with the MNE package. A final conversion is computed to align the event data with BIDS format, resulting in a .tsv file. Therefore under the directories for each participant and session, i.e. sub-01/ses-01/eeg/, are four files including the raw EEG data, the electrode data, the events data and a report file. The raw data are a continuous recording of one whole session. The event files have an event label for each specific stimulus used. The trial type provides information about the specific stimulus. For example, 'Imagination_a_flower_high_5' refers to the imagination audio condition in which a relatively high pitched voice saying the word flower is imagined and the specific voice id of this stimulus is '5'. An example of a visual event is 'Perception_image_flower_c' which refers to a perception of a flower picture. The 'c' indicates that the picture is relatively naturalistic/complex. Additionally, the start and end of the baseline obtained prior to the experiment tasks are provided. Preprocessed data. As seen in Fig. 7, the preprocessed data are formatted as .fif for each participant and session. Both the EEG data and the event data can be extracted from these files in MNE. The preprocessing pipeline that has been applied to the data is described in the Data Processing section. Technical validation. Average event related potentials. Event Related Potential (ERP) plots can be used to investigate how the brain is modulated across time in response to specific stimuli. 
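As a minimal sketch of the epoching and per-condition averaging just described, continuing from the filtering sketch above and assuming the event labels listed in the Data Records section are available as annotations (the condition key shown is one of the example labels from the text):

```python
import mne

# Build events from the stimulus annotations (labels such as 'Perception_image_flower_c')
events, event_id = mne.events_from_annotations(raw)

# Fixed-length epochs (tmin = 0, tmax = 4 s), later trimmed to each condition's true duration
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=0.0, tmax=4.0, baseline=None, preload=True)

# Average ERP for one example condition
evoked = epochs["Perception_image_flower_c"].average()
evoked.plot()
```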
Averaging across trials shows consistent modulations. As there is high individual variance in neural anatomy and task strategy, we calculated the average ERPs for each participant and session separately. In Fig. 8, average ERPs for each of the six tasks for participant 18 from session 1 are shown. The selected electrodes for this analysis were in occipital and posterior regions. We can see that there is no consistent pattern for imagined audio. In contrast, there is a fairly consistent ERP across electrodes for the four visual conditions. Inter-trial coherence. Inter-trial coherence (ITC) captures phase synchronisation or consistency across trials. A high ITC of 1 would indicate perfect coherence, whereas 0 is the lowest value and indicates no coherence. ITC is computed separately for each of the six conditions, and is shown here as the average across participants. In Fig. 9, it can be seen that there is stronger coherence for perception trials than for imagination trials. This is consistent with the expected increase of inter-trial and within participant variation in timing for generating imagined stimuli, whereas perceived stimuli have consistent onset and therefore higher ITC. Orthographic and pictorial perception both show strong ITC in the first 800 ms which likely relates to visual stimuli onset. Coherence is present but weaker in the same time window for imagined orthographic and pictorial tasks. Imagined audio has the least ITC, with a very weak ITC demonstrated in the first 500 ms. Averaged power spectral density. We report the power spectral density (PSD) averaged over participants to represent the distribution of signal frequency components (see Fig. 10). This is computed for each of the six tasks separately. In each task there is a strong alpha peak. The plot also demonstrates that the 50 Hz power-line noise has been successfully addressed via the notch filtering described in the Filtering section. Task classification. To demonstrate the feasibility of this dataset 39 for decoding purposes and exploring neural mechanisms, we report a baseline performance on the imagination vs perception tasks for each sensory modality separately, using a logistic regression binary classification pipeline. For cognitive neuroscience, this gives insight into how distinct each task is for each modality. For BCI purposes, it can be useful to identify whether an individual is performing an imagination or perception task. To ensure consistency between the imagination and perception trials in terms of epoch length, we segmented all visual conditions into three second epochs and all the audio data into two seconds. In our analysis, we utilized a stratified cross-validation approach with five folds and conducted 50 iterations to ensure robustness of the results. The reported results (see Table 3) are averaged over the 50 iterations. Given that this was a binary classification task, chance level was set at 50%. For the visual modalities, classification accuracy is 75%. This is similar performance to that found in previous work 27 in which 71% accuracy was achieved when classifying between whether a participant was imagining or observing pictures of flowers or hammers, using a SpecCSP classifier. 
In this current study, the average decoding performance of 60% accuracy between imagined audio and perceived audio is substantially lower. One potential explanation is that auditory perception and imagination have a higher degree of overlap than visual imagery and perception. Limitations and final remarks. We present a novel high resolution EEG dataset 39 consisting of 124 channels. To the best of our knowledge, this is the first open source EEG dataset which captures not only semantic representation for several sensory modalities but also for both imagination and perception tasks for the same participant sample. This dataset is a promising starting point for investigating the feasibility of using semantic level representation for BCI input as well as enabling insights in cognitive neuroscience into the overlap in neural representation for semantic concepts in imagination, perception and different modalities. Still, decoding semantic representations from EEG data is difficult. To drive a representation related to semantic meaning rather than low level sensory details, we introduced high intra-class variance in this dataset. Intra-class variance results in more noise being present alongside the noise inherent from using EEG. Consequently, this is a challenging dataset for decoding, which makes it an interesting opportunity to apply deep learning techniques to extract meaningful information from noise. It is impossible to determine to what extent our participants were engaged with the experimental tasks, particularly for the imagination tasks. We included vividness metrics to indicate at minimum an individual's capacity for imagery tasks. While this metric may be relevant for the decodability of sensory information, it is less likely to correlate with semantic representation. We anticipate that decoding accuracy will vary significantly between people. Table 3. Depicting classification accuracy between imagination and perception for stratified cross validation with five folds for each participant and session, averaged over 50 iterations. LR refers to logistic regression. Here the participant number is before the underscore, and the session number after. For example, 3_3 is participant 3 and session 3. We hope that this dataset will create opportunities for other researchers to explore semantic decoding for BCIs as well as research questions related to neural mechanisms. By providing access not only to the raw data but also to the processed data and code for decoding, we offer a resource that can accelerate and support future research in these areas. Usage Notes. To facilitate the reproducibility and replicability of the study, the experiment was presented in Psychopy v.2021.2.3, a freely available software package. This ensures there are minimal barriers such as licences to prevent other researchers from using or modifying this experimental paradigm for their own studies. All code for processing and technical validation has been provided in a Jupyter Notebook tutorial style format so that following the steps for replication is as clear as possible, while also making it convenient for users to modify the code for related research questions. For example, minimal additional code is required to create classification pipelines for decoding semantics, tasks and modalities. File paths will need to be changed directly in the notebooks. 
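The baseline task classification described in the Technical Validation section could be sketched roughly as follows with scikit-learn, reusing the epochs object from the earlier sketch; this is an illustrative pipeline under stated assumptions (flattened time-domain features, labels derived from event names), not necessarily the exact pipeline in the released notebooks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Flatten each epoch (channels x times) into one feature vector;
# in practice epoch lengths would first be matched per modality (3 s visual, 2 s audio)
X = epochs.get_data().reshape(len(epochs), -1)

# Binary labels: 1 = imagination, 0 = perception, derived from the event names
inv_id = {code: name for name, code in epochs.event_id.items()}
y = np.array([inv_id[code].startswith("Imagination")
              for code in epochs.events[:, 2]], dtype=int)

# Stratified 5-fold cross-validation, repeated 50 times, as reported in the paper
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=50, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"Mean accuracy: {scores.mean():.2f}")
```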
Code availability. The Psychopy files to compile the experiment are stored on the Github repository https://github.com/hWils/Semantics-EEG-Perception-and-Imagination. Also on this repository are the Python processing and technical validation scripts. Users can directly use the Python code provided 1) to compute preprocessing as described in this paper, and 2) to reproduce the experimental results presented in the technical validation section. Table 4. Depicts the number of trials for each overall task for each participant and session. There is large variation in the number of trials completed out of 150, with a minimum of 64 and some participants completing all 150 trials. Here the participant number is before the underscore, and the session number after. For example, 3_3 is participant 3 and session 3.
6,074.6
2023-06-15T00:00:00.000
[ "Computer Science" ]
δ Gravity : Dark Sector , Post-Newtonian Limit and Schwarzschild Solution We present a new kind of model, which we call δ Theories, where standard theories are modified including new fields, motivated by an additional symmetry (δ symmetry). In previous works, we proved that δ Theories just live at one loop, so the model in a quantum level can be interesting. In the gravitational case, we have δ Gravity, based on two symmetric tensors, gμν and g̃μν, where quantum corrections can be controlled. In this paper, a review of the classical limit of δ Gravity in a Cosmological level will be developed, where we explain the accelerated expansion of the universe without Dark Energy and the rotation velocity of galaxies by the Dark Matter effect. Additionally, we will introduce other phenomenon with δ Gravity like the deflection of the light produced by the sun, the perihelion precession, Black Holes and the Cosmological Inflation. Introduction In the last century, the cosmological observations have revealed that the dynamic of the Universe is dominated by an accelerated expansion, apparently due to a mysterious component called Dark Energy [1][2][3].Additionally, these observations say that most of the matter is in the form of unknown particles that interact principally by gravitation with the ordinary matter, called Dark Matter [4].In this paper, we will refer to Dark Matter (DM) and Dark Energy (DE) effects as the Dark Sector.Although General Relativity (GR) is able to accommodate the Dark Sector, its interpretation in terms of fundamental theories of elementary particles is problematic [5]. For one side, some candidates exist that could play the role of DM, but none have been detected yet.In a galactic scale, DM produces an anomalous rotation velocity which is relatively constant far from the center of the galaxy [6][7][8][9][10][11][12], and a lot of alternative models, where a modification to gravity is introduced, have been developed to explain this effect.For instance, an explanation based on the modification of the dynamics for small accelerations cannot be ruled out [13,14]. On the other side, DE can be explained with a small cosmological constant (Λ).At early times, Λ is not important for the evolution of the Universe, but at later stages it will dominate the expansion, explaining the observed acceleration.However, the cosmological constant is too small to be generated in Quantum Field Theory models because it is the vacuum energy, which is usually predicted to be very large [15]. For all these reasons, one of the most important goals in cosmology and cosmic structure formation is to understand the Dark Sector in the context of a fundamental physical theory [16,17].Some explanations include additional fields in approaches like quintessence, chameleon, vector Dark Energy or massive gravity; the addition of higher order terms in the Einstein-Hilbert action, like f (R) theories and Gauss-Bonnet terms and finally the introduction of extra dimensions for a modification of gravity on large scales (see, for instance, [18]). 
Recently, in [19], a model of gravitation that is very similar to GR is presented.In that paper, two different points were considered.The first is that GR is finite on shell at one loop in vacuum [20], so renormalization is not necessary at this level.The second is the approach of δ gauge theories (DGT), studied for the first time in [21,22] and defined as follows: (a) consider a model described by a set of fields φ I and an action S(φ I ), which is invariant under an algebra of infinitesimal transformations G. (b) A new kind of field φI is introduced, different from the original set φ I .In the extended set of fields, an extra symmetry that we call δ symmetry is realized.It is formally obtained as the variation of the original symmetry G. (c) We find an action for the extended set of fields which is invariant under the extended symmetry.The action is unique if we impose the additional condition that the original action for φ I is recovered if all φI vanish.(c) It turns out that the classical equations of motion of φ I with the action S(φ I ) are still satisfied, even in the quantum level where the corrections live only at one loop.Therefore, when we implement this extension to General Coordinates Transformations (GCT), which we call Extended General Coordinates Transformations (ExGCT), and imposing the extended symmetry provides a unique extension for the Einstein-Hilbert action, which define the dynamics of the new model, called in this paper δ Gravity.For these reasons, the original motivation was to develop the quantum properties of this model (see [19]).In this work, we will study the classical properties of δ Gravity. A first approximation was developed in Section [23,24], presenting a truncated version of δ Gravity applied to Cosmology.The δ symmetry was fixed in different ways in order to simplify the analysis of the model and explain the accelerated expansion of the universe without DE.The results were quite reasonable taking into account the simplifications involved, but δ Matter was ignored in the process.After in [25], we developed the Cosmological solution in a δ Gravity version where δ symmetry is preserved, which means that we are forced to include δ Matter.In that case, the accelerated expansion can be explained in the same way and additionally we have a new component of matter as DM candidate.In addition, we guaranteed that the special properties of δ Theories previously mentioned are preserved.In this work, we will continue with the full-fledged δ Gravity presented in [25]. We will develop this paper as follows.In Section 2, a complete resume about the bases of δ Theories will be presented.Then, we will show the δ Gravity action that is invariant under ExGCT, including the equations of motion.We will see that the Einstein's equations with the usual Energy momentum Tensor, T µν , continue to be valid, but new equations of motion appears for gµν , depending on a new Energy momentum Tensor, Tµν , by the presence of δ Matter.Additionally, we will derive the equation of motion for the test particle.We distinguish the massive case, where the equation is not a geodesic, and the massless case, where we have a null geodesic with an effective metric.A complete derivation of all these is presented in [23,24]. 
To give a complete description of δ Gravity in this paper, in Section 3, we will present the cosmological case developed in [25].We obtain the accelerated expansion of the universe assuming a universe without DE, i.e., only having non-relativistic matter and radiation which satisfy a fluid-like equation p = ωρ, for both normal and δ Matter.A preliminary computation was done in [23], where an approximation is discussed.Later, in [24], we developed an exact solution of the equations, but in both cases we assumed that we do not have δ Matter, such that the new symmetry is broken.For that reason, in [25], we studied δ Gravity with δ Matter to preserve the new symmetry.This is very interesting because the presence of δ Matter gives us a possibility to explain DM.In both limits, the solution is used to fit the supernovae data and get predictions for the cosmological parameters.In this calculation, we obtained that the physical reason for the accelerated expansion of the universe is a geometric effect, where an effective scale factor is defined by the model, producing the accelerated expansion of the universe without a Cosmological Constant and predicting a Big-Rip as [26][27][28][29][30].This effective scale factor agrees with the standard cosmology at early times and shows acceleration only at late times.Therefore, we expect that primordial density perturbations should not have large corrections.Moreover, in [25], the value of the cosmological parameters improve greatly by the inclusion of δ Matter, increasing the importance of this component.For instance, the age of the Universe is much closer to the Planck satellite value now than the value we got in [24].Finally, we will include a brief analysis of the Inflation Case to present how δ Gravity could explain the exponential expansion in this era just like the accelerated expansion by DE. To study the DM phenomenon and complete the Dark Sector in δ Gravity, in Section 4, we will study the Non-Relativistic case to understand the behavior of the model in the (Post)-Newtonian limit.We have that δ Gravity agrees with GR at the classical level far from the sources.However, inside a matter fluid like a galaxy, new effects appear because of δ Matter.In that sense, δ Matter could be considered like a DM candidate.In this limit, a relation between the ordinary density and δ Matter density will be found.With this and some realistic density profiles as Einasto and Navarro-Frenk-White (NFW) profiles for a galaxy [31][32][33][34][35][36], we can study the modifications in the rotation velocity.We will see that δ Matter effect is not related to the scale, but rather to the behavior of the distribution of ordinary matter.In the solar system scale, where the large structures as planets and stars have a concentrated and almost constant distributions, δ Matter is practically negligible.However, in a galactic scale, where the distribution is strongly dynamic, δ Matter will be important to explain the DM effect.Thus, the amount of ordinary DM could be much less, explaining its extremely problematic detection. 
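For reference, the standard Navarro-Frenk-White profile and the corresponding Newtonian circular velocity mentioned above are reproduced below; this is only the ordinary-matter baseline, and the explicit δ Matter correction derived in the paper is not reconstructed here.

```latex
\rho_{\mathrm{NFW}}(r) = \frac{\rho_0}{(r/r_s)\left(1 + r/r_s\right)^{2}}, \qquad
M(<r) = 4\pi \rho_0 r_s^{3}\left[\ln\!\left(1 + \frac{r}{r_s}\right) - \frac{r/r_s}{1 + r/r_s}\right], \qquad
v_{\mathrm{rot}}(r) = \sqrt{\frac{G\,M(<r)}{r}} .
```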
We know that the causal structure of δ Gravity in vacuum is the same as in GR.However, important effects could appear when really massive object, as Black Holes, are taken into account.Thus, in Section 5, we will solve the equation of motion of g µν and gµν for the Schwarzschild Case in the vacuum with appropriate boundary conditions.Then, we will use this solution to compute the deflection of light by the sun and analyze the perihelion precession [37].We have to guarantee that these results are very close to GR, unless we consider highly massive object. We have to say that models like δ Gravity are not ghost-free, which can produce some problems like non-unitarity or instabilities [22,[38][39][40][41].This ghost component produces a phantom behaviour.A scalar phantom has been used in the current literature to describe the current data of the most classical tests of cosmology [26][27][28][29][30], but this background solution becomes unstable by the ghost field.Without a doubt, the ghost problem must be kept in mind, but some solutions can be implemented to obtain harmless ghosts (see, for instance, [42][43][44][45][46]).In spite of the ghost problem, the nature of the Dark Sector is such an important and difficult problem that cosmologists do not expect to solve in one stroke, so we must be open to explore new possibilities.For that reason, we will study δ Gravity as a classical effective model and use it in Cosmology.This means to approach the problem from the phenomenological side instead of neglecting it a priori because it does not satisfy yet all the properties of a fundamental quantum theory.Now, the phantom problem is being studied in this moment for δ Gravity and the results will be presented in a future work. With respect to the extended symmetry, we must emphasize that a closed algebra is formed (see [47]), producing a Noether conserved current to define the usual Energy-Momentum Tensor T µν with the usual conservation equation and another one with a new Energy-Momentum Tensor Tµν , representing the δ Matter component.In addition, to solve the equations of g µν and gµν , we need to fix an extended harmonic gauge due to the existence of the GCT and ExGCT symmetries.The closure of the algebra, the ensuing Noether currents and the gauge fixing show that ExGCT is an extension of GCT.The existence of this extra symmetry is crucial to the model.Finally, it should be remarked that δ Gravity is not a metric model of gravity because only massless particles move on null geodesics, given by a linear combination of both tensor fields.For this, see Section 2.3. δ Gravity In this paper, we are studying δ Gravity at a classical level.This gravity model was developed from a special kind of modified theories named δ Theories.In this section, we will define the δ Theories in general and their properties and then we will apply this modification to the Einstein-Hilbert model.For more details, see [19,24,47]. 
δ Theories Formalism and Modified Action These modified theories consist of the application of a variation represented by δ.As a variation, it will have all the properties of the usual variation such as: where δ is another variation.The particular point with this variation is, when we apply it on a field (function, tensor, etc.), it will give new elements that we define as δ fields, which is an entirely new independent object from the original, Φ = δ(Φ).We use the convention that a tilde tensor is equal to the δ transformation of the original tensor when all its indexes are covariant, which is: and we raise and lower indexes using the metric g µν .Therefore: where we used that δ(g µν ) = −δ(g αβ )g µα g νβ .Now, with the previous notation in mind, we can define how the tilde elements, given by Equation (2), transform.In general, we can represent a transformation of a field Φ i like: where j is the parameter of the transformation.Then, Φi = δΦ i transforms: where we used that δ δΦ i = δ δΦ i = δ Φi and ˜ j = δ j is the parameter of the new transformation. For example, if we consider General Coordinates Transformation (GCT) or diffeomorphism in its infinitesimal form, we have: and defining: we can use Equation (5) to see a few examples of how some elements transform: (I) A scalar φ: (II) A vector V µ : (III) Rank two Covariant Tensor M µν : This means that all the fields have δ partner and their transformations depend on their nature (scalar, vector, tensor, etc.).Particularly, in gravitation, we will have a model with two tensor fields.The first one is just the usual gravitational field g µν and the second one will be gµν .Then, we will have two gauge transformations associated with GCT.We will call it Extended General Coordinate Transformation (ExGCT), given by: With all these, we can introduce the δ Theories.We start by considering a model that is based on a given action S 0 [Φ I ], where Φ I are generic fields.Then, we add to it a piece which is equal to a δ variation with respect to the fields and we let δΦ J = ΦJ , so that we have: where the indexes I can represent any kind of indexes.Then, the Action in ( 16) is invariant under transformations given by ( 4) and ( 5), if S 0 [Φ] is invariant under (4).A first important property of this action is that the classical equations of the original fields are preserved.We can see this when Equation ( 16) is varied with respect to ΦI : Obviously, we have new equations when varied with respect to Φ I .These equations determine ΦI and they can be reduced to: where δ 2 S 0 δΦ I (y)δΦ J (x) has to be considered as an operation on ΦJ , so the solution of this equation is not trivial.Now, we will apply this result to ExGCT, given by Equations ( 8)-( 15), for gravity. 
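Since the display equations were lost in extraction, the following is only a schematic reconstruction of the structure of Equations (16)-(18) as described verbally above; overall factors and index placements may differ from the original.

```latex
S[\Phi,\tilde{\Phi}] = S_0[\Phi] + \int dx \,\tilde{\Phi}_I(x)\,\frac{\delta S_0}{\delta \Phi_I(x)}, \qquad
\frac{\delta S}{\delta \tilde{\Phi}_I(x)} = \frac{\delta S_0}{\delta \Phi_I(x)} = 0, \qquad
\int dy \,\frac{\delta^{2} S_0}{\delta \Phi_I(x)\,\delta \Phi_J(y)}\,\tilde{\Phi}_J(y) = 0 .
```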
δ Gravity Action and Equations of Motion In this paper, we will use the δ Theories formalism on the Einstein-Hilbert Action to obtain our modified action of gravity, which we call δ Gravity.That is: where and LM = φI , where φI = δφ I are the δ Matter fields.Previously, we mentioned that a truncated version of δ Gravity was presented in [23,24] and applied to Cosmology.To simplify the analysis of the model, we did not consider the δ Matter components, which means LM = 0, breaking the ExGCT.In this work, we present the full-fledged δ Gravity, introducing the δ Matter in order to preserve the δ symmetry.In this case, the equations of motion are: with Tµν = δT µν and: where (µν) denotes that µ and ν are in a totally symmetric combination.An important fact to notice is that the Einstein equations, given by Equation (21), are preserved.Thus, the only difference with General Relativity (GR) are the additional equations presented in (22) to find gµν .Moreover, our equations are of second order in derivatives, which is needed to preserve causality.Finally, we have that the action (20) is invariant under the transformations ( 14) and (15), so two conservation rules are satisfied.They are: They are related to the Noether conserved current of the extended symmetry.Then, we have two Energy-Momentum Tensors, the usual T µν and Tµν .The last one is due to the new symmetry and δ Matter.Thus, to solve the equations of motion of δ Gravity, given by ( 21)-( 24), we need an expression of the perfect fluid's energy momentum tensors, given by (A24) and (A25) in Appendix A. Test Particle In the previous subsection, we found the equations of motion for δ Gravity.However, we need to know how the new fields affect the trajectory of a test particle.For this, we will study the test particle action separately for massive and massless particles.The first discussion of this issue in δ Gravity is in [23]. 
Massive Particles In GR, the action for a test particle, including the Einstein-Hilbert term, is given by: with ẋµ = dx µ dt and the variation in x µ produces the geodesic equation.This action is invariant under reparametrizations, t = t − (t), where the infinitesimal form is: and the action ( 25) is similar to (19), where x µ is a field in L M in a sense.To obtain the modified action, we have to use Equation (20), where a new field should be obtained, given by δx µ .However, this new field does not make sense because it should represent an additional coordinate system and our model just lives in four dimensions.Therefore, we will impose that x µ does not have a δ partner, so the action in (25) will be modified to: (27) where: Notice that T µν is t-parametrization invariant, but our action breaks the ExGCT symmetry.Now, extracting the pure δ Gravity components from the action (27), we obtain the modified test particle action: This action for a test particle in a gravitational field is the starting point for the physical interpretation of this model.It is t-reparametrization invariant, invariant under general coordinate transformations, but it is not invariant under ExGCT.Now, the trajectory of massive test particles is given by the equation of motion of x µ .This equation say us that g µν ẋµ ẋν = cte, just like GR.Now, if we choose t equal to the proper time, then g µν ẋµ ẋν = −1 and the equation of motion is reduced in this case to: with: Equation ( 30) is a second order equation, but it is not a classical geodesic, because we have additional terms and an effective metric can not be defined.Moreover, the equation of motion is independent of the mass of the particle, so all particles will fall with the same acceleration. Massless Particles The massless case is particularly important in this work because we need to study photons trajectories to define distances.Unfortunately, the action (25) is useless for massless particles because it is null when m = 0. To solve this problem, it is a common practice to start from the action [48]: where v is an auxiliary field.From Equation (31), we can obtain the equation of motion for v: If we substitute Equation ( 32) in (31), we recover the action (25).This means that ( 31) is equivalent to (25), but additionally includes the massless case. In our case, a suitable action, similar to (31), is: In δ Gravity, the equation of v is still (32), and if we use it in (33), we obtain the massive test particle action given by (29).However, now, we can study the massless case. If we evaluate m = 0 in (31) and (33), we can compare GR and δ Gravity, respectively.They are: with g µν = g µν + gµν .In both cases, the equation of motion for v implies that a massless particle move in a null-geodesic.In the usual case, we have g µν ẋµ ẋν = 0.However, in our model, the null-geodesic is given by g µν ẋµ ẋν = 0, so the trajectory obey an effective metric given by g µν = g µν + gµν .The equation of motion for the path of a test massless particle is given by: g µν ẋµ ẋν = 0, with: ). 
On the other hand, in [19] a quantum analysis of δ Gravity was presented, in which the quantum corrections live at one loop only, which opens an interesting route toward a quantum gravity model. In that work, we counted the degrees of freedom and found that the combination gµν + g̃µν is a normal particle, whereas g̃µν is a ghost. This result, together with the fact that the effective metric gµν + g̃µν defines the geometry of the model, implies that this particular combination must be regarded as the unique graviton [38][39][40][41]. Additionally, we know that proper time must be defined for massive particles. The equation of motion for massive particles has the important property of preserving the form of the proper time for a particle in free fall. Notice that, in our case, the quantity that remains constant by virtue of the equation of motion for massive particles, derived from Equation (30), is gµν ẋµ ẋν. This singles out this definition of proper time and no other. Thus, we must define proper time using the original metric gµν, as in Equation (37), with g00 < 0. We must keep these two facts in mind when studying the cosmological phenomena in the next sections. At this point, it should be remarked that δ Gravity is not a traditional bigravity model. Only gµν is used to raise and lower indices, the volume element in the action (20) depends only on gµν and, most importantly, the dynamics of the universe is governed by the combination gµν + g̃µν through the free massless particle action (35). This means that the phantom behaviour can be preserved even if the ghost component is restricted so as to render it harmless [42][43][44][45][46]. Together with the quantum corrections, which are truncated at one loop, δ Gravity could therefore be a good setting in which to address the ghost problem. On the other hand, it is not a metric theory of gravity either, because massive particles, unlike massless ones, do not move on geodesics. Cosmological Case In this section, we study photons emitted from a supernova, using δ Gravity to explain the accelerated expansion of the universe without DE, in somewhat more detail than in [25]. For this, we must use the correct cosmological geometry to represent a homogeneous and isotropic universe, given by the Friedmann–Lemaître–Robertson–Walker (FLRW) metric in Equation (38), where T(u) = dt/du(u) and t is the cosmological time. If we apply the Extended Harmonic Gauge defined in Appendix B to Equation (38), we obtain T(u) = T0 R³(u) and Fb(u) = 3(Fa(u) + T1), where T0 and T1 are gauge constants. We use T0 = 1 and T1 = 0 to fix the gauge completely. With these conditions, the system (u, x, y, z) corresponds to harmonic coordinates. We can later return to the usual system (t, x, y, z), where gµν and g̃µν are given by Equations (39) and (40). These expressions represent an isotropic and homogeneous universe. From Section 2.3, we know that proper time is measured with the metric gµν, but the space-time geometry is determined by the null geodesics of gµν + g̃µν. Then, with Equations (39) and (40), in the cosmological case t is the proper time and the effective line element is given by Equation (41). All of these will be essential considerations for explaining the expansion of the universe with δ Gravity using the supernova data.
Photon Trajectory and Luminosity Distance When a photon emitted from a supernova travels to the Earth, the Universe is expanding.This means that the photon is affected by the cosmological Doppler effect.For this, we must use a null geodesic in a radial trajectory from r 1 to r = 0.Then, we can define an effective scale factor with Equation (41) as: such that cdt = − R(t)dr.Now, if we integrate this expression from r 1 to 0, we obtain: where t 1 and t 0 are the emission and reception times.If a second wave crest is emitted at t = t 1 + ∆t 1 from r = r 1 , it will reach r = 0 at t = t 0 + ∆t 0 , so: Therefore, if ∆t 0 and ∆t 1 are small, which is appropriate for light waves, we get: where ν 0 is the light frequency detected at r = 0, corresponding to a source emission at frequency ν 1 .Thus, the redshift is given by: We see that R (t) replaces the usual scale factor R(t) to compute z.This means that we need to redefine the luminosity distance too.For this, let us consider a mirror of radius b that receive light from our distant source at r 1 .The photons that reach the mirror are within a cone of half-angle with origin at the source.Let us compute .The path of the light rays is given by r(ρ) = ρ n + r 1 , where ρ > 0 is a parameter and n is the direction of the light ray.Since the mirror is in r = 0, then ρ = r 1 and n = −r 1 + , where is the angle between − r 1 and n at the source, forming a cone.The proper distance is determined by the tri-dimensional metric, given by: in the cosmological case.Then, b = R(t 0 )r 1 and the solid angle of the cone is: where A = πb 2 is the proper area of the mirror.Thus, the fraction of all isotropically emitted photons that reach the mirror is: We know that the apparent luminosity, l, is the received Power per unit mirror area and Power is energy per unit time, so the received power is P = hν 0 ∆t 0 f , where hν 0 is the energy corresponding to the received photon.On the other side, the total emitted power by the source is L = hν 1 ∆t 1 , where hν 1 is the energy corresponding to the emitted photon.Therefore, we have that: where we have used that . In addition, we know that, in an Euclidean space, the luminosity decreases with distance d L according to l = L 4πd 2 L .Therefore, using Equation ( 43), the luminosity distance is: On the other side, we can define the angular diameter distance, d A .If we consider a light ray emitted at time t 1 and moving in the θ coordinate, our null geodesic, given by ( 36), tells us that the proper distance is If we compare it with Equation ( 52), we obtain that: Therefore, the relation between d A and d L is the same as in GR [49].This result is important because, in other modified gravity theories, this relation is not satisfied [50].We will use d L to analyze the supernovae data, but d A could be useful for other phenomena. 
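To make the distance definitions concrete, the short sketch below builds z, d_L and d_A from an effective scale factor, following the construction above, and checks the GR-like relation d_L = (1 + z)² d_A numerically. The function used for R̃(t) is a purely illustrative toy, not the solution of Equation (66); units are arbitrary with c = 1.

```python
import numpy as np
from scipy.integrate import quad

# Toy effective scale factor R_tilde(t); purely illustrative (NOT Eq. (66)).
Rt = lambda t: t ** (2.0 / 3.0)

t1, t0 = 0.3, 1.0                      # emission and reception times (arbitrary units, c = 1)
z = Rt(t0) / Rt(t1) - 1.0              # redshift, cf. Eq. (46): 1 + z = R_tilde(t0)/R_tilde(t1)

# Comoving radial coordinate of the source from the null geodesic c dt = -R_tilde dr.
r1, _ = quad(lambda t: 1.0 / Rt(t), t1, t0)

d_L = (1.0 + z) * Rt(t0) * r1          # luminosity distance, cf. Eq. (52)
d_A = Rt(t1) * r1                      # angular diameter distance, cf. Eq. (53)

print(z, d_L / d_A, (1.0 + z) ** 2)    # the last two agree: d_L = (1 + z)^2 d_A
```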
Solution of the Equations of Motion In cosmology, the metric g µν is given by (39).In addition, by Equations ( 21) and ( 23), we know that Einstein's equations do not change and T µν is conserved.Therefore, the usual cosmological solution is still valid.Thus, using the expression in (A24) with U µ = (c, 0, 0, 0), we obtain the well-known equations: with ḟ (t) = d f dt (t) and we assumed that the interaction between different components of the universe is null.To solve Equation ( 54), we need equations of state as p i (t) = ω i ρ i (t).Since we wish to explain DE with δ Gravity, we will assume that we only have non-relativistic matter (cold DM, baryonic matter) and radiation (photons, massless particles) in the Universe.Thus, we will require two equations of state.For non-relativistic matter, we use p M (t) = 0 and for radiation p R (t) = 1 3 ρ R (t).Replacing in (54) and solving them, we find the exact solution: where Y = R(t) R 0 , t(Y) is the time variable, R 0 is the scale factor in the present, C = Ω R Ω M , and Ω R and Ω M = 1 − Ω R are the radiation and non-relativistic matter density in the present, respectively.We know that Ω R 1, so Ω M ∼ 1 and C 1. We can see that Y can be used as the independent variable.By definition, Y C describes the non-relativistic era and Y C describes the radiation era.The equation of motion for gµν is given by ( 22) and (24), where Tµν is a new energy-momentum tensor for δ non-relativistic matter and radiation densities, given by ρM and ρR , respectively.Thus, using (A25) and ( 55)-( 57), the Equations ( 22) and ( 24) are reduced to: where we used pM = 0, pR = 1 3 ρR and U T µ = 0.The solution of these equations are: where C 1 , C 2 and C 3 are integration constants.ρM (Y) and ρR (Y) are densities of δ Matter, so they must be not-negative functions.Then, for all Y ≥ 0. Evaluating (64) at Y = 0, we get C 2 ≥ 0 and 2C 2 + C 1 ≥ 0. On the other side, at Y C, we get C 3 ≤ 0. Now, if we use Equation ( 61) in (42) and define Ỹ = R(t) R(t 0 ) , we can see that: when Y C. Ỹ is the effective scale factor, so it represents the evolution of the universe.We know that an accelerated expansion must be produced at late times, but the expansion must be driven by the non-relativistic matter and radiation at early times, this means Ỹ For this, we have to fix C 1 = 0 and C 2 = 0 to guarantee the temporal behavior of expansion is just like GR at early times.The other constants will be chosen such that a Big-Rip is produced.That is, Ỹ(Y Rip ) = ∞.We need a Big-Rip to explain the accelerated expansion of the universe because we want that Ỹ grows quickly when Y is bigger. The Big-Rip is determined by C 3 , but a very small value for this parameter is necessary, otherwise the Big-Rip would be too early.However, if we use C 3 = 0, the Big-Rip is not produced and we cannot explain the accelerated expansion of the universe.Thus, using In summary, we have that Ỹ ∼ Y in the radiation era, where Y C, so the Universe evolves without differences with GR.However, in some moment during the non-relativistic matter era, where Y C, an accelerated expansion is produced, ending in a Big-Rip.We will give more details for this when we study the supernovae data.Additionally, inequalities in (64) are always satisfied; then, the δ densities are non-negative. 
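Since the Einstein equations are unchanged, the ordinary background obeys the usual Friedmann dynamics with matter and radiation only, and t(Y) in Equation (57) can be recovered by a simple quadrature. The sketch below assumes the standard flat matter-plus-radiation form H² ∝ ΩM Y⁻³ + ΩR Y⁻⁴ with ΩM = 1 − ΩR and C = ΩR/ΩM; the normalization constant plays the role of 1/H in the gµν sector and the numerical value used here is only a placeholder, while the observable age in δ Gravity additionally involves the fitted constants, which are not reproduced here.

```python
import numpy as np
from scipy.integrate import quad

Or = 8.5e-5                    # placeholder radiation density
Om = 1.0 - Or
C = Or / Om                    # C = Omega_R / Omega_M, as in the text
H0 = 67.8                      # km/s/Mpc, placeholder normalization
H0_inv_gyr = 977.8 / H0        # 1/H0 in Gyr

def t_of_Y(Y):
    """Cosmic time of the g_mu_nu background vs. Y = R/R0 (numerical version of Eq. (57))."""
    val, _ = quad(lambda y: y / np.sqrt(y + C), 0.0, Y)
    return H0_inv_gyr * val / np.sqrt(Om)

print(t_of_Y(C))               # end of the radiation-dominated era (Y ~ C)
print(t_of_Y(1.0))             # today in the g_mu_nu background (~ 2/(3 H0) for C << 1)
```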
Analysis and Results Before we start the data analysis, we must define the parameters of the model.In the first place, d L in GR depends upon four parameters: Y, H 0 = 100h km s −1 Mpc −1 , Ω M and Ω R .However, from CMB black body spectrum, we obtain the photons density in the present, Ω γ .Now, if we Ω γ (Ω ν is the primordial neutrino density), we get Therefore, the parameters in d L can be reduced to three: Y, h and h 2 Ω M .In the same way, in δ Gravity with δ Matter, d L depends on three parameters: Y, C and L 2 .We will use The supernovae data give the apparent magnitude, m, as a function of redshift, z.For this reason, it is useful to use z instead of Y.The apparent magnitude is: where M is the absolute magnitude, constant and common for all supernovae.Finally, the difference between GR and δ Gravity will be given by d L (z).Thus, the luminosity distance expressions are: δ Gravity: where we used Equations ( 52) and ( 57) for δ Gravity.Besides, Y(z) must satisfy Ỹ(Y(z)) = Ỹ0 1+z in (70), with Ỹ0 = Ỹ(1) and Ỹ(Y(z)) given by Equation (66). The statistical method used to interpret errors in data is given by the variance σ in a normally distributed random variable.In our case, the data is given by (z i , µ i ), where: is the distance modulus.Then, we must minimize: where N is the number of data points and σ µ i is the error of µ i .Now, we can proceed to analyze the data given in [51] with N = 580 supernovae.In both cases, GR and δ Gravity, d L is given by an exact expression, but we need to use a numerical method to solve the integral and fit the data to determinate the optimum values for the parameters that represent the µ v/s z of the supernovae data 1 . The parameters that minimize Equation (71) are: We can see in Figure 1 that δ Gravity with δ Matter fit the data very well.Now, with these values, we can compute the age of the universe and the Big-Rip era.For GR, the age of the universe is 1.377 × 10 10 years.However, in our model, the time is given by ( 57).Thus, substituting the corresponding values for L 2 , C and taking Y = R(t) R(t 0 ) = 1, we obtain 1.391 × 10 10 years for the age of the universe.To compute when the Big-Rip will happen, we need to use Equation (67).That is, Y Rip = 1.684, so t Big-Rip = 3.042 × 10 10 years.Therefore, the Universe has lived less than half of its life. On the other side, in [24], we obtained that, in δ Gravity without δ Matter, the age of the universe is 1.92 × 10 10 years and t Big-Rip = 4.3 × 10 10 years.The problem in this case is the huge age of the universe, compared with Planck Collaboration given by 1.381 × 10 10 years 2 .However, we cannot say that this case is totally rejected yet, but the age of the universe for δ Gravity with δ Matter is more similar to Planck.Now, using Equations ( 62) and (63), the δ Matter in the present is given by: To obtain the best combination of parameters, we used NonLinearModelFit from Mathematica 11.0.Then, we used these parameters to minimize Equation (71).For more details, see the Mathematica 11.0 help. 2 The age of the universe of Planck was calculated using the cosmological parameters obtained in [52].That is, Ω M = 0.308 and H 0 ≡ 100h = 67.8km/s/Mpc. 
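The fit itself is an ordinary weighted least-squares (χ², Equation (71)) of the distance modulus µ(z) = 5 log10(d_L / 10 pc) against the supernova sample. The sketch below illustrates the procedure with SciPy; the luminosity distance used is the standard flat ΛCDM expression standing in for the GR case, the data are synthetic placeholders rather than the 580 points of [51], and for the δ Gravity case one would simply replace d_L(z) by Equation (70).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

c_km_s = 299792.458

def dL_GR(z, h, Om):
    """Toy GR luminosity distance in Mpc (flat LambdaCDM stand-in for the GR comparison)."""
    H0 = 100.0 * h
    I, _ = quad(lambda zp: 1.0 / np.sqrt(Om * (1 + zp) ** 3 + (1 - Om)), 0.0, z)
    return (1 + z) * c_km_s / H0 * I

def mu_model(z, h, Om):
    return 5.0 * np.log10(dL_GR(z, h, Om)) + 25.0   # mu = 5 log10(d_L / 10 pc)

# Placeholder data (z_i, mu_i, sigma_i); in practice the 580 Union2.1 points of [51] are used.
z_data = np.linspace(0.02, 1.4, 30)
mu_data = np.array([mu_model(z, 0.70, 0.30) for z in z_data]) \
          + np.random.default_rng(1).normal(0.0, 0.15, 30)
sig = np.full_like(z_data, 0.15)

def chi2(p):
    h, Om = p
    resid = [(mu_data[i] - mu_model(z_data[i], h, Om)) / sig[i] for i in range(len(z_data))]
    return np.sum(np.square(resid))

best = minimize(chi2, x0=[0.68, 0.3], method="Nelder-Mead")
print(best.x, best.fun / (len(z_data) - 2))   # fitted (h, Omega_M) and reduced chi^2
```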
Here Ω̃M and Ω̃R are the normalized δ densities at present. Therefore, we have two components of δ Matter, related to the ordinary matter at the cosmological level. These components can be regarded as a contribution to DM; however, a more accurate analysis at the field theory level is necessary to understand the nature of δ Matter. In the next section, we will study the Non-Relativistic limit to understand the phenomenological effect of δ Matter on the Dark Sector. In any case, Ω̃M and Ω̃R are important for explaining the dynamics of the expansion of the universe. Introduction to δ Inflation To finish the cosmological case, we make a few comments, for future reference, on the general equations of motion for a single fluid in δ Gravity. Using the cosmological solution for gµν and g̃µν, given by (39) and (40) respectively, we obtain Equations (73)–(76), where H(t) = Ṙ(t)/R(t) is the Hubble parameter and p̃(t) = δp(t). To complete the system, we need equations of state: p(t) = ω(t)ρ(t) and p̃(t) = ω(t)ρ̃(t) + ω̃(t)ρ(t). For a perfect fluid, ω(t) is usually assumed constant, and ω̃(t) must then vanish. Using these equations of state in (73)–(76), we obtain the reduced system. In order to understand the behavior of these equations, we solve the case where ω(t) is almost constant, so as to stay close to the usual perfect fluid. The solution (with a separate branch for ω = 1) involves the integration constants R0, R1, R2, R3, t0, a0 and a1. From this result, we notice that R(t) obeys the standard power-law solution of a perfect fluid with constant ω. However, we must remember that the dynamics of the universe in δ Gravity is given by the effective scale factor (42), which produces an accelerated expansion. This means that we can define an effective Hubble parameter H̃(t) ≡ (dR̃/dt)/R̃(t). In this case, with ω = 0, it is given by Equation (80), where X = R(t)/R0 and c1, c2, d1 and d2 are integration constants. Therefore, even with a standard power-law solution for R(t), we can obtain a different expansion behavior. We stress, however, that the power-law solution holds only for a perfect fluid. In fact, the solution for R(t) in any specific model is the same as the one obtained in GR, because the Einstein equations are preserved in δ Gravity; the dynamics is modified only through the effective scale factor. For example, in inflation a scalar field is used to produce the exponential expansion. In that case, in GR, inflation must obey V(φ0(t)) ≫ φ̇0²(t) so that ω(t) = p(t)/ρ(t) ∼ −1 and the expansion is exponential (see Equation (79)). However, in δ Gravity the accelerated expansion could instead be produced by a divergence in R̃(t), just as in our explanation of DE. Additionally, in inflation we have a new field, φ̃0, which gives a non-zero ω̃(t). In conclusion, in inflation with δ Gravity an accelerated expansion can be produced by additional factors. Basically, the expansion rate is governed by H̃(t) = (dR̃/dt)/R̃(t), and an effective ω parameter can be defined as in Equation (81). If we substitute Equation (80) into (81), we can study the expansion behaviour of our model. If ω_eff(t) < −1, the expansion is of phantom type. This calculation is briefly developed in [53], where we showed that δ Gravity behaves like a phantom model. In addition, Equation (81) can be used to compare other alternative theories with δ Gravity at the cosmological level. A more detailed version of this work is in progress.
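Once an effective expansion history R̃(t) is specified, the phantom-like behaviour can be diagnosed numerically. The sketch below assumes the standard single-fluid identification ω_eff = −1 − (2/3)(dH̃/dt)/H̃² (our reading of the role played by Equation (81); the exact expression is not reproduced here) and uses a toy R̃(t) with a late-time divergence standing in for the Big-Rip behaviour.

```python
import numpy as np

# Toy effective scale factor with a late-time divergence (Big-Rip-like); illustrative only.
t_rip = 30.0
Rt = lambda t: t ** (2.0 / 3.0) / (1.0 - t / t_rip)

def w_eff(t, dt=1e-4):
    """Effective equation of state w_eff = -1 - (2/3) Hdot/H^2 (assumed form of Eq. (81))."""
    H  = (np.log(Rt(t + dt)) - np.log(Rt(t - dt))) / (2 * dt)
    Hd = (np.log(Rt(t + dt)) - 2 * np.log(Rt(t)) + np.log(Rt(t - dt))) / dt ** 2
    return -1.0 - (2.0 / 3.0) * Hd / H ** 2

for t in (1.0, 10.0, 20.0, 25.0):
    print(t, w_eff(t))   # drifts below -1 as the Big Rip is approached (phantom-like)
```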
Non-Relativistic Case In this section, we will consider the Newtonian and Post-Newtonian limit to study new effects on δ Gravity.We expect a weak deviation of GR at solar system scale, but we want to see in a galactic scale if DM could be explained with δ Matter. Newtonian and Post-Newtonian Limit If we introduce one order more for the Newtonian limit, the metric will be given by [54]: where ∼ v c is the perturbative parameter and g µν → η µν for r → ∞.We can see the Newtonian limit represented by φ at order 2 in the components of g µν .In the Post-Newtonian limit, we have ten additional functions to represent ten degrees of freedom on the metric.In the same way, gµν will be: where we used gµν → 0 for r → ∞.All functions in Equations ( 82) and (83) depend on (t, x, y, z), but 1 c ∂ ∂t ∼ .For this reason, we use ct → ct to obtain the equations.Equations ( 82) and ( 83) are the more general expression for a covariant tensor of rank two, so we need to fix a gauge.One particularly convenient gauge is given by the extended harmonic coordinate conditions presented in Appendix B. Using Equations (A26) and (A27), we obtain: On the other side, in Appendix A, it is proved that the energy-momentum tensors for a perfect fluid are given by (A24) and (A25).Now, in the Non-Relativistic limit, we have that: ρ = ρ(0) + 2 ρ(2) , ( 89) reducing the equations of motion ( 21) and ( 22) to: i ρ (0) , (94) i U (1) (1) (99) T(1) k (1) where p (2) (ρ) = ∂p (2) ∂ρ (ρ), . We can see that Equations ( 93) and (97) correspond to the Newtonian limit.To complete the system, we have Equations ( 23) and ( 24), but they are automatically satisfied when we consider (84)-(87).However, they are useful because we can write them in terms of ρ (0) , ρ (2) , ρ(0) , ρ(2) and p (2) .In spherical symmetry with U (1) i = U T(1) i = 0, it is possible to prove that the conservation equations can be reduced to: From (102), we can obtain p (2) (r) and Equation (103) say us that: where C is an integration constant.We preserve 2 2 φ(r) in the last term of the right side because the order of C is unknown.Equations ( 102) and (103) are obtained in a Post-Newtonian level.This means that we need to study the system at this level to obtain the relation (104) and complete the Newtonian limit.Finally, Equations ( 93) and (97) for spherical symmetry are reduced to: Then, to obtain φ(r) and φ(r), we just need ρ (0) (r), completing the Newtonian limit.Now, we can ask ourselves if it is possible to explain DM with this result.For this, we will need to study the trajectory of a free particle. Trajectory of a Particle: From Section 2.3, we know that the acceleration is given by (30).In the Post-Newtonian limit, it is reduced to [54]: where v = d x dt , φ N = φ + φ and analogous expressions for the others fields.From (107), we can deduce a couple of things.Firstly, in the Newtonian limit, we have: so φ N is the effective Newtonian potential.Secondly, the acceleration is similar to the usual case if we replace φ → φ N , with the exception of the last term in (107).If we analyze the case with spherical symmetry far away from matter, we can see from Equation (106) that φ2 ∼ r −2 .This means that this term is ∼−r −3 , therefore it is an attractive contribution and can be considered as a contribution to DM. In this paper, we will focus on the Newtonian approximation, given by (108), which is the dominant term.The contribution by other terms will be considered in future works. 
We have said that φ N is the effective potential in the Newtonian limit.This means that the effective density is ρ e f f = ρ (0) + ρ(0) .In spherical symmetry, that is: Therefore, the second term in ( 109) is an additional mass and it could be identified as DM.To verify this, we will study some density profile used to fit the galaxy distribution and then obtain the effective density.Next, we will analyze Equation ( 109) and the equations of motion of φ and φ to see if δ Matter can explain the DM effect. Density Profiles To study the δ Matter effects, we must analyze Equations ( 104)-(106).To explain DM, δ Matter must be negligible in the solar system scale, but important in galactic scale.We will study these equations with some density profile.In the first place, we will see a spherically homogeneous density like a first approximation for a planetary or stellar distribution.Then, we will study the exponential profile and finally we will use the Einasto and Navarro-Frenk-White profiles to describe galaxies' distributions.We define a normalized radius x such that r = Rx; then, our equations are: where R is a convenient radius.With these equations, we can define the ordinary mass and tilde mass.That is: such that the effective mass is M(x) = m(x) + m(x).Finally, from Equation (108), we have that the rotation velocity is: where we used that θ in this case. Spherically Homogeneous Profile We can think, in a first approximation, that planets or stars are spheres with a constant density.That is, ρ(x) = ρ 0 Θ(1 − x).Thus, Equations ( 110) and (111) are: where R is the radius of the sphere.From the first equation, we can obtain φ(x).Using the boundary conditions φ(∞) → 0, φ(0) → "finity value" and imposing that φ(x) and φ (x) must be continuous for all x, we obtain: On the other side, from the second equation, we obtain φ(x), but we can not impose a continuous φ (x).Instead, the equation of motion of φ(x) tells us: where the other conditions on φ(x) are the same.They are φ(∞) → 0, φ(0) → "finity value" and φ(x) is continuous for all x.Then, the solution is: In addition, the δ Matter density is: and the acceleration is given by (108), then: Naturally, we expect a continuous a, but we saw that φ (x) is not.This means that we have to accept that δ Gravity produces an additional force on the surface of the sphere, or C = 0.In the last case, all δ components disappear, so δ Gravity is the same as GR.This result is a really important condition because δ Matter will be negligible when the distribution of ordinary matter can be represented by a homogeneous sphere.This case can be an acceptable representation of planets and stars, where δ Matter does not produce important effects.However, δ Matter could be important when the distribution of ordinary matter is not homogeneous like a galaxy, where the DM effects are important. 
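For the inhomogeneous profiles treated next, the quantities of interest are the enclosed masses m(x) and m̃(x) of Equations (112)–(113) and the rotation velocity derived from Equation (108). The helper below is a schematic numerical version (used again in the profile sketches that follow): it integrates the ordinary profile for m(x) and treats m̃ only through a constant ratio m̃/m, a stand-in for the full solution of Equation (111), which must be obtained numerically profile by profile; all names and normalizations are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def enclosed_mass(rho_hat, x):
    """Ordinary enclosed mass int_0^x s^2 rho_hat(s) ds (units of 4 pi R^3 rho0, cf. Eq. (112))."""
    val, _ = quad(lambda s: s ** 2 * rho_hat(s), 0.0, x)
    return val

def v_rot(rho_hat, x, delta_ratio=0.0):
    """Circular velocity ~ sqrt(M_eff(x)/x) in arbitrary units, with M_eff = m + m_tilde.

    delta_ratio is a schematic stand-in for m_tilde/m (e.g. (3/2) eps^2 C at large radius);
    the exact m_tilde(x) requires solving Eq. (111) for the delta potential.
    """
    return np.sqrt((1.0 + delta_ratio) * enclosed_mass(rho_hat, x) / x)

# Example: homogeneous sphere of unit radius -- v rises ~x inside and falls ~x^(-1/2) outside,
# and, as argued above, delta Matter gives no contribution here (delta_ratio = 0).
rho_hom = lambda s: 1.0 if s <= 1.0 else 0.0
for x in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(x, v_rot(rho_hom, x))
```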
Exponential Profile We study this profile as an acceptable first approximation to a galaxy distribution: ρ(x) = ρ0 e^(−x). In this case, Equations (110) and (111) take the corresponding forms. From the first equation, we can solve φ(x) analytically; using the boundary conditions φ(∞) → 0 and φ(0) → finite value, we obtain the solution (119). On the other hand, the equation for φ̃(x), Equation (120), is too complicated to solve in closed form, so we solve it numerically and find m(x) and m̃(x). First, we must analyze the initial conditions. We can do this by studying the behavior of m(x) and m̃(x), given by (112) and (113), for x ≪ 1, since they are related to x²∂φ(x)/∂x and x²∂φ̃(x)/∂x, respectively. From the solution (119), we have φ(x) ≈ κR²ρ0/2 and m(x) ≈ (4πR³ρ0/3)x³ for small x. On the other hand, from Equation (120), φ̃(x) must approach a constant (fixed by ε²C and κR²ρ0) and m̃(x) ≈ ε²CπR³ρ0 x⁴. Thus, the total mass is completely dominated by the ordinary component at the center. We said previously that the order of C is unknown, but the initial conditions tell us that m̃(x)/m(x) ≈ (3ε²C/4)x. This means that δ Matter grows too slowly from x ≪ 1 unless ε²C ∼ O(1). We can see this in Figure 2a, where we show m(x) and m̃(x); the ratio m̃(x)/m(x) is shown in Figure 2b. These plots tell us that ordinary matter dominates at the center, but δ Matter accumulates as we move away from the center, and the behavior of both kinds of matter becomes similar as x increases. In fact, the ratio is practically constant at the edge of the galaxy, m̃(x)/m(x) → (3/2)ε²C. In conclusion, ordinary and δ Matter evolve in a similar way, but δ Matter is concentrated outside the galactic nucleus. In Figure 2c, we show the rotation velocity in δ Gravity for ε²C = 1/2, 1, 3/2 and in GR. Clearly, the similar behavior of both components and the additional mass from δ Matter produce an amplifying effect on the rotation velocity. This means that a minimal quantity of ordinary matter could explain the rotation velocity in a galaxy because of the additional contribution produced by δ Matter.
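In the same spirit as the helper above, the amplification of the rotation curve for the exponential profile can be illustrated using the large-radius limit m̃/m → (3/2)ε²C quoted in the text, so the outer curve is boosted by roughly √(1 + (3/2)ε²C). The ε²C values below are the ones used in Figure 2c; treating m̃/m as a constant is a simplification of the full numerical solution.

```python
import numpy as np
from scipy.integrate import quad

def m_exp(x):
    """Enclosed ordinary mass for rho(x) = rho0 exp(-x), in units of 4 pi R^3 rho0."""
    val, _ = quad(lambda s: s ** 2 * np.exp(-s), 0.0, x)
    return val

radii = np.array([1.0, 2.0, 4.0, 8.0])
v_gr = np.sqrt([m_exp(x) / x for x in radii])          # GR rotation curve (arbitrary units)

for eps2C in (0.5, 1.0, 1.5):                          # values used in Figure 2c
    boost = np.sqrt(1.0 + 1.5 * eps2C)                 # large-radius m_tilde/m -> (3/2) eps^2 C
    print(eps2C, np.round(boost * v_gr, 3))
```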
Einasto Profile The Einasto profile is a spherically symmetric distribution used to describe many types of real system, like galaxies and DM halos (see for instance [31][32][33]).It is represented by a logarithmic power-law: with α > 0, then ρ(x) = ρ 0 e −x α .Thus, it is a most general case of the exponential profile and many simulations of galaxies have been done using this profile, where they obtained values of α given by 0.1 ≤ α ≤ 1 [33].Evaluating in Equations ( 110) and (111), we have: As in the exponential profile, we can solve Equations ( 121) and ( 122) to find m(x) and m(x) 3 .Thus, using expressions ( 112) and ( 113), we can see that the appropriate initial conditions are given by m(x) ≈ 4πR 3 ρ 0 3 x 3 and m(x) ≈ 4πR 3 ρ 0 2 Cα (3+α) x α+3 .Clearly, we can verify that this result is reduced to the exponential case with α = 1 and we need 2 C ∼ O(1) to obtain enough δ Matter too.m(x) and m(x) are represented in Figure 3a for α = {0.7,0.4, 0.1}, and we represent the relation between m(x) and m(x) in Figure 3b.As in the exponential case, more δ Matter is accumulated far from the center.Actually, we have that m(x) increases faster than m(x), especially when α is smaller.However, far from the center, m(x) m(x) → constant ≤ 3 2 C when α is close to 1.For smaller α's, it is more like a logarithmic behavior.Finally, in Figure 3c, we present the rotation velocity for 2 C = 1 2 , 1, 3 2 and GR.Just like we expected, the rotation velocity is amplified by the δ Matter effect, in such a way that if C is bigger, we have higher velocities. Our conclusions in this case are similar to the exponential profile.The ordinary matter leads over δ Matter in the center, but rapidly the second one increase until it is completely dominant.Thus, δ Matter is also concentrated outside of the galactic nucleus in this case, but additionally we obtained a logarithmic contribution from δ Matter, producing an additional DM behavior to small values of α.We note this in Figure 3c for α = 0.1.m(x) (in units of 2 C) vs. normalized radius.In fact, we have that m(x) increases faster than m(x) and it is faster when α is smaller. On the other side, at the end of the galaxy, it is like a constant too, with m(x) m(x) → 3 2 C, but, for smaller α, the behavior is more similar to a logarithmic function; (c) rotation velocity vs. normalized radius for different values of 2 C. The Black-Dashed line corresponds to GR case, so the 2 C value indicates the contribution of δ Matter.In this case, we have an amplifying effect in the rotation velocity too, such that, if 2 C is bigger, we have higher velocities. In these calculations, we have used 3 The only constant that we can not fix is φ 0 = φ(0).Fortunately, this constant is irrelevant to find m(x).This is true for an NFW profile too. 
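For the Einasto profile, the ordinary enclosed mass can be written in closed form with the lower incomplete gamma function, m(x) ∝ γ(3/α, x^α)/α, which reduces to the exponential case for α = 1. A short sketch comparing the closed form with direct quadrature (α values as in Figure 3; masses in units of 4πR³ρ0):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc

def m_einasto(x, alpha):
    """Enclosed ordinary mass for rho = rho0 exp(-x^alpha), in units of 4 pi R^3 rho0."""
    a = 3.0 / alpha
    # int_0^x s^2 exp(-s^alpha) ds = lower_gamma(3/alpha, x^alpha) / alpha
    return gammainc(a, x ** alpha) * gamma(a) / alpha

for alpha in (0.7, 0.4, 0.1):                          # values used in Figure 3
    direct, _ = quad(lambda s: s ** 2 * np.exp(-s ** alpha), 0.0, 5.0)
    print(alpha, m_einasto(5.0, alpha), direct)        # closed form agrees with direct quadrature
```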
Navarro-Frenk-White Profile The Navarro-Frenk-White (NFW) profile is another kind of distribution of the of DM halo (see, for instance, [34][35][36]), given by ρ(x) = ρ 0 x γ (x+1) 3−γ .In pure DM simulations, γ = 1 is usually used; however, baryonic matter effects are expected, producing 1 ≤ γ ≤ 1.4 [36].Now, in this case, Equations ( 110) and (111) are: We can also use Equations and (113) to obtain the appropriate initial conditions.They are too.After solving (123) and ( 124), we obtain m(x) and m(x), represented in Figure 4a, and the relation between m(x) and m(x) in Figure 4b for γ = {1, 1.2, 1.4}.Just like the exponential and Einasto cases, more δ Matter is accumulated far from the center.Compared with the other profiles, we have that m(x) increases faster than m(x) too, but in this case the relation between both kinds of matter is practically logarithmic, expected in a relation DM/Baryonic Matter.We know that γ = 1 correspond to just-dark-matter distribution and the other cases with γ > 1 consider a baryonic matter effect [36], which means that δ Gravity could give us a greater value of γ than GR in a data simulation, so we obtain less ordinary Dark-Matter.In any case, we obtain the same conclusion; the ordinary matter leads over δ Matter in the center, but rapidly the second one increases until it is completely dominant.Thus, δ Matter is concentrated outside of the galactic nucleus. The rotation velocity is presented in Figure 4c for 2 C = 1 2 , 1, 3 2 and GR.In all our profiles, the rotation velocity is amplified by the δ Matter effect, such that, if C is bigger, we have higher velocities. From exponential, Einasto and NFW profiles, we saw that δ Matter produces an amplified effect to the ordinary matter, affecting the rotation velocity.Unfortunately, the δ Matter is principally concentrated outside of the galactic nucleus, as opposed to expected DM distribution.On the other side, from Einasto and NFW profiles, we observed a logarithmic relation between m(x) and m(x).That is: In this way, we obtained an additional DM effect.Thus, we can divide the ordinary matter in Baryonic and DM; then, we have δ Baryonic and δ DM.In a cosmological level, the Einstein equations just take into account the ordinary matter, and δ Matter appears in the equation of gµν (see Equations ( 21) and ( 22)).All these mean that the quantity of ordinary (real) DM is less than GR.The rest of the DM effect is due to δ Matter components and the contribution is bigger when we get away from the center.Now, if we compare the spherically homogeneous profile with the other ones, we note that the δ Matter effect is only produced by the distribution of the ordinary matter; the scale is not so important.Thus, if the DM is principally explained by δ DM, then this effect is only important when the distribution of ordinary matter is strongly non homogeneous.For example, we could find some Globular Clusters where the DM, δ Matter in this context, is less than Baryon Matter.Evidence of that has been found in [55,56], where an enormous quantity of DM is not necessary in the formation of some GCs.This computation will be developed in a future work.A more accurate calculation could be developed using a specific profile for DM, an Einasto Profile with a small α or NFW profile with γ = 1 for example, and a second profile for Baryonic Matter.With these considerations, we can isolate the DM effect from Baryonic contribution, including δ Matter components.In [32], a multi-component Einasto profile was used.These computations will be 
also developed in a future work. Schwarzschild Case Until now, the accelerated expansion of the universe has been explained without a cosmological constant, and δ Matter has been studied to explain the DM phenomenon in galactic rotation velocities. In this section, we develop δ Gravity in a Schwarzschild geometry to study the principal phenomena used to test GR, the deflection of light by gravitational lensing and the perihelion precession; we then introduce the effect of δ Gravity on a black hole. Thus, in this case the metric ansatz is given by Equations (126) and (127). Before solving for gµν and g̃µν with the equations presented in Section 2, we need to fix the gauge and the correct boundary conditions. To satisfy Equation (A26), we use a coordinate transformation with µ = GM; in this coordinate system the metric takes a form in which r = µ + √(X₁² + X₂² + X₃²). Equation (A26) is then automatically satisfied, but this system is not convenient to work in, so we impose (A27) and then return to the standard coordinate system, given by (126) and (127). The additional condition needed to complete the gauge will be presented below. Schwarzschild Solution The correct boundary conditions, which give the correct Minkowski limit, are gµν → ηµν and g̃µν → 0 for r → ∞. Now we can solve the equations of motion for the Schwarzschild metric. To simplify the problem, we solve the equations in empty space, i.e., in the region where T̃µν = Tµν = 0. The solutions of our equations of motion (21) and (22) are given by Equations (130)–(132), where ′ = d/dr. Equations (130) and (131) are the well-known Schwarzschild solution of the Einstein equations, where we imposed A(∞) = B(∞) = 1 to obtain gµν → ηµν when r → ∞. As mentioned above, we need to fix the gauge for g̃µν to obtain an additional equation relating Ã(r) and F̃(r); this equation comes from (A27). In [19], we suggested that the boundary condition should be g̃µν → ηµν for r → ∞ (see Equation (48) in that reference), but we have since realized that the conditions presented in this paper are the correct choice: since gµν → ηµν, g̃µν ≡ δgµν and δηµν = 0, it is natural to take g̃µν → 0 for r → ∞. Therefore, the general solution of (133) and (134) is given in terms of the integration constants Ã1, Ã2 and Ã3. The boundary conditions require Ã(∞) = B̃(∞) = F̃(∞) = 0 so that g̃µν → 0 when r → ∞; these conditions simply mean that Ã2 = 0 and F̃1 = 0. Then, the solutions are given by Equations (137)–(139) (with Ã1 = −2a0 and Ã3 = −a1). Thus, we have three parameters: µ comes from the ordinary metric components and represents the mass of a massive object (planets, stars, black holes, etc.), while a0 and a1 are dimensionless and represent the correction introduced by δ Gravity. We will understand the physical meaning of these parameters later. We must remember that Equations (130), (131) and (137)–(139) correspond to the solution in the region without matter, i.e., r > R, where R is, for example, the radius of a star. Generally, the Newtonian approximation can be used, so that R ≫ 2µ. However, the logarithmic part of the solution could be important for black holes, where the Newtonian approximation is not valid. Thus, keeping the leading order in µ/r, the solution reduces to Equations (140)–(144). Notice that a1 disappears in Equations (140)–(144); this means that this parameter only becomes important at the Post-Newtonian level. We will use these expressions in the next section to describe the Gravitational Lensing effect.
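These leading-order expressions feed directly into the lensing calculation of the next subsection. As a preview of the numbers involved, the solar bound on a0 can be estimated from ∆φ = (4µ/r0)(1 + a0) for a grazing ray, using standard solar values and the measured deflection 1.761 ± 0.016 arcsec quoted below [37]; the sketch is purely illustrative.

```python
import numpy as np

G = 6.674e-11           # m^3 kg^-1 s^-2
M_sun = 1.989e30        # kg
R_sun = 6.957e8         # m  (grazing ray: r0 ~ R_sun)
c = 2.998e8             # m/s
arcsec = np.degrees(1.0) * 3600.0

mu = G * M_sun / c ** 2                         # mu = GM in geometric units, ~1.48 km
dphi_gr = 4.0 * mu / R_sun * arcsec             # GR deflection for a grazing ray, ~1.75"

dphi_exp, sigma = 1.761, 0.016                  # measured value [37]
a0_central = dphi_exp / dphi_gr - 1.0           # from dphi = (4 mu / r0)(1 + a0)
a0_sigma = sigma / dphi_gr

print(dphi_gr)                                  # ~1.75 arcsec
print(a0_central, a0_sigma)                     # a0 consistent with zero at the percent level
```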
Gravitational Lensing To describe this phenomenon, we need the null geodesic, in our case, given by (36).Then, we will consider a coordinate system where θ = π 2 such that the trajectory is given by Figure 5 (see, for instance, [54]).The geodesic equations are complicated, but, with some work, we may reduce it to: where u is the trajectory parameter such that x µ = x µ (u) and E and J are constants of motion.From Equation (145), we can see that t → Eu when r → ∞.On the other side, Equation (146) defines a effective potential given by: such that: Using Equations ( 130)-( 132) and ( 137)-( 139), we can see that V e f f (∞) = 0 and v r (∞) = 1.To obtain a plot of V e f f (r), we need to fix a 0 and a 1 .From Figure 5, we know that φ(r) is necessary to study the gravitational lensing.For this, we use Equations ( 146) and ( 147) to obtain: −2 = 1 + F(r 0 y) B(r 0 y) + B(r 0 y) 1 + F(r 0 y) A(r 0 ) + Ã(r 0 ) 1 + F(r 0 ) A(r 0 y) + Ã(r 0 y) where y = r r 0 ≥ 1 is a normalized radius with r 0 the minimal radius, given by dr du | r=r 0 = 0. Thus: . Then, the deflection of light can be obtained solving (150).However, the approximation r 0 >> 2µ is usually used to obtain an explicit result of ∆φ when the matter source is not dense enough.Thus, with this approximation, we obtain: where = 2µ r 0 .We notice that this expression to first order on is exact in GR, but in δ Gravity we have higher order terms.Now, to obtain the deflection of light, we develop Equation (151) such that: We want to describe a complete trajectory, so the photon start from φ(y = ∞) up to φ(y = 1) and then back again to φ(y = ∞) (see Figure 5).In addition, if the trajectory were a straight line, this would equal just π.All of this means that the deflection of light is: From GR, ∆φ = 4µ r 0 .Thus, in our modified gravity, we have an additional term given by 4µa 0 r 0 .On the other side, we have an experimental value ∆φ Exp = 1.761 ± 0.016 for the sun [37] and it is very close to the prediction of GR.This means that, to satisfy the experimental value with δ Gravity, it is necessary that our additional term provide a very small correction, such that: From Equation (153), we can see that a 0 represents an additional mass by δ Matter given by M add = a 0 M, where M is the solar mass.Thus, the result of (154) tells us that δ Matter must be less than 1% close to the Sun.In [57], it was estimated observationally that the DM mass in the sphere within Saturn's orbit should be less than 1.7 × 10 −10 %.On the other side, we expect that δ Matter will be more important in a galactic scale and explain a part of DM. Perihelion Precession In the last section, we used the null geodesic to compute the deflection of light.However, if we want to study the trajectory of a massive object, we need equations in (30).These equations, for θ = π 2 , are given by (see for instance [54]): where dt du obey a fifth order equation: These equations are very difficult to solve, but in the approximation r 0 >> 2µ with r 0 the minimal radius, we have: where y = r r 0 , = 2µ r 0 , j = J r 0 and dr du | r=r 0 = 0, so: From Equations ( 159) and (160), we obtain: However, it is useful to rewrite it using: . 
That is: where: with ē the orbital eccentricity.In GR, Equation ( 162) is exact to first order on ε and the cubic term in λ(φ) explain the mercury's perihelion precession.In δ Gravity, we have high order corrections too, but they are practically suppressed when r 0 >> 2µ.In addition, Equation (164) can be interpreted as an energy redefinition, such that the new energy is given by Ẽ2 ∼ 1 + (E 2 −1) (1+a 0 (2j 2 +1)) , but this modification must be small to satisfy Equation (154), so the orbital movements are equal to GR.On the other side, other corrections appear when r 0 is smaller, close to 2µ.In that case, we must use the exact equations, given by Equations ( 155)-( 157), but they are very complicated.However, we can try to solve them in a particular radius, for example r 0 where dr du = 0.In that case, Equations ( 155) and (157) can be reduced to: where j = J r 0 .They are a fourth and a third order equation in dt du r 0 , respectively.Thus, by iteration, we can reduce the order of these equation, such as: with: and the minimum radius can be obtained numerically from: where we have to use the exact solution of A(r 0 ), B(r 0 ), Ã(r 0 ), B(r 0 ) and F(r 0 ) in Section 5.1. In the process, we imposed that Equations ( 165) and (166) possess one pole in common for Σ(r 0 ).r 0 is an important element to understand the orbital trajectories, but we need to fix a 0 and a 1 to solve it.On the other side, we can verify that in the limit where Ã(r 0 ), B(r 0 ), F(r 0 ) → 0, our results are reduced to GR, which is: In conclusion, δ Gravity gives us important corrections to orbital trajectories, but they do not produce big differences with GR when the trajectory is far away from the Schwarzschild radius, r s = 2µ, on the condition that a 0 is small enough (See Equation ( 154)).This is always true for stars, planets and any low-density object.We saw that a 0 represents the δ Matter contribution; however, the physical meaning of a 1 is unknown yet.To solve this, the study of massive object is necessary.To finish, in the next section, we will introduce the analysis of Black Holes to complete the Schwarzschild case. Black Holes In Section 2.3, we saw that the proper time is defined using the metric g µν , such as GR where g µν ẋµ ẋν = −1.Then, the proper time is given by (37).On the other side, the null geodesic of massless particles tells us that the space geometry is determined by both tensor fields, g µν and gµν .This means that the three-dimensional metric in a Schwarzschild geometry is given by [58]: To guarantee that the three-dimensional metric is definite positive, we need: First, we needed to find the equations of motion for δ Gravity.One of them are Einstein's equations, which gives us g µν , and additionally we have the equation of gµν .Secondly, we presented the modified geometry for this model, incorporating the new field gµν , where the massless particles, like a photon for example, move in a null geodesic of the effective metric g µν = g µν + gµν .Third, we needed to fix the gauge for g µν and gµν .For this, we developed the extended harmonic gauge given by (A26) and (A27). 
In this paper, we studied three particular phenomena at the cosmological level. In the first place, we presented the calculation developed in [25]. Unlike [23] and [24], we found the exact solution for the FLRW geometry in δ Gravity while preserving δ Matter, assuming a universe containing only non-relativistic matter and radiation. We then verified that δ Gravity does not require DE to explain the accelerated expansion of the universe, because a new scale factor R̃(t) is defined with the effective metric, given by (66). We computed the age of the Universe and found it to be practically the same as in GR and Planck. On the other hand, our model ends in a Big-Rip, and we computed that the universe has lived less than half of its life. In addition, we computed the δ Matter content at present: the δ non-relativistic matter is 23% of the ordinary non-relativistic matter. This result may imply that DM is in part δ Matter. A very small quantity of δ radiation was also found. In second place, we studied the Non-Relativistic case. In the Newtonian limit, we obtained an expression similar to GR, but with an effective potential. This potential depends on ρ^(0) and ρ̃^(0), where the latter corresponds to δ Matter. We found a relation between ρ^(0) and ρ̃^(0) and used it with different galactic density profiles to explain DM. First, we saw that for a spherically homogeneous density the δ Matter effect is completely null. If we take this profile as a first approximation to a planet or a star, we conclude that such objects receive no δ Matter contribution, so they behave exactly as in GR. On the other hand, for other kinds of densities, where the matter distribution varies in space, we obtain important modifications. In particular, we used the exponential, Einasto and NFW profiles, where we found that δ Matter produces an amplifying effect on the total mass and on the rotation velocity of a galaxy. Considering all this, we can say that the δ Matter effect is only important when the distribution of ordinary matter is strongly inhomogeneous, so the scale itself is not so important. Therefore, interpreting δ Matter as DM, large structures with a small quantity of DM should exist [55,56]. In addition, we saw that δ Matter has a special behavior, more similar to DM than its ordinary counterpart. We can see this in Figures 3 and 4, where a logarithmic relation between δ and ordinary matter is observed, so the δ Matter contribution grows as we move away from the center. A more complete calculation can be carried out using a multi-component profile to simulate data from real galaxies [32]. In this way, we can isolate the different contributions: ordinary Baryonic Matter, ordinary DM, δ Baryonic Matter and δ DM. An analogous result should be obtained in the CMB Power Spectrum. With all this, we concluded that the DM effect could be explained with considerably less ordinary DM, the principal source of the effect being δ DM. This result would explain why the detection of DM has proved so problematic; however, a field theory description of δ Gravity is necessary to understand the nature of δ Matter.
Finally, we analyzed the Schwarzschild case outside matter. We found an exact solution of the equations of motion and used the Newtonian approximation of these solutions to find the deflection of light by the Sun. To be consistent with the experimental data, the correction must be small, such that δ Matter is < 1% of the total mass at the solar system scale. We then studied the perihelion precession. The exact solution is very complicated because a fifth-order equation must be solved; however, even in the Newtonian approximation we can see interesting, though very small, corrections. Basically, δ Gravity does not introduce important corrections to GR for low-density objects like stars or planets. This means that we need to study high-density objects to observe important δ Gravity effects. To get an idea of how the trajectory of a massive particle is affected by massive objects, we solved the equations at the minimum radius. For that reason, we presented an introduction to Black Holes, where we studied the conditions that guarantee that the three-dimensional metric is positive definite. As in GR, the three-dimensional metric of δ Gravity gives inner and outer event horizon radii, defining different regions, such as an ergosphere whenever the conditions in (174) are violated, even for a Schwarzschild black hole. We understood that a0 measures the quantity of δ Matter (possibly DM), but the meaning of a1 is harder to pin down. In any case, it is only relevant for highly massive objects, so it is important to define these regions. Black holes in δ Gravity must be studied in more detail. We have shown that the combination gµν + g̃µν is the graviton, whereas g̃µν is a ghost [19,[38][39][40][41]. In non-ghost-free models, such as δ Theories [22], the Hamiltonian is not bounded from below. On the other hand, phantom cosmological models, produced by a ghost component, are used to explain the accelerated expansion of the Universe [26][27][28][29][30], but the background solution becomes unstable because of the ghost field. However, some mechanisms can be implemented to restrict the ghost component, avoid the instability and resurrect some of these theories [42][43][44][45][46]. Examples are developed in the Lee–Wick finite electrodynamics, in non-Hermitian models with PT symmetry, and in other models where restrictions on the configuration space are introduced to make the ghost modes harmless. Together with the quantum corrections, which are limited to one loop, it is not clear whether this problem persists in a diffeomorphism-invariant model such as δ Gravity. So far, we have studied several phenomena (the expansion of the Universe, DM, the deflection of light, etc.) and introduced a preliminary discussion of others (Black Holes and Inflation). Further tests of the model must include the computation of the CMB power spectrum, the evolution and formation of large-scale structure in the universe, and a more detailed analysis of DM. These works are now in progress. Figure 2. Exponential Profile Calculation. (a) Ordinary and δ Matter vs. normalized radius, with m0 = 8πR³ρ0. More δ Matter accumulates away from the center, and the behavior of both terms becomes similar as we move away from the center; (b) m̃(x)/m(x) (in units of ε²C) vs. normalized radius. We verify the last conclusion: near the center the ratio is almost linear, m̃(x)/m(x) ∼ x, and at the edge of the galaxy it is nearly constant, m̃(x)/m(x) → (3/2)ε²C; (c) rotation velocity vs. normalized radius for different values of ε²C.
The black dashed line corresponds to the GR case, so the ε²C value indicates the contribution of δ Matter. The similar behavior of both components and the additional mass from δ Matter produce an amplifying effect on the rotation velocity. In these calculations, we have used κρ0R² = 10⁻⁴. Figure 3. Einasto Profile Calculation for α = {0.7, 0.4, 0.1}. (a) Ordinary and δ Matter vs. normalized radius, with m0 = 8πR³ρ0. More δ Matter accumulates away from the center, just as in the exponential case; (b) m̃(x)/m(x) (in units of ε²C) vs. normalized radius. We see that m̃(x) increases faster than m(x), and faster still when α is smaller. Figure 4. Navarro–Frenk–White (NFW) Profile Calculation for γ = {1, 1.2, 1.4}. (a) Ordinary and δ Matter vs. normalized radius, with m0 = 8πR³ρ0. More δ Matter accumulates away from the center, just as in the exponential and Einasto cases; (b) m̃(x)/m(x) (in units of ε²C) vs. normalized radius. Compared with the other profiles, m̃(x) again increases faster than m(x), but in this case the relation between the two masses is practically logarithmic; (c) rotation velocity vs. normalized radius for different values of ε²C. The black dashed line corresponds to the GR case and the ε²C values indicate the contribution of δ Matter. Here we verify the result found for the other profiles: δ Matter amplifies the rotation velocity, so that larger ε²C gives higher velocities. In these calculations, we have used κρ0R² = ((3−γ)^(3−γ)/(4(2−γ)^(2−γ))) × 10⁻⁴. Figure 5. Trajectory in gravitational lensing. R is the radius of the star, r0 is the minimal distance to the star, b is the impact parameter, φ∞ is the incident direction and ∆φ is the deflection of light.
CONSTRAINED GLOBAL OPTIMIZATION OF MULTIVARIATE POLYNOMIALS USING POLYNOMIAL B-SPLINE FORM AND B-SPLINE CONSISTENCY PRUNE APPROACH . In this paper, we propose basic and improved algorithms based on polynomial B-spline form for constrained global optimization of multivariate polynomial functions. The proposed algorithms are based on a branch-and-bound framework. In improved algorithm we introduce several new ingredients, such as B-spline box consistency and B-spline hull consistency algorithm to prune the search regions and make the search more efficient. The performance of the basic and improved algorithm is tested and compared on set of test problems. The results of the tests show the superiority of the improved algorithm over the basic algorithm in terms of the chosen performance metrics for 7 out-off 11 test problems. We compare optimal value of global minimum obtained using the proposed algorithms with CENSO, GloptiPoly and several state-of-the-art NLP solvers, on set of 11 test problems. The results of the tests show the superiority of the proposed algorithm and CENSO solver (open source solver for global optimization of B-spline constrained problem) in that it always captures the global minimum to the user-specified accuracy. Introduction Generally constrained global optimization of nonlinear programming problems (NLP) is the study of how to find the best (optimum) solution to a problem. The constrained global optimization of NLPs is stated as follows. Branch-and-bound framework is commonly used for solving constrained global optimization problems [13,17]. For instance, several interval methods [14,18,19,35] use this framework to find the global minimum of NLPs. Since interval analysis methods require function evaluation, which leads to a computationally slow algorithm. Compared with these methods the global optimization algorithm of multivariate polynomial using Bernstein form, e.g. [26,27] has the advantage that it avoids function evaluations which might be costly if the degree of the polynomial is high. Global optimization of polynomials using the Bernstein approach needs transformation of the given multivariate polynomial from its power form into its Bernstein form. The minimum and maximum values of the Bernstein coefficients provide lower and upper bounds for the range of polynomial. Generally, this range enclosure (i.e. bounds) obtained is overestimated in nature and can be improved by degree elevation. Unfortunately, this process implies the increase of computation time. In this paper we propose polynomial B-spline as an inclusion function [7,9,11,25]. The minimum and maximum B-spline coefficients provide lower and upper bounds for the range of polynomial. The range enclosure obtained using the polynomial B-spline form can be sharpened by increasing the number of B-spline segments, i.e. without degree elevation as shown in Figure 1, which motivates us to use polynomial B-spline form as an inclusion function. In the B-spline approach for unconstrained global optimization [29] a B-spline is used to approximate the objective function with randomly scattered data using the least-square and pseudo-inverse methods. The use of B-spline approach for constrained global optimization is given in [11,13] and references therein. Strength of the B-spline form, and thus solvers operating on the B-spline form, is the possibility for exact representation of multivariate (piecewise) polynomials and approximate representation of any function by sampling. 
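To illustrate the range-enclosure idea numerically before the formal development: a degree-3 polynomial can be represented exactly by a cubic B-spline, and the minimum and maximum of the resulting coefficients (control points) bound its range on the box. The sketch below obtains an exact B-spline form by cubic spline interpolation with SciPy, a convenient shortcut rather than the power-form-to-B-spline conversion of [21,22]; the polynomial is the cubic used in the examples of Section 2.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# p(x) = 3.371 x^3 - 10.10 x^2 + 8.233 x + 2 on [0, 2]
p = np.poly1d([3.371, -10.10, 8.233, 2.0])

# A cubic spline interpolant of a cubic polynomial reproduces it exactly, so its
# B-spline coefficients (control points) are an exact B-spline form of p.
xs = np.linspace(0.0, 2.0, 8)                 # more sites -> more segments -> tighter bounds
spl = make_interp_spline(xs, p(xs), k=3)

lo, hi = spl.c.min(), spl.c.max()             # coefficient (control-point) bounds
true = p(np.linspace(0.0, 2.0, 2001))
print(lo, hi)                                 # enclosure of the range of p on [0, 2]
print(true.min(), true.max())                 # the true range lies inside [lo, hi]
```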
Whereas we follow the procedure given in Section 2 by [21,22] to obtain the B-spline representation of a multivariate polynomial. This procedure do not require sample points and corresponding function evaluations. To the best of our knowledge, there are few papers in the literature on B-spline constrained global optimization [11,13]. In this work, we propose B-spline based algorithms for solving non-convex nonlinear multivariate polynomial programming problems, where the objective function and constraints ( & ℎ ) are limited to be polynomial functions. The proposed work extends the Bernstein method in [26,27] and B-spline method in [11][12][13] for constrained NLPs. The extensions are based on tools such as B-spline hull consistency (BsHC) and B-spline box consistency (BsBC) to contract the variable domains. The merits of the proposed approach are: (i) it avoids evaluation of the objective function and constraints; (ii) an initial guess to start optimization is not required, only an initial search box bounding the region of interest; (iii) it guarantees that the global minimum is found to a user-specified accuracy, and (iv) prior knowledge of stationary points is not required. Numerical performance of the proposed basic and improved algorithms are tested on 11 standard benchmark problems taken from [1,4,5,10,16] with dimensions varying from 2 to 7 and the number of constraints varying from 1 to 11. The optimal value of global minimum obtained with the proposed algorithms for 3 test problems are also compared with CENSO, GloptiPoly and some of the well-known NLP solvers. The rest of the paper is organized as follows. In Section 2, we give the notations and definitions of the B-spline form. In Section 3, we present the Bspline constrained global optimization and outline range enclosure property, subdivision procedure, the cut-off test, and the basic algorithms. In Section 4, we present the B-spline hull, B-spline box consistency techniques, and the B-spline constrained global optimization. In Section 5, we first compare the performances of the proposed basic and improved algorithms, and then we compare the optimal value of global minimum obtained using proposed algorithm with GloptiPoly [15] and several state-of-the-art NLP solvers. We give the conclusion of the work in Section 6. Univariate case Firstly, we consider a univariate polynomial to be expressed in terms of the B-spline basis of the space of polynomial splines of degree ≥ (i.e. order + 1). By substituting (2.1) into (2.7), we get Multi segment B-splines Let us consider a polynomial ( ) = 3.371 3 − 10.10 2 + 8.233 + 2 and ∈ [0, 2]. Its polynomial B-spline plot with number of segments equal to 1, 2 and 3 are shown in Figure 1. The B-spline with a single segment has four control points, while the one with two segments has five control points, and the one with three segments has six control points. The advantage of B-spline with more number of segments is that we have more control points. This gives a tight range enclosure without having to increase the degree of the B-spline. The drawback of having more segments is the increase in computation time with the number of B-spline coefficients. In our application to global minimization, we find that the B-spline having a number of segment equal to the order of B-spline plus one is a good option. (1) equal to order of B-spline ( = + 1): Let us continue considering same polynomial ( ) as above. 
Its polynomial B-spline form of degree = 3 and order 4 with the number of segments taken equal to order of B-spline, will consist of seven B-spline coefficients, i.e. seven B-spline control points and seven B-spline basis functions. The plot of these seven basis functions is shown in Figure 2. As seen from this figure, one of the B-spline basis function that is 3 3 lies on the entire domain of . (2) taken equal to order of B-spline plus one ( = + 2): We continue with the same polynomial ( ) as above and obtain the polynomial B-spline form with the number of segments equal to order plus one. Now, the B-spline has eight B-spline coefficients and eight B-spline basis functions. As shown in Figure 3, half the B-spline basis functions are having the support of lower domain value of , whereas the other half has the support of upper domain value of . Because of this symmetry, a B-spline with the number of segments equal to order plus one is a good option for our application of global minimization. Multivariate case Now, we derive the B-spline representation of a given multivariate polynomial . . . ∑︁ where := ( 1 , 2 , . . . , ), and := ( 1 , 2 , . . . , ). By substituting (2.1) for each , equation (2.10) can be written as The B-spline form of a multivariate polynomial is defined by (2.11). The partial derivative of a polynomial in a particular direction can be found from the B-spline coefficients of the original polynomial on a box b ⊆ x, the first partial derivative with respect to of a polynomial ( ) in B-spline form in [34] where represents knot vector of . Now, ′ (b) contains an enclosure of the range of the partial derivative of on b. Example We consider following example to explain the above ideas. Example 2.1. Let ( , ) = 2 + 2 − − + 0.34 and , ∈ [0.5, 1.5]. We want to obtain the polynomial B-spline form having two B-spline segments for given power form polynomial. The matrix form representation of ( , ) is The degree of variable in the given polynomial is 1 = 2 and that of is 2 = 2. The degree of B-spline in direction 1 , can be greater or equal to the degree of . Similarly, the B-spline in direction will have a degree 2 equal to or greater than degree of variable . In practice, we can therefore take 1 = 1 = 2 and 2 = 2 = 2. Therefore, the order, of B-spline will be = + 1 = 3 in both the directions. As = 2, the B-spline will have two segments in each direction. As , ∈ [0.5, 1.5], the knot vector for both the variables will be the same. From (2.11), B-spline representation of ( , ) can be expressed in matrix form as From (2.13), we calculate the values of B-spline coefficients as A simple computation leads to the B-spline coefficient matrix B-spline constrained global optimization Let ∈ N be the number of variables and = ( 1 , 2 , . . . , ) ∈ R . A multi-index is defined as = where IR denotes the set of compact intervals. Let wid x denotes the width of x , that is wid Global optimization of polynomials using the polynomial B-spline approach needs transformation of the given multivariate polynomial from its power form into its polynomial B-spline form. Then B-spline coefficients are collected in an array (x) = ( (x)) ∈ , where = { : ≤ }. This array is called a patch. We denote 0 as a special subset of the index set comprising indices of the vertices of this array, that is (3.1) Range enclosure property The following lemma describes the range enclosure property of the B-spline coefficients. 
Obtaining the B-spline coefficients of multivariate polynomials by transforming the polynomial from power form to B-spline form provides an enclosure of the range of the multivariate polynomial on x. Then by Lemma 3.1, the minimum and the maximum values of B-spline coefficient provide lower and upper bounds for the range of polynomial . This range enclosure will be sharp if and only if min( (x)) ∈ (respectively max( (x)) ∈ ) is attained at the indices of the vertices of the array (x), as described in Lemma 3.2. This condition is known as the vertex condition. Based on Bernstein coefficients range enclosure proofs [32], the proof of Lemma 3.1 is given next. Proof. Let = + 1 represent the set of integers between and ( < ). Also let be the B-spline representation for polynomial and let¯be its range, that is min Proof. Let be the B-spline representation for polynomial and let¯be its range, We first note that (0) = 1 , We have that (0) = 1 = = + = (1) and the property is valid. and assume that min for simplicity. Then the bound is sharp, i.e. the same argument is valid if min Definition 3.3. The vertex condition is said to be met within a given tolerance , if As said earlier set 0 comprises of indices of the vertices of the B-spline coefficient array (x), where min gives the minimum value of B-spline coefficient at any vertex point. If the difference between the minimum value of B-spline coefficient at any vertex and the minimum value in B-spline coefficient array is less than , then vertex condition is said to be met within a given tolerance . B-spline subdivision procedure The proposed algorithm is based on branch and bound framework of global optimization. Therefore we need to use domain subdivision. Generally, the range enclosure obtained from Lemma 3.1 is over-estimated and can be improved either by subdividing the domain, degree elevation of the B-spline or by increasing the number of B-spline segments. Subdivision is generally more efficient than degree elevation strategy [6] or increasing the number of B-spline segments. Therefore subdivision strategy is preferred over the latter two. A subdivision in the th direction (1 ≤ ≤ ) is a bisection perpendicular to this direction. Let be any subbox. Further suppose that x is bisected along the th component direction then, two subboxes x and x are generated as The cut-off test As mentioned earlier, the minimum and maximum B-spline coefficients provide the range enclosure of the function. Let˜be the current minimum estimate, and {b, (b)} be the current item for processing. We denote the minimum over the second entry of item {b, (b)} as . If is greater than˜, then this item cannot contain global minimum and it can be discarded. If the maximum over the second entry of item {b, (b)} is lesser thañ , then the current minimum estimate can be updated, and˜takes this maximum value as the new value. Next we present the cut-off test algorithm. A basic B-spline constrained global optimization algorithm In this subsection, we present the basic B-spline algorithm for constrained global optimization of multivariate nonlinear polynomials. The algorithm is inspired by the one described in [31,33]. This basic algorithm uses the polynomial coefficients of the objective function, the inequality constraints, and the equality constraints. The inputs to the algorithm are the polynomial degrees and the initial search box, while the outputs are the global minimum and global minimizers. 
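To illustrate how the subdivision and cut-off steps fit together, here is a simplified, unconstrained branch-and-bound sketch in Python. It assumes a user-supplied routine enclosure(box) returning guaranteed lower and upper bounds of the objective on a box (for instance, the minimum and maximum B-spline or Bernstein coefficients); the feasibility tests on the constraint enclosures used by the basic constrained algorithm are omitted here, and the function name and structure are ours rather than the paper's pseudocode.

import numpy as np

def branch_and_bound(enclosure, box, tol=1e-6, max_iter=100000):
    # box: list of (lo, hi) pairs, one per variable.
    box = [tuple(map(float, iv)) for iv in box]
    lo0, hi0 = enclosure(box)
    f_tilde = hi0                        # current minimum estimate
    work = [(lo0, box)]
    solutions = []
    for _ in range(max_iter):
        if not work:
            break
        work.sort(key=lambda item: item[0], reverse=True)
        lower, b = work.pop()            # box with the smallest lower bound
        if lower > f_tilde:              # cut-off test: cannot hold the global minimum
            continue
        widths = [hi - lo for lo, hi in b]
        if max(widths) < tol:
            solutions.append((lower, b))
            continue
        k = int(np.argmax(widths))       # bisect along the widest direction
        lo, hi = b[k]
        mid = 0.5 * (lo + hi)
        for half in ((lo, mid), (mid, hi)):
            sub = list(b)
            sub[k] = half
            lo_f, hi_f = enclosure(sub)
            f_tilde = min(f_tilde, hi_f) # update the current minimum estimate
            if lo_f <= f_tilde:
                work.append((lo_f, sub))
    return f_tilde, solutions

In the paper's basic constrained algorithm, the same coefficient enclosures are additionally computed for every inequality and equality constraint on each subbox, so that subboxes proven infeasible are discarded and the minimum estimate is updated only from boxes that pass the feasibility tests.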
The polynomial degree is used to compute the B-spline segment number, as the B-spline is constructed with number of segments equal to order of the B-spline plus one. As equality constraints ℎ ( ) = 0 are difficult to verify on computers with finite precision, the equality constraints ℎ ( ) = 0 in (1.3) are replaced by relaxed constraints ℎ ( ) ∈ [− zero , zero ], = 1, 2, . . . , , where zero > 0 is a very small number. The basic algorithm works as follows. We start the algorithm by computing the B-spline segment vectors , and ℎ , where = [ 1 , . . . , ], for each variable occurring in the objective, inequality and equality polynomials. We keep it as + 1 for each variable, giving = [ 1 + 2, . . . , + 2]. Then, we compute the B-spline coefficients of objective, inequality and equality constraint on the initial search box. We store them in arrays (x), (x) and ℎ (x) respectively. We initialize the current minimum estimate˜to the maximum B-spline coefficient of the objective function on x. Next, we initialize a flag vector with each component to zero, a working list ℒ with the item {x, (x), (x), ℎ (x), }, and a solution list ℒ sol to the empty list. We then pick the last item from the list ℒ and delete its entry from ℒ. For this item, we subdivide the box x along the longest width direction creating two subboxes b 1 and b 2 . We compute the B-spline coefficients arrays {b , (b ), (b ), ℎ (b )}, = 1, 2 for b 1 ,b 2 and the B-spline range enclosures D (b ), D (b ) and D ℎ (b ) of objective, inequality and equality constraint polynomials respectively. We check the feasibility of the inequality and equality constraints for b 1 , b 2 using the B-spline coefficients of the constraint polynomials functions by doing the following tests: zero ] for all = 1, 2, . . . , and = 1, 2, . . . , , then b is a feasible box. -If D (b ) > 0 for some , then b is a infeasible box and can be deleted. {Set local current minimum estimate} We then have the following theorem, The following theorem discusses the convergence properties of basic constrained global optimization algorithm. Theorem 3.9. Let basic constrained global optimization algorithm applied to the box x, the inclusion functions , , and of , , and ℎ, respectively. Let the contraction assumption (3.5) and (3.6) and condition ( ) be satisfied. Then the sequence ( ) is nested and ∩ ∞ =1 = D which means that → D. Furthermore˜→ * with˜≤ * and ↘ * as → ∞. The similar proof for convergence of algorithm for global optimization of B-spline constrained problems is given in [13]. Improved algorithm with B-spline consistency techniques We can apply the concept of consistency to each constraints of the problem to eliminate subboxes of the given box that cannot contains the solution. Let ( ) = ( 1 , . . . , ) = 0, ∈ x be a constraint that must be satisfied in an optimization problem we use B-spline box and hull consistency to eliminate subboxes of x that cannot contain a point satisfying ( ) = ( 1 , . . . , ) = 0. The B-spline box and hull consistency can be applied for inequalities. Suppose that in place of the equality we have an inequality ( ) ≤ 0. We can replace this inequality by ( ) = [−∞, 0] and obtain the equation ( ) + [0, +∞] = 0. B-spline box consistency requires application of interval Newton method, generally it does not perform well when the variable bound is very wide. 
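Before the formal presentation, the flavor of these consistency contractions can be shown with a small Python sketch. It contracts one variable of an equality constraint by isolating its highest-degree term, bounding the remaining expression over the current box with plain interval arithmetic (where the paper would use B-spline coefficient bounds), inverting, and intersecting with the current domain. The constraint x1^2 + x2^2 − 1 = 0 and the function name are our own illustration, not an example taken from the paper.

import math

def contract_x1_circle(x1, x2):
    # Contract the domain of x1 for h(x1, x2) = x1**2 + x2**2 - 1 = 0,
    # with x1 and x2 given as (lo, hi) intervals and x2 assumed non-negative.
    lo2, hi2 = x2
    rhs = (1.0 - hi2 * hi2, 1.0 - lo2 * lo2)      # interval enclosure of x1**2
    if rhs[1] < 0.0:
        return None                                # constraint infeasible on this box
    rhs = (max(rhs[0], 0.0), rhs[1])               # x1**2 must be non-negative
    r_lo, r_hi = math.sqrt(rhs[0]), math.sqrt(rhs[1])
    hull = (-r_hi, r_hi)                           # hull of [-r_hi,-r_lo] U [r_lo,r_hi]
    lo1, hi1 = x1
    new = (max(lo1, hull[0]), min(hi1, hull[1]))   # intersect with current domain of x1
    return None if new[0] > new[1] else new

print(contract_x1_circle(x1=(-2.0, 2.0), x2=(0.0, 0.5)))   # contracts x1 to (-1.0, 1.0)

B-spline box consistency proceeds differently: it fixes all variables but one, uses the B-spline coefficients of the constraint to decide whether an endpoint can be moved, and then narrows that endpoint with an interval Newton step, as described next.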
Next we present the B-spline box consistency (BsBC) and B-spline hull consistency (BsHC) techniques with an examples which show that B-spline hull consistency gives 60% pruning and B-spline box consistency gives 38% pruning in variable bound. These can be viewed as extensions of the ideas in [27] in the context of the polynomial B-spline based approach to global optimization. B-spline hull consistency In this subsection, we present the B-spline hull consistency to reduce the search region. B-spline hull consistency can be viewed as extension of interval hull consistency in the context of the B-spline form. We present next the procedure of interval hull consistency described in [14,30]. Let us start by considering to apply hull consistency for domain reduction of a variable 1 using a two variable equality constraint ℎ( 1 , 2 ) = 0, ℎ( 1 , 2 ) = 0,0 + 1,0 1 + 0,1 2 + 1,1 1 2 + · · · + 0,2 2 2 + · · · + ,0 1 + · · · + , 2 = 0. (4.1) The implementation of the hull consistency involves the constraint inversion. To obtain constraint inversion we select term having only one variable with highest degree. Lets consider ,0 1 then constraint inversion is given as, Then we use B-spline expansion to obtain the range h ′ (x 1 , x 2 ) of constraint inversion function ℎ ′ ( 1 , 2 ). The new value for interval x 1 is estimated in [14] as This procedure is repeated to contract the domain of 2 variable using the same constraint function ℎ( 1 , 2 ) = 0. This domain box is further contracted in a similar way using the remaining equality constraints. The hull consistency method can also be applied to inequality constraints, we need only replace an inequality of the form ( ) ≤ 0 by the equation ( ) = [−∞, 0] in [14]. Then B-spline hull consistency can be applied as before to this equality constraint. We illustrate the B-spline hull consistency method for equality constraint via an example. To get a new interval for 2 1 , the interval methods require the evaluation of ℎ 1 (x), so computing the B-spline coefficients for ℎ 1 (x) we get Since (x ′ 1 ) 2 must be non-negative The updated value of x 1 is therefore Next we present B-spline hull consistency (BsHC) algorithm. Algorithm 3 : BsHC ( , , , x, , , ). Input : Here is a cell structure containing the coefficients array of all the constraints, is a cell structure, containing degree vector for all constraints. Where elements of degree vector defines the degree of each variable occurring in all constraints polynomial, is a cell structure containing vectors corresponding to all constraints, i.e. , ℎ . Where elements of this vector define the number of B-spline segments in each variable direction, the number of constraint variables , number of inequality constraints and number of equality constraints . Output: A box x that is contracted using B-spline box consistency technique for the given constraints. Begin Algorithm {Formulate the constraint inverse polynomial} From the coefficient matrix ( ), choose the monomial term in i.e. having the highest degree and substitute zero for its coefficient i.e = 0, then multiply the coefficient matrix ( ) by −1. (c) {Compute h ′ } Compute the B-spline coefficients of the constraint inverse polynomial, and then obtain h ′ , as minimum to maximum of these B-spline coefficients. The algorithm in [8] is suggested for the computation. if < + 1 then Compute B-spline box consistency First of all, we recall the procedure of interval box consistency described in [14,30]. 
The implementation of interval box consistency involves the application of a one dimensional Newton method, to solve a single equation for a single variable. Let us start by applying box consistency to the following equality constraint (4.4) We use box consistency to eliminate those subboxes of x that do not satisfy ℎ( ) = 0. If we replace all the variables except th variable by their interval bounds, we obtain the equation If 0 / ∈ ℎ( ) for in some subinterval x ′ of x , then we do not have consistency for ∈ x ′ and the subbox ( If 0 / ∈ h( ), then < so we use a interval Newton method to remove points from the lower end of x that are less than . Thus, the Newton result when expanding about the point is [14] ( The new contracted interval can be obtained by intersecting the interval values of x and . That is, Similarly, the upper bound can be contracted using the right narrowing operation. This procedure is repeated for x , = 1, . . . , , using the other equality constraints. We can also apply box consistency to an inequality constraints, by replacing an inequality of the form ( ) ≤ 0 by the equation ( ) = [−∞, 0] in [14], that is rewrite this an another function ℎ( ) given by Lets consider to apply box consistency for domain reduction of variable x ′ = [ , ] using equality constraint (4.9). To apply B-spline Newton contractor for ℎ(x ) for end point , we required interval range enclosure h( ) of equality constraint (4.9). To obtain h( ) we first find interval range enclosure g( ) for ( ) then the interval range enclosure h( ) is obtained by considering min g( ) and ∞ as lower and upper bound of h( ) respectively. Thus by applying B-spline Newton contractor for ℎ( ) for end point , we obtain )︁ . When we apply the B-spline box consistency to a selected variable of a multivariate constraint, then only that variable's domain will be contracted. The domains of the remaining variables will remain unaffected. We apply B-spline box consistency to each variable in turn, to get a contracted box in all variables. Suppose ℎ( ) = 0 is the given equality constraint. We compute the B-spline coefficients array (x) for this constraint, and consider a variable direction, say the first one x 1 = [ , ]. In the B-spline box consistency, we try to increase the value of and decrease the value of , thus effectively reducing the width of x 1 . The procedure to increase the value of is listed in the below steps, (1) Compute interval ℎ( ). Corresponding to 1 = , find all B-spline coefficients in (x). The minimum to maximum of these B-spline coefficients gives an interval ℎ( ). As 0 ∈ ℎ( ), means ℎ( ) is not completely positive interval, then we cannot increase . We instead switch over to the other end point and try to decrease it in the same way as we try to increase . We illustrate the BsBC method for equality constraints via an example. Note that we are constructing B-spline with + 1 number of segments. Consider the application of BsBC along the first component direction, that is, along 1 . Along the direction 1 , the first row corresponds to 1 = = 0.2, and the sixth row corresponds to 1 = = 1. Along the first row, the minimum to maximum values of the B-spline coefficients, are −0.88 and 0.12 giving h( ) = [−0.88, 0.12]. Along the sixth row, the minimum and maximum values of the B-spline coefficients, respectively are 2 and 3 giving h( ) = [2,3]. Since 0 ∈ h( ) the left end point cannot be increased. However, 0 ̸ ∈ h( ), hence the right end point can be decreased. 
The partial derivative in the direction x1, that is, h'_1, is then computed from the B-spline coefficients, and the updated (contracted) value of x1 follows. Next we present the B-spline box consistency (BsBC) algorithm. Algorithm 4: BsBC. Input: a cell structure containing the coefficient arrays of all the constraints; a cell structure containing the degree vectors of all the constraints, whose elements define the degree of each variable occurring in the constraint polynomials; a cell structure containing, for all constraints (inequality and equality), the vectors whose elements define the number of B-spline segments in each variable direction; the number of constraint variables; the number of inequality constraints; and the number of equality constraints. Output: A box x that is contracted using the B-spline box consistency technique for the given constraints. Begin Algorithm {Compute the B-spline coefficients of the constraint polynomial} using the corresponding coefficient array, degree vector, segment vector, and the box x; the algorithm in [8] is suggested for this computation. If the resulting enclosure excludes zero, there is no zero of h in the entire interval x, and hence the constraint h is infeasible over the box x; exit the algorithm in this case with x' = ∅. Proposed improved algorithm for constrained global optimization To apply the proposed B-spline box and B-spline hull consistency algorithms within the basic algorithm (see Sect. 3.4), we modify step 7 of the basic algorithm to step *7 given below. We refer to the resulting modified algorithm as the improved algorithm. Numerical tests In this section, we present the results and analysis of our tests. The computations are done on a PC with an Intel i3-370M 2.40 GHz processor and 6 GB RAM, and the algorithms are implemented in MATLAB [24]. An accuracy of 10^-6 is prescribed for computing the global minimum and minimizer(s) in each test problem. For the tests, we select 11 benchmark optimization problems taken from [1,4,5,10,16] (described in Appendix A). Table 1 reports the global minimum obtained with the proposed algorithms. Comparisons between the proposed basic and improved algorithms First, we test and compare the performance of the basic and improved B-spline constrained global optimization algorithms on the set of 11 test problems. The performance metrics are the number of subdivisions and the computation time (in seconds) required to compute the global minimum; these values are reported in Tables 2 and 3, respectively. For each metric, we report the percentage improvement, computed as ((PMBA − PMIA)/PMBA) × 100, where PMBA is the performance metric with the basic algorithm and PMIA is the performance metric with the improved algorithm (this metric is illustrated in the short sketch below). In Table 2, we compare the number of subdivisions required to obtain the global minima with the basic and improved algorithms. We find that, for all test problems except P2, the number of subdivisions required to compute the global minimum is reduced by between 2.22% and 96%, whereas the algorithm is unable to solve problem P10. In Table 3, we compare the computation time required to obtain the global minima with the basic and improved algorithms; the improved algorithm is slower than the basic algorithm except for problems 3, 6, 7, and 9. The B-spline box and B-spline hull consistency algorithms are found to be very effective in pruning the search domain by discarding subboxes that cannot contain global minimizers.
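As a small clarification of the comparison metric, the percentage improvement reported in Tables 2 and 3 can be computed as below; the numbers in the example are placeholders, not values taken from the tables.

def improvement_percent(pm_basic, pm_improved):
    # Percentage reduction of a performance metric (number of subdivisions or
    # computation time) achieved by the improved algorithm over the basic one.
    return 100.0 * (pm_basic - pm_improved) / pm_basic

# Hypothetical example: 450 subdivisions with the basic algorithm vs. 18 with
# the improved algorithm corresponds to a 96% reduction.
print(improvement_percent(450, 18))   # 96.0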
As shown in Section 4.3, during each iteration the improved algorithm needs to compute B-spline coefficients more than once (in the sub-steps of step *7 of the improved algorithm) compared with the basic algorithm. In the improved algorithm, the B-spline coefficients are recomputed in each iteration after domain pruning by the B-spline box and B-spline hull consistency techniques. Moreover, in each iteration the B-spline hull consistency algorithm requires computing B-spline coefficients after each constraint inversion (cf. Section 4.1) to contract the variable bounds, and all variable bounds are contracted using each constraint function individually. This is also evident from the results for the improved algorithm in Table 3. We note that a more sophisticated implementation of the B-spline box and hull consistencies could alleviate the computational bottleneck associated with them. In the case of problem P2, the improved algorithm does not reduce the number of subdivisions and is still slow, because B-spline box consistency (a Newton-based method) is fast only when the initial intervals are small; unfortunately, its running time increases linearly with the size of the initial interval [36]. Comparison with GloptiPoly and other NLP solvers Next, we compare the optimal value of the global minimum found with the proposed B-spline algorithms (basic and improved) against CENSO, GloptiPoly [16,20], and the BARON, LINDO Global, and SCIP NLP solvers. For these tests, we consider the above set of 11 test functions; the idea is to investigate whether the same optimal value of the global minimum can be found with the considered methods. The CENSO solver implements the algorithm in [13] and is available on GitHub: https://github.com/bgrimstad/censo. The GAMS and GloptiPoly source code for these problems is available at [https://bit.ly/2YHonrh]. The GAMS interface for BARON, LINDO Global, and SCIP is available through the NEOS server [28]. Table 4 reports only those test functions from the above set of 11 for which the optimal value of the global minimum obtained with the proposed algorithms is compared against CENSO, GloptiPoly, and the NLP solvers. Bold values in the table indicate a local minimum value; the table notes (⋆) and (#) indicate that an algorithm did not give a result even after one hour and was therefore terminated. For P8 and STP1, BARON, LINDOGlobal, and SCIP capture only a local minimum, whereas CENSO, GloptiPoly, and the proposed algorithms find the correct global minimum. For P10, BARON, LINDOGlobal, and SCIP capture only a local minimum, and GloptiPoly did not give a result even after an hour and was therefore terminated. For P8 and STP1 (STP1 is a problem constructed by the authors based on P8), the relaxation order for GloptiPoly had to be systematically increased to a high order to obtain convergence to the final results. The proposed algorithms and CENSO find the optimal value of the global minimum for all the test problems considered, whereas BARON and LINDO Global are unable to capture the optimal value of the global minimum for any of the 3 test functions, even after specifying a large value of the AbsConFeasTol option in GAMS. The SCIP solver is unable to find the optimal value of the global minimum for test problem P10. For GloptiPoly, the relaxation order needs to be systematically increased to a higher order to obtain convergence to the optimal value of the global minimum.
As the dimension and the number of constraints of a problem increase, the relaxation order must be gradually increased to a value greater than or equal to the dimension of the problem in order to obtain convergence. For a medium-dimensional problem (for instance, dimension 7 with 11 constraints), GloptiPoly exhausts memory even with a small relaxation order, whereas the other NLP solvers may be able to solve larger problems but return non-optimal values of the global minimum. We note that the proposed algorithms are implemented in MATLAB, whereas, with the exception of GloptiPoly, all the other NLP solvers mentioned above are implemented in different languages; a comparison of computation times between the proposed algorithms and these NLP solvers is therefore not carried out. Conclusion We presented basic and improved algorithms for constrained global optimization of multivariate polynomials using the polynomial B-spline form. The performance of the basic and improved algorithms was tested and compared on 11 test problems with dimensions ranging from 2 to 7 and numbers of constraints varying from 1 to 11. The results show that the improved algorithm is more efficient in terms of the number of subdivisions, at the cost of a small amount of extra computation time. We also compared the optimal value of the global minimum obtained with the proposed algorithms against CENSO, GloptiPoly, and some well-known NLP solvers on a set of 3 test problems. The results show the superiority of the proposed algorithms and of the CENSO solver, in that they capture the global minimum to the user-specified accuracy. One possible extension of this work is to investigate the performance of the proposed B-spline approach on problems with a larger number of variables; this will be addressed in future work.
The type II transmembrane serine protease matriptase cleaves the amyloid precursor protein and reduces its processing to β-amyloid peptide Recent studies have reported that many proteases, besides the canonical α-, β-, and γ-secretases, cleave the amyloid precursor protein (APP) and modulate β-amyloid (Aβ) peptide production. Moreover, specific APP isoforms contain Kunitz protease-inhibitory domains, which regulate the proteolytic activity of serine proteases. This prompted us to investigate the role of matriptase, a member of the type II transmembrane serine protease family, in APP processing. Using quantitative RT-PCR, we detected matriptase mRNA in several regions of the human brain with an enrichment in neurons. RNA sequencing data of human dorsolateral prefrontal cortex revealed relatively high levels of matriptase RNA in young individuals, whereas lower levels were detected in older individuals. We further demonstrate that matriptase and APP directly interact with each other and that matriptase cleaves APP at a specific arginine residue (Arg-102) both in vitro and in cells. Site-directed (Arg-to-Ala) mutagenesis of this cleavage site abolished matriptase-mediated APP processing. Moreover, we observed that a soluble, shed matriptase form cleaves endogenous APP in SH-SY5Y cells and that this cleavage significantly reduces APP processing to Aβ40. In summary, this study identifies matriptase as an APP-cleaving enzyme, an activity that could have important consequences for the abundance of Aβ and in Alzheimer's disease pathology. Therefore, APP-processing proteases are the subject of an increasing number of studies related to AD. Within the extracellular space in vertebrates, several proteases act as essential modulators of development and tissue remodeling (10). One of these, matriptase, is a member of the type II transmembrane serine protease (TTSP) family that is encoded by the suppression of tumorigenicity-14 (ST14) gene (11). Matriptase is mostly expressed in epithelial cells (12) and involved in development and maintenance of epithelial barrier integrity such as in the skin and gut (11). This protease is a cell-surface glycoprotein that undergoes catalytic autoactivation and is released from the cell surface as a soluble, shed form to the pericellular environment (13). Indeed, matriptase had initially been identified in the culture medium of breast cancer cells and detected in human milk (12,14,15). This shed form of matriptase can thus interact with proteins located at the cell surface or in the extracellular matrix. Numerous matriptase substrates have been identified, including hepatocyte growth factor (16), prostasin (17), urokinase-type plasminogen activator (18), protease-activated receptor 2 (18), and epithelial cell adhesion molecule (19). The serine protease catalytic activity of matriptase is physiologically controlled by interaction with hepatocyte growth factor activator inhibitor types 1 and 2 (HAI-1 and HAI-2) through their KPI domain (20). This regulation is essential for proper function of matriptase in adult and embryonic tissues (12,21). Although matriptase was originally described as solely expressed in epithelial cells, it was shown that loss of inhibition of matriptase disrupts neural tube closure in mice (21), suggesting that it plays a role in neurogenesis. 
A recent report also demonstrated that matriptase is expressed in mouse neural progenitor cells and promotes cell migration and neuron differentiation (22), whereas another study revealed the presence of matriptase in human glioblastoma multiforme cells where it regulates the neuronal channel ASIC1 (23). Moreover, a member of the TTSP family, matriptase-2, which is almost exclusively expressed in hepatocytes, was shown to alter APP cleavage either indirectly through the activation of the metalloprotease meprin-β, which cleaves APP695 (24), or directly through an interaction with the KPI domain of APP770, which reportedly inhibits matriptase-2 enzymatic activity and protects APP from being processed by this enzyme (25). Together, these reports led us to investigate whether matriptase is also an APP-cleaving enzyme. Here we show that matriptase is expressed in human brain and neuronal tissues and that the enzyme directly interacts with and cleaves the three APP isoforms at a specific residue in their ectodomains. Furthermore, exogenous addition of matriptase alters Aβ production in neuronal SH-SY5Y cells. These events can have important consequences for the overall processing profile of APP in normal conditions as well as in AD. Matriptase is expressed in the human brain To investigate matriptase expression in the human brain, RT-quantitative PCR (qPCR) analysis was performed on postmortem human brain tissues (mean age at death was 73.8 ± 12.2 years) (Fig. 1A). mRNA levels of matriptase (ST14 gene) were measured in human frontal cortex, hippocampus, temporal cortex, and cerebellum tissues. Given that matriptase expression in epithelial cells of intestinal and especially of colon tissue is high (26), the level of matriptase mRNA in the brain regions was expressed relative to its expression in colon. Matriptase transcripts were clearly detectable in the frontal cortex, hippocampus, temporal cortex, and cerebellum with no significant difference between the regions tested, but at much lower levels than in colon tissue (Fig. 1A). [Figure 1 legend: A, ST14 mRNA levels in frontal cortex, hippocampus, temporal cortex, and cerebellum (n = 7), expressed relative to human colon tissue (n = 3); differences between regions not significant (Student's t test, p > 0.05); error bars, means ± S.D. B, ST14 mRNA levels in human neurons, astrocytes, microvascular endothelial cells, choroid plexus epithelial cells, Schwann cells, and epithelial colorectal adenocarcinoma Caco-2/15 cells, expressed relative to human colon carcinoma HCT116 cells (triplicate analyses per sample). C, matriptase immunoblot at 0, 1, 3, and 6 weeks of neuronal differentiation of hiPSCs; β3-tubulin, neuronal marker; histone H3, loading control. D, ST14 and GAPDH mRNA levels (FPKM) across development in the DLPC; each dot represents an individual brain; negative correlation between age after birth and ST14 (Spearman's correlation coefficient r = −0.73, p < 0.001, n = 39).] To ascertain in which cells of the human nervous system matriptase is expressed, RT-qPCR was next performed on total human mRNA from different cell types (Fig. 1B). Levels of ST14 transcripts in these cells were expressed relative to those of human colon carcinoma cells HCT116 (27).
Matriptase mRNA was detected in neurons, astrocytes, microvascular endothelial cells, and choroid plexus epithelial cells, whereas no matriptase mRNA was detected in Schwann cells. Interestingly, the mRNA level in neurons was similar to that for human epithelial colorectal adenocarcinoma Caco-2/15 cells. Together, these results reveal matriptase expression in different cell types of the human brain and are in agreement with previous data obtained from mouse brain (22). Because matriptase was shown to be expressed in mouse differentiating neural progenitor cells (22), we used human induced pluripotent stem cells (hiPSCs) at different stages of neuronal differentiation (0, 1, 3, and 6 weeks) to analyze matriptase protein expression (Fig. 1C). From the pluripotent state (hiPSCs) to up to 3 weeks, matriptase is not detected, but a 70-kDa immunoreactive band was detected after 6 weeks of neuronal differentiation. Neuronal differentiation was confirmed by validating the expression of the neuronal markers GABA, vesicular glutamate transporter, NeuN, and ␤3-tubulin by immunofluorescence (supplemental Fig. S1). The expression of matriptase in differentiated hiPSCs is in line with its detection in mouse neural progenitor cells (22). On the basis of matriptase expression in mouse brain during development (21,22), we investigated its ontogeny in the developing human brain. Publicly available RNA sequencing (RNAseq) data sets of human dorsolateral prefrontal cortex (DLPC) from fetuses, newborns, children, adults, and elderly subjects were retrieved for analysis (28) (Fig. 1D). Very low levels of matriptase RNA were detected in utero, but much higher levels were found soon after birth. Furthermore, although relatively high levels of matriptase mRNA were found in young brains (age, Ͻ20 years), levels were significantly lower in brains of older individuals with a constant decrease over lifetime. Interestingly, these data show a statistically significant negative correlation between age groups (starting after birth) and matriptase RNA levels (p Ͻ 0.001), whereas no correlation was observed for the housekeeping gene GAPDH. Taken together, these results confirm matriptase expression in the human brain and highlight the temporal and spatial modulation of its expression through a lifetime. Matriptase directly interacts with APP To investigate whether matriptase can interact with the three major APP isoforms, immunoprecipitations were performed on HEK293 cells transfected with wild-type (WT) matriptase together with GFP-tagged APP770, APP751, or APP695 or GFP ( Fig. 2A). Matriptase coimmunoprecipitated with GFP-APP770, -APP751, and -APP695 but not with GFP alone (Fig. 2B), suggesting that matriptase interacts with all three APP isoforms. Because matriptase associated with GFP-APP695, we conclude that the KPI domain found in isoforms APP770 and APP751 is not required for interaction with the enzyme. GST pulldown assays were next used to verify the in vitro interaction between matriptase and the extracellular region of APP695 (GST-APP695 N-term) and/or the cytoplasmic region of APP695 (GST-APP695 C-term) (Fig. 3A). 35 S-Labeled in vitro translated matriptase coprecipitated with GST-APP695 N-term but very weakly with GST-APP695 C-term or GST alone (Fig. 3B). Densitometric analysis statistically supports the difference between GST and GST-APP695 Nterm and between GST-APP695 C-term and GST-APP695 N-term (p Ͻ 0.05) (Fig. 3C). 
These results indicate that matriptase interacts directly and predominantly with the N-terminal ectodomain of APP695. Matriptase cleaves APP When performing immunoprecipitation with GFP-tagged APP and matriptase, we detected a GFP-APP fragment of 35 kDa in cell lysates (Fig. 2B), suggesting cleavage of APP by matriptase. This 35-kDa fragment would correspond to the GFP tag (25 kDa) and a portion of the APP extracellular N terminus (10 kDa). To confirm the role of matriptase in the cleavage of APP and the formation of this APP fragment, HEK293 cells were transfected with GFP-tagged APP770, APP751, and APP695 together with WT matriptase or catalytically inactive matriptase mutant S805A in which the catalytic serine of the active site is replaced with alanine (29) (Fig. 4). Given that the cleavage is expected to occur on the extracellular domain of APP, we also attempted to detect the presence of APP fragments in the culture medium. The GFP-tagged APP fragment of 35 kDa was detected in both cell lysates and conditioned medium of cells transfected with WT matriptase, but not with cells transfected with matriptase S805A, for all three APP isoforms (Fig. 4A). In concordance, the levels of GFPtagged full-length APP (band at 130 -150 kDa in cell lysates) or soluble APP (band at 130 -150 kDa in medium) were reduced in cells expressing WT matriptase compared with control cells or cells expressing the catalytically inactive matriptase, suggesting conversion of the precursor form into smaller fragments (Fig. 4A). Moreover, no cleavage of GFP-APP695 was observed when HEK293 cells were transfected with matriptase and HAI-1, the physiological inhibitor of matriptase (supplemental Fig. S2) or with matriptase-2 (TMPRSS6), a close member of the matriptase subfamily (supplemental Fig. S3). Moreover, only the extracellular region of APP is involved in the matriptase processing event because a chimeric construct in which the transmembrane and cytoplasmic domains of APP695 (residues 624 -695) were replaced with an equivalent domain of the unrelated type I membrane-bound protein LRP10 (residues 441-713) was also cleaved (supplemental Fig. S4B). These results support the exclusive role of the extracellular domain of APP in its processing by matriptase. Given that active matriptase exists as a membrane-bound as well as a soluble shed entity (30), we next investigated whether an active soluble matriptase form could be involved in APP cleavage. Purified soluble matriptase or inactive S805A forms were exogenously added to the culture medium of HEK293 cells overexpressing GFP-tagged APP770, APP751, or APP695. A GFP-tagged APP fragment of 35 kDa was detected in the conditioned medium of all cells incubated with recombinant WT matriptase but not with matriptase S805A (Fig. 4B). APP processing by matriptase Concentration-dependent curves indicated that the cleavage of GFP-APP695 occurs at a concentration as low as 1 nM soluble matriptase (supplemental Fig. S5). These results suggest that soluble active matriptase can cleave APP isoforms in the pericellular space. The physiological relevance of APP processing by matriptase was next analyzed with the human neuroblastoma SH-SY5Y cell line, which expresses endogenous APP695, APP751, and APP770 (31) but not matriptase (data not shown). SH-SY5Y cells were incubated with exogenous soluble WT matriptase or matriptase S805A as described above (Fig. 4C). 
Interestingly, a 10-kDa APP fragment was detected in the conditioned medium by Western blotting using an antibody against the N terminus of APP when cells were incubated with WT matriptase but not with the inactive recombinant matriptase S805A. This 10-kDa fragment would correspond to the portion of the APP extracellular N terminus fused to GFP (25 kDa) to form the 35 kDa fragment detected in the previous assays. This result confirms that soluble matriptase can cleave endogenous APP on SH-SY5Y cells. To determine whether APP cleavage is due to a direct action of matriptase on APP and not from an indirect action of matrip-tase on another APP-cleaving enzyme, in vitro cleavage assays were performed with 35 S-labeled in vitro translated APP770, APP751, and APP695, and purified soluble WT matriptase or matriptase S805A (Fig. 4D). After incubation with increasing concentrations of purified soluble WT matriptase for 1 h at 37°C, a reduction in the amount of full-length APP isoforms and an increased amount of APP fragments of around 50, 27, 20, and 10 kDa were detected by autoradiography. The higher molecular mass forms may correspond to intermediate processed fragments, but the 10 kDa fragment would correspond to the N-terminal APP fragment described in Fig. 4C. In contrast, no APP fragments were detected in the presence of inactive matriptase S805A (Fig. 4D). Together, these results suggest that matriptase directly cleaves the different APP isoforms and is not inhibited by the KPI domain of APP. Identification of the precise matriptase cleavage site on APP To identify the precise matriptase cleavage site on the APP extracellular domain, mass spectrometry (MS) analysis was performed on the APP fragments generated following the APP processing by matriptase in vitro incubation of purified GST-APP695 N-term with or without soluble recombinant WT matriptase. Isolated GST-APP695 fragments were digested with chymotrypsin to produce several overlapping peptides, analyzed by HPLC coupled to an Orbitrap MS, and compared with purified GST-APP695 N-term alone (Fig. 5A). A main cleavage site for matriptase was identified at Arg-102 located on the first heparin domain of APP695 (Fig. 5B). This cleavage site is conserved in the different APP isoforms and would yield an N-terminal fragment with a predicted molecular mass of 12 kDa, consistent with the low-molecular-mass (10-kDa) APP fragment detected by immunoblotting and autoradiography A, schematic representation of the GST-tagged APP695 deletion mutants used to determine the matriptase-binding domain. B, APP695 mutants and GST protein (10 g) were immobilized on glutathione beads and incubated with in vitro translated 35 S-labeled matriptase. Bound proteins were separated by SDS-PAGE and detected by autoradiography. GST proteins were detected with Coomassie Blue staining. Input, 2.5% of the total in vitro translated product (n ϭ 6). C, a Kruskal-Wallis test on the densitometric analysis of B was applied. There is a statistical difference between GST alone and GST-APP695 N-term and between GST-APP695 C-term and GST-APP695 N-term (*, p Ͻ 0.05). Error bars represent means Ϯ S.D. Expression of the R102A mutant abolished the formation of the 35-kDa GFP-APP695 fragment, indicating that this mutant is resistant to cleavage by matriptase. Together, these results indicate that Arg-102 is the main matriptase cleavage site in the ectodomain of APP. 
Lysates and conditioned media of these cells were immunoblotted (IB) with anti-matriptase or anti-GFP antibody to detect matriptase, APP isoforms, and APP fragments (n ϭ 3 for each isoform). Note the GFP-tagged APP fragment (cleaved) of 35 kDa in cell lysate and medium (arrow). B, HEK293 cells transfected with GFP-tagged APP770 (left panel), APP751 (middle panel), or APP695 (right panel) were incubated without (Buffer) or with 5 nM recombinant WT matriptase (sMat-WT) or catalytically inactive matriptase mutant (sMat-S805A). Conditioned media were immunoblotted as described in A. C, SH-SY5Y cells expressing endogenous APP were incubated without (Buffer) or with 5 nM of recombinant WT matriptase (sMat-WT) or catalytically inactive matriptase mutant (sMat-S805A). Conditioned media were immunoblotted with anti-APP N-terminal antibody (22C11) to detect APP and APP fragments in the medium (n ϭ 3). Note the APP fragment (cleaved) of 10 kDa. D, in vitro translated 35 S-labeled APP770 (left panel), APP751 (middle panel), and APP695 (right panel) were incubated with different concentrations (0, 1, 10, and 100 nM) of recombinant WT matriptase (sMat) or 100 nM catalytically inactive matriptase mutant (sS805A) for 1 h at 37°C. Reaction products were separated by SDS-PAGE and detected by autoradiography (n ϭ 3). Note the APP fragment (cleaved) of 10 kDa (arrow) and other higher molecular mass fragments (asterisks) in the presence of recombinant WT matriptase. APP processing by matriptase Matriptase processing of APP alters A␤40 production Given that the cleavage of APP by proteases has been previously shown to result in an alteration of A␤ peptide formation (32,33), we next determined whether matriptase cleavage has an effect on the endogenous APP processing pathway leading to A␤ formation (Fig. 6). SH-SY5Y cells were incubated with or without exogenous, purified soluble WT matriptase or matriptase S805A for 36 h, and ELISAs were performed on the culture medium to specifically quantify the accumulation of A␤40 peptide. A 72% decrease (p Յ 0.001) in A␤40 levels was observed in cells incubated with WT matriptase compared with control cells (without matriptase) or cells incubated with matriptase S805A (Fig. 6A). These results suggest that matriptase cleavage reduces APP processing into A␤40 in SH-SY5Y cells. To determine whether the alteration of A␤40 production involved the Arg-102 matriptase cleavage site in APP, A␤40 levels were measured in the culture medium of HEK293 cells transfected with GFP-tagged APP695 WT or APP695 R102A mutant with or without matriptase (Fig. 6B). As observed in SH-SY5Y incubated with exogenous purified soluble WT APP processing by matriptase matriptase (Fig. 6A), a 90% decrease of A␤40 levels (p Յ 0.01) was quantified in HEK293 cells expressing APP695 with matriptase (Fig. 6B). In contrast, the level of A␤40 was not altered in cells expressing APP695 R102A with or without matriptase compared with cells expressing APP695 WT without matriptase (Fig. 6B). These results suggest that the cleavage of APP at Arg-102 by matriptase affects the processing of APP by secretases in turn to reduce A␤40 production. Discussion Recent advances have reinforced the hypothesis that accumulation of A␤ is the main initiator of AD. Therefore, identifying factors that influence/regulate APP processing into A␤ is crucial in understanding AD pathogenesis and designing novel therapeutic strategies. 
In this study, we have identified matriptase as a novel protease expressed in human brain tissue that cleaves APP in its ectodomain, which causes a significant reduction in the production of A␤40. Importantly, this suggests a potential neuroprotective role of matriptase in the APP processing events leading to A␤ production. Matriptase, one of the best characterized TTSPs, is known to be mostly expressed in epithelial cells where it carries out essential functions in development, differentiation, and maintenance of epithelial barrier integrity (27). Interestingly, matriptase has been recently reported to be expressed in non-epithelial cells, more specifically in mouse neural stem/progenitor cells and neurons as well as in mouse cortex, hippocampus, striatum, and subventricular zone. Its expression has also been associated with mouse neuronal development, migration, and differentiation (21,22). In this study, we demonstrate for the first time that matriptase mRNA is present in the frontal and temporal cortex, hippocampus, and cerebellum of the human brain. The expression levels of matriptase mRNA observed between the different human brain regions tested were similar to those detected in mouse brain regions (22). By analysis of whole transcriptome data sets deposited in the European Nucleotide Archive, we found very low levels of matriptase mRNA in human fetal brains, but a sharp increase in levels was observed in the DLPC of young individuals followed by a constant decrease during aging. The DLPC is an area of the brain involved in executive functions that undergoes the greatest amount of postnatal development that lasts until adulthood (34 -36). Thus, the matriptase expression pattern in this brain area potentially follows a specific temporal pattern during brain development and neurogenesis and may explain why low matriptase mRNA levels were detected in older individuals. Interestingly, of all brain cells tested, neurons showed the highest level of matriptase mRNA, similar to that found in human epithelial colorectal adenocarcinoma Caco-2/15 cells, a low invasive colon cancer cell line, but that level was 100 times lesser than levels found in HCT116 cells, a highly invasive colon cancer cell line. These data indicate that human neurons express matriptase mRNA and support the idea that matriptase has important physiological functions in these cells. Our attempts to detect matriptase protein expression in human brain tissues or neurons from adult/elderly individuals were unsuccessful, which may be due to antibody sensitivity but may also be due to temporal regulation of matriptase expression with transcripts at their highest expression levels soon after birth and significantly lower in brains of older individuals (Fig. 1D). However, we detected matriptase protein expression in hiPSCs after 6 weeks of neuronal differentiation but not in undifferentiated cells. The detection of matriptase in human neuronal cells derived from iPSCs is novel and is in accordance with previous results obtained by others with mouse neuronal progenitor cells (22) and by our group with mouse astrocytes (supplemental Fig. S6). Overall, although matriptase levels may be low and particularly difficult to detect in adult brain tissues, they could be sufficient for physiological relevance. The presence of matriptase in human brain cells could also have a pathophysiological effect such as in cancer progression. 
In many tumors, matriptase RNA/protein levels are up-regulated, and there is a positive correlation between matriptase expression and tumor grade (37)(38)(39)(40). Additionally, the expression of HAI-1, an endogenous inhibitor of matriptase, whose expression in the human brain is well-documented (41,42), is often deregulated in human cancer (39). Indeed, overexpression of HAI-1 has been reported to suppress the in vitro invasive capability of human glioblastoma cells (41). Therefore, identification of the different proteolytic substrates of matriptase in brain cells will help delineate its basic function in the central nervous system and its implication in various neurological diseases. We found that matriptase interacts with and cleaves all three major APP isoforms, indicating that the KPI domain found in APP751 and APP770 is not crucial for this interaction and does not inhibit matriptase. These results differ from those reported for matriptase-2 (also named TMPRSS6), a close member of the matriptase subfamily. The intact KPI domain of APP770 and APP751 was shown to be important for the interaction with matriptase-2 and to inhibit its enzymatic activity (25). Taken together, these results may be explained by the differences in the catalytic domain of matriptase and matriptase-2 (45% homology) and in their protein-protein interaction domains, which are key for their respective activity and substrate speci- ficities (43). Moreover, the amino acid sequence of the APP KPI domain also differs from that of the KPI domain of HAI-1. The canonical active site of HAI-1 that interacts with the second negatively charged binding site in the catalytic domain of matriptase was identified as Arg-258 and Arg-260 (44). Comparison of the Kunitz sequences of APP770/751 and HAI-1 indicates that Arg-258 of HAI-1 is replaced by a proline in the APP Kunitz domain, which could significantly hinder its interaction with the active site of matriptase. Matriptase hydrolyzes peptide bonds C-terminal to specific basic amino acids with a clear preference for Arg over Lys in the P1 position and basic amino acids in the P3 position (18,43). Accordingly, mass spectrometry analysis identified Arg-102 within the sequence KRGR2KQCK in the first heparin domain of the extracellular region of APP as the main matriptase cleaving site. The three-dimensional structure of residues 18 -123 of this heparin domain (Protein Data Bank code 1MWP) reveals that Arg-102 is well-exposed at the surface of the protein and thus accessible for cleavage by a protease. To our knowledge, this specific Arg has never been identified as a cleavage site for other proteases. Interestingly, the residues that form the structural network in the heparin domain are conserved between the E1 domain of APP and amyloid precursor-like protein 2 (APLP2) but not APLP1 (45), suggesting that APLP2 may also be a potential matriptase substrate. Enzymatically active matriptase exists as a membrane-bound as well as an extracellular soluble, shed form (46,47). Consequently, both forms can interact with APP and cleave its ectodomain. Using purified soluble matriptase applied on SH-SY5Y cells (which do not express matriptase), we showed that endogenous APP is cleaved, suggesting that shed matriptase originating from expression in either homologous or adjacent cells can process APP and therefore impact the physiological function of APP. 
Recent evidence suggests that the ectodomain of APP and its proteolytic fragments are important for its biological roles such as cell growth, cell adhesion and motility, neurite outgrowth, and cell survival (48,49). For example, the first heparin domain in the APP N terminus, which contains the Arg-102 matriptase cleavage site, interacts with the extracellular matrix through heparan sulfate proteoglycans and is involved in the regulation of neurite outgrowth (50). Moreover, both APP and matriptase activities were reported to be involved in the migration and differentiation of mouse neuronal precursors (22,51). Whether matriptase cleavage alters these different APP functions will need to be elucidated. N-terminal processing of APP may influence the cleavage efficiency toward A␤ production. In the last few years, many proteases (other than ␣and ␤-secretases) have been reported to cleave the APP ectodomain and alter A␤ production. These include the membrane type 5 matrix metalloproteinase, referred to as -secretase, which cleaves APP at residue 504 (APP695 numbering) and releases a long truncated ectodomain (sAPP) as well as a membrane-bound CTF that is further cleaved by ␣and ␤-secretases, releasing A peptides that alter neuronal activity and plasticity (52). The asparagine endopeptidase has also been reported to act as a novel ␦-secretase by cleaving APP at Asn-373 and Asn-585 residues, selectively influencing the rate of ␤-secretase cleavage and promoting A␤ production (33). In a similar way, we showed that matriptase cleaves APP at Arg-102, causing a significant decrease (Ͼ70%) of A␤40 levels, which was completely abolished when Arg-102 was replaced by Ala (APP R102A), indicating that the matriptase-specific cleavage affects APP processing by secretases in turn to reduce A␤40 production. Interestingly, the matriptase cleavage site is located in a highly flexible loop region (residues 98 -105) shown to be important for APP dimerization and processing into A␤ (53,54). Indeed, biochemical data revealed that addition of a synthetic peptide corresponding to this loop region interferes with APP dimerization and decreases the generation of sAPP␤ and A␤ when added to neuroblastoma SH-SY5Y cells (54), indicating a direct or indirect influence of dimerization on APP processing by ␤-secretases. On this basis, we propose that matriptase proteolytically cleaves the APP ectodomain, potentially impairing APP dimerization and interaction with secretases, which would reduce the rate of A␤ production. Despite large-scale efforts to therapeutically target the putative disease mechanisms in AD, neuroprotective treatments are still lacking. Presently, ␤and ␥-secretases are prime therapeutic targets under development, but many concerns have been recently raised as to how effective these particular enzymes are as therapeutic targets (55,56). Therefore, there is growing consensus that gaining a better understanding of the regulation of APP processing is crucial for identifying new potential therapies to reduce A␤ accumulation and combat AD. Our findings describe a new cleavage of APP by matriptase that reduces the production of A␤ peptide probably by altering the processing by secretases. These observations suggest that matriptase may have a neuroprotective role in controlling the levels of A␤ peptide. Conversely, low levels of matriptase observed in aging brain as well as impaired matriptase activity or HAI-1 levels could accelerate the formation of amyloid plaque and the progression of the disease. 
Our findings highlight a previously unappreciated role of matriptase in the human brain, and future studies will aim to clarify its role in the physiological and pathophysiological functions of APP. In conclusion, this study identifies matriptase as a novel APP-cleaving protease and furthers our understanding of APP biology. Future experiments will be needed to validate the biological relevance of matriptase-mediated APP processing in vivo and its potential role in the onset of AD pathology. Antibodies and reagents Anti-GFP rabbit polyclonal antibodies were purchased from Clontech Molecular Probes (Eugene, OR), anti-human matriptase polyclonal antibodies were from Bethyl Laboratories (Montgomery, TX), and anti-APP N-terminal 22C11 mAbs were from EMD Millipore (Billerica, MA). Coimmunoprecipitation HEK293 cells were plated in 60-mm culture dishes and transfected with the indicated constructs. After 48 h, the cells were washed twice with phosphate-buffered saline; lysed in 50 mM Tris buffer (pH 7.4) containing 150 mM NaCl, 1% Triton X-100, and protease inhibitors for 1 h at 4°C; and then centrifuged at 15,000 × g for 20 min. The cleared supernatants were incubated with GFP-Trap_A (Chromotek, Germany) overnight at 4°C and washed three times with 10 mM Tris/HCl (pH 7.5), 150 mM NaCl, 0.5 mM EDTA buffer. Bound immune complexes were boiled in Laemmli sample buffer and analyzed by SDS-PAGE and immunoblotting. Immunoblotting The protein samples were separated by 10 or 16% SDS-polyacrylamide gel electrophoresis (PAGE) and transferred to 0.45-µm pore-size nitrocellulose membranes (PerkinElmer Life Sciences). The membranes were blocked in Tris-buffered saline (20 mM Tris-HCl (pH 7.4) and 150 mM NaCl) containing 0.1% Tween 20 and 5% nonfat dry milk, incubated with primary antibodies for 1 h at room temperature, subsequently incubated with horseradish peroxidase-conjugated goat anti-rabbit or anti-mouse IgG (Bio-Rad) for 1 h at room temperature, and detected using an enhanced chemiluminescence reagent (Pierce). Glutathione S-transferase pulldown assays GST fusion proteins were expressed in Escherichia coli BL21 and purified on glutathione-Sepharose 4B beads (GE Healthcare) according to the manufacturer's instructions. The 35S-labeled in vitro translation products of pcDNA3.1-human APP770, APP751, APP695, and matriptase were prepared using the TNT T7 rabbit reticulocyte Quick Coupled Transcription/Translation system (Promega, San Luis Obispo, CA) in the presence of EasyTag EXPRESS 35S labeling mixture (73% Met and 22% Cys; 41,000 Ci/mmol; PerkinElmer Life Sciences). A total of 5-10 mg of purified GST or GST fusion protein was incubated with the in vitro translated products in 20 mM Tris-HCl buffer (pH 7.4) containing 150 mM NaCl, 1% Triton X-100, and protease inhibitors for 2 h at 4°C. Beads were washed four times with the same buffer. Bound proteins were eluted with Laemmli buffer, resolved by SDS-PAGE, and visualized by autoradiography. In vitro cleavage assays WT matriptase (residues 596-855) and the S805A mutant were produced and purified as described previously (57). The 35S-labeled in vitro translation products of pcDNA3.1-human APP770, APP751, and APP695 were prepared as described under "Glutathione S-transferase pulldown assays." Enzymatic assays were performed in a final volume of 100 µl in 100 mM Tris-HCl (pH 8.5) containing 500 µg/ml BSA. 
In vitro translated 35S-labeled APP isoforms (0.5 µl) were incubated with 0, 1, 10, or 100 nM recombinant WT matriptase or 100 nM of inactive matriptase S805A for 1 h at 37°C. Enzymatic reactions were stopped by the addition of 30 µl of Laemmli buffer, resolved by SDS-PAGE, and visualized by autoradiography. Treatment of cells with recombinant matriptase Culture medium of HEK293 or SH-SY5Y cells was removed and replaced with 2 ml of serum-free HCELL-100 medium (Wisent, St-Bruno, Quebec, Canada) containing different concentrations (0-100 nM) of recombinant soluble human WT matriptase or mutant S805A. After a 36-h incubation, the conditioned medium was collected and either concentrated with Amicon Ultra centrifugal filters (3,000 nominal molecular weight limit; Merck Millipore Ltd.) for immunoblotting or used directly for ELISA analysis. Cells were lysed in 50 mM Tris buffer (pH 7.4) containing 150 mM NaCl, 1% Triton X-100, and protease inhibitors for 1 h at 4°C. Both conditioned medium and cell lysate were boiled in Laemmli sample buffer and analyzed by SDS-PAGE and immunoblotting. ELISA quantification of Aβ40 For ELISAs, SH-SY5Y cells were seeded at a density of 2 × 10⁶ cells in 60-mm culture dishes and allowed to grow to >75% confluence. HEK293 cells were seeded at a density of 1.5 × 10⁶ cells in 60-mm culture dishes and transfected as described under "Coimmunoprecipitation." When the cells reached the desired density, the medium was removed and replaced with conditioned medium containing matriptase WT or S805A as described under "Treatment of cells with recombinant matriptase." After a 48-h incubation, the supernatants were harvested, and Aβ40 levels were quantified using an Amyloid β40 Human ELISA kit (KHB3481, Invitrogen) according to the manufacturer's instructions. An Infinite M200 plate reader (Tecan) was used to detect the signal. The Aβ40 concentrations were determined by comparison with the standard curve and normalized to the total protein concentration in the medium. Human brain tissues and human cell total RNA Human frontal cortex, temporal cortex, hippocampus, and cerebellum samples were obtained from the Douglas Hospital Brain Bank in Montreal, Quebec, Canada. The mean age at death was 73.8 ± 12.1 years. The post-mortem interval was 22.3 ± 7.4 h. Control cases were clinically diagnosed non-demented elderly patients. Total RNA from human neurons, astrocytes, microvascular endothelial cells, choroid plexus epithelial cells, and Schwann cells was purchased from 3H Biomedical (Uppsala, Sweden). Tissue RNA isolation and quantitative real-time RT-PCR Total RNA extractions were performed on cell pellets using TRIzol (Invitrogen) with chloroform following the manufacturer's protocol. The aqueous layer was recovered, mixed with 1 volume of 70% ethanol, and applied directly to an RNeasy Mini kit column (Qiagen). DNase treatment on the column and total RNA recovery were performed according to the manufacturer's protocol. RNA quality and the presence of contaminating genomic DNA were verified as described previously (58). RNA integrity was assessed with an Agilent 2100 Bioanalyzer (Agilent Technologies). Reverse transcription was performed on 1.1 µg of total RNA with Transcriptor reverse transcriptase, random hexamers, dNTPs (Roche Diagnostics), and 10 units of RNaseOUT (Invitrogen) following the manufacturer's protocol in a total volume of 10 µl. 
All forward and reverse primers were individually resuspended to 20-100 µM stock solutions in Tris-EDTA buffer (Integrated DNA Technologies, Inc.) and diluted as a primer pair to 1 µM in RNase/DNase-free water (Integrated DNA Technologies, Inc.). Real-time qPCRs were performed in 10 µl in 96-well plates on a CFX-96 thermocycler (Bio-Rad) with 5 µl of 2× iTaq Universal SYBR Green Supermix (Bio-Rad), 10 ng (3 µl) of cDNA, and 200 nM (final concentration; 2 µl) primer pair solutions. The following cycling conditions were used: 3 min at 95°C and 50 cycles of 15 s at 95°C, 30 s at 60°C, and 30 s at 72°C. Relative expression levels were calculated using the qBASE framework (59) and the housekeeping genes YWHAZ, GAPDH, and SDHA for human cDNA. Primer design and validation were evaluated as described elsewhere (58). In every qPCR run, a no-template control was performed for each primer pair, and these were consistently negative. All primer sequences are available in supplemental Table S1. Differentiation of human induced pluripotent stem cells into cortical neurons The differentiation protocol was based on a previous study (60); however, the Noggin agonist LDN193189 was used to reduce the recombinant Noggin concentration. The hiPSCs were dissociated using Accutase (Innovative Cell Technology, San Diego, CA) and plated on growth factor-reduced Matrigel (Corning) in PeproGrow human ES cell medium (PeproTech) supplemented with 10 µM ROCK (Rho-associated, coiled-coil-containing protein kinase) inhibitor (Y-27632; Cayman Chemical). When 70% cell confluence was reached, the medium was changed to defined default medium (61) supplemented with B27 (1× final), 10 ng/ml Noggin (PeproTech), and 0.5 µM LDN193189 (Sigma). The medium was changed every day. After 16 days of differentiation, the medium was changed to defined default medium/B27 and replenished every day. At day 24, neural progenitors were manually detached from the plate and plated on growth factor-reduced Matrigel-coated plates or chamber slides (LabTek). Five days after dissociation, half of the medium was exchanged for Neurobasal A medium supplemented with B27 (1× final) and changed again every 3 days. Analysis of public RNA-seq data sets RNA sequences were obtained from a previously published study on human brain development (28). Briefly, sequences from deep-frozen post-mortem brain tissues from 39 individuals without neurological or psychiatric illnesses were retrieved from the European Nucleotide Archive (http://www.ebi.ac.uk/ena). All samples are from DLPC gray matter (Brodmann area 9/46) spanning from fetal life to the eighth decade of life. Fetal tissue was taken from the prefrontal region over the dorsal convexity of the frontal lobe, just anterior to the temporal pole. Run accession numbers used are listed in supplemental Table S2. The paired-end reads from the RNA-seq data sets were aligned to the human reference genome GRCh37/hg19 using HISAT2 (version 2.03) (62). The number of reads mapping to each gene was calculated with featureCounts version 1.4.6.p5 (63) using the annotated transcriptome from Ensembl (http://www.ensembl.org/Homo_sapiens/Info/Index). Normalization of gene expression was obtained by calculating fragments per kilobase of exon per million fragments mapped (FPKM) for each RNA-seq sample (supplemental Table S2). Mass spectrometry analysis GST fusion proteins were expressed in E. coli BL21 and purified on glutathione-Sepharose 4B beads according to the manufacturer's instructions. 
Bound proteins were incubated for 2 h at 37°C in a volume of 100 µl of 100 mM Tris-HCl (pH 8.5) with or without 100 nM recombinant soluble WT matriptase. The supernatant was then collected, lyophilized, and suspended in 25 µl of 10 mM HEPES/KOH (pH 7.5). Proteins were reduced with 3.24 mM DTT and alkylated with 13.5 mM iodoacetamide. The urea concentration was lowered to 1 M by the addition of 50 mM ammonium bicarbonate (NH4HCO3) and 1 mM CaCl2, and samples were digested with chymotrypsin (Thermo Scientific, catalogue number 90056). Digested samples were desalted with a C18 tip (Thermo Scientific, catalogue number 87764), lyophilized, and resuspended in 1% formic acid prior to mass spectrometry analysis. Chymotrypsin-digested peptides were separated using a Dionex UltiMate 3000 nano-HPLC system. Ten microliters of sample (a total of 2 µg) in 1% (v/v) formic acid was loaded at a constant flow of 4 µl/min onto an Acclaim PepMap100 C18 column (0.3-mm inner diameter × 5 mm; Dionex Corp., Sunnyvale, CA). After trap enrichment, peptides were eluted onto a PepMap C18 nanocolumn (75 µm × 50 cm; Dionex Corp.) with a linear gradient of 5-35% solvent B (90% acetonitrile with 0.1% formic acid) over 240 min at a constant flow of 200 nl/min. The HPLC system was coupled to an Orbitrap Q Exactive mass spectrometer (Thermo Fisher Scientific Inc.) via an EasySpray source. The spray voltage was set to 2.0 kV, and the temperature of the column was set to 40°C. Full-scan MS survey spectra (m/z 350-1600) in profile mode were acquired in the Orbitrap with a resolution of 70,000 after accumulation of 1,000,000 ions. The 10 most intense peptide ions from the preview scan in the Orbitrap were fragmented by collision-induced dissociation (normalized collision energy of 35% and resolution of 17,500) after the accumulation of 50,000 ions. Maximal filling times were 250 ms for the full scans and 60 ms for the MS/MS scans. Precursor ion charge state screening was enabled, and all unassigned charge states as well as singly, septuply, and octuply charged species were rejected. The dynamic exclusion list was restricted to a maximum of 500 entries with a maximum retention period of 40 s and a relative mass window of 10 ppm. The lock mass option was enabled for survey scans to improve mass accuracy. Data were acquired using Xcalibur software. Data were processed, searched, and quantified using the MaxQuant software package version 1.5.2.8 and the human UniProt database (July 16, 2013; 88,354 entries). Quantification and bioinformatics analysis The settings used for the MaxQuant analysis were as follows: five miscleavages were allowed; trypsin (Lys/Arg not before Pro) and chymotrypsin (Leu/Phe/Trp/Tyr not before Pro) were used; and variable modifications included methionine oxidation and protein N-terminal acetylation. A mass tolerance of 7 ppm was used for precursor ions, and a tolerance of 20 ppm was used for fragment ions. To achieve reliable identifications, all proteins were accepted based on the criterion that the number of forward hits in the database was at least 95-fold higher than the number of reverse database hits, resulting in a false discovery rate of less than 5%. Statistical analysis Experiments were performed at least in triplicate, and results are expressed as means ± S.D. The statistical significance of differences between samples was assessed using an unpaired two-tailed Student's t test, the Kruskal-Wallis test, or a two-tailed Spearman non-parametric correlation. 
A p value <0.05 was considered significant. Author contributions: E. L. planned and performed most of the experiments, collected and analyzed the data, made all the figures, and drafted the manuscript. A. D. and F. B. performed experiments, provided experimental advice, and revised the manuscript. A. F., G. B., S. M., and D. G. performed experiments and revised the manuscript. C. L. and R. L. designed the study, provided intellectual feedback, participated in interpretation of data, and revised the manuscript. All authors read and approved the final version of the manuscript and agree to be accountable for all aspects of the work.
9,588.6
2017-10-20T00:00:00.000
[ "Biology" ]
A Fractional-Order Discrete Noninvertible Map of Cubic Type: Dynamics, Control, and Synchronization In this paper, a new fractional-order discrete noninvertible map of cubic type is presented. Firstly, the stability of the equilibrium points for the map is examined. Secondly, the dynamics of the map with two different initial conditions is studied by numerical simulation when a parameter or a derivative order is varied. A series of attractors are displayed, in various periodic and chaotic forms. Furthermore, bifurcations with the simultaneous variation of both a parameter and the order are also analyzed in three-dimensional space. Interior crises are found in the map as a parameter or the order varies. Thirdly, based on the stability theory of fractional-order discrete maps, a stabilization controller is proposed to control the chaos of the map, and the asymptotic convergence of the state variables is determined. Finally, the synchronization between the proposed map and a fractional-order discrete Lorenz map is investigated. Numerical simulations are used to verify the effectiveness of the designed synchronization controllers. Introduction In recent decades, chaos has been an attractive phenomenon in nonlinear dynamical systems and has been analyzed extensively and studied deeply. It is well known that chaos was first detected in continuous nonlinear systems. Its characteristics and its existence in discrete dynamical maps have also been interesting topics. Many discrete maps with chaotic attractors have been proposed, such as the Logistic map, Hénon map, and Lozi map [1-5]. In 1974, Diaz and Osler first put forward the fractional difference [20]. Up to now, fractional-order discrete maps have received more and more attention. In [21], a discrete fractional Hénon map was introduced, and its chaotic behavior was discussed. Dynamics, stabilization, and synchronization for several fractional-order maps, such as the Ikeda map, Lorenz map, and Lozi map, were studied in [22-28]. The discrete fractional calculus can avoid the tedious information or calculation errors of numerically discretizing continuous systems, owing to its nonlocal property [29]. Therefore, more and more discrete maps with fractional operators need to be presented, and more abundant and complex dynamical behaviors need to be explored. Besides, it is well known that fractional-order discrete maps are sensitive not only to small disturbances of parameters and initial conditions but also to the variation of fractional orders [30], which is a unique advantage of fractional-order systems. For this reason, a fractional-order discrete map is more suitable for data encryption and secure communications. Furthermore, fractional-order discrete maps have simple forms and rich dynamics, which are good for system analysis and numerical computation. Based on these considerations, investigation of a new fractional-order discrete map, including its dynamics, stabilization, and synchronization, is necessary and important for the development of fractional calculus. In [31, 32], a two-dimensional noninvertible map with cubic-order nonlinearity, which was taken as a chaotic cryptosystem, was proposed and studied. The evolution of attractors and their basins has been analyzed deeply and explained thoroughly. A noncyclic chaotic attractor for the map was displayed in [33]. Based on these, we extend the map to the fractional case and study its dynamics. The stability of the equilibrium points for the map is examined. 
By means of bifurcation graphs and phase diagrams, the dynamics of the fractional-order discrete map with two different initial conditions is displayed as a parameter or the derivative order varies. Furthermore, bifurcations with the simultaneous variation of both a parameter and the order are also analyzed in three-dimensional space. Interior crises occur in the map with the variation of a parameter or the order. The main motivation of our work is to know whether the bifurcations and chaos that the integer-order discrete map possesses also exist in the fractional-order counterpart. In fact, these dynamical behaviors do exist in the fractional-order map, and multifaceted complex dynamics is observed by means of numerical simulations. For a chaotic system, control and synchronization are very important for its application to practical problems. In our work, we are also interested in studying the control and synchronization of the fractional-order map. Based on the stability theory of fractional-order discrete maps, a stabilization controller is proposed to control the chaos of the map. The synchronization between the proposed map and a fractional-order discrete Lorenz map is studied and realized. Discrete Fractional Calculus In this section, we recall the definition and related theory of the discrete fractional calculus. In the following, the symbol $^{C}\Delta_a^{\upsilon} X(t)$ represents the $\upsilon$-order Caputo-type delta fractional difference of a function $X(t): \mathbb{N}_a \rightarrow \mathbb{R}$, with $\mathbb{N}_a = \{a, a+1, a+2, \ldots\}$ [34], which is expressed as $$^{C}\Delta_a^{\upsilon} X(t) = \Delta_a^{-(n-\upsilon)} \Delta^n X(t) = \frac{1}{\Gamma(n-\upsilon)} \sum_{s=a}^{t-(n-\upsilon)} (t-s-1)^{(n-\upsilon-1)} \Delta^n X(s), \quad (1)$$ where $\upsilon \notin \mathbb{N}$ is the order, $t \in \mathbb{N}_{a+n-\upsilon}$, and $n = \lceil \upsilon \rceil + 1$. In formula (1), the $\upsilon$th fractional sum (here applied to $\Delta^n X$) is defined as [35, 36] $$\Delta_a^{-\upsilon} X(t) = \frac{1}{\Gamma(\upsilon)} \sum_{s=a}^{t-\upsilon} (t-s-1)^{(\upsilon-1)} X(s), \quad (2)$$ where $t \in \mathbb{N}_{a+\upsilon}$ and $\upsilon > 0$. Here $t^{(\upsilon)}$ represents the falling function, defined in terms of the Gamma function $\Gamma$ as $$t^{(\upsilon)} = \frac{\Gamma(t+1)}{\Gamma(t+1-\upsilon)}. \quad (3)$$ Generally speaking, the following method is employed to compute numerical solutions for a fractional-order discrete map: an equation involving the fractional operator is converted into its equivalent discrete integral (sum) form. The following theorem is used to analyze the stabilization and synchronization of fractional discrete maps; for the proof of the theorem, please refer to the literature [37]. Theorem 1. The zero equilibrium of a linear fractional discrete system $^{C}\Delta_a^{\upsilon} X(t) = M X(t+\upsilon-1)$ is asymptotically stable if $$\lambda \in \left\{ z \in \mathbb{C} : |z| < \left( 2\cos\frac{|\arg z| - \pi}{2-\upsilon} \right)^{\upsilon} \ \text{and} \ |\arg z| > \frac{\upsilon\pi}{2} \right\}$$ for all the eigenvalues $\lambda$ of $M$. A Fractional-Order Discrete Map 3.1. Description of the Map. Firstly, the two-dimensional discrete map with cubic nonlinearity in [31-33] is described as follows: x(n + 1) = y(n), where x(n) and y(n) are the state variables and b and c are parameters. The first-order difference form of (8) is formulated accordingly. By employing the Caputo-like delta difference given in (1) with starting point a, the corresponding fractional map (10) is obtained. Based on equations (4) and (5), the equivalent discrete integral form can be derived, and from it the numerical solution of the fractional discrete map (10) follows. In the rest of the paper, the lower limit a is fixed at 0. Stability of Equilibrium Points. Now we turn to study the stability of the equilibrium points of map (10). By simple computation, we obtain three equilibrium points when b + c > 1. The map has only one equilibrium point E1(0, 0) when b + c ≤ 1. The Jacobian matrix of map (10) evaluated at an equilibrium point E* = (x*, y*), and the corresponding eigenvalues, follow by direct computation. In this paper, we only consider the case of map (10) with positive parameters. Therefore, the zero equilibrium point E1 is unstable, because |arg λ1| = 0 < υπ/2 on the basis of Theorem 1. 
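The equivalent discrete integral form referred to above is what makes these maps computable in practice: each new state is a memory sum over all previous states weighted by a Gamma-function kernel. Below is a minimal Python sketch of that standard iteration scheme for a two-dimensional Caputo-like delta fractional map with lower limit a = 0 and 0 < υ < 1. Because the full right-hand side of map (8)/(10) is not reproduced legibly here, the function `f2` is a hypothetical cubic placeholder (only Δx(n) = y(n) − x(n) follows from the stated first equation), so the sketch illustrates the numerical method rather than the authors' exact map.

```python
# Memory-sum iteration for a Caputo-like delta fractional map (a = 0, 0 < v < 1):
#   u(n) = u(0) + (1/Gamma(v)) * sum_{j=1..n} [Gamma(n-j+v)/Gamma(n-j+1)] * f(u(j-1))
# f2 below is a hypothetical cubic right-hand side, NOT the paper's equation (8);
# only f1 (from x(n+1) = y(n), i.e. Delta x = y - x) is taken from the text.
import math

def kernel(n, j, v):
    # Gamma(n - j + v) / Gamma(n - j + 1), computed in log space to avoid overflow
    return math.exp(math.lgamma(n - j + v) - math.lgamma(n - j + 1))

def iterate_fractional_map(f1, f2, x0, y0, v, steps):
    xs, ys = [x0], [y0]
    inv_gamma_v = 1.0 / math.gamma(v)
    for n in range(1, steps + 1):
        sx = sum(kernel(n, j, v) * f1(xs[j - 1], ys[j - 1]) for j in range(1, n + 1))
        sy = sum(kernel(n, j, v) * f2(xs[j - 1], ys[j - 1]) for j in range(1, n + 1))
        xs.append(x0 + inv_gamma_v * sx)
        ys.append(y0 + inv_gamma_v * sy)
    return xs, ys

if __name__ == "__main__":
    b, c, v = 2.2, 0.95, 0.98                      # parameter values used in the paper
    f1 = lambda x, y: y - x                        # from x(n+1) = y(n)
    f2 = lambda x, y: b * x - y - c * y ** 3       # hypothetical cubic term (placeholder)
    xs, ys = iterate_fractional_map(f1, f2, 0.8, -0.4, v, 200)
    print(xs[-1], ys[-1])
```

The O(n²) cost of the double loop is intrinsic to the nonlocal memory of fractional maps and is one reason bifurcation scans of such maps are computationally heavier than their integer-order counterparts.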
For a fractional-order discrete map, the stability of a zero equilibrium point can be determined easily based on Theorem 1. Therefore, we use a very simple method proposed in [38] for handling the nonzero equilibrium points; for more details about the method, please refer to Remark 2.5 in [38]. In order to analyze the stability of the nonzero equilibrium points E2,3, through suitable variable transformations we obtain two new maps, (16) and (17), each with a zero equilibrium point corresponding to E2 and E3, respectively. The Jacobian matrix of maps (16) and (17) evaluated at the zero equilibrium point, and the corresponding eigenvalues of J2, follow by direct computation. When the parameters of map (10) are chosen as b = 2.2 and c = 0.95 and the order is taken as υ = 0.98, the corresponding eigenvalues are λ3,4 = −1.4976 ± 1.4343i. By simple computation, the stability condition of Theorem 1 is violated, which means the equilibrium points E2,3 are unstable in this case. Dynamics Analysis In this section, the dynamics of the fractional-order discrete map (10) under the variation of a parameter or the fractional order, and the bifurcations under the simultaneous variation of both a parameter and the order υ, are analyzed in detail. Dynamics as the Parameter b Varies. When the order is υ = 0.98 and the parameter c = 0.95, the dynamics of map (10) is analyzed as the parameter b is varied. The bifurcation diagrams and the corresponding largest Lyapunov exponent (LLE) spectra for two different initial conditions, x01 = (0.8, −0.4) and x02 = (0.8, 0.4), are displayed in Figure 1, from which we can see that the dynamics of the map is abundant and shows a symmetry with respect to the initial conditions in this case. The evolution of the trajectories for different b with x01 is depicted in Figure 2. In the phase plane there is a fixed point, which means the map is period-1 for b = 1 (Figures 2(a) and 2(b)). The map has a limit cycle attractor for b = 1.5 (see Figure 2(c)), which means a Hopf bifurcation occurs as the parameter b increases from 1 to 1.5. The shape of the limit cycle changes as b increases further (Figures 2(d) and 2(e)). The map remains chaotic when b varies from 1.75 to 2.2. In Figure 2(g), three small chaotic attractors appear in the phase plane when b = 1.9 and merge into one large attractor when b = 1.95 (Figure 2(h)). From Figures 2(i) and 2(j), we can see that the chaotic attractor suddenly becomes a large one, which implies that an interior crisis occurs as b increases from 2 to 2.2. The dynamics of the map with the initial condition x02, which is similar to Figure 2, is displayed in Figure 3. From the global dynamics perspective, the two chaotic attractors depicted in Figures 2(i) and 3(i) collide with each other and merge into a large one (Figures 2(j) and 3(j)). Dynamics as the Parameter c Varies. The fractional order υ is fixed at 0.98 and the parameter b = 2.2, and the dynamics of map (10) is analyzed as the parameter c is varied; the map is not chaotic at c = 0.79. When c = 0.8, the system is chaotic and the chaotic attractor is depicted in Figure 5(c). As c increases from 0.87 to 0.88, the chaotic attractor becomes a large one, which means an interior crisis occurs. The dynamics of the map with the initial condition x02 is displayed in Figure 6. The map remains chaotic as the parameter c changes in the interval [0.2, 1], and the chaotic attractor takes different forms. 
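To make the bifurcation procedure concrete, the sketch below shows how a sweep like the one behind Figure 1 might be produced: for each value of b, iterate the map from a fixed initial condition, drop the transient, and record the remaining x values. It assumes the `iterate_fractional_map` helper (and its placeholder right-hand sides) from the previous sketch is in scope; the parameter values and initial condition follow the text, but the resulting plot is illustrative, not a reproduction of the paper's figures.

```python
# Bifurcation sweep over b at fixed c and order v, reusing iterate_fractional_map
# and the placeholder right-hand sides from the previous sketch (assumptions).
import numpy as np
import matplotlib.pyplot as plt

def bifurcation_sweep(b_values, c, v, x0, y0, steps=300, keep=60):
    pts_b, pts_x = [], []
    for b in b_values:
        f1 = lambda x, y: y - x
        f2 = lambda x, y, b=b: b * x - y - c * y ** 3   # hypothetical cubic form
        xs, _ = iterate_fractional_map(f1, f2, x0, y0, v, steps)
        for x in xs[-keep:]:                            # post-transient samples only
            pts_b.append(b)
            pts_x.append(x)
    return pts_b, pts_x

bs, xs = bifurcation_sweep(np.linspace(1.0, 2.2, 120), c=0.95, v=0.98, x0=0.8, y0=-0.4)
plt.plot(bs, xs, ",k")
plt.xlabel("b")
plt.ylabel("x")
plt.show()
```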
A chaotic attractor consisting of four small parts appears in the phase space when the parameter c increases from 0.2 to 0.28 (Figures 6(a)-6(c)) and converts to a single whole one when c = 0.29 (Figure 6(d)). The system is period-5 when c = 0.4, and the phase diagram and the discrete-time evolution of the state variable x(n) are plotted in Figures 6(e) and 6(f). When the parameter c increases from 0.87 to 0.88, the chaotic attractor suddenly becomes a large one (Figures 6(g) and 6(h)), which implies that an interior crisis occurs. The stable region for the map in the b-c parameter plane with υ = 0.98 is plotted in Figure 7 in order to give guidance for choosing parameter values. From this figure, we can see that map (10) is chaotic when b = 2.2 and c = 0.95. Dynamics as the Order υ Varies. With the parameters fixed at b = 2.2 and c = 0.95, the dynamics of map (10) is studied in this section as the order υ is varied. The bifurcation diagrams and the corresponding LLE spectra with x01 and x02 are plotted in Figure 8, from which it can be seen that the dynamics of the map again shows a symmetry with respect to the initial conditions in this case. For different values of υ, the phase diagrams with x01 are shown in Figure 9. The map has a period-1 attractor for υ = 0.7 (Figure 9(a)) and a limit cycle for υ = 0.71 (Figure 9(b)), which means a Hopf bifurcation occurs as the order υ increases. The shape of the limit cycle changes as υ increases (Figure 9(c)). The map has a multicycle attractor for υ = 0.83 (see Figure 9(d)). As the order increases to 0.84, the attractor becomes a chaotic one consisting of several small parts (Figure 9(e)). The small parts combine into one attractor when υ = 0.86 (Figure 9(f)). From Figures 9(g) and 9(h), it is clear that the chaotic attractor has three small parts in the phase plane when υ = 0.89, and these parts become a whole one when υ = 0.9. The chaotic attractor in Figure 9(i) becomes a large one (Figure 9(j)) when the order varies from 0.95 to 0.96. The phase diagrams with initial condition x02 as the order varies from 0.7 to 0.96 are shown in Figure 10 and are symmetric with those of the map with x01. From the global dynamics perspective, the two chaotic attractors depicted in Figures 9(i) and 10(i) collide with each other and merge into a large one (Figures 9(j) and 10(j)). Bifurcation with the Simultaneous Variation of Both a Parameter and the Order υ. In this section, the bifurcations of map (10) under the simultaneous variation of a parameter and the order, with the initial conditions x01 and x02, are studied. Firstly, the value of parameter c is fixed at 0.95. The bifurcation diagram of the map is depicted in Figure 11 when the parameter b ∈ [0.2, 2.2] and the order υ ∈ [0.5, 1] change simultaneously. Secondly, the value of parameter b is set to 2.2, and Figure 12 shows the bifurcation diagram of the map under the simultaneous variation of the parameter c and the order υ ∈ [0.5, 1]. From these figures, it can be seen that map (10) is periodic when the order is less than a certain threshold and exhibits chaotic behavior when the order is greater than that threshold. In other words, the dynamics of map (10) becomes regular as the derivative order υ decreases from 1 to 0.5 and complex as the derivative order υ increases from 0.5 to 1. Stabilization The stabilization of map (10) is studied in this section. From here on, the case of the equilibrium at the origin is considered. 
For convenience, the controlled map (10) is rewritten in the form (21), where ω = t − 1 + υ and u1 and u2 are the stabilization controllers. Theorem 2. The fractional-order map (10) can be stabilized when the controllers are designed in the form (22). Proof. By substituting (22) into (21), map (21) becomes (23), which can be rewritten in the compact form (24) with $A = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}$. Based on Theorem 1, it is easy to see that the eigenvalues of A satisfy the conditions |arg λi| = π and |λi| < 2^υ for i = 1, 2, which implies that the chaos of map (10) can be controlled and the zero equilibrium of (24) is globally asymptotically stable. In the numerical simulations, the values of the parameters are fixed as b = 2.2 and c = 0.95, and the fractional order is υ = 0.98. The controllers are switched on to stabilize map (10) when the iteration reaches n = 1000. The stabilization results are displayed in Figure 13. It is clear that the state variables x(n) and y(n) tend toward zero, which means the chaos of map (10) is stabilized; the results confirm the theoretical control results presented in Theorem 2. Synchronization Now we consider the synchronization of map (10). Firstly, a fractional Lorenz map (25) is taken as the drive system, where 0 < υ < 1 and the subscript d denotes the drive system. Map (25) is chaotic when the parameters are c = 1.25 and δ = 0.75 and the derivative order is υ = 0.98. For more details about the dynamics of map (25), please refer to the literature [23]. Map (10) with synchronization controllers u1(ω) and u2(ω) is taken as the response system (26), where the subscript r denotes the response system. The error state variables are defined as ex(t) = xr(t) − xd(t) and ey(t) = yr(t) − yd(t). If all the error state variables tend to 0 as t → ∞, then maps (25) and (26) are synchronized. The following theorem is given to ensure that the synchronization between the two maps can be realized. Theorem 3. The drive and response maps (25) and (26) are synchronized when the controllers are designed as in (27). Proof. The error dynamical system with the fractional Caputo difference is (28). By substituting the controllers (27) into (28), we obtain the error dynamical system (29), which can be rewritten in the compact form (30) with $M = \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix}$. It can be seen that the eigenvalues of the matrix M satisfy the stability condition of Theorem 1, so the zero equilibrium of (29) is globally asymptotically stable, which implies that the two maps (25) and (26) are synchronized. In the numerical simulations, the parameters are fixed as c = 1.25 and δ = 0.75 (for the drive map (25)) and b = 2.2 and c = 0.95 (for the response map (26)), and the order is υ = 0.98. The initial conditions of the two systems (25) and (26) are (xd0, yd0) = (0.1, 0.1) and (xr0, yr0) = (0.8, 0.4). We can see that the error variables e1 and e2 converge to zero rapidly as n increases (Figures 14(a) and 14(b)). Meanwhile, the evolution of the state variables with time n shows that the two maps (25) and (26) are synchronized under the designed controllers (27) (Figures 14(c) and 14(d)). Conclusions and Discussion A fractional-order discrete noninvertible map with cubic nonlinearity is proposed in this paper. Firstly, the stability of the equilibrium points for the map is analyzed. Secondly, the dynamics of the map with two different initial conditions is studied by numerical simulation. Bifurcation diagrams and phase plots are obtained as a parameter or the fractional order varies. 
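The eigenvalue arguments used in the proofs of Theorems 2 and 3 are easy to verify numerically. The sketch below checks the Theorem 1 stability condition, |arg λ| > υπ/2 and |λ| < (2 cos((|arg λ| − π)/(2 − υ)))^υ, for the closed-loop matrix A of the stabilized map and the error-system matrix M of the synchronization scheme; it is a check of the stated condition only, not a simulation of the controlled maps themselves.

```python
# Verify the Theorem 1 stability condition for the closed-loop matrices of
# Theorems 2 (stabilization) and 3 (synchronization error system).
import numpy as np

def satisfies_theorem1(M, v):
    for lam in np.linalg.eigvals(M):
        arg = abs(np.angle(lam))
        if arg <= v * np.pi / 2:
            return False                                    # outside the stable sector
        bound = (2 * np.cos((arg - np.pi) / (2 - v))) ** v  # modulus bound of the stability region
        if abs(lam) >= bound:
            return False
    return True

v = 0.98
A = np.array([[-1.0, 0.0], [0.0, -1.0]])   # stabilized map (Theorem 2): eigenvalues -1, -1
M = np.array([[-1.0, 1.0], [0.0, -1.0]])   # synchronization error system (Theorem 3)
print(satisfies_theorem1(A, v))            # True: |arg(-1)| = pi and |-1| = 1 < 2**v
print(satisfies_theorem1(M, v))            # True: same (defective) eigenvalue -1
```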
A series of attractors of the map in different forms, including equilibrium points, limit cycles, and chaotic attractors, are plotted. Furthermore, bifurcations with the simultaneous variation of both a parameter and the order are also analyzed in three-dimensional space. From the global dynamics perspective, interior crises occur in the map as a parameter or the order varies. Thirdly, based on the stability theory of fractional-order discrete maps, the chaos of the map is controlled by the stabilization controllers. Finally, the synchronization between the proposed map and a fractional-order discrete Lorenz map is investigated. Numerical simulations are implemented to verify the effectiveness of the designed controllers. The results obtained in this paper reveal that chaos really exists in the fractional-order version of the map proposed in [31-33]. More abundant local and global dynamics are found in the fractional-order map. It is worth mentioning that the mechanism of the interior crises occurring in map (10) cannot be displayed from a global perspective, owing to the absence of effective global dynamics computation methods for fractional-order discrete maps. Therefore, developing effective computational methods for the global analysis of this kind of system is our future work. Data Availability The data for the bifurcation diagrams used to support the findings of this study are included within the supplementary information files (available here). Because of the large size of the data, we supply the bifurcation data calculated with MATLAB. Conflicts of Interest The authors declare that they have no conflicts of interest.
4,650.8
2020-05-22T00:00:00.000
[ "Mathematics", "Engineering" ]
Amharic-English Speech Translation in Tourism Domain This paper describes speech translation from Amharic to English, particularly Automatic Speech Recognition (ASR) with a post-editing feature and Amharic-English Statistical Machine Translation (SMT). The ASR experiment is conducted using a morpheme language model (LM) and a phoneme acoustic model (AM). Likewise, SMT is conducted using words and morphemes as units. Morpheme-based translation shows a 6.29 BLEU score at 76.4% recognition accuracy, while word-based translation shows a 12.83 BLEU score at 77.4% word recognition accuracy. Further, after post-editing the Amharic ASR output using a corpus-based n-gram approach, the word recognition accuracy increased by 1.42%. Since the post-editing approach reduces error propagation, the word-based translation accuracy improved by 0.25 (1.95%) BLEU. We are now working towards further reducing propagated errors through different algorithms at each unit of the speech translation cascading components. Introduction Speech is one of the most natural forms of communication for humankind (Honda, 2003). Computers with the ability to understand natural language promoted the development of man-machine interfaces. This can be extended through different digital platforms such as radio, mobile, TV, CD, and others. Through these, speech translation facilitates communication between people who speak different languages. Speech translation is the process by which spoken source phrases are translated to a target language using a computer (Gao et al., 2006). Speech translation research for major and technologically supported languages like English, European languages (like French and Spanish), and Asian languages (like Japanese and Chinese) has been conducted since 1983, beginning with NEC Corporation (Kurematsu, 1996). The advancement of speech translation facilitates communication between people who do not share the same language. The state-of-the-art speech translation system can be seen as the integration of three major cascading components (Gao et al., 2006; Jurafsky and Martin, 2008): Automatic Speech Recognition (ASR), Machine Translation (MT), and Text-To-Speech (TTS) synthesis. ASR is the process by which a machine infers spoken words, by means of talking to a computer and having it correctly understand a recorded audio signal. Besides ASR, MT is the process by which a machine is used to translate a text from one source language to another target language. Finally, TTS creates a spoken version of the text of an electronic document such as a text file or web document. As one major component of speech translation, Amharic ASR research started in 2001. A number of attempts have been made for Amharic ASR using different methods and techniques towards designing speaker-independent, large-vocabulary, continuous and spontaneous speech recognition. In addition to ASR, preliminary English-Amharic machine translation experiments were conducted using phonemic transcription of the Amharic corpus (Teshome et al., 2015). The results obtained from the experiments show that it is possible to design English-Amharic machine translation using statistical methods. For the last component of speech translation, a number of TTS studies have been attempted using different techniques and methods, as discussed by (Anberbir and Takara, 2009). Among these, concatenative, cepstral, formant, and syllable-based speech synthesizers were the main methods and techniques applied. 
All the above research works were conducted using different methods and techniques beside data difference and integration as a cascading component. Moreover, dataset and tools used in the above research are not accessible which makes difficult to evaluate the advancement of research in speech technology for local languages. However, there is no attempt to integrate ASR, SMT and TTS to come up with speech translation system for Amharic language. Thus, the main aim of this study is to investigate the possibility to design Amharic-English speech translation system that controls recognition errors propagating through cascading components. Amharic Language Amharic is a Semitic language derived from Ge'ez with the second largest speaker in the world next to Arabic (Simons and Fennig, 2017). The name Amharic (€≈r{) comes from the district of Amhara (€≈•) in northern Ethiopia, which is thought to be the historic, classical and ecclesiastical language of Ethiopia. Moreover, the language Amharic has five dialectical variations spoken named as: Addis Ababa, Gojam, Gonder, Wollo and Menz. Amharic is the official working language of government of Ethiopia among the 89 languages registered in the country with up to 200 different spoken dialects (Simons and Fennig, 2017;Thompson, 2016). Beside these, Amharic language is being used in governmental administration, public media and national commerce of some regional states of the country. This includes; Addis Ababa, Amhara, Diredawa and Southern Nations, Nationalities and People (SNNP). Amharic language is spoken by more than 25 million with up to 22 million native speakers. The majority of Amharic speakers found in Ethiopia even though there are also speakers in a number of other countries, particularly Italy, Canada, the USA and Sweden. Unlike other Semitic languages, such as Arabic and Hebrew, modern Amharic script has inherited its writing system from Ge'ez (gez) (Yimam, 2000). Amharic language uses a grapheme based writing system called fidel (âÔl) written and read from left to right. Amharic graphemes are represented as a sequence of consonant vowel (CV) pairs, the basic shape determined by the consonant, which is modified for the vowel. The Amharic writing system is composed of four distinct categories consisting of 276 different symbols; 33 core characters with 7 orders (€, ∫, ‚, ƒ, ", … and †), 4 labiovelars with 5 orders symbol (q, u, k and g), 18 labialized consonants with 1 order (wƒ) and 1 labiodental characters consisting 7 orders (€, ∫, ‚, ƒ, ", … and †). In Amharic writing system, all the 276 distinct orthographic representation are indispensable due to their distinct orthographic representation. However, as part of speech translation, speech recognition mainly deals with distinct sound. Among those, some of the graphemes generate same sound like (h, M, u and Ω) pronounced as h/h/. On the other hand, Machine translation emphasizes on orthographic representation which result the same meaning in different graphemes. As a result, normalization is required to minimize the graphemes variation which leads to better translation while minimizing the ASR model. Table 1 presents the Amharic character set before and after normalization. Table 1: Distribution of Amharic character set adopted and modified from As a result, graphemes that generate the same sound are normalized in to the seven order of core character. The normalization is based on the usage of most characters frequency in Amharic text document. 
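Because the Ethiopic characters of Table 1 did not survive text extraction here, the sketch below illustrates the kind of grapheme normalization being described using the commonly cited Amharic homophone series (the ሀ/ሐ/ኀ, ሰ/ሠ, አ/ዐ and ጸ/ፀ families); the exact variant-to-canonical mapping used by the authors is an assumption, and the code relies on the contiguous vowel-order layout of the Unicode Ethiopic block.

```python
# Illustrative grapheme normalization for Amharic homophone characters.
# The base-character mappings below are assumptions (common homophone series);
# the actual table used in the paper did not survive extraction.
HOMOPHONE_BASE = {
    "ሐ": "ሀ", "ኀ": "ሀ",   # h-series variants folded onto one form
    "ሠ": "ሰ",             # s-series
    "ዐ": "አ",             # glottal series
    "ፀ": "ጸ",             # ts'-series
}

def normalize(text: str) -> str:
    """Fold homophone fidel characters onto a single canonical series.

    Each of the seven vowel orders of a variant series is mapped onto the
    corresponding order of the canonical series by shifting the code point
    offset, which works because Ethiopic series are contiguously encoded.
    """
    out = []
    for ch in text:
        mapped = ch
        for variant, canonical in HOMOPHONE_BASE.items():
            offset = ord(ch) - ord(variant)
            if 0 <= offset <= 6:           # same series, vowel orders 1-7
                mapped = chr(ord(canonical) + offset)
                break
        out.append(mapped)
    return "".join(out)

if __name__ == "__main__":
    print(normalize("ሐሙስ እና ዐይን"))   # folds the ሐ and ዐ variants onto ሀ and አ
```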
This includes, normalization from (h, M, u and Ω) to h, (…, e) to …, (U, s) to s and (Õ, Ý) to Õ along with order. Tourism in Ethiopia Tourism is the activity of traveling to and staying in places outside their usual environment for not more than one year to create a direct contact between people and cultures (UNWTO, 2016). Ethiopia has much to offer for international tourists 1 ranging from the peaks of the rugged Semien mountains to the lowest points on earth called Danakil Depression which is more than 400 feet below sea level. In addition, tourism become a pleasing sustainable economic development that serves as an alternative source of foreign exchange for the counties like Ethiopia. Moreover, The 2015 United Nations World Tourism report (UNWTO, 2016) and the World Bank 2 report indicate that, in 2015 a total of 864,000 non-resident tourists come to Ethiopia to visit different tourist attraction. These include; ancient, medieval cities and world heritages registered by UNESCO as tourist attraction. Since the year 2010 until 2015, the average number of tourist flow increase by 13.05% per year. According to Walta Information Center 3 , citing Ethiopia Ministry of Culture and Tourism, Ethiopia has secured 872 million dollars in first quarter of its 2016/17 fiscal year from 223,032 international tourists. The revenue was mostly through conference tourism, research business and other activities. Majority of the tourists were from USA, England, Germany, France and Italy speaking foreign languages. Beside this, tourists express their ideas using different languages, the majority of the tourists can speak and communicate in English to exchange information about tourist attractions. Due to this, language barriers are a major problem for today's global communication (Nakamura, 2009). As a result, they look for an alternate option that lets them communicate with the surrounding. Thus, speech translation system is one of the best technologies used to fill the communication gap between the people who speak different languages (Nakamura, 2009). This is especially true in overcoming language barriers of today's global communication besides supporting underresourced language. However, under-resourced languages such as Amharic, suffer from having a digital text and speech corpus to support speech translation. So, after collecting text and speech corpora, moving one step further helps in solving language barriers problem. Therefore, this study attempts to come up with an Amharic-English speech translation system taking tourism as a domain. Data Preparation Nowadays, Amharic language suffers from a lack of speech and text corpora for ASR and SMT. Beside these, collecting standardized and annotated corpora is one of the most challenging and expensive tasks when working with under resourced languages (Besacier et al., 2006;. For Amharic speech recognition training and development, 20 hours of read speech corpus prepared by Abate et. al (2005) were used. However, due to unavailability of standardized corpora for speech translation in tourism domain, a text corpus is acquired from resourced and technologically supported languages particularly English. Accordingly, a parallel English-Arabic text data was acquired from the Basic Traveller Expression Corpus (BTEC) 2009 which is made available through International Workshop on Spoken Language Translation (IWSLT) (Kessler, 2010). A parallel Amharic-English corpus has been prepared by translating the English BTEC data using a bilingual speaker. 
This data is used for the development of the speech translation cascading components, namely ASR and SMT. The corpus has a total of 28,084 Amharic-English parallel sentences. To keep the dataset consistent, the text corpus has been further preprocessed: typing errors were corrected, abbreviations were expanded, numbers were textually transcribed, and concatenated words were separated. Amharic speech recognition is conducted using words and morphemes as language model units with a phoneme-based acoustic model. Similarly, words and morphemes have been used as translation units for Amharic in Amharic-English machine translation. Morpheme-based segmentation of the training, development, and test sets was obtained by segmenting words into sub-word units using corpus-based, language-independent, and unsupervised segmentation with Morfessor 2.0 (Smit et al., 2014). Then, the 8,112 (28.38%) test set sentences were recorded in a normal office environment from eight (4 male and 4 female) native Amharic speakers using LIG-Aikuma, a smartphone-based application tool (Blachon et al., 2016). Accordingly, a total of 7.43 hours of read speech, with utterances ranging from 1,020 ms to 14,633 ms and an average duration of 3,297 ms, has been collected from the tourism domain. Moreover, as suggested in earlier work, a morphologically rich and under-resourced language like Amharic achieves better recognition accuracy using a morpheme-based language model with a phoneme-based acoustic model. Similarly, language model data for Amharic speech recognition has been collected from different sources: a text corpus collected for a Google project (Tachbelie and Abate, 2015) has been used in addition to the BTEC SMT training data, excluding the test data. As for speech recognition, a total of 42,134 sentences (374,153 tokens of 8,678 types) of English language model data have been used for Amharic-English machine translation. The data is collected from the same BTEC corpus, excluding test data. Consequently, corpus-based and language-independent segmentation has been applied on the training, development, and test sets of the Amharic SMT data; Morfessor is used to segment words into sub-word units. Table 3 presents a summary of the corpus used for Amharic-English machine translation using word and morpheme units. Likewise, the post-editing is conducted using a corpus-based n-gram approach. Accordingly, a corpus containing 681,910 sentences (11,514,557 tokens, 582,150 types) was crawled from the web, including news and magazines. The data was then further cleaned, preprocessed, and normalized. From this data, a total of 5,057,112 bigram, 8,341,966 trigram, 9,276,600 quadrigram, and 9,242,670 pentagram word sequences have been extracted after expanding numbers and abbreviations. 
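As a concrete illustration of how the bigram-through-pentagram store described above might be built, here is a short Python sketch; the whitespace tokenization, the <s>/</s> padding, and the in-memory Counter storage are assumptions made for illustration, since the paper does not specify its implementation beyond the reported counts.

```python
# Sketch of building the bigram..pentagram store used by the ASR post-editing
# step described in the next section.  Tokenization and storage format are
# assumptions; only the n-gram orders and padding symbols follow the paper.
from collections import Counter

def extract_ngrams(sentences, orders=(2, 3, 4, 5)):
    """Count word n-grams of the requested orders, with <s> and </s> padding."""
    counts = {n: Counter() for n in orders}
    for sent in sentences:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        for n in orders:
            for i in range(len(tokens) - n + 1):
                counts[n][tuple(tokens[i:i + n])] += 1
    return counts

corpus = ["this is a small example", "this is another example sentence"]
store = extract_ngrams(corpus)
print({n: len(c) for n, c in store.items()})   # distinct n-grams per order
```

The same store can then back both the candidate-suggestion lookup and the perplexity comparison used in the post-editing stage described next.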
Figure 1 presents the Amharic-English speech-to-speech translation (S2ST) architecture with and without ASR post-editing. The post-editing process mainly consists of three different phases, error detection, correction proposal, and finally correction suggestion, as depicted in Figure 2 (Amharic ASR post-edit algorithm). The first phase of post-editing is to detect errors in the ASR recognition output. Basically, to detect an error, recognized morpheme units are concatenated to form a word and its existence is checked in a unigram Amharic dictionary. If an error is detected during the first phase, then the correction proposal phase takes the sentence with the error mark and creates (w − n + 1) n-grams after adding start "<s>" and end "</s>" symbols, where w is the number of tokens in the sentence and n specifies the n-gram order. Otherwise, the sentence is considered correct. Once the candidates are identified, the suggestion is made by taking the minimum edit distance between the detected error and the selected suggestion. In this phase, the sum of maximum edit distances has been set experimentally to 16. The maximum edit distance of 16 was selected to provide at least one suggestion per sentence and to minimize the computation of perplexity. Table 4 depicts a sample of possible correction proposals for a sentence "Îs ¶³ …¡ -°È ¶Û°sã €ÔrÝ †∫". Finally, the suggestion is made primarily using minimum edit distance and then by calculating the perplexity. The minimal edit distance is computed between the word "-°È ¶Û" and the underlined n-gram-based possible suggestions from the sentences of Table 4 (Table 4: Sample n-gram based suggestions for a sentence "Îs ¶³ …¡ -°È ¶Û°sã €ÔrÝ †∫"). If the edit distance is the same for different suggestions, then the decision is made by selecting the one that results in lower perplexity. Experimental results Speech translation experiments are conducted through the cascading components of speech translation as discussed in Section 1. In the speech recognition experiments, Kaldi (Povey et al., 2011), SRILM (Stolcke et al., 2002), and Morfessor 2.0 (Smit et al., 2014) have been used for Amharic speech recognition, language modeling, and unsupervised segmentation, respectively. Morfessor-based segmentation has been applied to segment the training, testing, and language model data for Amharic. In addition, Moses and MGIZA++ were used for implementing phrase-based statistical machine translation, and Python was used for implementing the post-edit algorithm and for integrating ASR and SMT under the Linux platform. The entire ASR experiment is conducted using a morpheme-based language model with a phoneme-based acoustic model. Accordingly, the experimental results are computed using the NIST Scoring Toolkit (SCTK) and presented in terms of word recognition accuracy (WRA) and morph recognition accuracy (MRA). Thus, the Amharic speech recognition experiment shows 76.4% accuracy for the morpheme-based setup. Then, after the concatenation of morphemes into words, a 77.4% word-based recognition accuracy has been achieved. Consequently, Amharic-English SMT experiments have been conducted with and without the Amharic ASR results. The first two experiments were conducted without ASR: a word-word system resulted in a BLEU score of 14.72, while morpheme-word yielded 11.24 BLEU. Combining Amharic ASR with Amharic-English SMT as cascading components resulted in a 6.29 BLEU score at 76.4% recognition accuracy for Amharic morpheme to English word based translation. 
Similarly, Amharic word to English word based translation shows a 12.83 BLEU score at 77.4% recognition accuracy without ASR post-editing. The result achieved with ASR can be further improved by applying post-editing to the Amharic speech recognition output (Table 6: Amharic-English speech translation results). Accordingly, morpheme-based recognition followed by post-editing resulted in a BLEU score of 13.08 at 78.5% word recognition accuracy. The result obtained from the n-gram post-edit experiment shows an absolute gain of 1.42% over the word recognition accuracy of 77.4% obtained by concatenating the 76.4% morpheme-based recognition output. Similarly, the BLEU score improved by 1.95% (from 12.83 to 13.08). Conclusion and Future work Speech translation has been studied for more than a decade for resourced and technologically supported languages such as English and European and Asian languages. By contrast, such attempts had not yet started for under-resourced languages, particularly Amharic. This paper presents the first Amharic speech to English text translation using the cascading components of speech translation. For ASR, 20 hours of training speech and 7.43 hours of testing speech were used with a morpheme-based language model and a phonemic acoustic model, whereas for SMT, 19,472 sentences for training and 8,112 sentences for testing were used. Similarly, to apply ASR post-editing using the n-gram approach, a corpus consisting of 681,910 sentences was used. Accordingly, speech translation with ASR post-editing resulted in a 0.25 (1.95%) BLEU score enhancement over the word-based SMT. The enhancement appears to result from improving ASR by 1.42% using the corpus-based n-gram post-edit. The current study shows the possibility of enhancing the performance of speech translation by controlling speech recognition error propagation using a post-editing algorithm. Further work needs to be done to apply post-editing both at the recognition and the translation stages of speech translation.
4,081
2017-09-01T00:00:00.000
[ "Computer Science", "Linguistics" ]
Teaching a Catholic Philosophy of Interpersonal Communication: The Case for "Soul Friendship" While social justice education has a rich and ancient history within the Catholic Church, academic disciplines have only recently begun to make the idea of social justice relevant within courses for undergraduates. In the communication discipline, debate about social justice has been lively and varied over the last two decades, and has provided rich entry points for philosophical interpretation. This paper considers interpersonal communication from the vantage point of social justice in the Catholic intellectual tradition. While the importance of friendship for society is nothing new (Aristotle addressed this issue in the Nicomachean Ethics), contemporary cultural hindrances to a just or spiritual friendship are many in the United States. The essay discusses philosophies surrounding social justice, communication, and friendship, ultimately asking what a university course centered on "soul friendship" might look like. Courses in interpersonal communication are common in American colleges and universities. Typically taught at the introductory undergraduate level, in its most basic form interpersonal communication seeks to assist students in developing communication skills for managing one-on-one relationships. For Catholic colleges and universities where communication departments exist, there may be cause for greater purpose in interpersonal communication. The missions of Catholic institutions uniformly suggest that their faculty and students focus their attention on the dignity of the human person and on issues of social justice. These missions suggest too that a course like interpersonal communication ought to strive for more than "skill-building." Interpersonal communication, like all courses related to the humanities in some way, benefits at a Catholic institution from philosophical foundations. 
The suggestions that there are philosophical foundations to interpersonal communication, and that Catholic institutions of higher education might be the most imperative places for these philosophies to emerge, begins with the assumption that interpersonal communication is not just a course topic but also a field of scholarly inquiry.Interpersonal communication is a field within The Case for "Soul Friendship" the larger discipline of communication.Although communication's origins can be traced to the study of rhetoric and oratory, interpersonal communication is something quite different: Interpersonal communication emerged as a formal area of study for communication scholars in the 1960s as a result of several social and intellectual streams converging and bringing focus to the individual person.The communication forms analyzed are informal, dyadic interactions-not formal oratory. Since its beginnings 50 years or so ago, most scholarship in interpersonal communication has been quantitative (Knapp, Daly, Albada, & Miller, 2002), accompanied by philosophical approaches from the tradition of dialogue (Arnett, 1981;Ayres, 1984).These latter approaches are experiencing a resurgence of scholarly interest (e.g., Anderson, Baxter, & Cissna, 2003), although the study of friendship within interpersonal communication has not been deeply impacted.This essay is the beginning of a conversation between interpersonal communication and philosophies of social justice and friendship. Since the topic of social justice has been solidly incorporated into communication research over the last 15 years or so at least, and has been a tradition in Catholic thought for nearly two millennia, the social justice approach in this paper is not totally new.However, those two streams of thought-the ancient one of Christian social justice and the much newer one of communication research-have not yet converged, and that convergence is precisely what this paper sets out to do.Taking the concepts of social justice research in communication that have been previously published, I consider the areas of debate for social justice in communication and grapple with the ways in which interpersonal communication education in Catholic colleges and universities might help to reconcile power inequalities in communication through a focus on social justice in interpersonal interactions.This essay is above all interested in the ways in which specific approaches to love, embodied within interpersonal relationships, can ensure justice not just between individual persons in discrete interactions, but also how those approaches have implications for larger societal issues that pertain to justice and Catholic higher education.In order to provide depth of inquiry in this essay, a single area of interpersonal communication will be examined-friendship.This choice in itself speaks to issues of power in interpersonal communication, since friendship is often underanalyzed but plays a vital part in other human relationships and contexts such as family, romantic love (Eros), and the workplace. 
It seems that in the field of communication social justice, too, is underanalyzed. The review of literature below suggests that communication scholars have not yet placed social justice within the intellectual tradition from which social justice emerged. Rather, communication scholars see it as a relatively new phenomenon. The purpose of the current research is to bring the Catholic intellectual tradition to the discussion of social justice in communication, and to expand the significance of interpersonal communication courses in students' lives. I bring the following four questions to this endeavor: What past inspirations about friendship do we find in the Catholic intellectual tradition? How do these inspirations pertain to social justice? Is soul friendship a viable philosophy for teaching interpersonal communication? Finally, what practical implications might there be? Before explaining specific approaches to understanding and teaching interpersonal communication from the vantage point of the Catholic intellectual tradition, however, I outline the perspectives on social justice that have come before within the field of communication. I also add to these perspectives with Christological and Trinitarian approaches in order to set a foundation for the rest of the essay. State of the Field: Interpersonal Communication, Concepts of Justice, and (Soul) Friendship Though communication scholars have yet to write directly about interpersonal communication from the Catholic intellectual tradition, social justice is firmly entrenched within the field of communication. It is, in the words of Julia Wood, "alive and well" (Wood, 1996). Communication journals were the sites of two special issues in the 1990s that addressed the topic of social justice: one in the Journal of Applied Communication Research (JACR) and the other in Communication Studies. The majority of essays on social justice in the field approach the topic as it concerns the realm of research and scholarship-there are no essays solely dedicated to social justice as an important theme in communication pedagogy, or constituting a significant theme for communication courses in general (let alone interpersonal communication in particular). These essays on social justice and research are, however, essentially praxis-oriented. Pearce (1998) dedicates his contributions to the intersections between social justice as an idea and as a set of practices. Frey (1998) also describes social justice in terms of applied communication research, as is fitting for JACR's special issue. Interestingly, though, the question of a praxis (theory-informed practice) approach to social justice is precisely what sparks debate in the communication discipline throughout these two special journal issues. If communication as a discipline ought to be concerned with social justice, and these concerns lead to research with practical or applied implications, where exactly ought these research outcomes have their impact? For Pollock, Artz, Frey, Pearce, and Murphy (1996), communication finds itself caught between a "Scylla and Charybdis": Social justice as a concept is often criticized within our discipline for being either too narrow or too general. This assessment is significant, for "social justice" is often a vernacular term whose meaning one presumes to understand without much reflection or investigation. While the Catholic intellectual tradition indicates that social justice is a topic of vital importance from the inception of Christianity, scholars writing
outside of this tradition approach the topic as a relatively new idea. The lack of depth in some approaches may cause social justice to seem amorphous as a concept. Indeed, that is one challenge in the communication articles cited here: Social justice in its contemporary communication iterations is not rigorously interrogated. Within theology, unreflective allusions to social justice have been critiqued for their overreliance on Marxism rather than Christology (McGovern, 1989). Communication scholarship on social justice also reflects the Marxist approach: It is a general term that stands for the eradication of contemporary socioeconomic inequalities. In many scholarly examples, social justice is not clearly defined and there is an implicit assumption that it does not need to be. Again, social justice in communication is occasionally at risk of becoming empty language-an example of the kind of "broad statements that are so abstract and mean so little that they are virtually impossible to oppose" (Brooks, 2003, p. 20). This risk of positing social justice as a vague concept may affect undergraduate students. Without a philosophical basis or understanding of the Christological history behind it, social justice is a good they may know they ought to support, but they may be hard-pressed to define it without at least some guidance. Pollock et al. (1996) set the parameters of social justice as pertaining to ethics, and their definition of social justice requires not only that sources of inequality are investigated but also that the researcher do as much as possible to dismantle those sources. This is the crux of their praxis approach. These authors also understand social justice to mean that researchers will advocate directly for the oppressed (Pollock et al., 1996).
My starting definition for social justice contains these criteria as well; I would not add or subtract from Pollock et al.'s (1996) four elements of ethics, investigation, dismantlement, and advocacy. However, working from the perspective of the Catholic intellectual tradition, I ground social justice in Christology and Trinitarian anthropology. From this perspective, social justice is the commitment to (1) the dignity of every human person in recognition of Christ in every person; (2) solidarity across the human family-despite cultural divisions-in recognition that human persons are created in the image of a Triune God and therefore flourish in community; and (3) working to ameliorate the structures of human society that undermine the first two goals listed here. The approach in this essay is therefore additive to "social justice" as thus far articulated by communication scholars, whose approaches advocate a reversal of the societal structures that create inequalities. Again, these communication approaches are based in sociological critiques from the last half century or so, and omit philosophical or theological foundations for social justice. This essay's approach to interpersonal communication and social justice through the Catholic intellectual tradition is additive in another way. By bringing social justice to the specific realms of interpersonal communication and friendship, a new avenue opens between communication research and direct human experience. My juxtaposition of social justice and friendship is meant to enhance the idea of social justice for very particular practices that pertain to everyday life between private persons-not merely institutions in the public sphere. Issues of social justice are not limited to broad public issues, but are just as relevant to everyday relationships between friends. This assertion that social justice ought to be both public and private responds to another debate within the communication journals' special issues of the late 1990s. Specifically, Makau (1996) expressed concern that a preoccupation with social justice as structural change would negatively impact practices in interpersonal communication. She is not alone in these reservations. Much of the criticism of social justice practice in theology, for example, indicates that social justice (in this case, liberation theology) can become too instrumental in its focus on the political outcome of liberation and thereby neglect the need for compassionate interaction that respects the dignity of each unique human person (McGovern, 1989). Likewise, Olson & Olson (2003) are uneasy with Pollock et al.'s (1996) requirement that social justice research must always yield "usable knowledge." This criterion, they believe, infringes on the creativity and freedom of both researchers and laypersons, and unnecessarily restricts social justice in its significance for communication.
This brief review should justify Wood's (1996) identification of social justice in communication as "alive and well." At the same time, in comparison to the Catholic intellectual tradition on social justice, social justice is only vaguely defined in communication research. Many of the communication scholars' ideals of social justice are instrumental, seeking largely political and socioeconomic outcomes without robust attentiveness to interaction with individuals. Except for Makau's (1996) work, interpersonal communication is missing from communication discussions of social justice. By considering social justice's impact on interpersonal communication praxis as well as pedagogy, this essay attempts to connect the institutional/structural concerns of Pollock et al. (1996) to the interpersonal virtues that Makau (1996) stresses. Below, I explain the connection between this effort and philosophies of friendship. Friendship and Interpersonal Communication This project understands friendship, or philia, to be the love that exists between two persons whose love is based neither on familial relation nor sexual intimacy. This is not to say that friends may not be biologically "related" or that spouses are not friends. Rather, the definition arises from ancient concepts of philia. Ancient philosophers insisted on the external quality of friendship: Friendship must always be "about" something. It is neither familial obligation or comfort, nor sexual attraction. It should be stressed that in describing friendship as a "love," I am isolating a certain deep kind of relationship. Certainly "friends" are very often companions, for instance, who enjoy similar activities or interests. Rawlins (1992) has drawn a distinction between "agentic" and "communal" friendships. Agentic friendships form when people share a classroom or workspace; they enjoy each other's company as long as they are "thrown together" for some fairly random reason. But once they graduate or change jobs, the friendships fade. Communal friendships, on the other hand, tend to be lifelong. Friends may meet in school or the workplace, but the friendship is a genuine deep commitment: Regardless of how far apart they may be in the future, their communication remains lively and their bond remains strong. Rawlins' (1992) classification above is one example of the importance of distinguishing the many instances of friendship in human life. It shows that philia is unique in the category of friendship. In this love we call philia, friendship is a deep love indeed-more like the communal love identified above. In philia, friends are persons who "see the same truth," are focused on an external good, and whose closeness emerges over joint commitment to similar goals (Lewis, 1960). It is more than the desire for a companion in certain activities or a cure for general loneliness. Friends are committed to similar interests and goals, a "third thing" on which they focus. This good is always "between" and in front of friends. Even physical posture, according to Lewis (1960), distinguishes friendship from romantic love. Lovers "gaze into one another's eyes," but friends are "side by side" and shoulder to shoulder (Lewis, 1960). Taken to its most idealistic ends, being "shoulder to shoulder" implies a metaphor for solidarity and is especially significant for friendship and social justice, as I discuss later in the essay.
Since friendship is a love between two persons, one might ask how it can be a social good benefiting the public sphere. Aristotle knew the answer to this question well, and Lewis (1960) elaborates upon it. For Aristotle, friendship was a social good because friends encourage our best work in the prime of our lives. The companionship and positive energy between good friends who are also involved in the same project-engineers, inventors, doctors, and even literary artists like Lewis and his best friend J.R.R. Tolkien-spurs them to greatness. Lewis did not leave out the possibility that friends also spur each other to evil, if their "joint commitment" is not to an external good but is instead poisoned by their own exclusivity and belief in infallibility. But friendship by its definition is love that emerges out of joint commitment to a good. Aelred of Rievaulx (2010) speculated that when two so-called friends break apart over disagreement related to the good-if one violated the good, in other words-then no friendship ever actually existed between the two. For Aelred, goodness thus becomes almost a "prerequisite" of sorts for love in the public sphere. Few contemporary studies of friendship in interpersonal communication discuss the topic of moral and ethical goodness, friendship as a social good, or the potential for social justice in friendship. This is understandable, given the relative dominance of social scientific methods in communication (Knapp et al., 2002). Recent scholarship on friendship in interpersonal communication discusses the impact of new technologies and shifting social norms on communication behavior between friends. Intriguing new terms have been coined by writers interested in friendship, such as Watters' (2003) "urban tribe," which describes the roles and communication patterns surrounding groups of friends who are young, single, and living in American cities. Since 2005 many communication articles on friendship are preoccupied with new technologies that enable social networks (Kleinberg, 2008; Westerman, Van Der Heide, Klein, and Walther, 2008). Other recent works build on classic communication theories used to explain relationships with those outside our families, such as social exchange theory and social judgment theory. This essay explores the possibility of another social theory-not about exchange or judgment, but about justice. Given the limited but healthy range of works on interpersonal communication, one might ask why it is important to consider a synthesis of social justice and friendship. I answer this in the next section of the essay, and then move to a description of friendship informed by philosophical notions of social justice. Friendship as Social Justice-and Vice Versa An attempt to integrate the study of friendship and social justice is worthwhile not merely because it is interesting to do so, but because the integration invites possibilities for enhanced human experience and for enhanced academic study. Both friendship and social justice are opened up by this question, for several reasons.
First, one might revisit Makau's (1996) concern that a preoccupation with societal change detracts from our efforts at real ethical interpersonal action. Above, I likened this important point to the critique of liberation theology, which states that the goal of liberation runs the risk of becoming too instrumental and losing sight of real human persons (McGovern, 1989). By bringing concepts of social justice to the teaching of philosophies of friendship in interpersonal communication, one begins with love between two persons. The love between two unique persons is not sacrificed for the good of the social order. On the contrary, as I argue later in this essay, unique aspects of friendship actually provide for positive social change. When two friends turn their commitment to social justice and work on it together, there is a greater possibility of their efforts bearing fruit-and simultaneously, their love for one another itself deepens. The second reason to merge social justice and friendship has direct bearing on scholarship and pedagogy in philosophies of interpersonal communication. Within the field, far more studies concern romantic love than friendship. Perhaps this imbalance in scholarship reflects some vernacular worldviews that there is little to learn or say about communication between friends. Simon (1997) writes of contemporary Anglo-America: "The relationships that are often the focus of our energies are romantic ones" (p. 109). Friendship seems commonplace, and indeed it is-even in popular fictions and media that Simon could not have envisioned in 1997. "Friend" is now not just a noun but a verb, as on Facebook where one individual can "friend" another online. "Friend" also becomes a generic term rather than a specific one: in my toddler's daycare, everyone in the class is called a "friend." This is a nice sentiment and perhaps a way of getting around the stuffy term "classmates" for 2-year-olds, but toddlers are not the only ones who seem at a loss to describe the people they meet outside their families. At every level of society, American English has very few words to describe the people outside of familial or romantic relationships. In American English one is a "friend" or a "best friend" or, more recently, a "BFFL" (best friend for life). Slang terms like "peeps" or "posse" come in and out of fashion, but these describe groups rather than dyads. These American English examples are particularly striking when contrasted with Japanese, which has over 10 different precise words to describe levels of companionship and commitment between nonrelated individuals who are not romantically involved (in other words, friends). These words are used explicitly in Japan, both internally (between the friendship partners) and externally (to explain the friendship to others). The special attributes of the commitment between friends are thus honored, whether they are casually companionable or very deep. Although scholars like Rawlins (1992) may introduce academic terms like "agentic" and "communal" to describe different levels of intimacy or commitment in friendships, these are not part of everyday American discourse.
Another cross-cultural examination of perspectives on friendship may help to illustrate the American "generic" approach to friendship as potentially problematic. Without words to describe levels of friendship-and without the rigorous study or reflection needed to achieve these levels-the line between acquaintanceships and friendships is often blurred in Anglo-American culture. This is evidenced by Basso's (1990) work among the Apache. The Native Americans with whom Basso lived described their bewilderment at the "instant friendship" most whites tried to achieve with them, not taking the time to get to know Others as well as they should before interacting in friendly and informal ways. Basso (1990) concludes that the Apache regard most Anglo-Americans as insincere and condescending in their communication with Others. I offer this example not necessarily as an indictment of American friendliness in general, but instead as a caution against Anglo-American perceptions of friendship as simple and irrelevant for reflection. In Basso's (1990) study, the Anglo-Americans were no doubt "acting naturally"-but they were unaware that friendship communication arises from cultural philosophy, and their own worldview infringed on the interpersonal comfort of Others. The misunderstanding between Anglo-Americans and Apaches indicates that "friendship" is at least in part a cultural formulation, and it is to everyone's benefit to reflect upon what we mean by it and what we mean through our interactions. In higher education, this reflection on friendship may not be consistently achieved in a formal sense. Why is friendship seen as commonplace, simple, perhaps even dull in both academic and vernacular spheres? Simon (1997) points to social norms and worldviews in the United States which tend to exalt romantic love as the most valuable and fulfilling of the human loves when compared to family relationships or friendship. The majority of "love stories" in popular culture, for instance, are preoccupied with romantic love (Simon, 1997, p. 109). This fact points to a third reason why this essay strives to bring together social justice and friendship in the philosophy of interpersonal communication: because the "love story" focus on romantic love is itself a potential interpersonal injustice. The exaltation of romantic love over friendship can cause a kind of "narrative disconnect" for persons who do not sustain long-term romantic love relationships. Stone (1975) describes the effect of passive fairy tale heroines on women she interviewed, for example. Interestingly, the original collection of fairy tales by Jakob and Wilhelm Grimm that forms the basis for most American collections (and Disney films) had only a handful of "passive and pretty" heroines (p. 42). But Disney films of her generation, taken from children's literature collections published in the United States, saw the vast majority of women depicted either as villainesses or as weak, passive protagonists. Stone's (1975) research subjects were preoccupied with the romantic nature of the tales in one way or another-either as youngsters, fantasizing about how their lives might one day change; or as older women, unhappy and dissatisfied with how the fairy tales related to their own real experiences.
Stone's (1975) essay is just one example in a body of literature that offers a feminist critique of Disney films and fairy tales. But it speaks to a larger cultural issue: How is it that American editors chose only passive heroines for literary collections of Grimm tales (translated from the German), upon which the Disney films were ultimately based? These editorial choices speak to a particular cultural worldview of romantic love as life-changing and always positive. Certainly the heroines' lives are not changed for the better by family (especially stepfamilies), and friendships are vague in the stories. Indeed, friendships too are passive, especially in the Disney films, for friendships are forged with equally helpless animals or other creatures, many of whom do not speak. I consider this fairy-tale preoccupation with romantic love to stem from a particular cultural worldview because, as in the case with names for friendship, there are cross-cultural comparisons available. Baxter and Akkoor (2008) show how American notions of romantic love as a basis for marriage are a cultural construct, especially in comparison to the worldviews and thought processes that form a foundation for arranged marriages in India. Their research indicates that over long periods of time, spouses in arranged marriages are ultimately more satisfied with their relationships than are spouses who independently chose their partners for "romantic" reasons. This is because the value of compromise, foundational to arranged marriages, is a more realistic precursor to married life than is "romance" (Baxter & Akkoor, 2008). Cross-cultural comparisons like these are helpful for social justice, for they point out not only the presumptions and misconceptions one might have about Others, but also the faulty "reasoning" behind one's own cultural norms and attitudes. Simon (1997) attributes the faulty reasoning to an "undisciplined heart" that creates unrealistic fictions (fantasies) rather than imagining a realm of possibilities. The feminist critiques of popular cultural depictions of romantic love in the United States are a clear example of this. Unfortunately, the faulty reasoning here is that friendship is somehow less valuable than romantic love-especially to women. While I do not believe that consumers of popular entertainment media are by any means brainwashed by what they see (even at a very young age), perhaps there is some connection between the exaltation of romantic love in both popular culture and scholarship in communication. These parallel developments continue in vernacular language about friendship and the commonplace, casual attitudes that Anglo-Americans may sometimes take in everyday life toward friendship.
A resultant "narrative disconnect" between the expectation of romantic love and the actual reality of lived experience can be distressing on two fronts. First, an examination of ancient and medieval philosophies of friendship indicates that the exaltation of friendship is in fact an aspect of Western worldview and Western higher education. This honoring of friendship in the heritage of American universities and colleges began with the Catholic intellectual tradition. It has merely been lost amidst several societal shifts, including the overbearing nature of cultural representations of romantic love. This essay attempts to recapture those philosophical traditions concerning friendship, especially for Catholic education. The second front on which the narrative disconnect is troubling is more pragmatic: When we lose reverence for friendship, we lose opportunities to strive with others for social justice. This essay will address that as well, showing how friendship can ensure social justice not just for persons who are friends but also for persons who are neighbors-who live together in society. By now the potential benefits of a philosophical integration between social justice and friendship should be clear. What does this integration look like when it becomes a praxis? As with Aristotle's view of friendship as a social good, we find that the ancients have already meditated upon the qualities necessary for friendship to serve social justice, and vice versa. These qualities converge in the idea of a soul friend, which is a concept expounded at least since the time of Cicero. I discuss the case for "soul friendship" as one permutation of the combination of friendship and social justice in the next section. Soul Friendship and the Anam Cara In the previous section I discussed the dominance of romantic love over friendship in both academic and vernacular discourses. Another example of this dominance occurs even in the Celtic term anam cara, which means "soul friend" but has been appropriated by New Age literature to mean "soul mate" (O'Donohue, 1998). One can purchase wedding rings with the Celtic phrase engraved on them, for example. This translation and appropriation is misleading (though not surprising, given Anglo-American preoccupation with romantic relationships). Anam cara refers not to a soul mate, a predestined spouse, but to a "soul friend." Many cultures traditionally speak of a search for a "soul mate," as in the Hebrew bashert. But the Celtic tradition of anam cara is not one of them. It has always been a philosophy of soul friendship (Hanlon, 2000; Leech, 1977; Murphy, 1997). Leech (1977) suggests that the idea of anam cara probably existed in pre-Christian Ireland, but one of its most celebrated proponents was St. Brigid of Kildare. The philosophy of soul friendship I wish to explore has a number of components, some of which emerge from ancient Greece and classical Rome. However, I begin with Brigid because her narrative provides an interesting hermeneutic entrance into the characteristics of soul friendship. Brigid was born in the fifth century. She was the daughter of a chieftain and one of his slaves, and most historians agree that she was probably about 8 years old at the time of St.
Patrick's death. Since Patrick is the apostle to Ireland, it is obvious that Christianity was a fairly new movement even at the time of Brigid's coming of age (Reilly, 2002). She was raised as a Christian and there are wonders attributed to her even at a young age, most of them pertaining to her hospitality and generosity. She refused marriage after her father freed her, and instead dedicated her life to Christ (in today's terms, she became a nun). At that time nuns remained at home with their families, living in a kind of seclusion from society. They spent all their time in prayer or doing needlework and other crafts to decorate the new Christian churches. This was a difficult life, most especially because it was lived in solitude away from other like-minded women and because many nuns' families disapproved of this choice to refuse marriage (Curtayne, 1954). Certainly it would have been most difficult for Brigid, whose father sought to increase his wealth and power through her marriage and who by all accounts was regularly exasperated with her habit of giving away his household goods to beggars (Reilly, 2002). His wife, who was not Brigid's mother, also felt less than affectionate toward Brigid. So Brigid made a radical move: She decided to establish a community of nuns, the first of its kind. She and eight other women made a commitment to live together in community and were received by the Bishop of Kildare, given property, and began their life in their own self-sufficient monastery (Curtayne, 1954). The image of the convent or cloister or even monastery for females seems so familiar to us today that we miss the significance of it for Brigid's philosophy on the anam cara, the soul friend. Brigid believed that dedication to Christ and lives together in community were one and the same thing-not merely because life alone in a house (often with nonbelievers) was dreary and painful. She wrote compellingly of the pitfalls one faced with a solitary life: The hermits, she wrote, were prone to pride in their own asceticism and a surety in their righteousness that no one else could test. The itinerant preacher, on the other hand, spent so much time in conversation that he or she scattered all their contemplative energy to the winds (Curtayne, 1954). If nuns lived together, they could form soul friendships-they would take care of one another's souls in a mutual commitment to truth (Leech, 1977).
Though soul friendship exists outside of Christianity (Leech, 1977) and though we have precious few details of Brigid's philosophy (Reilly, 2002), her narrative nonetheless opens up the significant themes of soul friendship. First, one might ask what is meant by "soul." Again, while the anam cara was solidified as a Christian concept, the soul friend existed long before that. In Christian tradition the soul is immortal, but one's sense of immortality can be distorted without a commitment to the good. For instance, William Shakespeare's play Othello aptly captures a shift in European thinking from heavenly destiny to earthly reputation (Roberts, 2007). In the play, Michael Cassio laments in true humanistic fashion the loss of his reputation: "the immortal part of myself" (Shakespeare). Thus, the soul is not just that which "lives on" after one's death. The soul is that part of oneself that is accountable to questions of the common good and social justice. Certainly Brigid and her nuns shared this. What other aspects of soul friendship are clarified by even this brief account of their lives? The following list describes the basic themes of soul friendship.
1. Friendship begins with mutuality. In the starting definition of friendship for this paper, I cited Lewis (1960) as stating that friends "see the same truth." Brigid and her fellow nuns saw the same truth not only about their chosen life paths, but about the nature of God and love. Cicero put it well: Friends are two people "in agreement in things human and divine, with good will and charity" (Amic. 6.20). Leech (1977) describes the history of the soul friend tradition as being steeped in the necessity of orthodoxy, obtained through discernment. Practicing discernment together, soul friends achieved mutual agreement in human and divine matters.
2. The soul friend is a particular commitment of relation. While all friendships that are truly loving are based in the above concept of mutuality, not all friendships are soul friendships. Soul friendship requires a particular commitment, and unlike other friendships that are ever expanding (Lewis says that two friends "always invite a third," for instance), soul friends might be better served to remain in a dyad. De Guibert (1956) describes soul friendship as different from spiritual direction, but still best accomplished between two persons. The greatest reason for this is the necessity for each friend to confront the enemies of the other's soul, as Aelred states quite strongly. One friend loves his friend's soul as much as his own: This love of one's own soul, and protection of the other's, can only arise among persons committed to the good (Aelred, 2010). Not only is this different from the youthful "carnal friendship" described by Augustine, or the "companionship" described by Lewis; it is a much deeper commitment than philia alone. Brigid shared friendship with all her companions in the convent, but encouraged each to have one particular soul friend. She herself did, and the two died within days of each other and were said to be inseparable (Hanlon, 2000). For Brigid and her nuns the necessity of discernment concerns heaven and how one might get there-which leads to the next aspect of soul friendship.
3. The soul friend is a personal guide.
The soul friend keeps the Other on the "right path." This kind of spiritual guidance is not uniquely Christian, as Leech (1977) points out: He identifies the Chimbulei in South Africa, the shaman in multiple cultures, and most especially the Hindu guru. In Brigid's case this was the path to God. As Nouwen (1977) writes: "It is to God and only to God that the soul has to be led by the soul friend" (p. ix). Later Christians living in monastic communities echoed this aspect of anam cara, emphasizing as St. John of the Cross did that one cannot reach God on one's own: A director, a guide, a friend is needed (Leech, 1977). Thus, the next aspect of soul friendship is also very important.
4. Soul friends live in community. I described above in the story of Brigid that her decision to live with other nuns in community was shockingly new to Christianity in Ireland-something that had never been done before. Her narrative thus emphasizes the communal nature of care of the soul: Again, one cannot and should not go it alone. This is basic to Christian anthropology, where God is one in three persons, but it also arises from pre-Christian Celtic notions of the soul friend (anam cara, or anmchara). Celtic chiefs had druid advisors, who after the advent of Christianity were replaced by clerics. These were counselors and guides, not in sacramental terms but in interpersonal ones. Leech (1977) traces this Celtic history of the soul friend/anmchara from the Welsh periglow back to the Greek syncellus, which means "one who shares the cell" (Leech, 1977, p. 50). This reference to "cell" is one of a monastic order, the rooms that Brigid and her nuns would have inhabited. Thus the anam cara finds a particular manifestation in medieval Christianity, though its philosophy is older than that. The benefits of living in community were crucial for social justice, as the next point illustrates.
5. A community of soul friends is not passive or internally focused. Brigid's monastery at Kildare, like other monastic communities, was highly active in prayer-and Aelred (2010) points out that this above all was the task of the soul friend, to pray for the Other. The communal life was also a protection against evil, for as Ignatius of Loyola praised, one cannot keep secrets in community (Leech, 1977). Indeed, for Cicero, Ambrose, and Aelred, the very definition of a soul friend was one to whom one could "pour out one's heart freely" (Aelred, 2010). Aelred agrees with Cicero's pre-Christian view, and then adds a new element for the medieval soul friend. Aelred explains that when Christ revealed all to his apostles, he concluded by saying they were "no longer slaves: I call you friends . . .
because I have made known to you everything I have heard from my Father [Jn 15:15]" (p. 108). This was the model of Christian soul friendship. Again, however, one can look to Brigid's life to understand more deeply the nature of community. Despite the stereotype of "cloistered" monasteries in medieval history, Brigid and her fellow religious traveled a great deal out to other communities. This was particularly necessary during the fifth and sixth centuries in Ireland where Christianity was still new (Curtayne, 1954; Reilly, 2002). So the community not only contained the model for soul friendship; it also contained a model for social justice. The nuns and brothers did not look inward for peace: They were, as Thomas Merton has pointed out, some of the earliest social critics (Leech, 1977). The point can be made: Soul friends take care of each other's soul not just for the soul's sake, but for the world's sake. Roszak (1972) asserts on the topic of spiritual direction that if our souls wither, so will the world. Soul friends will not hesitate to confront one another over aspects of evil, to confront the enemies of each other's souls. They do this to "bear witness against the world," to "stand before the storm and the fire" (Leech, 1977, p. 45). Taken together, these elements make for a unique philosophy of friendship. But the final point, using Brigid's monastery as a model of community committed to social justice, begins to achieve the synthesis between interpersonal communication and social justice for which one might hope in this project. The soul friend/anam cara is a Christian concept, shaped from ancient Greek and classical Roman philosophies (Aelred, 2010). It was lived out as praxis in medieval life and philosophies, from the Eastern Desert Fathers to the monasteries of Brigid and many others (Leech, 1977). How does the uniqueness of a soul friend speak to social justice in our own moment, for a philosophy of interpersonal communication? That is the topic of the final section below. Building a Philosophy of Social Justice in Friendship for Catholic Education As I noted earlier in this essay, my approach to social justice is well in line with Pollock et al.'s (1996) four elements of justice, structural investigation, action for change, and advocacy. Because the idea of the soul friend/anam cara incorporates both ancient and medieval (specifically Christian) philosophies, I also draw on the Catholic philosophical tradition of social justice. In this vein, focusing social justice on friendship, I stress two elements: first, that society contains inherent inequalities that should be investigated and understood with the purpose of healing. This is done in the name of the Trinity, in whose image we are created and by whom we are created for community. Second, every human person is called to honor the dignity and unique humanness of every other, in the name of Christ who died and was resurrected for all. Much of the work on soul friendship cited earlier in this essay fulfills these elements. For instance, feminist critiques of American "love stories" often posit responses to inequalities between men and women in society. Yet soul friendship, being an act of the will and not simply a descriptor of a relationship, completes the integration of social justice and interpersonal communication, as elaborated below.
Some writings on the soul friend over the centuries have given stringent prescriptions for how communication can be enacted. Jean Grou, a Jesuit writer in the 18th century, listed five rules for spiritual direction in the context of anam cara:
1. For soul friends not to meet except from necessity and then to speak only of the things of God
2. Mutual respect, courtesy, and gravity
3. Never to conceal anything
4. Measureless obedience
5. To look beyond the friend, and see only God in him; only to be attached to the friend for God's sake, and to be always ready even to give him up if God requires it (Leech, 1977, p. 106).
Some of these rules seem impossible to keep-an unrealistic kind of friendship for those outside of the monastery. But nonetheless such friendship is anchored powerfully in a profound ideal. Commitment to truth trumps all human questions; it is an impossible infinite. On the other hand, perhaps the human striving toward these practices of communication is much different. The anam cara is very practical, very finite, and very human. It is the mutual humanness between two soul friends that allows them to succeed: They can easily see each other's faulty reasoning, being guilty of it often themselves; they can call one another to humility in light of the truth. This essay offers only a brief introduction to the soul friend, but perhaps it inspires us to look differently at friendship as a kind of social justice. When two friends walk toward the same truth together, then all of society benefits: They will commit themselves to social justice. Aelred (2010) goes so far as to say that friendship is impossible unless people are themselves good. If one of them forsakes goodness and truth, the relationship between the two was never friendship in the first place. It was a farce, for only someone wholly committed to goodness and truth can be a friend to another. Throughout Lewis's work on human love, he emphasizes that one should not become so preoccupied with any other human being that s/he becomes the center of one's life. If a relationship takes over someone's life, she makes the love her "god" and in so doing, it becomes a demon (Lewis, 1960). Friendship can become a "demon" when one is preoccupied with the friendship and does not want to lose it. Anam cara, as I articulate it here, is an embodiment of social justice because it loves the person and the external good-not the friendship for friendship's sake. An anam cara respects and loves the friend, not the friendship. As Aelred of Rievaulx (2010) wrote in the 12th century: "We delight not in any blessing won through friendship so much as the true love of a friend" (p. 85).
My students' work in interpersonal communication at a Catholic university indicates that one of the challenges of friendship is to take care of the other person, regardless of the consequences. This is much like the prescription of soul friendship, which compels a friend to confront the enemies of the Other's soul at all costs. The nature of love in friendship is unique, for a friend is neither biologically related to the other, nor are they the sole lover of that other (as would be the case in erotic love). So love in friendship is potentially problematic: one must walk a narrow ridge between seeking what is best for the other, and appreciating the other's difference from oneself. The anam cara, however, steps in where social justice is infringed or where self-destructive behavior ensues. For instance, students have related in their papers instances where their friends' problems with substance abuse required their direct intervention. Almost unanimously, these interventions disrupted-and in some cases permanently ended-the friendships for my students. However, to have chosen not to act would have been an act of injustice. These students truly loved their friend, even to the point of losing the "blessings won through friendship," as Aelred (2010) puts it. Friendship is also just in its fundamental existence: Loving and appreciating someone who is outside one's family is a unique choice to enter into relationship. Students report in their assessments of their friendships, too, that they are committed to social causes more readily when those causes affect one or more of their friends. Students report strengthened or renewed commitments to support gay marriage, for example, or to fight against racism, when they develop friendships with people very different from themselves. Like the anam cara described by Leech (1977), two friends committed to the good can form a powerful "witness against the world" (p. 96). Friendship is indeed a social good in and of itself, when it shapes the ethical commitments individuals can make to support the dignity of every human person. These opportunities to question indignities and injustices are important witnesses against the world, and as Thomas Merton pointed out, it is the role of soul friends living in community to critique society when necessary (Leech, 1977). While many people find comfort and solace in friendship, the soul friendship runs deeper. Leech (1977) writes that it is a worse thing for the world if we only use friendship for our own comfort and happiness-for we will not take action and fight for what is right and good. Like Aristotle and like Lewis (1960), Elliott (1975) argues that friendship is a social good because friends committed to a cause will spur each other to remarkable heights. The achievement of peace through friendship is indeed possible-but as Elliott (1975) colorfully argues, this peace is "not the peace of the dairy cow, but the peace of God" (p. 138). Contemporary soul friendship, like the mutuality shared by Brigid and her nuns in their cells, is not just an interpersonal project, but a wholly (and holy) social one. This concept of anam cara has begun to shape the idea of social justice within my courses in interpersonal communication at a Catholic university. While the basic tenets of social justice articulated by Pollock et al.
(1996) are directly discussed, in examining anam cara I have also added the ancient and Christian ideas regarding friendship, the dignity of the human person, and the importance of social action. Nonetheless, the bridge between interpersonal communication and social justice is only beginning to be built. It is an interesting moment for teaching these concepts, and one ought to be inspired by the history of social justice within communication to bring philosophical concepts of justice and friendship to bear on a field that has typically considered interpersonal communication in light of behavioral outcomes more than choices of external goods. While the idea of anam cara is pragmatic and finite, and has prescriptive communication philosophies attached to it, it always begins with an external good-belief in a soul and its rightful destiny. Implications for Teaching Interpersonal Communication Given these foundations, there are possible implications for the teaching of interpersonal communication courses at Catholic colleges and universities. First, as the literature review early in this essay bore out, some fields of inquiry in communication would benefit from a broadened attention to previous scholarship in the humanities. In that review of communication essays on social justice, it was clear that "social justice" has not been clearly defined for communication and instead takes a broad, sometimes Marxist view toward general inequalities. Two thousand years of Catholic intellectual tradition stands in stark contrast. So, likewise, interpersonal communication instructors need not content themselves with social science research and the textbooks of the field. While all of these are good and useful, they are made even more so when supplemented by readings in Catholic philosophy and theology. In terms of friendship, many of the works cited in this essay, by Brigid, Aelred, Leech, and others, would be suitable. Of course, teaching in a Catholic institution means bringing a sense of ecumenism to one's students. The readings in Catholic philosophy are not provided as a means of proselytization, but as a means of exploration. Though Lewis's The Four Loves (1960) seems a "dated" source, I have been consistently and pleasantly surprised by the way students connect to it, especially in comparison with more recent theories regarding technology and social networking. They find Lewis rich in philosophical approach because he posits each of the loves, including friendship, as strivings for an ideal form of human existence and flourishing. Though students often come from different faith perspectives (and sometimes no faith perspective at all) at my university, they find encouragement in Lewis to identify the ideal through which they will attempt to love others (just as Lewis found his in Christ).
In addition to supplemental philosophical readings, a second option in retooling the interpersonal communication course is to ask-from a social justice standpoint-who is underrepresented in communication scholarship and publications. This is important in terms of authorship, as it is in most fields in higher education. But here I especially refer to the subject matter of communication publications. Interpersonal communication is especially challenging in its overall tendency to suggest that there are norms in human interactions. These norms are announced without regard, in most cases, for differences in race, ethnicity, nationality, sexual orientation, or ability. For instance, when it comes to Eros, very little is written for undergraduates about same-sex relationships, leading to a heterocentric bias in the field. In another example, models of nonverbal communication research certainly omit persons with disorders on the autism spectrum, for their use of nonverbal cues may be different. Some of the most popular work has come from Tannen's (1990) hypotheses about differences in male and female communication styles in interpersonal communication. Yet Tannen used as the basis for her research only white, American, upper-middle-class couples. It is worth asking if her description of the passive "female" communication style is valid for women of all cultures. If social justice is about attempts to identify injustices (however unintended), the first place interpersonal communication can look is at its own structures-including the "canon" of assigned readings. There are important human persons who are omitted when scholars attempt to announce "norms" of human communication. Finally, the way students are assessed in interpersonal communication courses can be attempted with a renewed sense of social justice. It is not enough to offer readings in philosophy without providing students opportunities to practice it themselves. Students in my course undertake a "humanities project" that allows them to focus on a friendship and to produce an expression of it in some art form. In so doing they are searching for the essence-the soul-of the other person. They also are required to work in groups to produce a presentation on friendship that reflects on modern technological means of interpersonal communication, including texting, social networking, and the like. Through this assignment, students isolate potential challenges these technologies pose to friendships, as well as additive benefits. These are analyzed according to contemporary research as well as much more time-tested philosophies of friendship such as those cited in this essay. Conclusion The philosophical foundations and the practical implications discussed above are intended to come together as a kind of praxis for social justice in interpersonal communication. The anam cara provides a good model for this. Rethinking the interpersonal communication course seems especially significant because it is a popular course for nonmajors, making it one of very few opportunities they have to reflect on relationships and social justice. Based on the literature review that began this essay, it seems that the "Scylla and Charybdis" identified by Pollock et al.
(1996) may still be present in the communication discipline whenever the topic of social justice is broached. However, for the field of interpersonal communication, it may yet be possible to begin to articulate how the bases of our relationships, when grounded in justice, can serve both the bonds between individual persons and the larger sphere of ethical human life. In other words, our interactions with others can-when done reflectively-build a bridge between what is interpersonally good and what is socially just. This paper articulates just a few ideas for how this might begin to happen, especially in response to unique cultural problems and potential injustices in the United States. It is my hope that the conversation may continue, beginning most robustly in Catholic institutions of higher education where the long tradition of social justice can announce itself more strongly to a new generation of thinkers.
12,174.4
2012-09-14T00:00:00.000
[ "Philosophy" ]
A mobile app to capture EPA assessment data: Utilizing the consolidated framework for implementation research to identify enablers and barriers to engagement Introduction Mobile apps that utilize the framework of entrustable professional activities (EPAs) to capture and deliver feedback are being implemented. If EPA apps are to be successfully incorporated into programmatic assessment, a better understanding of how they are experienced by the end-users will be necessary. The authors conducted a qualitative study using the Consolidated Framework for Implementation Research (CFIR) to identify enablers and barriers to engagement with an EPA app. Methods Structured interviews of faculty and residents were conducted with an interview guide based on the CFIR. Transcripts were independently coded by two study authors using directed content analysis. Differences were resolved via consensus. The study team then organized codes into themes relevant to the domains of the CFIR. Results Eight faculty and 10 residents chose to participate in the study. Both faculty and residents found the app easy to use and effective in facilitating feedback immediately after the observed patient encounter. Faculty appreciated how the EPA app forced brief, distilled feedback. Both faculty and residents expressed positive attitudes and perceived the app as aligned with the department's philosophy. Barriers to engagement included faculty not understanding the EPA framework and scale, competing clinical demands, residents preferring more detailed feedback and both faculty and residents noting that the app's feedback should be complemented by a tool that generates more systematic, nuanced, and comprehensive feedback. Residents rarely if ever returned to the feedback after initial receipt. Discussion This study identified key enablers and barriers to engagement with the EPA app. The findings provide guidance for future research and implementation efforts focused on the use of mobile platforms to capture direct observation feedback. Electronic supplementary material The online version of this article (10.1007/s40037-020-00587-z) contains supplementary material, which is available to authorized users. Introduction The adoption of competency-based frameworks has highlighted the need for workplace-based assessment (i.e., "what doctors actually do in practice") with a dual focus on the assessment of learning (i.e., summative feedback) and assessment for learning (i.e., formative feedback) [1][2][3]. As a result, direct observation of a trainee-patient encounter has become an increasingly prominent feature of assessment. Direct observation tools have been developed for general clinical skills (e.g., miniCEX) and for focused tasks, such as electromyography, teamwork, laparoscopy, ultrasound-guided anesthesia, handoff, and follow-up visits [4][5][6][7][8][9][10][11]. Implementation of workplace-based assessment has encountered significant challenges. One common barrier has been the lack of time and competing demands such as clinical workload that interfere with the ability of faculty to complete these assessments [12,13]. In order to facilitate more efficient capture, delivery, and aggregation of assessment data, mobile applications have been developed and tested in multiple specialties (e.g., pediatrics, surgical specialties, internal medicine) and with multiple frameworks (milestones, competencies, and entrustment scales) [14][15][16][17][18][19][20][21].
A second important barrier has been challenges with the assessment frameworks; the competencies and milestones used on workplace-based assessments are viewed by some as too numerous, too granular, and/or too abstract for educators to use [22]. Entrustable professional activities (EPAs) have emerged as an assessment framework that translates competencies into clinical practice in a more holistic fashion compared with milestones [23]. Multiple specialties have developed and implemented EPAs [24][25][26]. Very little has been published on mobile apps that utilize the EPA framework to capture assessment data, though there are numerous initiatives underway. Most of the published apps utilize milestones or competencies as the assessment framework. For example, the surgical specialties have developed two related assessment approaches, the O-SCORE and SIMPL. The O-SCORE utilizes levels of supervision anchors (e.g., "I had to talk the trainee through . . . ") for each of nine components of any surgical procedure (e.g., case preparation, postoperative plan) with a final yes/no determination of whether the trainee is ready to perform the procedure independently [27]. It does not use a level of supervision scale for the overall activity (which is not necessarily an EPA). SIMPL is a mobile platform that incorporates three questions, one of which uses a level of supervision scale for the overall activity but the activities typically are not EPAs [18]. Finally, Warm et al. have published on large WBA datasets captured by mobile devices; this work employs "observable professional activities" (i.e., often tasks nested within an EPA) and has not focused to date on the mobile platform itself [28]. In an effort to bring together the EPA framework with smartphone technology, we designed and implemented a WBA that employs a mobile app to assess EPAs based on direct observation. An initial study indicated that the app generated high quality narrative feedback and entrustment scores that correlated with resident experience [29]. While this and other EPA apps are being implemented across numerous settings and specialties, we know little about implementation barriers and enablers. To date, most studies of assessment apps have examined apps that use frameworks other than EPAs and have focused on outcomes such as end-user satisfaction via surveys (e.g., attitudes), the quality of the feedback (e.g., specificity), and feasibility (e.g., time to complete) [15,[18][19][20][30][31][32][33][34][35][36]. A few of these studies have identified barriers (e.g., competing demands on faculty time and lack of a physician champion) and enablers (perceived value) to implementation [15,17,19]. No study to date has used implementation science frameworks to focus on the implementation process itself. If EPA apps and, more generally, smartphone-based applications, are to be successfully incorporated into programmatic assessment, a better understanding of implementation barriers and enablers will be necessary. To address this gap and improve subsequent implementation, this study explored the barriers and enablers of adoption of an EPA-based app. The study used the Consolidated Framework for Implementation Research (CFIR), a "meta-theoretical" framework that provides an overarching typology of implementation and is commonly used to assess implementation of evidence-based interventions in a variety of settings, including medical education [37][38][39]. 
Study design and ethics This is a qualitative study that applied the CFIR methodology [37]. Because our focus was on identifying implementation barriers and enablers in order to improve how other programs incorporate similar apps in the future, we chose a methodology from implementation science [12,39]. The CFIR examines implementation across five major, interacting domains: intervention (e.g., perceptions about the relative advantages of the intervention), inner setting (e.g., the clinic in which the supervisory encounter between faculty and residents occurs), outer setting (e.g., department and hospital policies, priorities, incentives and culture), individual characteristics (e.g., knowledge and beliefs about the intervention, personal use of the app), and implementation process (e.g., strategies and tactics such as engaging appropriate stakeholders) [40]. Ethical approval was obtained from the Institutional Review Board at Northwell Health (IRB#: 19-0011). Setting and participants This study was conducted in a psychiatry resident outpatient continuity clinic of a large, academic teaching hospital. Residents spent one half day a week in the clinic during their second and third year of training with the same attending. Each week, the attending directly observed second year residents for two hours and third year residents for one hour as they conducted new patient evaluations and follow-up visits. Faculty had no other obligations during the clinic other than working with their assigned resident. Prior to the implementation of the EPA app, faculty used a paper-based direct observation tool that included a comprehensive 27-item checklist, an overall EPA rating, and prompts for both reinforcing and corrective comments. This tool had been studied in several settings with evidence for validity and generates, on average, five highly specific comments with a 3:2 ratio of reinforcing to corrective [11,[41][42][43]. All faculty agreed to participate. Two residents who were invited declined to participate. Intervention Design features of the mobile app and the quality of the assessment data generated have been reported in a prior study [29]. In brief, we developed the app for the iOS platform in Xcode, Apple's suite of software development tools. The app was written in the Swift programming language. Data were uploaded to Google's Firebase cloud service, and emailed directly to residents via a Firebase Cloud Function. To make the interface as intuitive and hassle-free as possible, we adhered closely to the iOS Human Interface Guidelines, a set of documents published by Apple whose aim is to improve the user experience. Iterative refinements were made based on field testing performed by the study authors and one other faculty member. A pilot study of the EPA app determined that faculty required 70 seconds to complete an assessment and that each assessment generated, on average, a single, behaviorally specific, high quality corrective comment [29]. The EPA app was implemented in August 2017. Faculty were asked to use the mobile app to complete one evaluation during each continuity clinic in which the resident saw at least one patient. 
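The article names the stack (a Swift client, Firebase storage, and a Firebase Cloud Function that emails the feedback) but does not include implementation code. Purely as an illustrative sketch of the equivalent server-side store-and-notify flow, the snippet below uses the Firebase Admin SDK for Python; the collection name, field names, and local mail relay are hypothetical and not taken from the study.

```python
# Illustrative sketch only: the study's app was written in Swift with a Firebase
# Cloud Function handling email delivery; this Python version with the Firebase
# Admin SDK mirrors the same store-then-notify flow. All names are hypothetical.
import smtplib
from email.message import EmailMessage

import firebase_admin
from firebase_admin import credentials, firestore

firebase_admin.initialize_app(credentials.ApplicationDefault())
db = firestore.client()

def record_epa_assessment(resident_email, faculty_id, epa, supervision_level, comment):
    """Store one direct-observation assessment and email a copy to the resident."""
    doc = {
        "residentEmail": resident_email,
        "facultyId": faculty_id,
        "epa": epa,                             # e.g. "diagnostic interview"
        "supervisionLevel": supervision_level,  # entrustment/supervision rating
        "comment": comment,                     # "one thing the trainee can do to advance..."
        "createdAt": firestore.SERVER_TIMESTAMP,
    }
    db.collection("assessments").add(doc)

    msg = EmailMessage()
    msg["Subject"] = f"EPA feedback: {epa}"
    msg["To"] = resident_email
    msg.set_content(f"Supervision level: {supervision_level}\nFeedback: {comment}")
    with smtplib.SMTP("localhost") as smtp:     # placeholder mail relay
        smtp.send_message(msg)
```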
After observing the patient encounter, the app required faculty to select the relevant EPA (in this context, either a "diagnostic interview" or a "medication management visit") and then complete a two-part assessment: 1) assign a level of supervision that the resident requires based on the single observation; and 2) provide a comment in response to the prompt "one thing the trainee can do to advance to the next level". Once the faculty completed the assessment, a copy was immediately emailed to the resident and faculty member. Approximately once a month, use of the app was monitored by study authors via a dashboard that summarized the number of completed assessments for each dyad. When the dyad had been inactive during the prior time interval, an email reminder was sent to the faculty, encouraging them to continue using the app. When a dyad was inactive for several time periods, the study authors reached out to the faculty member to see if they needed help with the app. Faculty development consisted of written instructions and a 30-minute one-on-one meeting to install and practice using the app. In addition, all faculty attended three 1-hour trainings prior to the start of the intervention on direct observation, EPA-based assessment, and narrative feedback, respectively. Residents also received a single 30-minute orientation to the EPA app and the expectations for its use. Interview content Separate interview guides were created for faculty and for residents. CFIR contains 26 constructs across the five domains. Not all constructs are relevant to a given context. Sample interview questions for each construct are available on http://cfirguide.org. The research team selected constructs and questions relevant to the EPA app implementation from four of the domains: intervention characteristics, individual characteristics, inner setting, and outer setting. The constructs from the process domain apply primarily to those who are responsible for the planning and execution of the program, i.e., the residency program and clinic leadership. Because our focus was on the faculty and resident experience, we did not interview the EPA app implementation leaders and therefore did not include any of the process domain constructs in the guide. The questions were then tailored to gather specific information relevant to how faculty and residents experienced the app. Following a pilot interview with a faculty member and with a resident, minor changes were made for clarity. Data collection Structured interviews were conducted by a study author (RS) from February to March 2019. We invited faculty members and residents in the intervention clinic to participate. Written informed consent was obtained from each participant. The study author explored participants' reactions to each question. Each interview was audiotaped, transcribed, and deidentified. Data analysis Anonymized transcripts were uploaded to Dedoose (version 8.2.14 for Windows) for data analysis and management. Two authors (RS, JS) conducted directed content analysis, a deductive process that applies an existing theory or framework to guide initial coding [44,45]. Transcripts were independently coded in iterations of two. Each transcript was segmented into excerpts, i.e., a linguistic unit (e.g., sentence or paragraph) that expressed a single idea. Each excerpt was then deductively assigned to one of the four CFIR domains used in this study (i.e., intervention characteristics, individual characteristics, inner setting, and outer setting). 
The same two authors then inductively assigned codes (e.g., technical interface or competing demands) to each excerpt. After independently coding each batch of two transcripts, two authors (RS, JS) compared how they segmented the data into excerpts and the CFIR domain and codes they assigned to each excerpt. Although interview guide questions were organized by CFIR domain, we coded excerpts independently of the domain under which the question was categorized in the interview guide. Differences were discussed with the lead author (JQY) until consensus was reached with respect to the excerpt segmentation, the assigned CFIR domain, the coding scheme itself, and the assignment of codes to a given excerpt. In subsequent meetings, all study authors examined the codes within each CFIR domain, identified relationships between the codes, combined codes into categories and then constructed themes relevant to each domain. After eight faculty and 10 resident interviews, three authors (RS, JS, JQY) all perceived that conceptual sufficiency had been reached, i.e., the codes appeared to capture the essence of the phenomenon without requiring further modification [46]. The first author was also the program director for the psychiatry residency in which this intervention took place. Participants were informed of the first author's role. Several steps were taken to ensure that faculty and residents participated voluntarily and openly. The first author did not participate in recruitment, interviews, or initial coding. He only participated in coding when the two primary coders disagreed and only viewed excerpts that had been de-identified. Reflexivity The study authors are all engaged in assessment in graduate medical education and have experienced the challenges of gathering WBA data via paper forms. We anticipated that smartphone-based apps for competency-based assessment may offer a much easier interface, especially for faculty. This positive bias could have influenced the analysis. We also expected that the smartphone-based platform may result in fewer and perhaps lower quality comments compared with the paper forms we had used in the past. This assumption had the potential to provide a negative bias to the analysis. To manage the influence of these assumptions on our analysis, study team members were asked to routinely reflect on their assumptions and to verbalize how they may be affecting the process of creating codes and themes. Results Below, we describe the major themes within each CFIR domain and provide exemplar quotes to illustrate how the themes were communicated. The perceived advantages and disadvantages of the EPA app compared with the pre-existing paper-based assessment tool are described in each relevant CFIR domain. CFIR domain-Intervention characteristics This refers to how participants perceived the quality of the app, including design and ease/difficulty of use, and positive and negative effects of the app. Ease of use: Faculty felt the app was easy to use and intuitive, from initial setup to routine use. Faculty experienced few, if any, bugs or technical problems. One faculty commented: "I'm not very good with the phone . . . I found [EPA app] easy to use. I had no issues with it." (Faculty_3) No faculty member cited an aspect of the app that was confusing or frustrating. Similar to faculty, residents experienced no technical challenges. Faculty and residents preferred the electronic format over paper. 
For faculty, the paper-based forms required remembering to bring the form to an observation and then to submit it once completed, while residents cited the hassles of storing and retrieving the completed paper forms. Feedback timeliness and frequency: All faculty and residents reported that the EPA app facilitated timely and frequent feedback. Faculty attributed this impact to the quickness and ease with which an assessment could be completed: "[EPA app] made giving feedback still formal and objective, but also quicker, so you could spend more time interacting with the resident and doing more verbal feedback . . . it allowed for more face-to-face feedback . . . " (Faculty_7) With regards to ease of use and time to complete, faculty much preferred the EPA app to the longer, paper-based assessment tool: "I think [EPA app] is much more user friendly and much more likely to be completed and much more efficient [. . .]" Feedback quality: Most faculty appreciated that the app prompted only for corrective (and not reinforcing) narrative feedback. In particular, faculty described how the corrective prompt served as a "forcing function" to do the hard work of constructing such feedback: "It's helpful because every interaction there's usually at least one area for improvement and this forces you to identify that." (Faculty_5) A few faculty felt discomfort with not also having a prompt for reinforcing feedback; they worried that their feedback may be misperceived as discouraging or unsupportive. All faculty described how the act of constructing written feedback within a smartphone-based app resulted in much briefer feedback compared with the paper-based tools where they might write multiple feedback points, each one with considerably more narrative. Faculty perceived this design feature to be beneficial as it required them to distill their feedback into a single, brief point. Some faculty thought a single, brief point may even be more beneficial than several, longer points. However, for both faculty and residents, there were drawbacks to the concise, succinct, and easy-to-digest characteristics of the EPA app's written feedback. Both groups believed that the EPA app, compared with the longer paper-based checklist with prompts for both reinforcing and corrective comments, generated feedback that was less nuanced, comprehensive, and balanced. One resident said: "I guess the app feedback is less detailed and that made it not as helpful." (Resident_3) Another commented: "I'm not sure the app asks for something positive for the comments, which means in some senses the feedback's a little less thorough." (Resident_2) Similarly, a faculty member noted: "I think the longer, paper-based form is more effective, because it's more comprehensive." (Faculty_7). Both faculty and residents resolved these tradeoffs with the notion that an ideal assessment program would include both types of assessments, as captured by these excerpts: "I think something would be lost if only one was used to the exclusion of the other. I think it might be an ideal mix of primarily using the phone because of its ease of use and it's the easy way of generating a lot of data, but then periodically doing the paper one because it reminds us of some trees, not just the forest." 
(Faculty_1) Some faculty wondered if the more comprehensive paper-based tool might be especially preferable early in a resident's training or a faculty's teaching, when both may benefit from the checklist which makes explicit the standards of competency and may facilitate more specific feedback. CFIR domain-Characteristics of individuals This refers to how the users' own beliefs or characteristics may affect the intervention. These characteristics include how they use the app in practice and their confidence in doing so. Faculty used the app shortly after providing verbal feedback to the resident, either in the presence of the resident, or shortly after they left, typically no more than 20 minutes after observing the encounter. Confidence and excitement: Most faculty and all residents had a positive emotional reaction to using the app. Faculty had a high level of confidence in using the app, regardless of their general level of confidence with technology: "Since I've had smartphones and have been using apps for a bunch of years now, I'm fairly comfortable with new technology in general." (Faculty_7) Residents expressed a similar positive disposition to use of an electronic rather than paper format: "I come from a generation where everything is done electronically . . . it's easier for us to access that because that's what I grew up with." (Resident_6). Use of the EPA app in the presence of patients: Some faculty expressed discomfort with the use of the EPA app during a patient visit. These faculty worried that patients would perceive use of the EPA app as lack of interest, not paying attention, or even rudeness: "I found it very weird being on my phone with a patient in the room. Computer's one thing because it's the electronic medical record . . . but the phone just feels rude because people don't know what I'm looking at." (Faculty_4) Instead, most faculty took brief notes on paper during the patient encounter so they could remember key points and examples when providing the verbal and written feedback. In this respect, faculty found the pre-existing paper-based tool more seamless. Residents did not comment on this. Resident engagement with the feedback: All residents appreciated receiving the written feedback electronically soon after the patient encounter. While some faculty assumed that residents would return to the feedback in order to see how they were progressing, almost all of the residents indicated that they never looked at the emailed feedback after the initial view. "I may be looked at it for like a second, however long it takes to read a sentence . . . then I probably would have deleted it." (Resident_8). CFIR domain-Inner setting In this study, the inner setting referred to the ease of implementation within the clinic itself. Clinical demands: Most faculty identified clinical demands as the main barrier to implementation. Faculty had no clinical obligations of their own when precepting the residents; this feature of the program facilitated the direct observation and assessment activities. Yet faculty still experienced interruptions and, more significantly, clinical demands from the resident's panel could impact engagement with the EPA app (e.g., a patient with a brief appointment presents with unexpected acuity, leading to a backlog in the resident's clinic). When faculty experienced competing demands, they prioritized verbal feedback over completing the assessment in the app. 
These perceptions are represented well by the following two excerpts: CFIR domain-Outer setting In this study, the outer setting refers to how the app did or did not meet the needs of the hospital's clinical enterprise and the department's educational program. Organizational values: A majority of faculty and residents felt that the app was a good fit with the values and norms of the organization, as well as their own values and norms. Faculty and residents perceived the organization as prioritizing innovation in clinical and educational practice. A faculty member commented: "I think [EPA app] does fit with the value of being a forward-thinking, progressive kind of educational environment . . . " (Faculty_3) Similarly, a resident observed: "I think it fits well. We value high quality education, learning, research, making changes in residency." (Resident_8) Many cited that the department had established a clear expectation that supervising faculty should use the mobile app or other assessment instruments. Moreover, faculty and residents described how the EPA app aligned especially with the department's visible efforts to use digital technology to improve access to high quality care (e.g., smartphone-based cognitive behavioral therapy). Discussion This study identified enablers and barriers to engagement with the EPA app that have implications for future iterations of this and other EPA apps (Tab. 1). A number of enabling factors were identified. Both faculty and residents found the app easy to use, glitch-free and efficient in facilitating feedback soon after the observed patient encounter. None of the participants experienced the EPA app as burdensome or difficult to navigate, a common complaint of online WBA tools [47,48]. Faculty appreciated how the EPA app forced them to distill their feedback into a single point. Both faculty and residents expressed positive affective reactions. These important enabling factors highlight the critical importance of the design process used in developing an assessment app. The design of the EPA app followed user-interface guidelines and prioritized a simple and efficient user interface, which meant accepting certain compromises such as not collecting information about patient complexity, only having a single text box for corrective comments, and placing detailed anchor language in information buttons to reduce the text on the main screens. The design process also incorporated revisions based on feedback from testing sessions with several faculty members. In addition, the protected time for faculty to observe and complete the assessments without clinical obligations of their own was crucial. The barrier of competing demands has been consistently reported in studies of other workplace-based assessment strategies [12,13,15,19]. These findings have led many to advocate for models that provide faculty with dedicated (compensated) time for direct observation and feedback [12,13,49]. This recommendation, as implemented in our study, facilitated engagement with the EPA app. Moreover, faculty and residents perceived the app as aligned with the hospital's and department's philosophy, which illustrates the importance of linking any EPA app implementation to the underlying values and sources of pride in an organization. However, several important barriers to engagement were identified. Most faculty expressed inadequate understanding of the scale or framework. 
Prior research has identified inadequate understanding of the performance dimensions and frame of reference as dominant problems in WBA programs [13]. While proponents of the EPA framework contend that EPAs are intuitive for clinical faculty compared with the milestones framework, the faculty in this study applied the EPA framework inconsistently. Despite expressing satisfaction with the training, faculty evidently required more training and support over and above the 30-minute orientation to the app and the three one-hour trainings which covered EPAs and entrustment scales [50].
Table 1 Facilitators and barriers to engagement with the EPA app
CFIR domain: Intervention characteristics
Facilitators: - Sufficient training prior to use - Few, if any, technical challenges - EPA app intuitive and easy to use, especially compared with paper-based assessment tools - Feedback timely and frequent - Feedback quality high: behaviorally specific and salient - User interface forced succinct feedback with a single take-home message for the resident
Barriers: - Residents and faculty see the value of assessment tools (such as the paper-based form also used in the clinic) which generate more comments that are more detailed, nuanced, and comprehensive - The absence of a checklist, while making the app easier to use, led to less systematic observation and feedback - No reinforcing comments - Most faculty did not understand the entrustment scale and/or the EPA framework - Faculty prefer paper forms for discreetly jotting down feedback points while observing
This is consistent with the general finding in WBA research that repeated trainings are necessary to establish and maintain a shared mental model among raters [51,52]. This may serve as a caution not to underestimate the effort and time it takes for faculty to learn how to use the EPA framework. Even if the EPA framework is easier for clinical faculty to initially grasp, how faculty use the framework in practice may be problematic. In addition, some faculty did not use the EPA app as frequently as intended due to clinical demands that took precedence. In the setting of our study, it is somewhat surprising that this barrier persisted, though perhaps to a lesser extent than reported in other studies, given how seemingly "little" time the EPA app takes (70 seconds on average), how high the enthusiasm for the app was, and that faculty had no other obligations when supervising their resident in the continuity clinic. Even with the faculty time protected, faculty still were interrupted with concerns related to their panel of patients. Moreover, demands of patients from the residents' own panels also disrupted the direct observation and feedback. For example, if a resident were fully booked and then had an unscheduled acute patient present, the faculty and resident would dispense with the feedback. Neither of these disruptions was anticipated. This highlights how, even with faculty protected time, faculty may still encounter significant interruptions and, in addition, steps (e.g., longer appointment times or blocked-off appointment slots) must be taken to establish buffers within the residents' clinics against unexpected demands that might interfere with feedback conversations and app completion. Moreover, residents reported that they rarely, if ever, referred to the emailed feedback after an initial brief review. This contrasts with the faculty expectation that the emailed feedback would be revisited at future time points. 
This finding is concerning and represents a significant threat to the impact on learning and, ultimately, the validity of a competency-based assessment program such as the EPA app [49]. Providing feedback, even if purely formative, is not enough to stimulate growth. Learners must review, reflect, discuss, and apply the feedback [49,[53][54][55]. Yet, medical students and residents typically are not self-regulated learners who engage in reflection and self-improvement of their own accord, a finding seen in both formative and summative assessment [56,57]. Two interventions seem relevant. Aggregating and visualizing the performance data onto a dashboard may help trainees perceive trends and more easily find value in re-visiting feedback they have received over time [54]. In addition, and more importantly, residents may need longitudinal coaches that create a safe place in which they learn how to identify growth edges and set action plans [49,54]. Finally, while faculty and residents appreciated the concise, single-point feedback facilitated by the EPA app, both also noted the value of the pre-existing systematic, paper-based tool that generated more comprehensive, balanced, and nuanced feedback. This outcome stemmed from the intentional design decision to limit the assessment to a single rating and a single text box in order to maximize efficiency and ease of use. Longer direct observation tools have been shown to generate multiple comments per observation [43,48,58]. We do not know what the optimal number of comments is from a learning and behavior change perspective, but this finding suggests that an overall program of workplace-based assessment may want to include a mix of assessment tools that generate both brief and more detailed comments. Moreover, this finding raises several questions about the design of the EPA app, such as whether a comprehensive checklist and/or a second comment field that prompts for reinforcing feedback should be added. We would recommend adding the second narrative field but are reluctant to add a checklist, especially a 27-item one, which would make completion of an assessment much more burdensome, especially on the smaller screen of a smartphone. Limitations of this study include a small sample size from a single outpatient clinic at a single institution. Implementation barriers and enablers are inevitably related to local contextual factors. The lessons from this study may not be generalizable. At the same time, many of the findings are congruent with studies of other types of mobile apps, which makes us more confident in our findings. In addition, the interview questions and coding processes were shaped by a specific theoretical framework that may not have captured important dimensions of the EPA app experience. However, we believe this approach was appropriate given our focus on the implementation enablers and barriers. Conclusion In summary, this qualitative study using the CFIR framework identified key enablers and barriers to faculty and resident engagement with the EPA app. The findings support ease of use and utility but also highlight important barriers such as competing demands, variable faculty understanding of the assessment framework, lack of resident use of the feedback beyond initial receipt, and salient tradeoffs when comparing comments generated by the app versus longer, more detailed paper forms. Educators should utilize app development guidelines that optimize the user interface. 
Future research and implementation efforts should especially focus on how best to train faculty and to catalyze residents to engage in ongoing review and reflection with the support of a coach. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
7,421.4
2020-06-05T00:00:00.000
[ "Education", "Medicine", "Computer Science" ]
Development of an individualized risk calculator of treatment resistance in patients with first-episode psychosis (TRipCal) using automated machine learning: a 12-year follow-up study with clozapine prescription as a proxy indicator About 15–40% of patients with schizophrenia are treatment resistant (TR) and require clozapine. Identifying individuals who have a higher risk of developing TR early in the course of illness is important to provide personalized intervention. A total of 1400 patients with first-episode psychosis (FEP) enrolled in the early intervention for psychosis service or receiving the standard psychiatric service between July 1, 1998, and June 30, 2003, for the first time were included. Clozapine prescriptions until June 2015, as a proxy of TR, were obtained. Premorbid information, baseline characteristics, and monthly clinical information were retrieved systematically from the electronic clinical management system (CMS). Training and testing samples were established with random subsampling. An automated machine learning (autoML) approach was used to optimize the ML algorithm and hyperparameter selection to establish four probabilistic classification models (baseline, 12-month, 24-month, and 36-month information) of TR development. This study found 191 FEP patients (13.7%) who had ever been prescribed clozapine over the follow-up period. The ML pipelines identified with autoML had an area under the receiver operating characteristic curve ranging from 0.676 (baseline information) to 0.774 (36-month information) in predicting future TR. Features of baseline information, including schizophrenia diagnosis and age of onset, and longitudinal clinical information including symptom variability, relapse, and use of antipsychotics and anticholinergic medications were important predictors and were included in the risk calculator. The risk calculator for future TR development in FEP patients (TRipCal) developed in this study could support the continuous development of data-driven clinical tools to assist personalized interventions to prevent or postpone TR development in the early course of illness and reduce delay in clozapine initiation. INTRODUCTION About 15-40% of patients with schizophrenia are considered to have treatment-resistant schizophrenia (TRS) [1][2][3] and were found to have 3- to 11-fold higher direct healthcare costs [4,5], as well as poorer functional outcomes [1,6]. Clozapine is among the most effective antipsychotics for TRS patients [7] and is considered the first-line pharmacological treatment for TRS in many countries [8]. Despite its efficacy, there are often years of delays in clozapine initiation with multiple antipsychotic trials prior to clozapine initiation [9,10], which was found to be related to poor response to clozapine [1,11]. Identification of patients who are at higher risk of developing treatment resistance (TR) may reduce the delay of clozapine initiation. Though about 22% of patients are considered to be TR in their first episode of illness [12], which is likely to have distinctly different mechanisms than TR that develops after multiple episodes [13], the median time of TR development is up to 10 years [14,15]. Dopamine hypersensitivity has been suggested as a possible mechanism in the development of TR [16]. Therefore, identification of individuals who have a higher risk of developing TR, particularly in the early stage of the illness, would be the first step to facilitate personalized and targeted interventions to prevent or postpone the development of TR. 
Though multiple factors have been explored in prospective studies as possible predictors of TRS, only 12 studies were identified in a recent systematic review, which found early age of onset to be the most consistently reported predictor [17,18]. About half of the included studies had a follow-up period of five years or less. Use of integrated prediction models in TRS prediction has been advocated [19]. However, there are only four studies attempting to develop a prognostic prediction model to predict TR development using machine learning (ML) methods [20][21][22][23], with three being in patients with first-episode psychosis (FEP) [20,21,23]. Most studies used LASSO logistic regression or forced-entry models with an area under the curve (AUC) ranging from 0.59 [21] to 0.67 [23]. These studies are initial attempts to establish a predictive model using ML approaches, and results suggest that more advanced ML models may be needed to improve prediction performance. Most of these studies had a moderate follow-up period (<5 years) that might have restricted the predictive performance of the established model. Furthermore, these studies and other general studies on the predictors of TRS only included demographics and baseline information without considering treatment outcomes and clinical characteristics during the early stage of the illness, which have been related to the development of TR [1,17]. With few previous studies, it is difficult to determine the optimal ML model to be used. Therefore, to develop a data-assisted clinical tool to estimate individual risks of TRS development, a larger pool of state-of-the-art ML models should be considered. Automated machine learning (autoML) is a process that automates the tasks of applying machine learning, including optimizing algorithm selection and hyperparameter optimization, to maximize the predictive performance of the model. Clozapine prescription is only recommended for TRS patients in most countries and regions, including Hong Kong [8], and has been considered a proxy for TR status in many population-based studies [21,24]. Therefore, the current study used clozapine initiation as a proxy of treatment resistance status. The aims of the current study are to establish a prediction model of future clozapine use, a proxy of TR development, among the FEP population over 12-17 years of follow-up using clinical information at baseline and over the initial three years of treatment with autoML. Prediction models with baseline, 12-month, 24-month, and 36-month information were established separately. An individualized risk calculator for treatment resistance development in FEP patients (TRipCal) was established using the significant features identified with the autoML model. Results of the current study may provide support to the development of personalized interventions in improving outcomes of patients with FEP. Data source and study sample The sample of this study was originally included in a study comparing three-year outcomes of patients with first-episode psychosis (FEP) who were treated by early intervention services (EIS) for psychosis and those who received standard care services (SCS) [25]. A total of 700 FEP patients who were consecutively enrolled in the EIS [26] between July 1, 2001 and
Fig. 1 Overview of probabilistic classification model workflow. A We split our data using random subsampling. The former approach was the main analysis of the current study. We randomly split the participants into train (75%) and test data sets and repeat this procedure 100 times to obtain a stable performance. The latter approach aims to examine the generalization of our models. B AutoML was implemented in Python using the TPOT package. C A bagging procedure is added when re-training the best model from autoML. D Calibration was performed using Platt scaling. E Evaluation of the performance on test data includes the area under the receiver operating characteristic curve (AUROC), decision curve analysis and feature importance.
were excluded from the initial case identification. Detailed medication history of all patients (N = 1400) from their first service visit (EIS or SCS) to June 2015 (follow-up period: 12-17 years) was retrieved from the centralized electronic hospital database (Clinical Management System [CMS]). After excluding 2 patients with missing data for clozapine use, we identified 191 out of 1398 patients (13.7%) who had ever received clozapine prescriptions during this follow-up period. The CMS is an electronic clinical record system of the Hospital Authority (HA) in Hong Kong which covers over 90% of the psychiatric care of severe mental illness patients [27]. All inpatient and outpatient clinical information, including hospitalization, consultation records and medication prescriptions, is included in the CMS. Institutional ethical approval was obtained from all Hong Kong hospital clusters for the current study. Data analysis and development of the calculator were conducted between December 2022 and March 2023. Outcomes and features Clozapine use was considered a proxy indicator of TR and the outcome in the current study. All features were obtained from case notes of each enrolled patient using a standardized CMS data entry form [28]. 
Features of interest at baseline included age at first service contact with the EIS or SCS, sex, years of education, any life events prior to the service entry, smoking status, diagnoses, age of illness onset, received EIS or not, duration of first episode, length of hospitalization at first episode, duration of untreated psychosis (DUP), suicide attempts (SA), non-suicidal self-injury (NSSI) during DUP and presence of psychiatric comorbidities. Furthermore, the clinical notes of patients were examined to summarize monthly clinical features including symptoms, functioning, other clinical information, and medication use for the first three years of clinical services. Symptom features included positive and negative psychotic symptoms assessed by the Clinical Global Impressions-Schizophrenia (CGI-SCH) scale [29] and depressive symptoms measured by the Clinical Global Impression scale (CGIS) [30]. Social functioning of patients was assessed by the Social and Occupational Functioning Assessment Scale (SOFAS) [31]. These variables were further summarized into the mean and the mean of the squared successive differences (MSSD) [32]. Other clinical information included SA, NSSI, substance abuse, Accident & Emergency visits, outpatient department visits, hospitalization, default from outpatient appointments, and relapse. Medication and intervention features included the daily defined dose (DDD) of antipsychotic medication [33], and whether anticholinergics, antidepressants, benzodiazepines, mood stabilizers, or electroconvulsive therapy were prescribed. Information on types of antipsychotics and daily dose was used for DDD calculation, and the monthly average DDD of antipsychotics was determined. Operational definitions of the features and the quality assurance of the data, including interrater reliability, are in the supplementary documents. Model development and validation followed the guidelines of Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD, Table S1). An automated machine learning (autoML) approach was implemented to automate the selection of ML algorithms and hyperparameters using the Tree-based Pipeline Optimization Tool (TPOT) package in Python [34,35]. TPOT optimizes the search process for prediction models through employing genetic programming and evolutionary algorithms (see http://epistasislab.github.io/tpot/ for more details). To develop the probabilistic classification models, the sampling process was stratified based on the outcome variable (i.e., clozapine use), with the sample being randomly divided into training (75%) and testing (25%) sets (Fig. 1A). Missing data were imputed using median replacement and the features were standardized by subtracting the mean and scaling to unit variance in the training data. The derived preprocessing steps were then applied to the test data. TPOT was set to run for 10 generations with a population size of 50 pipelines (Fig. 1B). For each generation, the best model was selected based on the area under the receiver operating characteristic curve (AUROC), evaluated through 5-fold cross-validation (CV) within the training data. The training data were re-fitted with the best model plus bagging and calibration procedures (Fig. 1C, D) to minimize overfitting and improve out-of-sample model performance.
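To make the workflow concrete, the following Python sketch reproduces a single repetition of the procedure just described (the study repeats it over 100 random subsamples). It is an illustrative approximation, not the authors' code: the file name and column names are hypothetical, and any TPOT or scikit-learn settings beyond those reported in the text are assumptions.

```python
# Minimal sketch: stratified 75/25 split, median imputation + standardization fit on
# the training data only, TPOT search optimizing AUROC with 5-fold CV, then bagging
# and Platt-scaling calibration of the best pipeline. One repetition of the study's
# 100-repeat procedure; data file and column names are hypothetical.
import pandas as pd
from sklearn.base import clone
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import BaggingClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tpot import TPOTClassifier

df = pd.read_csv("fep_features_36m.csv")              # hypothetical feature table
X, y = df.drop(columns=["clozapine_use"]), df["clozapine_use"]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Preprocessing is derived from the training data, then applied to the test data
imputer = SimpleImputer(strategy="median").fit(X_tr)
scaler = StandardScaler().fit(imputer.transform(X_tr))
X_tr_p = scaler.transform(imputer.transform(X_tr))
X_te_p = scaler.transform(imputer.transform(X_te))

# autoML search: 10 generations, population of 50 pipelines, AUROC via 5-fold CV
tpot = TPOTClassifier(generations=10, population_size=50,
                      scoring="roc_auc", cv=5, random_state=0, verbosity=0)
tpot.fit(X_tr_p, y_tr)

# Re-fit the best pipeline with bagging, then calibrate with Platt scaling (sigmoid)
bagged = BaggingClassifier(clone(tpot.fitted_pipeline_), n_estimators=10, random_state=0)
calibrated = CalibratedClassifierCV(bagged, method="sigmoid", cv=5).fit(X_tr_p, y_tr)

p_te = calibrated.predict_proba(X_te_p)[:, 1]
print("AUROC:", roc_auc_score(y_te, p_te), "Brier:", brier_score_loss(y_te, p_te))
```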
To obtain a stable performance and avoid overfitting to a particular subsample, we repeated the above procedure 100 times. For each repeat, the AUROC, calibration performance measured by the Brier score, decision curve analysis and feature importance were calculated (Fig. 1E). Detailed approaches to decision curve analysis are in the supplementary methods. The overall model performance was obtained by averaging AUROC and Brier scores across 100 repetitions. We ranked the feature importance for each repetition, as different algorithms may be selected in each repetition. In scikit-learn, feature importance represents the relative importance of each feature in a trained model for predicting a target variable. The average feature importance rank for each variable was calculated across 100 repetitions for further interpretation. Four probabilistic classification models were developed by incorporating features of different durations (i.e., baseline or first month, 12, 24 and 36 months). For each model, we removed patients with clozapine use within the period of the features (baseline n = 1398; 12-month n = 1387; 24-month n = 1379; 36-month n = 1363). Finally, the features were reduced to a reasonable number by refitting the data using the above procedures with the top 10, 15 or 20 features and comparing their performance to determine an optimal number of features. A risk calculator was developed with the optimal number of features to calculate predicted probabilities of future clozapine use of FEP patients. Sample characteristics Table 1 displays the comparison of basic demographics between patients with and without clozapine use. In general, patients with clozapine use, compared with their counterparts, had a younger age at first service contact and a lower education level, were more likely to have a schizophrenia diagnosis, and had a younger age of illness onset. The mean duration from the first service contact to the first use of clozapine was 83.9 months (7 years) (SD = 48.9, median = 76.7, range = [2.17, 201.2]). Probability classification and predicted probability Figure 2A shows the distribution of the AUROC with the mean and standard deviation (SD) over 100 repeated random subsamples. Figure 2B shows that each model had a low Brier score (<0.12), suggesting moderate to good agreement between observed and expected risk. The Brier scores were 0.113 (SD = 0.0027, 95% CI = [0.113, 0.114]) for the baseline model, 0.105 (SD = 0.0033, 95% CI = [0.105, 0.106]) for the 12-month model, 0.0994 (SD = 0.0039, 95% CI = [0.0986, 0.100]) for the 24-month model, and 0.0906 (SD = 0.0034, 95% CI = [0.0899, 0.0913]) for the 36-month model. Longer longitudinal information improved the Brier scores of the probabilistic predictions (Kruskal-Wallis χ2 = 346.1, df = 3, p < 2.2e-16). Figure 2C shows that all models outperformed the two extreme strategies of intervening in all or none of the patients, as indicated by the higher net benefits. Generally, the models with longer longitudinal information had better performance in terms of net benefits. The performance of the models at various thresholds is presented in Table 2 (see the Supplementary material for examples). The average feature importance rank of each variable over 100 repeated random subsamples is displayed in Fig. 2D.
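The decision curves in Fig. 2C compare each model's net benefit against the "intervene in all" and "intervene in none" strategies across risk thresholds. As a reference, here is a minimal sketch of the standard net-benefit calculation; this is not the authors' code and the variable names are illustrative.

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Standard decision-curve net benefit of a model at a given risk threshold."""
    y_true = np.asarray(y_true)
    predicted_pos = np.asarray(y_prob) >= threshold
    n = len(y_true)
    tp = np.sum(predicted_pos & (y_true == 1))
    fp = np.sum(predicted_pos & (y_true == 0))
    return tp / n - fp / n * threshold / (1.0 - threshold)

def net_benefit_treat_all(y_true, threshold):
    """Net benefit of intervening in all patients (treating none always scores 0)."""
    prevalence = float(np.mean(np.asarray(y_true)))
    return prevalence - (1 - prevalence) * threshold / (1.0 - threshold)

# Example: compare a model's curve against treat-all/treat-none over a threshold grid
# y_te, p_te = ...  (test labels and calibrated predicted probabilities)
# for t in np.arange(0.05, 0.35, 0.05):
#     print(t, net_benefit(y_te, p_te, t), net_benefit_treat_all(y_te, t))
```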
For the baseline model, the most important features were age at first service contact, schizophrenia diagnosis, age of onset, duration of first episode, days of hospitalization during the first episode, days of DUP and DDD at baseline. For the 12-, 24- and 36-month models, longitudinal features, including the number of months with relapse (Relapse [sum]), mean DDD, and the number of months of anticholinergic use (Anticholinergic [sum]), were the most important features. The mean and MSSD of positive symptoms and SOFAS, as well as polypharmacy, were also important features. Figure 3 shows that patients with a higher predicted probability had a higher chance of clozapine use above a threshold of 0.1 for all the models. With a progressively higher predicted probability, the proportion of clozapine use in patients increased. These patterns again suggested that our models were able to differentiate patients with and without future clozapine use. We evaluated our models with only the top 10, top 15, or top 20 features selected by feature importance. Results suggested that models with the top 10 features performed similarly in terms of AUROC and Brier score compared to the model with all features (Fig. S1). The baseline model with the top 10 features performed slightly better than that with all features. Therefore, the final probability calculator was developed using only the top 10 features with all our samples. A description of the calculator program can be found in the supplementary materials. DISCUSSION In this population-based cohort study using intensively collected clinical data over 12 years in Hong Kong, we developed an individualized risk calculator to predict clozapine prescription, a proxy for TR status, using autoML. About 13.7% of FEP patients were prescribed clozapine, similar to a previous population-based cohort study using Danish registry data (13.2%) [24]. The ML models identified future clozapine users with AUROC ranging from 0.676 (with baseline information) to 0.774 (with 36-month information). The AUROCs of models with information across more than 12 months were all over 0.7, suggesting that the models with longitudinal clinical information have an acceptable prediction ability. Models with the top 10 features were found to have similar performance in terms of AUROC and Brier score compared to the full model and were thus used to establish the individualized risk calculator for development of TR in FEP (TRipCal). 
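As an illustration of how such a calculator is typically applied to a new patient, the sketch below scores one case with a fitted, calibrated model. The ten feature names are placeholders chosen for readability; they are not the study's actual top-10 list.

```python
import pandas as pd

# Placeholder feature names for illustration only; not the study's top-10 list.
TOP10_FEATURES = [
    "age_first_contact", "schizophrenia_dx", "age_of_onset",
    "first_episode_duration_days", "first_episode_hospital_days",
    "dup_days", "mean_ddd", "relapse_months_sum",
    "anticholinergic_months_sum", "positive_symptom_mssd",
]

def predict_tr_risk(calibrated_model, patient: dict) -> float:
    """Return the calibrated predicted probability of future clozapine use (TR proxy)."""
    row = pd.DataFrame([patient], columns=TOP10_FEATURES)
    return float(calibrated_model.predict_proba(row)[:, 1][0])

# Usage (assuming `calibrated` was trained on these same ten columns):
# risk = predict_tr_risk(calibrated, {
#     "age_first_contact": 19, "schizophrenia_dx": 1, "age_of_onset": 18,
#     "first_episode_duration_days": 120, "first_episode_hospital_days": 30,
#     "dup_days": 90, "mean_ddd": 1.2, "relapse_months_sum": 2,
#     "anticholinergic_months_sum": 6, "positive_symptom_mssd": 0.8,
# })
```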
Our models performed better than previous attempts at using machine learning approaches to predict the development of TR in psychosis [21]. It is likely that the autoML allows for the optimal selection of the machine learning pipelines and hyperparameter optimization, and thus better handles the more complex real-life prediction needs of the psychiatric population. Furthermore, models with longitudinal clinical information performed better. These findings highlighted that longitudinal clinical information reflecting the dynamics of clinical characteristics and medication treatment over time may be more powerful in predicting the development of TR. The increased use of electronic health records (eHR) and the development of technology such as natural language processing for extracting relevant clinical information from the eHR would allow the automated use of longitudinal clinical information in individual risk calculators. This effort could develop into a data-driven clinical assistant system to support clinicians in tailoring individual patient interventions to postpone or prevent TR development as well as reduce delay in clozapine initiation. Some of the predictive features identified in the current study are in line with previous studies [17,18], such as younger age of onset, schizophrenia diagnosis and relapse [1,17,18]. Duration and hospitalization of the first episode as well as the average DDD were found to be prominent features of the baseline prediction model. Having a poor response to first-line antipsychotics early in treatment may reflect a different dopamine system function and would be an important indicator in predicting future TR development. This is aligned with findings of neuroimaging studies that patients with TRS have normal dopamine synthesis capacity [36,37]. The significant role of the DDD of antipsychotics and the number of relapses in predicting future TR status suggests the possibility of dopaminergic hypersensitivity as one of the mechanisms of development of TR [16,38]. One notable finding of the current study is the increasing significance of the use of anticholinergic drugs in predicting future TR status. This may reflect the use of high antipsychotic doses leading to more extrapyramidal side effects, and thus more use of anticholinergic medications. On the other hand, the loss of cholinergic neurons has been hypothesized as a possible pathogenesis of tardive dyskinesias, antipsychotic hypersensitivity and refractory status to antipsychotic treatment in patients with schizophrenia in earlier reports [39,40]. Studies of other cohorts would be needed to replicate these risk factors of TR. 
One of the key limitations of the study is the use of clozapine as a proxy for TR. Clozapine may be used to alleviate other conditions such as recurrent suicidality [41] and tardive dyskinesia [42]. However, over 90% of patients who were prescribed clozapine were considered to have fulfilled the criteria of TRS [1,43]. Furthermore, there are also individuals who had TR but were not on clozapine, estimated to be about 4% in our previous study of similar follow-up duration [1]. This group might impact the performance of the model development. Patients with a wide range of baseline FEP diagnoses were included in this study, though 87% of patients on clozapine had a baseline diagnosis of schizophrenia. This approach, though not focusing specifically on a schizophrenia diagnosis and thus limiting the interpretation of the results from a theoretical perspective, may have better translational value as results could be more readily integrated into the current practice of FEP services. Future larger-sample studies could focus on the examination of predictors of treatment-resistant schizophrenia, particularly the possible differential predictors of TRS in the first episode and those after multiple episodes. The quality of data retrieval, particularly for clinical symptoms, depended on the quality of the clinical records and could have contributed to information bias. Third, this study cohort has a limited age range and a relatively low rate of comorbid substance use. Therefore, results might not be generalizable to other populations; validation studies with cohorts of different countries and characteristics are needed. Finally, a lack of external validation may limit the generalizability of the trained models. Future effort should focus on collecting additional data from diverse sources to validate the model's performance and ensure its robustness and applicability in real-world scenarios.
Fig. 3 Frequency distributions of predicted risks for individuals with and without clozapine prescription (true or false) for the baseline, 12-, 24- and 36-month probability classification models. As the predicted risk increases from 0.10 or higher, there is a proportional increase in the number of individuals with clozapine use compared to those without in each subsequent risk class.
In conclusion, our study presented the development of a risk calculator of future clozapine use, a proxy of TR, in FEP patients (TRipCal) over 12-17 years, using both baseline and longitudinal clinical information in the first 36 months of treatment. This work demonstrated the importance of longitudinal clinical information in predicting future TR development with acceptable accuracy using the autoML approach, and thus the possibility of establishing data-driven tools to assist clinicians in the earlier detection of individuals at higher risk of future TR development. The individual calculator developed using the top 10 features identified in the current study could be used to personalize interventions to prevent or postpone TR development and reduce the delay of clozapine use. Future validation studies in different populations and settings are required.
Fig. 2 Performance of the baseline, 12-, 24- and 36-month probability classification models. A Area under the receiver operating characteristic curve (AUROC). B Brier score. C Decision curve. The results indicate that models with longer longitudinal information performed better. D The average feature importance rank was calculated across 100 repetitions of the autoML procedures, where a higher value indicates greater importance of a feature. Dx diagnosis, EIS early intervention service, DUP duration of untreated psychosis, SA suicide attempts, NSSI non-suicidal self-injury, OCD obsessive-compulsive disorder, SOFAS Social and Occupational Functioning Assessment Scale, A&E accident & emergency, OPD outpatient departments, DDD daily defined dose, ECT electroconvulsive therapy.
Table 1. Sample characteristics of first-episode psychosis (FEP) patients with clozapine use compared to those without. Bold values are significant after the false discovery rate correction for multiple testing. a Mean (SD); n (%). b Welch two-sample t-test; Pearson's chi-squared test. c False discovery rate correction for multiple testing.
Table 2. Performance measures for a range of dichotomous risk score cutoffs.
4,947.6
2024-01-22T00:00:00.000
[ "Medicine", "Computer Science" ]
Threat Modelling of Cyber Physical Systems: A Real Case Study Based on Window Cleaning Business This paper threat models a cyber-physical system built on cloud infrastructure to monitor and manage the window cleaning operation, using Window Cleaning Warehouse as a case study. It focuses on IoT data collection and cloud infrastructure security and the connections with the cyber-physical system. External dependencies and trust levels are defined before using trust boundaries and data flow diagrams to highlight attack surfaces. Expected scenarios from the data flow diagrams are discussed to identify violations of the intended use of the system using STRIDE threat classification. A risk assessment of assets that may be of interest to an adversary aids the discovery of further security risks, which are then prioritised using the DREAD methodology. The results of the research present a comprehensive breakdown of vulnerabilities associated with IoT data security for route optimisation, ranging from GPS spoofing to Firestore vulnerabilities in the real-time database to Bluetooth Low Energy vulnerabilities in the IoT hardware, all of which could be common risks in cyber-physical systems designed by SME businesses. The research concludes with various security risks applicable to SME businesses adopting industry 4.0, to alleviate the risk of new security breaches to the business through this adoption and increase the likelihood of successful adoption of industry 4.0. Introduction Industrial revolutions have governed the success of businesses for centuries, with the prospect of a business's success being heavily dependent on adopting progressive business models. Pioneering technology has been a catalyst for industrial revolutions, with the most recent being the fourth in the form of cyber-physical systems. With technology becoming ever more intertwined with the core success of a business, the systems must be threat modelled to alleviate security catastrophes and breaches. Using Window Cleaning Warehouse (WCW) as a case study, the design of the cyber-physical system (CPS) is threat modelled to identify and mitigate inherent risks. WCW is a window cleaning equipment supplier looking to adopt industry 4.0 (I4.0) technologies to monitor and optimise the window cleaning operation (WCO). High-tech van systems with on-board water purification systems have revolutionised the WCO compared to a bucket and ladder approach. WCW is striving to take this technology further and develop a CPS to monitor and optimise the WCO. This is achieved through the medium of real-time data exchange between internet of things (IoT) hardware in specialised van systems and machine learning (ML) models deployed in a cloud architecture to optimise routes and resources. The research will use WCW's theoretical framework as a case study of an innovative digital supply chain for the commercialisation of real-time data. The research will focus on the digital supply chain's encompassment of security and privacy along with the electronic and physical security of the hardware for real-time data. WCW proposes a novel cyber-physical system that can infer dynamic resourcing and routing based on real-time window cleaning operation data, such as water usage. WCW will be developing the IoT hardware in-house and use the digital supply chain to market business intelligence on pure water usage hotspots. 
The novel aspect of the research is the threat modelling of the digital supply chain's real-time data in the IoT-based route and resource optimisation context and the security policy as a course of action to circumvent security risks. The threat modelling process follows the Microsoft security development lifecycle to identify potential security threats in the design and strategize risk management to reduce inherent risk severity. This process involves defining the external dependencies of the CPS for the Google Cloud Platform (GCP) cloud functions, since they have a direct impact on the security of the system. External entities and their privileges to access the system are then discussed to determine their trust levels for the system, thus setting a precedent for acceptable access privileges among expected entities. To represent the system schematically and show how data are expected to flow, data flow diagrams are used, with trust boundaries bordering a change of privilege in the system to highlight attack surfaces. From the data flow diagrams, the expected data exchange scenarios are documented to clearly define the intended use of the system, making it simpler to distinguish violations of intended use using STRIDE threat classification. Because attackers usually act with intent, an assessment of assets that may be of interest to an adversary gives context to the impacts these risks can cause, which is summarised using the DREAD methodology before risks are prioritised and managed. Aims and Objectives This section outlines aims and objectives defining the success criteria of the research project. This will be achieved by focusing on the following aims: 1. Defining the entry and exit points of the system for real-time data 2. Defining the external entities and their trust level in the digital supply chain 3. Defining the intended use of the system and its real-time routing data 4. Defining the external dependencies that are interoperable with the real-time data 5. STRIDE threat classification to identify risks relating to digital supply chain innovation 6. DREAD risk assessment of the novel real-time route and resource optimising IoT data risks 7. Specifying the security measures for the identified criticalities and the policy to be implemented to curb the problem These aims fulfil the objective of novel research in threat modelling real-time IoT data for route optimisation, based on physical and electronic security and the digital supply chain's encompassment of security and privacy of real-time data. Scope and Constraints The research will focus on the novel aspects, which are to threat model the innovative digital supply chain of real-time IoT data for route and resource optimisation. The delimitations of the research are aspects not associated with the cloud infrastructure, IoT hardware, data exchange between these systems and the cyber-physical system. These delimitations include, but are not limited to, risks associated with the underlying Flutter framework and the multiple operating systems it supports. Related Work The contextual background that the research will be conducted against includes: 1. The novelty of real-time IoT data for resource and route optimisation 2. The novelty of small and medium-sized enterprise (SME) digital supply chains' encompassment of security and privacy of real-time data for IoT route optimisation 3. 
The novelty of electronic and physical security of IoT devices that monitor resources affecting the efficiency of a route The literature on IoT data being used for resource and route optimisation can be summarised as route optimisation of freight logistics based on vehicle capacity, customer time windows, maximum travelling distance, road capacity and traffic data [11]. Other IoT-based route optimisation considers how planned routes are performed, using IoT devices to monitor vehicles and drivers to learn preferences [12]. Research has also been conducted on IoT use in waste management routing problems [13,14]. Aligning the research objectives with this literature shows that the literature does not consider the security of the IoT data used for routing problems. This is a significant knowledge gap, since attacks could occur against IoT-based routing, such as a physical denial of service on roads by routing all vehicles towards congested areas if the integrity of the data is breached; equally, a breach of confidentiality of the routes could lead to digital supply chain losses of marketable real-time data for WCW in the context of this study. According to a review of cyber risk analytics and artificial intelligence in industrial IoT and I4.0 supply chains [1], there are knowledge gaps for small and medium-sized enterprises (SMEs), since "the SME's digital supply chains need to encompass the security and privacy, along with electronic and physical security of real-time data", "the SMEs need security measures to protect themselves from a range of attacks in their supply chains, while cyber attackers only need to identify the weakest links" and "the weakness of existing cyber risk impact assessment models is that the economic impact is calculated on organisations stand-alone risk, ignoring the impacts of sharing supply chain infrastructure". The research in [1] stresses the lack of knowledge around research objectives 1-3 and how this case study will add to the body of knowledge, since it is very important for SMEs looking to adopt I4.0 to have real-time data infrastructures for a more efficient production process and economies of scale [6]. The synthesis of the literature review for objective 2 is that convolutional neural networks (CNN) have been used to detect cross-site scripting (XSS) attacks in SME IoT network payloads after applying data preparation methods [4]. A critical analysis is that the CNN is used on fog compute nodes, which requires integrating CNN inference and data pre-processing into self-hosted compute units. This method is expensive to develop, and cloud solutions such as Google Cloud Armor are readily available that would be cheaper ($0.75 per million requests) and easier and quicker to deploy, having been developed by security experts. Bluetooth Low Energy (BLE) will be used to connect a mobile device to the IoT hardware to monitor variables affecting the route optimisation. In line with research objective 3, an exploration of prior work has revealed case studies where unauthenticated BLE devices have been exposed, allowing anyone to connect to the BLE device using a BLE sniffer. There have also been studies on bypassing passkey authentication in BLE [2] and explorations of BLE security [3]. This case study will add to the body of literature by exploring the WCW case study and examining these security risks in the context of real-time data exchange in the WCO.
After comparing and contrasting the literature to identify knowledge gaps, it is clear there is a gap in knowledge about the use of real-time IoT data for resource and route optimisation, which this paper addresses. The paper also contributes knowledge on SMEs' digital supply chains and the security and privacy of real-time IoT data for route optimisation. The final contribution addresses knowledge gaps in the electronic and physical security of the IoT devices that monitor the resources affecting the efficiency of a route. Research Methodology To fulfil the research objective, the data is collected through a non-probabilistic convenience sample using WCW as a case study of a theoretical CPS design. The data analysis method is grounded theory [5], a systematic method of constructing hypotheses and theories of possible security risks based on the threat modelling of the design of the CPS. As ideas and concepts of security risks become apparent from the qualitative threat model data, they can be succinctly summarised with codes and grouped into threat classifications before being analysed further to discuss risk severity, impacts and mitigations. Entry and Exit Points The confidentiality, integrity and availability of the real-time data are important, since it is a fundamental part of the CPS and the digital supply chain. Figure 1 shows an abstract view of the architecture consisting of the components used to monitor the WCO. Round-Control's data is to be stored on GCP's Firestore, which is a real-time NoSQL database. Only authorised WCW staff can create, read, update, delete (CRUD) and make backups of the data through the GCP console. It is encrypted automatically by GCP but is decrypted for reading in the Firebase console through an authenticated admin account. The IoT hardware is composed of Arduino components consisting of an HM-10 Bluetooth Low Energy (BLE) transceiver-enabled microcontroller, inflow and outflow Hall effect sensors, a temperature sensor, a fill level sensor, inflow and outflow total dissolved solids (TDS) sensors and a Global Positioning System (GPS) sensor. The Flutter application on the mobile device can connect to the hardware via BLE and forward the real-time data to the platform-specific Firebase app endpoint. Communication with the Firebase app endpoints is authenticated via Firebase Authentication, which requires validation of email ownership. The Flutter application is authenticated to use the Firebase app using the Firebase app credentials for each platform. Changes made to the Firestore are broadcast to all authenticated, signed-in users that have access to that user's data in the Firestore, so the real-time data of the IoT hardware is updated in real time across Android, iOS, Linux, Windows, macOS and web-derived apps. It is important to define the external entities and their trust levels to access the system. The expected entities are presented in Table 1. Figure 1 illustrates the data flow and trust boundaries but does not intuitively describe the expected scenarios and the intended use of the system from which deviations can be identified. The intended use of the system is therefore presented in Table 2. Scenarios deviating from the intended scenarios in Table 2 help identify violated deployment of the application and violated intended use of the system, both of which impact the security of the system.
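Before turning to the external dependencies, the following Python sketch makes the real-time data path described above concrete, showing the kind of telemetry record the van hardware could stream over BLE and the mobile app could forward to an authenticated cloud endpoint. The VanTelemetry fields, the forward_reading function and the endpoint stub are illustrative assumptions, not WCW's actual schema or code.

from dataclasses import dataclass, asdict
import json

@dataclass
class VanTelemetry:  # hypothetical record; fields mirror the sensors listed above
    van_id: str
    inflow_lpm: float       # Hall effect inflow sensor (litres per minute)
    outflow_lpm: float      # Hall effect outflow sensor
    tds_in_ppm: float       # total dissolved solids before purification
    tds_out_ppm: float      # total dissolved solids after purification
    temperature_c: float
    fill_level_pct: float
    lat: float
    lon: float
    timestamp: float

def forward_reading(reading: VanTelemetry, endpoint_publish, id_token: str) -> None:
    """Forward one BLE reading to the cloud endpoint. endpoint_publish stands in for
    the authenticated HTTP/Firebase call; a real implementation must attach a
    verified Firebase Authentication token rather than a bare API key."""
    payload = json.dumps(asdict(reading))
    endpoint_publish(payload, token=id_token)

# Usage sketch with a stubbed endpoint (prints instead of sending).
reading = VanTelemetry("van-01", 12.5, 11.9, 180.0, 12.0, 14.2, 63.0, 51.48, -3.18, 1737300000.0)
forward_reading(reading, endpoint_publish=lambda payload, token: print(payload), id_token="example-token")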
External Dependencies The external dependencies are directly interoperable with the system. The external entities relating to the real-time data are presented in Table 3. STRIDE Threat Classification The qualitative data collected about WCW's adoption of I4.0 is analysed in this section for risks by examining the intended use, the external dependencies and the descriptions of the data flow diagrams. The qualitative data can be succinctly summarised through a thematic grouping of threats into STRIDE classifications. STRIDE is an acronym for spoofing, tampering, repudiation, information disclosure, denial of service and elevation of privilege. The threats derived from the data flow diagram in Fig. 1 are summarised in Table 4, grouped by system component (risk numbers in parentheses):
Firestore UI: tampering with the data in Firestore through the user interface (UI) by accidentally deleting data (1); non-repudiation without the Firestore rules, because the allowed actions should be limited (2).
2G SIM: physical denial of service through the antenna being covered so that it cannot emit a 2G signal (8); 2G was created in 1991 and the encryption between tower and device can be cracked in real time to disclose information, since HTTP POST is used and the user's Firebase Authentication email and password are not encrypted before being sent over 2G (9); non-repudiation, caused by the system not having enough evidence to prove that it should deny a malicious process, since there is no authentication between the tower and the 2G-enabled hardware (10); spoofing, since a man-in-the-middle attack posing as a 2G tower is possible due to the lack of authentication between device and tower (11); tampering with the data is possible through a man-in-the-middle attack (12).
IoT sensors: due to the significant number of sensors, the attack surface for compromising sensors is quite broad; for example, if a sensor is unauthenticated, a spoofing attack can occur in which false sensor signals are injected, causing malicious data input like that considered by Huang et al. [7] (14); an example of spoofing attacks on IoT sensors is the use of laser microphones [8], where oscillating laser signals from a fixed location deflect off the microphone receiver and cause vibrations mimicking audio signals; tampering with the Hall effect sensors could compromise the validity of the water flow readings, which might happen if there are incentives to reduce recorded water usage on jobs (15); denial of service attacks can happen on the path between the sensors, since they are exposed in the van system, by delaying or blocking the transmission, aiding stale-data attacks (16).
GPS (mobile): GPS spoofing on the mobile device is easy using free Play Store and App Store apps (21).
Arduino controller: tampering with the Arduino controller is possible, since it is easily accessible, so a compromised controller can send incorrect control signals to the actuators [9] (13); a denial of service to the user can happen through compromising actuators via zero-dynamics attacks, since the actuators are exposed and will execute a different command than the controller intended [10] (17).
BLE: denial of service, since only one BLE connection is possible at a time (18); connecting to the BLE module and operating the actuators in the van while not being the owner of the van system is a spoofing attack (19); if the IoT device is operated without a mobile device and there is currently no signal, data is stored in temporary memory on the Arduino, which could be erased if an adversary connects and modifies the data by beginning a process (20).
Shopify targeted advert: information disclosure and spoofing, since API Gateway API keys are programmed into the Flutter code and not encrypted, so access to an API key allows an adversary to pose as an email address owner and find their purchase history (3).
API Gateway: denial of service, since the number of invocations of cloud functions is not capped, so spamming requests to a cloud function could yield a large bill (4); spoofing, since API keys are sent in the HTTP POST request URL and can be obtained (5).
Hardware data update: Pyrebase is required, since using the API key alone to access API Gateway is unsafe because the API key is quite easy to obtain through social engineering or the other vulnerabilities discussed; authentication with Firebase Authentication is required before data in the database can be changed, but Pyrebase is not an official Google package and so could have vulnerabilities allowing tampering (6); the number of invocations is not restricted, so denial of service through a large compute bill is possible (7).
Risk Assessment The severity of the identified risks is quantified using the DREAD risk assessment model. DREAD is an acronym for damage, reproducibility, exploitability, affected users and discoverability. Each category is given a rating from 1 to 10, where 10 is the worst, and the sum of the ratings helps to prioritise risks. The DREAD methodology is typically inconsistent among assessors and ratings tend to be subject to debate, so the rationale for each rating is provided (Table 5). Discussion of Risks and Mitigation The highest-priority risks, their mitigations and their novel contribution to real-time IoT data security for route and resource optimisation are discussed in this section. The novel aspects of the research are the threat modelling of real-time IoT data for route optimisation, covering the physical and electronic security of the system as well as the digital supply chain's encompassment of security and privacy of real-time data. From the literature review, the common variables used for routing are presented in Table 6. To address risks 18, 19 and 20, the Bluetooth module should enforce PIN pairing, where the PIN for the van system hardware is generated differently for each van installation and provided to the customer. The HM-10 BLE module should be genuine, identifiable by a crystal fitted alongside the bottom four solder connections; otherwise PIN authentication cannot be added.
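As a lightweight illustration of the DREAD prioritisation used in the risk assessment above, the following Python sketch scores and ranks risks by their summed ratings. The DreadScore class and the ranking loop are illustrative; the example ratings are taken from Table 5 (risks 4 and 8).

from dataclasses import dataclass

@dataclass
class DreadScore:
    # Each category is rated 1-10 (10 = worst); the sum out of 50 prioritises risks.
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def total(self) -> int:
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability)

# Example ratings from Table 5.
risks = {
    4: DreadScore(10, 10, 8, 10, 9),   # uncapped cloud function invocations
    8: DreadScore(1, 10, 10, 1, 10),   # physical DoS by covering the 2G antenna
}

# Highest total first: risk 4 (47/50) is prioritised over risk 8 (32/50).
for risk_id, score in sorted(risks.items(), key=lambda kv: kv[1].total(), reverse=True):
    print(f"risk {risk_id}: {score.total()}/50")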
Table 5 presents the DREAD risk assessment, with ratings in the order damage, reproducibility, exploitability, affected users, discoverability (each 1-10), the total out of 50, and the rationale:
Risk 1: it is difficult to reproduce the attack, since Firestore requires a typed confirmation before deletion.
Risk 2 (10, 4, 6, 10, 2; 32/50): the damage would be high, since Firestore CRUD operations could be performed; the affected users would be WCW customers, their customers and WCW.
Risk 3 (8, 7, 3, 10, 8; 36/50): quite difficult to decompile into Dart code from the Android application package (APK); high damage to WCW.
Risk 4 (10, 10, 8, 10, 9; 47/50): uncapped cloud function invocations could yield an extortionate GCP bill for WCW, which might be too much to pay and cause bankruptcy, affecting the whole supply chain.
Risk 5 (10, 10, 9, 8, 10; 47/50): obtaining a user's API key could mean that an adversary can use the system's services while the API key owner is billed.
Risk 6 (8, 9, 9, 10, 9; 45/50): if Pyrebase were illegitimate, it would have full access to the Firebase services for the project.
Risk 7 (10, 10, 8, 10, 9; 47/50): same as risk 4.
Risk 8 (1, 10, 10, 1, 10; 32/50): the attack is easy to reproduce and does not take much technical knowledge.
Risk 9 (10, 7, 5, 9, 8; 39/50): the attack is severe, since users' Firebase credentials are exposed; the affected users would be the user, their customers and WCW.
Risk 10 (10, 7, 5, 9, 8; 39/50): the phone cannot authenticate a legitimate 2G receiver, and the same rationale as risk 9 applies.
Risk 11 (10, 8, 5, 9, 8; 40/50): the same rationale as risk 10.
Risk 12 (10, 9, 4, 10, 8; 40/50): the attack is easier to reproduce, since the HTTP packet structure is consistent, and once the data structure is found the data can be modified automatically through a malicious parsing program.
Risk 13 (2, 10, 10, 1, 9; 32/50): the controller is easily accessible.
Table 6 relates the common routing variables from the literature to the risks identified in this study and their mitigations:
Routing variables studied in [11,13,14] and this study (risks 5, 8, 9, 10, 11 and 12): risk 5 can be addressed by storing the API key associated with the user in the Firestore database and checking that the API key used is associated with the authenticated user. Risks 8, 9, 10, 11 and 12 can be completely averted by using a more secure mobile network protocol such as 3G, 4G or 5G, although measures against downgrade attacks back to 2G should also be considered. For risk 9 the user's password should additionally be encrypted before transmission as an extra security measure.
Travel distance, studied in [11-14] and this study (risks 4 and 7): to calculate travel distance based on roads, the GCP Directions API is commonly used, billed at $5 (~£3.67) per 1000 requests. Risks 4 and 7 relate to uncapped Cloud Function invocations. WCW is looking to build a digital supply chain for business intelligence that is fully scalable, so the security measures to mitigate risks 4 and 7 are to cap the number of requests per minute from an Internet Protocol (IP) address and to cap the number of function instances that can be invoked in parallel. Monitoring the risk would take the form of monitoring user-base growth to ensure that the CPS is not hindering genuine requests by limiting the number of function instances.
Road traffic data, studied in [11-14] and this study (risk 21): security measures to circumvent risk 21, spoofing the GPS location of a mobile device to simulate standstill traffic on a popular road, are to implement mobile-device-side code that detects mock locations. In Android 17 this can be done through Settings.Secure to detect whether ALLOW_MOCK_LOCATION is enabled; on Android 18 and above the Location.isFromMockProvider() API can be used. On iOS it is possible to detect whether the iPhone is jailbroken, which suggests the user could be spoofing their location.
Historic preferences, studied in [12] and this study (risks 1 and 2): a policy to circumvent tampering with data in the Firebase UI (risk 1) is to set up a Cloud Scheduler to publish a topic at a specified interval, to which a Cloud Function is subscribed. It would then be possible to write a Cloud Function in Node.js that backs up a copy of the Firestore database to a Google Storage bucket for disaster recovery, with the added benefit of an offline data set that could be exported to a Comma-Separated Values (CSV) file using GCP's BigQuery. Monitoring the risk would take the form of checking the Cloud Function logs to ensure the backup is invoked routinely. Security measures addressing risk 2 are to implement Cloud Firestore Security Rules restricting read and write access to authenticated users with verified emails. Further restricting user privileges to roles is recommended, so that not just any logged-in user has read and write access to the entire database. The Firebase Admin Software Development Kit (SDK) and Cloud Functions can still access the database regardless of closed access; it is therefore recommended to restrict access to public Cloud Function endpoints using API Gateway to enforce an OpenAPI specification with security definitions for API keys and authentication. Monitoring this risk would take the form of setting up local unit tests using the JavaScript version 9 SDK.
Conclusion The researcher set out to bridge the identified knowledge gap through threat modelling real-time IoT data for route optimisation, based on the physical and electronic security of the system as well as the digital supply chain's encompassment of security and privacy of real-time data, using WCW as an SME case study. The main points of the research are that numerous cyber security vulnerabilities have been found, with a particular focus on real-time data exchange, that other SMEs can consider when designing CPSs. The results are significant, since IoT data transmission enabled by 2G is likely to be considered for real-time data exchange by other SMEs because it is low cost, yet the vulnerabilities discussed are significant. The significance of the risks found with BLE is also likely to apply to many other SME CPS projects. The technical achievement of the paper is its identification of security vulnerabilities for the novel IoT route optimisation variables and the proposed security measures and policies to circumvent, manage and monitor the risks.
Availability of Data and Materials The authors confirm that the data supporting the findings of this study are available within the article and/or its supplementary materials. Code Availability Code is maintained on a private GitHub repository and is the property of Window Cleaning Warehouse. Conflict of Interest The authors are involved in a KTP. It is in the interest of the corresponding author to ensure the success of the KTP, exhaustive of cyber-security. Ethical Approval No ethics approval is needed, since the research does not deal with human participants or human tissue and is not sensitive, deceptive or covert, according to the University Ethics Committee at Cardiff Metropolitan University. Consent to Participate No humans participated in the research, since it was a theoretical threat analysis of the design of a cyber-physical system. Consent for Publication Window Cleaning Warehouse give consent for publication. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
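As a concrete illustration of the mitigations proposed in Table 6 for risks 4, 5 and 7 (capping request rates and tying API keys to the authenticated user), the following Python sketch shows the server-side checks involved. The function names, the 60-requests-per-minute cap and the fetch_user_record lookup are illustrative assumptions rather than WCW's implementation.

import time
from collections import defaultdict, deque

REQUESTS_PER_MINUTE = 60          # assumed per-IP cap (risks 4 and 7)
_recent = defaultdict(deque)      # client_ip -> timestamps of recent requests

def within_rate_limit(client_ip, now=None):
    """Reject request bursts from a single IP before they reach a billable Cloud Function."""
    now = time.time() if now is None else now
    window = _recent[client_ip]
    while window and now - window[0] > 60.0:
        window.popleft()          # drop timestamps older than one minute
    if len(window) >= REQUESTS_PER_MINUTE:
        return False
    window.append(now)
    return True

def api_key_matches_user(api_key, uid, fetch_user_record):
    """Risk 5 mitigation: the API key presented must be the one stored against the
    authenticated user's record; fetch_user_record(uid) stands in for the database read."""
    record = fetch_user_record(uid)
    return record is not None and record.get("api_key") == api_key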
6,179.6
2022-01-20T00:00:00.000
[ "Computer Science" ]
Event-triggered robot self-assessment to aid in autonomy adjustment Introduction: Human–robot teams are being called upon to accomplish increasingly complex tasks. During execution, the robot may operate at different levels of autonomy (LOAs), ranging from full robotic autonomy to full human control. For any number of reasons, such as changes in the robot’s surroundings due to the complexities of operating in dynamic and uncertain environments, degradation and damage to the robot platform, or changes in tasking, adjusting the LOA during operations may be necessary to achieve desired mission outcomes. Thus, a critical challenge is understanding when and how the autonomy should be adjusted. Methods: We frame this problem with respect to the robot’s capabilities and limitations, known as robot competency. With this framing, a robot could be granted a level of autonomy in line with its ability to operate with a high degree of competence. First, we propose a Model Quality Assessment metric, which indicates how (un)expected an autonomous robot’s observations are compared to its model predictions. Next, we present an Event-Triggered Generalized Outcome Assessment (ET-GOA) algorithm that uses changes in the Model Quality Assessment above a threshold to selectively execute and report a high-level assessment of the robot’s competency. We validated the Model Quality Assessment metric and the ET-GOA algorithm in both simulated and live robot navigation scenarios. Results: Our experiments found that the Model Quality Assessment was able to respond to unexpected observations. Additionally, our validation of the full ET-GOA algorithm explored how the computational cost and accuracy of the algorithm was impacted across several Model Quality triggering thresholds and with differing amounts of state perturbations. Discussion: Our experimental results combined with a human-in-the-loop demonstration show that Event-Triggered Generalized Outcome Assessment algorithm can facilitate informed autonomy-adjustment decisions based on a robot’s task competency. Introduction Selecting a level of autonomy in line with an autonomous robot's capabilities is a critical challenge to safe, reliable, and trustworthy robotic deployments.Giving a robot large amounts of autonomy in an environment in which it struggles could lead to damage to the robot and, in the worst case, mission failure.On the other hand, giving a robot less autonomy in an environment where it is quite capable may unnecessarily increase the workload for a human supervisor who must spend more time directly managing or controlling the robot.Robots that can self-assess and report their competency could provide human-robot teams the ability to make informed autonomy-adjustment decisions directly in line with the robots' abilities. 
Consider a scenario where a search-and-rescue (SAR) team is searching for survivors after a disaster.The environment is quite dangerous, so the team decides to employ a semi-autonomous robot to perform a ground search.A human supervisor monitors the robot's progress from a safe, remote location outside the disaster area.The supervisor receives telemetry data and video from the robot and must use that information to decide whether to take manual control of the robot or allow it to drive autonomously.The supervisor's decision to trust the robot is based on their perception of the robot's abilities.However, if there is misalignment between the supervisor's perception of the robot's abilities and the robot's actual competency, the supervisor may inadvertently push the robot beyond its competency boundaries (Hutchins et al., 2015).Unexpected robotic failure can also lead to lower overall trust in the robot, which could lead to the supervisor being less likely to rely on the robot in the future, regardless of the robot's ability to accomplish the task (Dzindolet et al., 2003;de Visser et al., 2020).To make appropriate autonomy-adjustment decisions, the supervisor needs to understand the robot's competency and how it may change during the mission. Recent work has shown that robots that report a priori competency self-assessments can align operator perception with the robot's actual competency, thus improving decision-making and performance in a variable-autonomy navigation task (Conlon et al., 2022b).However, in dynamic and uncertain environments, like the SAR scenario outlined above, an a priori confidence assessment can quickly become stale due to environmental changes such as falling debris or unexpected obstacles.In this work, we first develop a metric to monitor in situ competency with respect to the robot's model-based predictions, which we call Model Quality Assessment.We then propose an algorithm called Event-Triggered Generalized Outcome Assessment (ET-GOA), which uses the Model Quality Assessment to continually monitor for unexpected state measurements and selectively trigger a high-level self-assessment of the robot's task objectives.We present evaluation results across several simulated and live robot navigation scenarios, where the environment was unexpected with respect to the robot's a priori knowledge.We next discuss a small demonstration showing how that in situ robot competency information can be reported to a human supervisor to facilitate informed autonomyadjustment decisions.We close with a brief discussion of our results and directions for future work for competency-based autonomy adjustment. 
2 Background and related work Variable autonomy Variable autonomy is a paradigm in robotics where the level of control of a robotic system can change at different points during a task.The level of control can range from a robot with autonomous capabilities acting and making decisions under its own control at one extreme to a human supervisor fully controlling (i.e., teleoperating) and making decisions on behalf of the robot at the other extreme.These levels can be discrete, such as those based on vehicle capabilities found in the autonomous driving literature (Rödel et al., 2014), or can be more fluid and based on the capabilities of the collective human-autonomy team (Methnani et al., 2021).The autonomy level (and adjustments thereof) can also be a function of other factors, such as the distance or data link between the supervisor and robot, the robot's autonomous competence, or the supervisor's trust in the robot.Changes to the autonomy level can occur at any time throughout a task or mission.These changes can be initiated by the robot (robot-initiative) (Costen et al., 2022;Mahmud et al., 2023), the human supervisor (human-initiative), or from either the supervisor or the robot (mixed-initiative) (Chiou et al., 2021).While our experiments and demonstration in this manuscript focus on a human-initiative system, robot competency assessment for autonomy adjustment can be applied to mixed-initiative and robot-initiative systems as well. Autonomy-level changes can be pre-planned and designed into the task (Johnson et al., 2014) or ad hoc due to robot degradation or unexpected events (Fong et al., 2002).This work focuses on the latter: to understand when a human supervisor should adjust the autonomy level during the mission due to unplanned events.It is important to note that unplanned events could have a positive or negative impact on the mission, making it more or less difficult for the team to accomplish the mission goals.Previous work in this domain investigated methods for the robot to monitor itself and call for help when necessary (Fong, 2001;Basich et al., 2020;Kuhn et al., 2020) or for the human to take a more active role in monitoring the robot's abilities and adjust autonomy when necessary (Ramesh et al., 2022;2021).We take a more collaborative approach that seeks to enable both the robot and the human to monitor metrics of the robot's task competency to better understand 1) when the robot is more or less capable than previously predicted and 2) when supervisor-initiated autonomy adjustments are necessary.Our approach enables the team to not only understand when the robot is less competent than planned and may need assistance but also realize when the robot is more competent and may be able to operate with increased autonomy. 
Robot competency self-assessment Competency self-assessment enables autonomous agents to assess their capabilities and limitations with respect to task constraints and environmental conditions. This critical information can be used to improve internal decision-making and/or can be communicated to a human partner to improve external decision-making. Pre-mission (a priori) self-assessments enable an autonomous agent to assess its competency before the execution of a task or mission. These methods generally compute agent self-confidence based on simulation (Israelsen et al., 2019; Ardón et al., 2020) or previous experience (Frasca et al., 2020). Our recent work showed that reporting of a priori self-assessments leads to better choices of reliance (Conlon et al., 2022c) and improvements in performance and trust (Conlon et al., 2022b). However, in dynamic environments, a priori assessment is a poor predictor of the agent's confidence due to factors that are not accounted for before execution, such as environmental changes, task changes, or interactions with other agents. Running a priori methods online (periodically) could conceivably capture dynamic competency changes. However, such assessments can waste computational resources if competency has, in fact, not changed, or may be too expensive for certain kinds of decision-making agents (Acharya et al., 2022; Conlon et al., 2022a; Gautam et al., 2022). In-mission (in situ) self-assessment enables an autonomous agent to assess (or reassess) its competency during task execution. Popular methods such as online behavior classification can identify poor behavior and trigger the agent to halt the operation and ask for help in the event of a failure (Fox et al., 2006; Rojas et al., 2017; Wu et al., 2017). These methods, although able to capture dynamic competency changes, require examples of both good (competent) and poor (incompetent) behaviors, which may be difficult or impossible to acquire in many real-world applications. Another method of in situ self-assessment involves monitoring features of the agent's current state. For example, Gautam et al. (2022) developed a method to monitor deviations from design assumptions, while Ramesh et al. (2022) used the "vitals" of a robot to monitor its health during task execution. Both methods provide a valuable instantaneous snapshot of the agent's state at a given time, which can indicate performance degradation online; however, neither predicts higher-level task competency (for example, does the degradation actually impact the task outcome?). In contrast, we propose an algorithm that enables the assessment and communication of high-level task outcome competency both a priori and in situ. We leverage the method of Generalized Outcome Assessment, which was originally developed as an a priori method due to computational cost, to assess a robot's task outcome competency. We then develop a method of in situ Model Quality Assessment that monitors the alignment between the robot's model predictions and state observations to intelligently choose when the robot should (re-)assess and (re-)communicate task outcome competency. We argue that understanding when and how the robot's competency has changed will help human supervisors make improved autonomy-adjustment decisions.
3 Algorithm development Modeling the world We take a model-based approach to competency selfassessment.We define a world model, M, as a stochastic model of the robot, its dynamics, and its environment from which trajectories can be sampled.M could take the form of a Monte Carlo-based planner (Israelsen, 2019), a black box neural network (Ha and Schmidhuber, 2018;Conlon et al., 2022a), or a highfidelity simulation environment (Koenig and Howard, 2004;Michel, 2004).Similar modeling paradigms have been referred to as model-based in the reinforcement learning literature (Acharya et al., 2023;Moerland et al., 2023) and digital twins in the simulation literature (Girletti et al., 2020;Phanden et al., 2021;Xu et al., 2021).Within the framework of Factorized Machine Self-Confidence (FaMSeC), the purpose of M is to enable the robot to simulate itself, executing the task in a representative environment.Sampling from M results in a predicted distribution of trajectories through the robot's state space, which can be analyzed by our FaMSeC assessments to understand how capable the robot is expected to be. Factorized Machine Self-Confidence To help facilitate informed autonomy-adjustment decisions, we capture changes in robot competency with FaMSeC.FaMSeC is a computational framework that enables an autonomous robot to self-assess across five different dimensions that are theorized to impact competency.A diagram of the FaMSeC framework adapted from Israelsen (2019) is given in Figure 1. FaMSeC assumes a planning and execution flow commonly found in autonomous systems, which is shown as tan boxes with black lines in Figure 1.First, the user issues a command (or a task) to the robot.Next, the robot must interpret or translate that task.The interpreted task, along with the robot's world model, is then used by the robot's solver to generate a plan, and that plan is then executed through interactions with the environment. The assessment mechanism-shown as rounded blue boxes connected with blue dashed lines-evaluates each planning and execution component to varying degrees, which captures the robot's overall competency.Interpretation of Task Assessment (ITA) assesses how well the agent has interpreted the user commands.Model Quality Assessment (MQA) assesses how well the agent's world model aligns with reality.Solver Quality Assessment (SQA) assesses how well the agent's solvers and policies align with optimal or trusted solvers and policies.Generalized Outcome Assessment (GOA) assesses the plan to predict the degree to which the agent will achieve userdefined outcomes.Past Experience Assessment (PEA) assesses how the robot has performed in previous and similar problem instances by analyzing expected and actual outcomes.A combination of some or all factors can be reported to a human user to calibrate their mental model to the robot's predicted competency, with respect to a given task and environment.For a thorough treatment on Factorized Machine Self-Confidence, please refer to Israelsen (2019). To date, only the GOA and Solver Quality Assessment (SQA) metrics have been fully implemented and validated.Outcome Assessment is a powerful tool to calibrate users to robot capabilities but only as an a priori metric.Toward in situ competency assessment, this work first proposes a novel MQA FaMSeC metric.We then develop an algorithm that combines MQA with an existing GOA to enable fast online monitoring and selective assessment of task outcome competency. 
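Since the assessments that follow treat the world model M purely as something trajectories can be sampled from, a minimal Python interface sketch may help fix ideas. The WorldModel protocol, its step method and the sample_trajectories helper are illustrative assumptions, not the authors' implementation.

from typing import Protocol, Sequence
import numpy as np

class WorldModel(Protocol):
    """Stochastic model of the robot, its dynamics and environment (the paper's M)."""
    def step(self, state: np.ndarray, action: np.ndarray) -> np.ndarray:
        """Sample a successor state s_{t+1} ~ pi(. | s_t, a_t)."""
        ...

def sample_trajectories(model: WorldModel, s0: np.ndarray,
                        actions: Sequence[np.ndarray], n_rollouts: int) -> np.ndarray:
    """Monte Carlo rollouts of a fixed action plan. Returns an array of shape
    (n_rollouts, len(actions) + 1, state_dim); the spread across rollouts at each
    time step is the predicted distribution later analysed by GOA and MQA."""
    trajs = []
    for _ in range(n_rollouts):
        s = np.asarray(s0, dtype=float)
        traj = [s]
        for a in actions:
            s = model.step(s, np.asarray(a, dtype=float))
            traj.append(s)
        trajs.append(np.stack(traj))
    return np.stack(trajs)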
Assessing task outcome competency To assess task outcome competency, we leverage Generalized Outcome Assessment (GOA) (Conlon et al., 2022a), an extension of the original Outcome Assessment proposed by Israelsen (2019). For brevity, we refer to this as Outcome Assessment or GOA henceforth. GOA begins by simulating task execution by sampling state predictions from a world-model-based distribution π(s_{t+1} | s_t, a_t), which results in a distribution of predictions for a target outcome of interest. Examples of target outcomes include goals accomplished or task completion time. The outcome predictions are then ranked according to their desirability to the user such that, for outcomes z_i and z_j, z_i < z_j if z_i is less desirable than z_j, and all outcomes less than a threshold z* are considered undesirable. For example, a user might desire a task completion time of no more than z* = 60 s, where a prediction of a late completion at z_i = 65 s is less desirable than a prediction of an early completion at z_j = 45 s. Next, GOA analyzes the ranked predictions according to the ratio of the upper partial moment (UPM) to the lower partial moment (LPM) (Wojt, 2009). The UPM/LPM statistic weights the probability of an outcome by its distance from z*. Because it ranges from negative to positive infinity, it is transformed to the range [0, 1] through a standard sigmoid function. The value of GOA is an indicator of the robot's confidence in achieving outcomes equal to or more desirable than z*. We expect that if the robot's world model predictions are generally above z*, then GOA tends toward 1 (higher confidence), and conversely, if the world model predictions are generally below z*, we expect GOA to tend toward 0 (lower confidence). The value of GOA can be reported as a raw numeric value in [0, 1] or mapped to a semantic label indicating confidence. For our quantitative experiments covered later in this manuscript, we analyzed raw numerical GOA confidence values. For our qualitative human-in-the-loop demonstration, we reported semantic labels using the mappings very bad confidence (GOA ∈ [0, 0.25)), bad confidence (GOA ∈ [0.25, 0.4)), fair confidence (GOA ∈ [0.4, 0.6)), good confidence (GOA ∈ [0.6, 0.75)), and very good confidence (GOA ∈ [0.75, 1.0]). It is important to note that GOA can be an expensive operation to run online due to sampling of potentially complex world models. To understand how task outcome competency has changed in situ, a designer must cope with a trade-off between an accurate understanding of task outcome competency and the computational expenditure needed to assess that competency. One way to address this trade-off is to provide the robot the ability to intelligently choose to self-assess based on predicting when its competency has potentially changed.
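The two expressions referenced above are not reproduced in the extracted text. A plausible reconstruction, assuming first-order partial moments about the outcome threshold z* and a logistic squashing of their log-ratio (so that the statistic spans the whole real line, as stated), is:

UPM = \sum_{z_i \ge z^*} p(z_i)\,(z_i - z^*), \qquad LPM = \sum_{z_j < z^*} p(z_j)\,(z^* - z_j),

q = \ln\frac{UPM}{LPM} \in (-\infty, \infty), \qquad GOA = \frac{1}{1 + e^{-q}} = \frac{UPM}{UPM + LPM}.

Under this reading, GOA equals 1 when every predicted outcome is at least as desirable as z* and 0 when none are, consistent with the behaviour described in the text.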
We argue that MQA should be thought of as a distance function between an autonomous agent's model prediction and real observations or measurements gained through interacting with the environment. Furthermore, we believe that MQA should be bounded within a small range to align with the other FaMSeC factors. In order to assess based on live measurements, model quality should also be fast enough to run online. We propose Model Quality Assessment in the general form of a bounded function of the model predictions ŷ and the real measurements y, where MQA should tend toward 1 if there is high alignment between ŷ and y, and toward 0 otherwise. One promising method that fits our requirements is the surprise index (SI). SI is defined as the sum of probabilities of more extreme (or less probable) events than an observed event, given a probabilistic model (Zagorecki et al., 2015). For a given event e, modeled by probability density function π, SI is computed by summing over the probabilities of the more extreme events in π. The surprise index can be thought of as how (in)compatible an observation e is with the set of possible events predicted by π. This is similar to the more well-known entropy-based surprise (Benish, 1999; Baldi and Itti, 2010). However, whereas entropy-based surprise is unbounded, the surprise index is bounded between zero (most surprising) and one (least surprising). SI also shares similarities with the tail probability, or the p-value, given the hypothesis that e is from the distribution π. A large p-value (SI tending toward one) indicates that e may have been sampled from π, while a small p-value (SI tending toward zero) indicates strong evidence to the contrary. For an autonomous agent with multivariate state s_t ∈ S at time t, we define the Model Quality Assessment at time t as the minimum of the surprise index across the state marginals, given a state observation s_t and state prediction π_t. Here, π_t is predicted by the agent's world model, and s_t is the state observation received at time t. We marginalize the state and compute the metric over the marginals included in an indicator list I. I is a list of designer-defined state elements that should be monitored to assess competency; for example, the robot's (x, y, z) position may be included in I, while its current control state ("teleoperation" or "autonomous") may not. This gives us a succinct surprise-index-based MQA formulation using only essential state elements. Continuous monitoring of the MQA during task execution can provide information about how competent the agent is in a given environment and how that may change in situ. Moreover, we can use the instantaneous value of the MQA as a trigger for the agent to re-assess the higher-level outcome of the task. In other words, a waning MQA indicates that the agent's world model predictions have diverged from measurements and can be an indicator that the agent's higher-level task competency has changed. In this work, we say that the agent should reassess higher-level task outcomes if the MQA falls below a designer-defined threshold δ. Additionally, it is important to note that MQA and GOA operate over world model predictions of the same form, (s_t, a_t, s_{t+1}). Trajectories sampled by GOA to assess outcomes can be used by MQA, along with real observations, to assess model quality.
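The SI and MQA equations referenced above are likewise missing from the extracted text. The following Python sketch is one plausible reading of them, assuming the predictive distribution is represented by world-model samples and using the indicator list I and the minimum-over-marginals rule described above; surprise_index and model_quality_assessment are illustrative names.

import numpy as np

def surprise_index(samples: np.ndarray, observation: float) -> float:
    """Fraction of predicted samples at least as far from the predictive mean as the
    observation, i.e. an empirical two-sided tail probability of 'more extreme' events.
    1 = unsurprising, values near 0 = highly surprising."""
    mu = samples.mean()
    return float(np.mean(np.abs(samples - mu) >= abs(observation - mu)))

def model_quality_assessment(pred_samples: dict, observation: dict, indicators: list) -> float:
    """MQA_t = min over monitored marginals i in I of SI(pi_t^i, s_t^i).
    pred_samples maps each indicator (e.g. 'x', 'y', 'z') to an array of world-model
    samples for time t; observation maps each indicator to the measured value."""
    return min(surprise_index(np.asarray(pred_samples[i]), observation[i]) for i in indicators)

# Example: predictions say x ~ N(2.0, 0.1); an observed x of 3.0 is highly surprising,
# so the minimum over marginals (and hence MQA) is close to 0.
rng = np.random.default_rng(0)
preds = {"x": rng.normal(2.0, 0.1, 1000), "y": rng.normal(0.0, 0.1, 1000)}
print(model_quality_assessment(preds, {"x": 3.0, "y": 0.05}, ["x", "y"]))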
Putting it together: the Event-Triggered Generalized Outcome Assessment algorithm We combine the Model Quality Assessment and the Generalized Outcome Assessment into an algorithm for in situ competency self-assessment called ET-GOA. The algorithm is presented in Algorithm 1 and can be broken into two components: (1) a priori (before task execution) and (2) in situ (during task execution). Before task execution (lines 1-5): Line 1 takes as input a world model M, a task specification T, a set of GOA thresholds Z, an MQA threshold δ, and a set of MQA indicators I. Next (line 2), M is used to simulate the execution of task T given an initial state s_0. This results in a set of predictions [π_t] for each time step t = 0, ..., N. The predictions for each time step are stored in an experience buffer (line 3) and then used to compute the initial Generalized Outcome Assessment (line 4), which can be reported to an operator (line 5). During task execution (lines 6-16): The agent receives state observation s_t at time t (line 7). It then retrieves the state predictions π_t from the experience buffer (line 8). Next, the algorithm computes the MQA (line 9). If mqa_t is below the threshold δ, an anomalous or unexpected state observation has been received, and task outcome confidence should be reassessed (line 10). In this case, a new set of predictions π_{t+1:N} is sampled by simulating M (line 11) and saved in the experience buffer (line 12). A new GOA is then computed using the newly updated experience buffer (line 13), and the associated competency is reported to the operator (line 14). If, on the other hand, mqa_t is above the threshold δ, this indicates that the agent's predictions align with its observations, and no confidence update is needed at this time (line 16). This loop (line 6) continues for the duration of the task, comparing the state prediction π_t to the state observation s_t and (if necessary) computing and reporting updates to the robot's task competency. The flowchart of the Event-Triggered Generalized Outcome Assessment algorithm is shown in Figure 2. The left side of the figure shows the a priori portion of the algorithm: given a user command or task, the system generates an initial plan, which is assessed by Generalized Outcome Assessment and then communicated to a user to improve a priori decision-making. The world model state predictions for each time step, π_{0:N}, generated by GOA are also saved in the experience buffer. The right side of the figure shows the in situ portion of the algorithm: at time t, the state measurement from the environment, s_t, and the world model prediction, π_t, are used to compute the MQA. If the assessment is less than threshold δ, then GOA is executed and reported to the user for improved in situ situational awareness and decision-making. After GOA is run, the experience buffer is updated with new world model predictions π_{t+1:N}. If the assessment is greater than the threshold, then no reassessment is needed. The loop of continuous Model Quality Assessment and selective Generalized Outcome Assessment continues for the duration of the task. Experimental design To validate the Model Quality Assessment and the Event-Triggered Generalized Outcome Assessment algorithm, we designed and executed experiments across both simulated and live scenarios. Research questions We developed four core research questions to analyze Model Quality Assessment and the Event-Triggered Generalized Outcome Assessment algorithm: 1.
How does the Model Quality Assessment respond to different types of perturbations? We hypothesize that the MQA will be lower for unexpected state measurements and higher for expected state measurements. This would indicate that MQA can capture misalignment between the robot's world model and reality. 2. How does the triggering threshold impact the accuracy of ET-GOA? We hypothesize that GOA accuracy will increase proportionally to the δ threshold. We believe that a higher δ threshold should increase the sensitivity of MQA and increase the number of GOA triggers. 3. How does the triggering threshold impact the computational complexity of ET-GOA? We hypothesize that the computational complexity of the ET-GOA algorithm will increase proportionally to the δ threshold. The increase in threshold will increase the number of GOA triggers, which will result in higher computational complexity. 4. Does ET-GOA perform similarly in simulation and on a live platform? We hypothesize that the ET-GOA algorithm on a live platform will respond to state perturbations similarly to the simulated robot. We expect to see the same general trends in MQA behavior between our simulated and live experiments. Robot state, planning, and competency self-assessment For both the simulated and live experiments, the robot's state took the form (x, y, z)_t, where (x, y, z) is the position in meters in a global frame and t is the time in seconds. To generate waypoints, we used a rapidly exploring random tree (RRT) planner, which is a common stochastic method for generating motion plans in an obstacle-rich environment (Lavalle, 1998). We used a simple proportional derivative (PD) controller to compute velocity actions between individual waypoints. To avoid confounding our experiments, we did not include any autonomous replanning or obstacle avoidance maneuvers. In other words, our robot had only limited competency while driving autonomously. If any predetermined path was blocked, the robot was programmed to automatically stop to prevent physical collision with obstacles. The robot's world model was an instance of the Webots high-fidelity simulator (Michel, 2004). The simulator was programmed with a copy of the robot as well as copies of all obstacles the robot had knowledge of at the current time step. If the robot sensed an obstacle in the execution environment using its front-facing camera, the world model was updated with a simulated obstacle of similar size and position. When the ET-GOA algorithm was active, the robot had the ability to query the world model on demand for self-assessments.
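To make the control flow of Algorithm 1 concrete in this navigation setting, here is a compact Python sketch of the ET-GOA loop. The world_model.simulate interface, the observe and report callbacks, and the goa_fn/mqa_fn stand-ins (for the assessments sketched earlier) are assumptions for illustration, not the authors' code.

def et_goa(world_model, task, z_star, delta, indicators, observe, report,
           goa_fn, mqa_fn, horizon):
    """Sketch of Algorithm 1: one a priori assessment, then event-triggered re-assessment.
    world_model.simulate is assumed to return a dict mapping absolute time step -> the
    predicted state distribution pi_t for that step."""
    # A priori (lines 1-5): simulate the task once and report initial outcome confidence.
    buffer = world_model.simulate(task, start_time=0)
    report(goa_fn(buffer, z_star))
    # In situ (lines 6-16): cheap MQA every step, expensive GOA only when triggered.
    for t in range(1, horizon):
        s_t = observe()                              # state observation at time t
        mqa_t = mqa_fn(buffer[t], s_t, indicators)   # surprise-index model quality
        if mqa_t < delta:                            # unexpected observation: re-assess
            buffer = world_model.simulate(task, start_time=t + 1, start_state=s_t)
            report(goa_fn(buffer, z_star))           # re-compute and re-report GOA
        # otherwise: predictions still align with reality, no re-assessment needed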
Generalized Outcome Assessment was computed using 10 Monte Carlo rollouts of the world model robot navigating from the "real" robot's current location along the waypoints, given all known obstacles. Our experiments focused on a single outcome of interest for GOA: autonomous driving time. This was the time the robot spent in an autonomous driving state. To facilitate a more detailed analysis of our ET-GOA algorithm, we separated the time spent running a GOA assessment from the time spent autonomously driving in the environment. Thus, the autonomous driving time outcome was equal to the time spent driving minus the time spent running GOA. We specified a maximum desired autonomous driving time of 60 s. This meant that GOA was parameterized by z* = 60, and the assessment returned the robot's confidence in successfully navigating to the goal within 60 s of cumulative time spent in the autonomous driving state. For our experiments, the raw GOA value in [0, 1] was used, and for the demonstration, we mapped the raw GOA value to a semantic label, as in Section 3.3. Note that we could have instead looked at other outcomes of interest, such as minimum or maximum velocity, an obstacle hit, or exceedances of operational thresholds. The world model predictions generated during GOA were saved in an experience buffer for use by MQA. Model Quality Assessment was run each second using the current world model predictions in the experience buffer and the state indicators I = [x, y, z]. The choice of indicator set is of critical importance and should include state elements that the designer foresees as being impacted by changes in competency. The world model predictions at each time step for each indicator i ∈ I were modeled as a normal distribution, N(μ_i, σ_i). For a real observation o_i, the surprise index was then the sum of the lower- and upper-tail CDF mass for values more extreme than o_i. If any in situ re-assessments occurred, the experience buffer was flushed and updated with the latest world model predictions. Simulation experiment overview Our simulation experiments were run on a custom-built Windows 10 PC with an Intel Core i7 3.4 GHz CPU, 32 GB RAM, and an NVIDIA RTX 3060 GPU. We used the Webots simulator customized for our specific use case. The simulation environment was a 4 × 10 m space with a single ground robot of approximately the size, shape, and capability of a Clearpath Jackal. The robot was equipped with a notional sensor capable of sensing obstacles within a 2 m radius of the robot. The robot also received accurate position information at all times from the simulation. The robot's physical capabilities were limited to basic waypoint following and emergency stopping, and deviation from the planned waypoints would require human control. In other words, our robot was only moderately competent. To investigate our first research question, we tasked the robot with driving from point A to point B along a fixed set of waypoints. We varied the amount and type of state measurement perturbation the robot experienced as well as how well those perturbations were captured in the robot's world model. We developed the following three conditions: 1.
Accurate world model: The actual execution environment contained an area of high transition noise, which we programmed to (1) reduce the robot's intended speed by 50% and (2) add random Gaussian noise (μ = 0, σ = 0.5 m/s) to the robot's velocity actions. The robot's world model was provided accurate a priori information about all obstacles in the environment. This condition represented the baseline case where the robot's world model accurately captured all information about the environment. 2. Unexpected transition noise: The actual execution environment contained an area of high transition noise, which we programmed to (1) reduce the robot's intended speed by 50% and (2) add random Gaussian noise (μ = 0, σ = 0.5 m/s) to the robot's velocity actions. The robot could not directly sense the area, nor did the robot's world model contain any a priori information about it. This condition represented a case where the robot experienced increasingly unexpected position measurements in situ due to the impact of the area. 3. Unexpected blocked path: The actual execution environment contained a wall blocking the robot's ability to navigate to the goal. The robot's world model had no a priori information about this wall. This condition represented a case where the robot experienced both an instantaneous unexpected obstacle measurement when it sensed the wall and increasingly unexpected position measurements as it was unable to continue the navigation due to the wall. All three conditions are shown in Figure 3. It should be noted that the only difference between the accurate world model and unexpected transition noise conditions was the a priori knowledge provided to the world model. To investigate our second and third research questions, we tasked the robot with driving from start (S) to goal (G) along a fixed set of waypoints. The execution environment was a combination of the transition noise and blocked path conditions. Here, the robot's path first took it over the high-transition-noise area, followed by the blocked path area, en route to the goal. The robot's world model had no a priori knowledge about the obstacles. We endowed the robot with a sensor capable of sensing the transition noise area and the wall. As the robot navigated through the environment, it added previously unknown obstacles to its world model as they were sensed. GOA was computed using 10 Monte Carlo runs of the robot's world model, which was updated with all a priori information and in situ observations. For MQA, we restricted the world model's minimum state variance to 1 m to prevent the ET-GOA algorithm from sampling a degenerate (or constant) state distribution in the event that the world model was overly confident. Our conditions consisted of seven ET-GOA triggering thresholds, δ = (0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0).
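As a concrete reading of the transition-noise manipulation (a 50% speed reduction plus zero-mean Gaussian noise with σ = 0.5 m/s applied to velocity actions), the following small Python sketch shows how a commanded velocity could be perturbed inside the noisy area; perturb_velocity and the in_noise_area flag are illustrative assumptions.

import numpy as np

def perturb_velocity(v_cmd: np.ndarray, in_noise_area: bool,
                     rng: np.random.Generator) -> np.ndarray:
    """Apply the simulated transition disturbance: halve the commanded velocity and
    add zero-mean Gaussian noise (sigma = 0.5 m/s) while inside the high-noise area."""
    if not in_noise_area:
        return v_cmd
    return 0.5 * v_cmd + rng.normal(0.0, 0.5, size=v_cmd.shape)

rng = np.random.default_rng(1)
print(perturb_velocity(np.array([0.8, 0.0]), True, rng))  # a perturbed (vx, vy) command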
Live experiment overview Our live experiments were conducted in the University of Colorado Boulder Smead Aerospace Engineering Sciences Autonomous Systems Programming, Evaluation, and Networking (ASPEN) Laboratory.We used a Clearpath Jackal equipped with an onboard computer, wireless communication, and front-facing camera for basic object detection.We affixed AR tags on the obstacles, which gave the robot accurate measurements of the obstacle type and reduced any confounds relating to the accuracy of detection algorithms.The ET-GOA algorithm as well as all task planning was conducted off board on an Ubuntu 20.04 laptop with an Intel Core i7 2.3 GHz CPU, 16 GB RAM, and NVIDIA RTX A3000 GPU.All communication between the robot, camera, and laptop was conducted over the Robot Operating System (ROS), a popular robotics middleware (Quigley et al., 2009).The robot's mission area was a 4 × 10 m area within the ASPEN Laboratory equipped with a VICON camera system, which was used for accurate position estimates.Similar to the simulated robot, the live robot's physical capabilities were limited to basic waypoint following and emergency stopping, and deviation from the planned waypoints would require human control. Our live evaluation of MQA mirrored the simulation conditions covered in Section 4.3.Our first two conditions evaluated expected and unexpected transition noise.Instead of simulated transition noise, the real environment contained a set of uneven sandbags along the robot's prescribed path, which created transition disturbances as the robot drove over them.Under the accurate world model condition, we provided the robot's world model accurate a priori information about the sandbag location and impact to velocity.Under the unexpected transition noise condition, we did not provide the robot any a priori information about the sandbags.Our third condition, unexpected blocked path, evaluated the MQA response when the robot could not continue on its prescribed path.Instead of a simulated wall blocking the path, the real environment contained a large cardboard box blocking the prescribed path.An ARTag was affixed to the box, which helped the robot identify the obstacles and update its world model accordingly. For each episode, the robot attempted to drive approximately 4 m, during which it experienced the condition.All three conditions are shown in Figure 4. Similar to our simulation experiments, the accurate world model condition was identical to the unexpected transition noise scenario, except that the world model had full knowledge about the sandbags. Measures We used three primary measures to evaluate MQA and the ET-GOA algorithm: 1. Model quality response: The goal of this measure was to understand how well MQA captured state measurement perturbations at various times throughout the task execution and with varying amounts of alignment between the world model and reality.We measured the raw MQA value at each time step and computed the mean across the task.2. Computational cost: The goal of this measure was to understand the computational efficiency of ET-GOA.We measured the time in seconds that the robot spent executing a selfassessment (Model Quality Assessment and Generalized Outcome Assessment) during the duration of the task.3. 
3. Outcome assessment accuracy: The goal of this measure was to understand how accurate the ET-GOA-triggered Generalized Outcome Assessments were. We computed the mean squared error (MSE) between the ET-GOA-triggered Generalized Outcome Assessments and a ground truth periodic GOA at each time step. For time steps where ET-GOA did not trigger an updated Outcome Assessment, we reused the last ET-GOA-computed Outcome Assessment in the MSE computation.

We utilized statistical significance testing in our analysis of Research Question 1. For analyzing a main effect across conditions, we performed a one-way analysis of variance (ANOVA), measuring the effect size by partial eta-squared (ηp²). To analyze differences between individual conditions, we utilized Tukey's honestly significant difference (HSD) test with Cohen's d effect size measure. For all statistical testing, we set α = 0.05. For Research Questions 2 and 3, we analyzed the correlation across thresholds using Pearson's correlation. For Research Question 4, we provide a higher-level analysis and discussion of our results.

Model Quality Assessment response in simulated scenarios

We executed 20 episodes per condition, measured the MQA response at each time step, and then computed the mean MQA across each task. A one-way ANOVA indicated a significant main effect across the three conditions, F(2, 57) = 2464.6, p < 0.001, ηp² = 0.99. Further analysis using Tukey's HSD test showed a significant decrease in MQA between the accurate world model condition (M = 0.88) and the unexpected transition noise condition (M = 0.16), p < 0.001, d = 18.2, a significant decrease in MQA between the accurate world model condition and the unexpected blocked path condition (M = 0.08), p < 0.001, d = 20.1, and a significant decrease in MQA between the unexpected transition noise condition and the unexpected blocked path condition, p < 0.001, d = 1.9. This supported our hypothesis that MQA could capture unexpected perturbations to the robot's state. Additionally, these results showed that different types of perturbations elicited different MQA responses. A plot of the mean MQA response across the three simulated conditions is given in Figure 5.

FIGURE 5 Boxplot showing the MQA response across the three simulated conditions. The whiskers indicate the first quartile ±1.5× interquartile range. We observed that the unexpected transition noise and unexpected blocked path conditions showed significantly lower MQA than when the robot was given an accurate world model.

FIGURE 6 Plot showing ET-GOA evaluated across seven triggering thresholds. The orange triangles indicate the mean squared error, which decreases with an increasing threshold. The purple squares indicate the percentage of task time spent running the ET-GOA algorithm, which increases with an increasing threshold. There is a trade-off between ET-GOA self-assessment accuracy and the computational cost of executing said self-assessments.

ET-GOA response in simulated scenarios

To understand the accuracy of the ET-GOA algorithm, we computed the MSE between the ET-GOA-triggered confidence
predictions and a ground truth GOA at each time step. Note that lower MSE equates to more accurate predictions. The results of Pearson's correlation indicated that there was a significant negative correlation between the δ threshold and GOA error, r(138) = −0.75, p < 0.001, R² = 0.57. This can be seen as the orange triangles in Figure 6. The lower thresholds rarely triggered GOA, while higher thresholds captured in situ competency changes by triggering GOA more often. The higher the rate of triggered GOA, the more in line with the ground truth the robot's in situ performance predictions were. These findings supported our second hypothesis that GOA accuracy would increase (error would decrease) proportionally to the triggering threshold.

We measured the computational cost as the percentage of task time in seconds that the robot spent running the ET-GOA algorithm, in both the periodic MQA assessment and the triggered GOA portions. We found a significant positive correlation between the δ threshold and computational cost, r(138) = 0.86, p < 0.001, R² = 0.74. This can be seen as the purple squares in Figure 6. Lower thresholds led to lower computational cost, and higher thresholds led to higher computational cost. At the extremes, a threshold of δ = 0 led to exactly one a priori self-assessment because the sensitivity of the trigger was at its minimum, and a threshold of δ = 1 induced behavior similar to a periodic assessment because the sensitivity of the trigger was at its maximum. Further analysis revealed that the per-observation MQA was quite fast (M = 0.003 s), while the triggered GOA was the computational bottleneck (M = 46.17 s) of the algorithm. This is not unexpected, as the GOA algorithm executed several Monte Carlo rollouts of a high-fidelity simulator each time it was triggered by MQA. Additionally, the goal of MQA is to limit the computational expenditure of robot competency assessment by triggering outcome assessments only when necessary. These findings supported our third hypothesis that the computational cost of ET-GOA increased proportionally to the triggering threshold.
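A minimal sketch of how this error measure can be computed, holding the last triggered assessment between triggers as described above, is given below. The data structures are assumed for illustration and are not taken from the paper's code.

```python
import numpy as np

def goa_tracking_mse(triggered_goa, periodic_goa):
    """MSE between event-triggered GOA and the periodic ground-truth GOA.

    triggered_goa : dict mapping time step -> GOA value at steps where ET-GOA fired
    periodic_goa  : array of ground-truth GOA values, one per time step
    Between triggers, the most recent ET-GOA value is reused (held constant).
    """
    periodic_goa = np.asarray(periodic_goa, dtype=float)
    held = np.empty_like(periodic_goa)
    last = triggered_goa.get(0, periodic_goa[0])   # an a priori assessment is assumed at t = 0
    for t in range(len(periodic_goa)):
        last = triggered_goa.get(t, last)          # reuse last triggered value if no trigger at t
        held[t] = last
    return float(np.mean((held - periodic_goa) ** 2))
```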
Figure 7 shows all of our simulation data in a single plot. Each subplot shows the data for one threshold we examined. For a given threshold, we plotted the MQA and GOA over autonomous driving time, along with the 1σ error bounds. Recall that autonomous driving time ignores the time the robot spends running GOA and allows for a straightforward comparison of the ground truth and ET-GOA-triggered assessments at each step. The δ threshold for the event triggering is shown as a red dotted line on the Model Quality Assessment plot. The periodic GOA (ground truth) is shown as a series of black circles on the Outcome Assessment plot. The ground truth indicates that the robot should initially be quite confident in task success. However, at approximately time t = 20 s, when the robot encounters (and learns of) the high-transition-noise area, its confidence should decrease because Monte Carlo simulations of that area result in a lower probability of success. At approximately t = 40 s, the robot encounters a previously unknown wall. With knowledge of the wall blocking the path, the robot's confidence is now essentially zero. Moving left to right and top to bottom, we can see that the number of triggers increases as the threshold increases. The increase in triggers causes an increase in in situ GOA assessments. The higher frequency of assessments leads to a more accurate overall picture of task outcome confidence, but at the cost of increased computation time, as discussed earlier. The increase in accuracy can be seen in the plots as the triggered GOA tightly enveloping the periodic ground truth GOA.

Our simulation experiments helped us answer our first three research questions. First, we found evidence supporting our hypothesis that MQA could capture unexpected state perturbations across three scenarios in which the world model had varying knowledge of the environment. Second, we found evidence supporting our hypothesis that GOA accuracy would increase proportionally to the triggering threshold. Third, we found evidence supporting our hypothesis that computational cost would increase proportionally to the triggering threshold. In investigating the second and third hypotheses, we found that there is a distinct trade-off designers must make with respect to computational cost and GOA prediction accuracy. Additionally, both the MQA and the ET-GOA algorithm showed the behavior we expected within the experiments.
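The statistical pipeline used in these comparisons (one-way ANOVA with an eta-squared effect size, Tukey's HSD post hoc tests, and Pearson correlations across thresholds) can be reproduced with standard Python tooling. The sketch below is illustrative only: the per-episode arrays are hypothetical stand-ins for the paper's data, and the group labels are assumptions.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-episode mean MQA values, 20 episodes per condition.
accurate    = np.random.default_rng(0).normal(0.88, 0.02, 20)
trans_noise = np.random.default_rng(1).normal(0.16, 0.02, 20)
blocked     = np.random.default_rng(2).normal(0.08, 0.02, 20)

# One-way ANOVA for the main effect of condition.
F, p = stats.f_oneway(accurate, trans_noise, blocked)

# Effect size: eta-squared = SS_between / SS_total (equal to partial eta-squared in a one-way design).
groups = [accurate, trans_noise, blocked]
all_values = np.concatenate(groups)
grand_mean = all_values.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
eta_sq = ss_between / ((all_values - grand_mean) ** 2).sum()

# Pairwise comparisons with Tukey's HSD at alpha = 0.05.
labels = ["accurate"] * 20 + ["trans_noise"] * 20 + ["blocked"] * 20
print(pairwise_tukeyhsd(endog=all_values, groups=labels, alpha=0.05))

# Correlation across thresholds, e.g. threshold vs. GOA error (arrays collected per episode):
# r, p = stats.pearsonr(thresholds, goa_errors)
```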
Model quality response in live scenarios

We executed 20 episodes per condition, measured the MQA response at each time step, and then computed the mean MQA across each task. We found a significant effect of condition on MQA, F(2, 57) = 232.7, p < 0.001, ηp² = 0.89. Further analysis using Tukey's HSD test showed a significant decrease in MQA between the accurate world model condition (M = 0.91) and the unexpected transition noise condition (M = 0.60), p < 0.001, d = 3.3, a significant decrease in MQA between the accurate world model condition and the unexpected blocked path condition (M = 0.26), p < 0.001, d = 6.8, and a significant decrease in MQA between the unexpected transition noise condition and the unexpected blocked path condition, p < 0.001, d = 3.5. This supports our hypothesis that the MQA response on a live robot captured unexpected perturbations to its state. We observed behavior similar to that of our simulation experiments, albeit with much smaller effect sizes. These smaller effect sizes are most likely due to the mean MQA being slightly closer across the live conditions than across the simulated conditions. A plot of the mean MQA response across the three live conditions is given in Figure 8.

FIGURE 8 Boxplot showing the MQA response across the three live conditions. The whiskers indicate the first quartile ±1.5× interquartile range. We observed that the unexpected transition noise and unexpected blocked path conditions showed significantly lower MQA than when the robot was given an accurate world model.

Our live experiments helped us answer Research Question 4. We found evidence to support our hypothesis that MQA would show a similar response between the simulation and live studies. We observed that MQA did, in fact, perform in line with our expectations across the three scenarios. This reinforced our belief that MQA and the ET-GOA algorithm could be used on live robot platforms to calibrate a human supervisor's understanding of robot competency and facilitate supervisor-initiated autonomy-adjustment decisions.

A demonstration and discussion of ET-GOA for autonomy adjustment

Our experiments showed that (1) the Model Quality Assessment can capture events that were unexpected with respect to the robot's world model and (2) the ET-GOA algorithm can respond to changes in Model Quality Assessment with updated task outcome assessments. However, to understand how ET-GOA may add value to human-robot teaming, we must evaluate it with a human in the loop. To that end, we developed a proof-of-concept scenario where a human-robot team was tasked with navigating the robot from point A to point B. The team would be faced with in situ events that impacted the robot's competency, and the human supervisor would need to make autonomy-adjustment decisions in order to achieve the task goal.
The role of the human supervisor was played by the first author of this manuscript. We used the same Clearpath Jackal from our previous experiments. The robot had access (via ROS) to the ET-GOA capability, which enabled it to autonomously and selectively assess and report its outcome competency to the human supervisor. The scenario required the robot to autonomously plan and follow a set of waypoints through a virtually constrained space. There were two autonomy levels available to the team: (1) autonomous control, where the robot's autonomy generated velocity commands between waypoints, and (2) human control, where the supervisor teleoperated the robot using video from the front-facing camera and a PlayStation controller. A single in situ popup obstacle blocked the robot's ability to complete the navigation task. In other words, the robot was not capable of navigating around this obstacle and would require assistance. This is similar to real-life situations where falling debris or unexpected craters block the robot's path. The popup obstacle put the team in the position of needing temporary ad hoc autonomy adjustment: from autonomous control to human control as the supervisor helped the robot around the obstacle, and from human control back to autonomous control once the robot was back on a traversable path. Figure 9 provides an annotated image of the demonstration area. The robot is shown in the foreground, and the approximate positions of the start (S) and goal (G) are shown in orange ovals. The approximate initial path is shown as a black dashed line. A red oval depicts the area of the popup obstacle. The obstacle itself was a cardboard box with a set of AR tags that the robot used to detect the obstacle. The box was placed to block the robot when it was approximately at the second waypoint. The yellow arrow depicts the approximate location and direction of the temporary teleoperation by the supervisor to help the robot around the obstacle.

Robot supervisory control, planning, and competency assessment reports were presented to the supervisor through the user interface shown in Figure 10. The left panel shows the robot's live-state telemetry data, which included position, orientation, the next waypoint, and the current mission time in seconds. There was input for choosing the goal location, generating a waypoint plan, and selecting the autonomy level (robotic autonomy or teleoperation). The center panel displayed a simple map of the mission area; the waypoints are denoted by black circles connected by a black line, the robot's current position is denoted by a blue circle, and the goal location is denoted by a green circle. The right panel showed the robot's real-time self-confidence for both the GOA and MQA through the ET-GOA algorithm. The right panel also had buttons to set the GOA outcome threshold and to manually query the autonomy for a competency report based on GOA. The supervisor was also given access to a PlayStation controller for ad hoc teleoperation.
We executed 10 demonstration episodes. Each episode utilized the same start/end point but a unique set of waypoints generated by the RRT planner. We chose the ET-GOA threshold of δ = 0.05 because it showed a reasonable trade-off between GOA accuracy and algorithm runtime in our initial studies. The GOA outcome of interest was total task time, i.e., the robot's self-confidence that it could navigate to the goal in z* = 100 s. The total task time was equal to the time spent driving plus the time spent assessing. We chose to investigate this outcome to facilitate a higher-level analysis of ET-GOA in a human-in-the-loop system. Figure 11 shows the aggregate data from our demonstrations, where the numbers indicate the approximate locations of events and autonomy adjustments during the task. The robot begins the task in autonomous control mode at t = 0. At (1), the popup obstacle is placed in front of the robot. The robot's camera detects the obstacle and updates the world model by emplacing a simulated obstacle approximately in front of the robot. Additionally, the robot executes a temporary stop while the obstacle is blocking it. At (2), the ET-GOA algorithm triggers due to the increasingly unexpected state measurements: the world model's initial predictions had the robot on a continuous trajectory, while the robot's real state was stationary because it was blocked by the obstacle. The robot then executed GOA, which took into account the new obstacle added to the world model. At (3), the robot reported "very bad confidence" in navigating to the goal, at which point the supervisor changed the autonomy level to teleoperation control and drove the robot around the obstacle. At (4), the supervisor returned control to the robot, which then executed GOA for an updated assessment. At (5), the robot reported "very good confidence," and the supervisor approved it to continue the remainder of the task in autonomous control mode. At (6), the robot arrived safely and successfully at the goal. This live, albeit proof-of-concept, demonstration showed that a robot equipped with the Event-Triggered Generalized Outcome Assessment algorithm could monitor and report its in situ competency in a dynamic environment. The supervisor could, in turn, make informed autonomy-adjustment decisions based on the robot's reported competency.

Discussion

Factorized Machine Self-Confidence is a framework and set of metrics that can enable autonomous robots to self-assess and communicate competency to human supervisors. Our results indicate that the proposed MQA metric, coupled with a higher-level Generalized Outcome Assessment (GOA), may be a valuable method to improve ad hoc autonomy adjustments and general decision-making within a human-robot team. We found that the Model Quality Assessment can detect in situ misalignment between state measurements and world model predictions in both simulations and real robot operations. When there was alignment between predictions and reality, MQA tended toward 1, indicating high alignment. This was evident in the case where known obstacles were modeled by the world model (i.e., the baseline conditions). Conversely, when there was misalignment between predictions and reality (i.e., in the other conditions), MQA tended toward 0, indicating high misalignment. We take misalignment to indicate that the robot's predictions may not be valid, and as such, the robot may not be as competent as previously believed.
MQA provides us with an indication of when competency may have changed, but not how it has changed. To understand the how, we developed the ET-GOA algorithm, which uses MQA as a trigger for the robot to analyze higher-level task outcomes using GOA. Because GOA samples from a possibly complex world model, it can be computationally expensive, so we want to limit re-assessments during task execution. McGinley (2022) proposed a potential avenue to reduce the computational complexity of GOA through approximation and selective sampling of the world model; however, whether these techniques translate to live platforms and dynamic operational environments is an open question.

The strong reliance on the world model paradigm presents several challenges as well. First and foremost is the existence of a world model. We provided several examples of world models in Section 3.1; however, most were a significant simplification of the "real world." Future work toward competency assessments using world models will have to contend with trade-offs. On the one hand, simple environments with simple dynamics are easier to develop and simulate, but that simplicity may lead to inaccurate assessments. On the other hand, world models with realistic environments and dynamics could facilitate accurate robot self-assessments but may be difficult to develop and simulate. Given a world model, a second challenge is how to efficiently update it. In this work, we utilized a camera to help the robot detect and place obstacles within its world model, but this assumes that the robot is capable of detecting and understanding these obstacles in the first place. Other common approaches, such as LIDAR and RADAR, might have better detection ability but may not capture contextual and semantic information about the obstacle (for example, a concrete block and a cardboard box may be the same size, but one can be driven over more easily than the other). An interesting direction could be for the supervisor to help the robot fill in any blanks caused by model simplifications or sensor limitations. This could provide a more fine-grained and collaborative way for the robot to understand the world around it and how that world impacts its competency.
We found that there are trade-offs in the choice of ET-GOA parameters a designer must make. In our experiments, a δ threshold close to zero provided a good trade-off between the accuracy of the triggered outcome assessments and the computational cost involved in computing those assessments. However, the choice of triggering threshold could be mission-dependent, based on how cautious the designer requires the robot to be. For example, a rover on Mars might have a higher ET-GOA threshold (more sensitive) to capture unexpected events early and often, while a food delivery robot in San Francisco might have a lower ET-GOA threshold (less sensitive) to prevent unneeded delays due to overly cautious re-assessments. Additionally, we did not vary the number or type of indicators used for ET-GOA. We believe that these may be task-dependent as well. Future work could investigate the impact of different ET-GOA indicators, as well as how to choose appropriate triggering thresholds. The choice of indicator and threshold could even be made dynamically, based on factors such as mission needs or obstacle locations (Theurkauf et al., 2023).

Lastly, it is important to understand how knowledge about a robot's competency impacts decision-making. This work is focused on improving a human supervisor's autonomy-adjustment decisions within a human-robot team, i.e., calibrating the supervisor as to when they should rely on, or trust, the robot to operate with some degree of autonomy. Our human-in-the-loop demonstration shows that the ET-GOA algorithm can report changes in competency, which in turn may help the supervisor understand when the robot should and should not operate autonomously, and thus when the robot would and would not need human assistance. Future work is needed to perform full human-subject studies in more complex deployments to validate the ET-GOA algorithm, the interaction it facilitates, and the general usability of competency assessment in realistic, live scenarios. Our work here utilized two autonomy levels: human control and robot control. However, one could imagine that monitoring changes in robot competency may help autonomy adjustment at a finer-grained level. For example, the robot reporting "fair" confidence could be a signal that the supervisor should monitor the robot's progress more closely but not necessarily execute a control takeover. Future work could investigate how competency reporting may facilitate more fluid autonomy adjustments.
Conclusion

In this work, we investigated using self-assessed robot competency information to facilitate ad hoc autonomy adjustments for human-robot teams operating in dynamic environments. We presented a new MQA metric for the Factorized Machine Self-Confidence framework. We then developed an Event-Triggered Generalized Outcome Assessment algorithm, which used real-time computations of MQA to trigger a GOA of the robot's confidence in achieving high-level task objectives. The GOA can be used to assist human supervisors in making in situ, ad hoc autonomy-adjustment decisions. We presented simulated and live results showing that MQA could capture unexpected perturbations to the robot state and that the ET-GOA algorithm could provide an accurate online self-assessment capability for an autonomous robot. We concluded with a proof-of-concept demonstration and discussion of using ET-GOA in a human-in-the-loop system, which enabled a human supervisor to make ad hoc autonomy adjustments based on the robot's reported competency. We believe that robot self-confidence can provide future human-robot teams with valuable information about the competency of the robot, which can in turn improve human decision-making and enable more effective human-robot teams.

FIGURE 1 Factorized Machine Self-Confidence framework adapted from Israelsen (2019). Planning and execution components (tan boxes, black arrows) are assessed across five factors (blue rounded boxes, blue dashed lines) to assess the robot's competency.

FIGURE 2 System diagram for the Event-Triggered Generalized Outcome Assessment algorithm. The left side shows the a priori or before-task behavior: the initial outcome assessment (GOA), competency reporting, and storing of world model predictions. The right side shows the in situ or during-task behavior: the Model Quality Assessment at each time step and, if triggered, in situ outcome assessments, competency reporting, and updating of world model predictions.

FIGURE 3 Simulation study environment configurations with the approximate path shown as a black dotted line from start (S) to goal (G). (A) The robot's world model had accurate knowledge about an area of high transition noise. (B) The robot's world model did not have accurate knowledge about an area of high transition noise. (C) The robot's world model did not have knowledge about a wall blocking the path to the goal. The contents of the robot's world model can be seen in each robot's thought bubble. (A) Accurate world model condition. (B) Unexpected transition noise condition. (C) Unexpected blocked path condition.

FIGURE 4 Live study environment configurations with the approximate path shown as a black dotted line from start (S) to goal (G). (A) The baseline condition contained a set of sandbags that the robot's world model was given full knowledge about. (B) The area contained a set of sandbags that the robot's world model did not have knowledge of. (C) A popup obstacle was placed in front of the robot shortly after the task began, which the robot's world model had no a priori knowledge about. The contents of the world model can be observed in each robot's thought bubble. (A) Accurate world model condition. (B) Unexpected transition noise condition. (C) Unexpected blocked path condition.

FIGURE 7 MQA and GOA measured across several thresholds during task executions. The black dashed line indicates the MQA threshold for triggering GOA.
FIGURE 9 Annotated image of the ET-GOA demonstration area. The robot was tasked with navigating from the start (S) to the goal (G). A popup obstacle, shown in red, caused the ET-GOA algorithm to report low confidence. A temporary autonomy adjustment to teleoperation occurred approximately along the yellow dashed line, which helped the robot around the obstacle. The thought bubble shows the robot's world model at t = 0, prior to the robot learning of the obstacle, and at t = n, after the popup obstacle was observed and added to the world model.

FIGURE 10 User interface for the ET-GOA demonstration. The left panel shows the robot's real-time state and provides controls for planning as well as for changing the autonomy level between robot control (Auto) and human control (Telop). The center panel shows the mission area, the waypoints/path as black circles, the robot as a blue circle, and the goal as a green circle. The right panel shows the robot's self-assessment metrics mapped to a semantic label and includes controls to manually trigger GOA. A PlayStation controller and video interface (not shown) were used during teleoperation.

FIGURE 11 Model Quality Assessment and GOA measured across 10 live demonstrations of the ET-GOA algorithm. The robot begins with high confidence due to the world model indicating an easily navigable environment. (1) The popup obstacle appears in front of the robot; (2) ET-GOA triggers and begins GOA; (3) GOA completes, the robot reports "very bad confidence," and the supervisor takes control to help the robot around the obstacle; (4) the supervisor returns control to the robot, and the robot begins GOA; (5) the robot reports "very good confidence," and the supervisor allows the robot to continue autonomous operation; and (6) the robot successfully arrives at the goal.
13,770.4
2024-01-04T00:00:00.000
[ "Engineering", "Computer Science" ]
Comparative Study of Bankruptcy Prediction Models

Early indication of Bankruptcy is important for a company. If companies are aware of their potential for Bankruptcy, they can take preventive action to anticipate it. In order to detect the potential for a Bankruptcy, a company can utilize a Bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for Bankruptcy prediction. It is expected that the comparison result will provide insight into the most robust method for further research. According to the comparative study of the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a Hybrid of MLP + Multiple Linear Regression), it can be concluded that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%. The result suggests that further development of bankruptcy prediction models could use an improvement or modification of fuzzy k-NN.

Introduction

In business, a company faces two possibilities: gaining profit or incurring loss. In this highly competitive era, early warning of a Bankruptcy is important to prevent the worst condition for the company. In order to predict the Bankruptcy, a company can employ relevant data such as total assets, inventory, profit and financial deficiency. Those data will give maximum advantage when their pattern is interpretable. With the objective of discovering the Bankruptcy pattern, a machine learning method can be employed. Specifically, the method will classify whether the pattern in the company data supports an indication of Bankruptcy or not.

Recently, several machine learning methods have been proposed for Bankruptcy prediction. Some of them are k-nearest neighbour, neural networks and support vector machines. Those methods come with their advantages and disadvantages. In several cases, neural networks and support vector machines are superior to other methods. For example, a support vector machine is exploited in the detection of diabetes mellitus [1] and a neural network is employed in the classification of mobile robot navigation [2]. The superiority is because of their capability in generalization. However, their models are difficult to interpret. On the contrary, a model that uses k-nearest neighbour is easier to interpret and its computation is simple.

For the Bankruptcy prediction model, Li et al. [6] proposed a fuzzy k-NN model and Wieslaw et al.
[3] proposed a statistical-based model. Still, there is room for improvement in order to obtain a better model. The main contribution of this paper is conducting a comparative study to evaluate the most suitable model for the Bankruptcy prediction problem. The comparative result can be used as a consideration for further research on the Bankruptcy prediction problem. In this comparative study, the use of k-nearest neighbour, neural network and support vector machine models for prediction will be evaluated and compared. In addition, variants of these methods will be evaluated as well. The variant methods are fuzzy k-nearest neighbour, bagging nearest neighbour support vector machine, and a hybrid model of multilayer perceptron and multiple linear regression. By considering the strengths and drawbacks of each method, this study will explore which method is suitable for a Bankruptcy prediction model.

The organization of the paper is as follows: the next section describes the dataset, followed by an explanation of the machine learning methods in the third section. Subsequently, the result of the comparative study is illustrated in the fourth section. Finally, the last section presents the conclusion and discussion.

Methods

This section describes the methods that are compared in this study, followed by the dataset.

K-Nearest Neighbour

K-Nearest Neighbour (KNN) is a non-parametric classification method. Computationally, it is simpler than other methods such as Support Vector Machine (SVM) and Artificial Neural Network (ANN). In order to classify, KNN requires three parameters: the dataset, a distance metric and k (the number of nearest neighbours) [8].

Similarity between a record's attributes and those of its nearest neighbours can be computed using the Euclidean distance. The majority class among the neighbours will be transferred as the predicted class. If a record is represented as a vector (x1, x2, ..., xn), then the Euclidean distance between two records is computed as follows [8]:

d(xi, xj) = sqrt( Σ_{k=1..n} (xik − xjk)² )   (1)

The value d(xi, xj) represents the distance between a record and its neighbours. The computed distances are sorted in ascending order. Next, choose the k smallest distances as the k nearest distances. The classes of the records among the k nearest neighbours are then used for class prediction. The majority class in that set will be transferred to the predicted data.
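As a minimal illustration of the procedure just described (Euclidean distance, ascending sort, majority vote), the following Python sketch implements plain k-NN. The array shapes and the default k are illustrative assumptions and are not taken from the paper.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    """Plain k-NN with Euclidean distance (Eq. 1) and a majority vote."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train)
    diffs = X_train - np.asarray(x, dtype=float)
    dists = np.sqrt((diffs ** 2).sum(axis=1))       # distance from x to every training record
    nearest = np.argsort(dists)[:k]                 # indices of the k smallest distances
    votes = Counter(y_train[nearest].tolist())      # classes of the k nearest neighbours
    return votes.most_common(1)[0][0]               # majority class becomes the prediction
```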
Fuzzy K-Nearest Neighbour

In 1985, Keller proposed a KNN method with fuzzy logic, later called Fuzzy k-Nearest Neighbour [4]. The fuzzy logic is exploited to define the membership degree of each data point in each category, as described in the following formula [4]:

u_i(x) = ( Σ_{j=1..k} u_ij · ||x − xj||^(−2/(m−1)) ) / ( Σ_{j=1..k} ||x − xj||^(−2/(m−1)) )   (2)

The variable i denotes the index of the classes, j runs over the k neighbours, and m, with a value in (1, ∞), is the fuzzy strength parameter that defines the weight or membership degree of data x. The Euclidean distance between x and the j-th neighbour is denoted ||x − xj||. The membership function of xj to each class is defined as uij in equation (3) [4]. In addition, nj is the number of neighbours belonging to the j-th class. Equation (3) is subject to the constraint in equation (4) [4]. After a data point is evaluated using these formulas, it is classified into a class according to its membership degree in the corresponding class (in this case, the positive class means bankrupt and the negative class means not bankrupt) [5]:

C(x) = arg max(u1(x), u2(x))

Support Vector Machine

Support vector machines (SVM) perform classification by finding a hyperplane with the largest margin [8]. A hyperplane separates one class from another. The margin is the distance between the hyperplane and the data closest to the hyperplane. The data from each class that are closest to the hyperplane are defined as support vectors [8].

In order to generate SVM models, using training data xi ∈ R^n and class labels yi ∈ {−1, 1}, SVM finds the hyperplane with the largest margin, defined by this equation [8]:

w · x + b = 0   (6)

To maximize the margin, an SVM should satisfy this condition [8]:

yi (w · xi + b) ≥ 1   (7)

Here xi is a training record, yi is its class label, and w and b are parameters to be determined in the training process. Equation (7) is adjusted using slack variables in order to handle misclassification cases; the adjusted formula is defined in equation (8) [8]. To solve the optimization problem, Lagrange multipliers (α) are introduced as in equation (9). Because the vector w may be high dimensional, equation (9) is transformed into its dual form (equation (10)), the decision function is defined in equation (11), and the value of the b parameter is calculated using equation (12) [8].

Bagging Nearest Neighbour Support Vector Machine (BNNSVM)

In order to create a BNNSVM model, a Nearest Neighbour Support Vector Machine (NNSVM) model is created first. The procedure is as follows [6]:

1. Training data are divided into a train set (trs) and a test set (ts) using a cross-validation process.
2. Find the k nearest neighbours for each record in ts. These k nearest neighbours are defined as ts_nns_bd.
3. Create a classification model from ts_nns_bd. The model is specified as NNSVM.
4. Perform prediction on the testing data using the NNSVM model.

Subsequently, a bagging algorithm is integrated with the NNSVM model to form BNNSVM. The computation of the BNNSVM model is defined in the following steps [6]:

1. Create 10 new base training sets from the trs data. In order to generate the base training sets, perform sampling with replacement.
2. Based on the 10 base training sets from step 1, generate 10 NNSVM models.
3. Perform the prediction task using the 10 NNSVM models from step 2.
4. For each record in the test set, vote on the prediction result using the NNSVM models.
5. The final prediction result is the class that wins the vote in step 4. If the voting result is 'negative' then the data is predicted as 'negative', and vice versa for a 'positive' result.

Multilayer Perceptron (MLP)

The Multilayer Perceptron (MLP) method is an ANN method with an architecture of at least 3 layers. Those 3 layers are the input layer, hidden layer and output layer. Similar to other ANN methods, this method aims to calculate the weight vectors. The weight vector will be fitted to the training data. To update the weight vector, MLP uses the backpropagation algorithm. The activation function used in this MLP model is the Sigmoid function.

In the prediction stage, a company's data x will be classified as positive (the company has bankruptcy potential) or negative (the company is in fine condition) according to equation (13). In equation (13), wi is the weight vector from the training process, w0 is the bias and n is the feature dimension of the data [9].
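To make the prediction stage concrete, below is a minimal sketch of a sigmoid forward pass for a 3-layer MLP of the kind described above. The layer sizes, weight names and the 0.5 decision cut-off are illustrative assumptions; the paper only specifies the sigmoid activation and the positive/negative labelling.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W_hidden, b_hidden, w_out, b_out):
    """Forward pass of a 3-layer MLP (input -> hidden -> single sigmoid output unit)."""
    o_h = sigmoid(W_hidden @ x + b_hidden)   # hidden-layer activations (the oh values)
    o_k = sigmoid(w_out @ o_h + b_out)       # output-layer activation (the ok value)
    return o_h, o_k

def classify(o_k, threshold=0.5):
    """Positive = bankruptcy potential, negative = healthy company (cut-off is assumed)."""
    return "positive" if o_k >= threshold else "negative"
```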
In the training stage, the weight vectors are updated in two steps. The first step performs the initialization of the weight vectors, both in the input layer and in the hidden layer. Afterwards, forward propagation is computed to obtain the network output, proceeding from the input layer through the hidden layer to the output layer. When the value (ok) from the output layer and the value (oh) from the hidden layer are obtained, the backpropagation procedure is performed to calculate the error (δk) in the output layer (equation 14) and the error (δh) in the hidden layer (equation 15). In equation (15), wkh is the weight of the hidden unit connected to the output unit [9]. Based on this error calculation, the weight vector at the input layer (equation 16) and the weight vector at the hidden layer (equation 17) are updated. The number of iterations is determined by the number of epochs [9].

The Hybrid of MLP with Multiple Linear Regression (MLP+MLR)

This hybrid classification model is generated in two steps. The first step computes the Multiple Linear Regression (MLR) model. The result of the model is used as a new feature for
2,199.4
2013-09-01T00:00:00.000
[ "Computer Science", "Business" ]
METHODOLOGICAL PROBLEMS BEFORE REGIONAL DEVELOPMENT IN THE CONDITIONS OF POST COVID-19 GOVERNANCE

This article is devoted to the theoretical problems of regional development in the conditions of post COVID-19 management. The text emphasizes the fundamental nature of regional development as a new scientific field, which has its own accumulation of knowledge, based on the social sciences and natural sciences. The article focuses on problems with clarifying the place of regional development in post-crisis management. It is accepted that regional development has an important role in managing territorial problems and achieving a pulling development of individual spatial areas. In this direction, the very functional role of regional development brings to the fore the need for targeted measures and policies of a socio-economic nature. Some aspects of the regional development of scientific research are also presented. Emphasis is placed on the connections of regional development with other scientific directions and its role for effective geo-urban development, local self-government and local administration, in search of complexity, adaptability, competitiveness and sustainability. This further directed our focus on regional development and on the search for its practical application.

Thus, regional development is changing its intensity and modeling and now requires more dynamism and temporal processes to set new stereotypes for the management of territories of different ranks. Here is the place to frame a claim that regional development is a fundamental science that studies space and territory with a focus on the study of regional achievements in the field of population, territory, organization and management, with an emphasis on solutions of a practical nature (Hartshorne, 1939). On this occasion, it is important to outline the necessary tools achieved by us in practical and theoretical training, related to solving a number of management tasks in regional development. In some cases, regional development can be considered as a system of functional connections with a socio-economic orientation which have a territorial character. Thus, regional development tends to decrease differences, and in other cases, when it is ineffective, the differences increase. In this regard, a number of issues can be raised related to the search for opportunities to expand decentralization and subsidies in the management of regional development, increasingly engaging the regional level in the implementation of a regional development policy, compensating for insufficient "regionalization" and linking national strategies and programs, including those co-financed by EU funds, with the needs and potential of regional and local communities and territories (Georgiyev, 2006). Within this theory, it is important to point out the methodological construction that shows why something is so. On the other hand, the economic system alone is in many cases unable to achieve an acceptable equilibrium. Therefore, when developing policies aimed at a more even distribution of income between people, groups and territories, we need to derive certain methodological patterns. In this direction, economic activity in Europe is highly concentrated in large cities and urban agglomerations. Regional policy proposes easing tensions in large cities by directing part of the economically active population to other areas. The pan-European regional policy allows one country to engage in policies that stimulate economic activity in the territories of other countries.
Solving territorial problems leads to a chain reaction, accumulating positive effects that spread beyond the borders of the individual state, affecting other countries as well (Dimitrov, 2021). During all stages of the development of human society, regional science has had its focus and sharpness and has manifested itself through active processes at the local level. In practice, however, regional development is an unconscious priority, which does not allow us to predict in which direction to go with regional development and how to form a sustainable framework. Here is the place to emphasize that a person cannot manage without knowing the environment in which he lives and carries out his work. Thus, the research of space and territory in the 21st century regains its relevance.

Implementation of new regional development policies

The regional development of Bulgaria before 1989 was mainly determined by the spatial location of economic sectors and the impact of regulations issued by the departments. Improving the territorial organization of the economy is a dynamic process that is constantly evolving. This new reality, in turn, reflects on the development of social relations and the achievements of the various socio-economic parts of the world (Karastoyanov, 2008). This gives impetus to develop more regional processes and phenomena of different orders. In general, regional development has its roots in geography, but geographical science does not remain the same. On the other hand, regional development is gradually asserting its integrity and at the same time strengthening, expanding and deepening its interests and capabilities. At the same time, it gradually sets itself the more complex task of having a functional role that explores the whole complex of socio-economic and natural processes and phenomena, their complex relationships and the interaction between nature and man. Thus, regional development becomes the basis for the development of projects for the economic development of countries, the economy and the development of territories. Regional development reaches the foundation of its subject: the study of the territory in its various states. On the other hand, the problem of the interaction of society with nature comes down to raising a group of questions without a clear and distinct answer. For example: how does society influence nature, or nature society, and, respectively, how is regional development built into these interrelations? This raises the question of what plays a decisive role in the process of interaction, society or nature. In this complex conundrum, regional development is important, and it is designed to answer many of these questions. Thus, from a regional point of view, we accept that nature must be seen as an integral part of public life. It is obvious that, in accordance with dialectical logic, when it comes to the unity of society and nature, one must keep in mind the role of regional development as a structuring system in which both nature and society are its components, not independent entities; they are considered in a certain dependence or commitment. It is natural to conclude that regional development is a kind of bridge between nature and society (Dimov, 2008). But, in addition, the nature of regional development lies in the assessment and analysis of ongoing demographic and socio-economic processes at the national, regional and local levels.
In practice, the existing problems and contradictions at the local level give grounds for seeking concrete solutions, which, however, must be balanced with and open to the national interests of the modern nation state. In this direction, we can assume that regional development in the 21st century is at a turning point, which sets the benchmarks for the next 50 years. This undoubtedly strengthens its applied nature and its complementarity with economic tools and economic management. This makes the post-COVID-19 period from 2022 to 2027 a turning point in the emerging technological change embodying a different geoeconomic environment of socio-economic development. The world is setting new contours of the geoeconomic reality, which will affect not only the main geopolitical players, but also a number of peripheral countries. In this transitional period, the extremely complex and serious problems of structuring regional development in the direction of its complexity and functionality stand out particularly clearly (Karakashev et al., 1989). The focus of regional development can again be placed on conditions of change in historical development and cultural traditions in different parts of Europe. We are witnessing the tolerance of the departmental approach in the policy for regional development and, as a result, a deepening of regional disparities, which, despite the complicated tendencies, have in Bulgaria achieved significant progress, given the uneven regional development and the emerging information change. In practice, this change will embody the model of development of nation-states, which will have to impose new rules and constitutional models of functioning of the systems. This will also affect regional development. The functioning of regional models will set the stereotypes about the different speeds of social change in different spaces and regions. This will exacerbate inequalities in social and economic development. Over time, this problem will become more acute and will increasingly lead to the need for intervention to overcome the discrepancies. Thus, the role of the state will be more that of a regulator than a carrier of change. In this area, the demographic factor will acquire new significance, and the ability of regional business to enforce market rules will prove to be an essential element for the reliable functioning of the systems. On the other hand, the technical and social tasks for the development of very important practical problems with a strong regional character are gradually increasing, and the need for business innovation and new cooperation with science will develop. In this direction, we need to look for the consolidating importance and understanding of the location of regional development in the whole system of sciences. Regional development is defined as an intermediate science, as a kind of integrator of the management, and hence public, block of sciences as well as of the natural block of sciences. This is a great chance for regional development, which gives it priority in solving complex interdisciplinary problems (Dimov, 2008). The question also arises as to what the boundaries and the object of study of regional development are. In principle, they cannot be definitively determined, because most sciences partially cover their object of study. With a certain conditionality, the environment for social development (oikumena), or in other words our environment and the economic activity in it, can be defined as the main object of study of regional development.
To a large extent, our environment has its own geographical nuance, which is why we need to emphasize the connection of regional development with the block of geographical sciences (Hagget, 1979). This largely corresponds to the fact that the geographical environment includes our environment. We can assume that the geographical environment is part of the geosphere, which is largely assimilated by man, involved in active economic activity, and the material basis for the existence of human society. The main task of regional development is the creation of a scientifically based forecast for the development of our environment (including the geographical one), related to human society and the spatial systems of the productive forces, in the conditions of the increasing influence on nature of scientific, technical and technological progress (Petrov, 2015). It follows that regional development is a science of the dynamic spatial systems formed on the earth's surface as a result of the interaction between nature and society, as well as of the laws of their development and management. Regional policy is a key tool for regulating the development of society, bringing balance and bringing regions closer together in terms of living standards. The availability or provision of substantial resources is essential for the management of regional development. The provision of resources for regional development is linked to material, labor and, above all, financial balances at the public level (Cheung, 2005).

Factors influencing the implementation of regional development policies

Without aiming to make a critical analysis of the nature of regional development in the various known scientific approaches and countries related to regional science, it is necessary to note that in the Anglo-Saxon scientific literature the regional paradigm prevails, which develops in different directions (Kimble, 1951). More precisely, the focus of regional development is derived from leading researchers who believe that regional development seeks to obtain complete knowledge of the territorial differentiation of the Earth and, therefore, distinguishes phenomena that change territorially only by their territorial significance or, in other words, by their relation to universal territorial differentiation. The statement that the methodological basis of regional development is the spatial principle, and that the category of space is the leitmotif of any regional study, is becoming increasingly important. This gives grounds for bringing chorology to the fore, accepting three postulates of regional knowledge: the doctrine of the natural resource complex, the chorological or spatial concept, and the theory of management of regional development. In addition, the solutions for the regulation and management of territorial development should be sought in the skillful combination of the means of state regulation with the market mechanism. Regional policy depends on legislative decisions regarding the type of administrative-territorial organization and division of the country, local self-government and administration. In practice, in Bulgaria, regional development as a scientific field is relatively new. It is based on some European scientific schools, which shows a tendency to form a solid foundation for regional research, based on the accumulated theoretical and practical experience of the most developed nation states (Yokomichi, 2005).
Regional development as a scientific field related to the state of territorial systems studies objects that represent a significant complexity at each level: global, regional and local. Therefore, a precise logic of scientific arrangement is needed, which must make the appropriate efforts to turn intuitive understanding into a clear, fundamental theoretical concept of regional development. In this direction, the initial statement must be based on the position that every science uses basic concepts represented by terms. In addition to basic concepts, any scientific theory is based on obvious facts. These simple, elementary facts play a role in regional knowledge similar to that of axioms in mathematics. The difference lies in the fact that from axioms in mathematics the whole of mathematical knowledge can be derived in a purely deductive way, while in regional development it is necessary to constantly supplement and collect new details, facts and data, to analyze and summarize these facts in order to derive results, and to accumulate knowledge to complete the given stage of research. Regional development is based on and uses analytical developments, new methods of informatics and other tools with which it interprets the analyzed facts. It is well known that the process of differentiation dominates in various sciences. A similar process is taking place in regional development. Regional development is internally differentiated and forms a system of sciences, which includes five subsystems: General Regional Studies (complex), Foreign Studies (natural regional complexes), Regional Development Management (management and administration of public space), Regional Economics and Theories of Regional Development. In practice, together with the process of differentiation of regional development, another, opposite process of integration takes place, aimed at the unification of the regional sciences and their internal theoretical integrity. This gives grounds for opening new scientific directions on the basis of regional development, mainly in three directions: Geoecological development, Geourbanistics and urban management, and Strategic planning and forecasting of regional development (Lavrov, 1989). On the other hand, the integration of a fundamental science does not aim at the destruction of its branches and directions created in the process of its differentiation. The aim is to unite their common theoretical concepts. Otherwise, regional development will lose its importance as a basic science and will become a kind of collection of theoretical and applied sciences in the field of management and administration. In principle, integration and differentiation are objective trends in the development of each of the basic sciences, but unlike in other sciences, these processes must be further scientifically substantiated and focused in an indisputable way. As regional development emerges as a new scientific field based on socio-economic and geographical knowledge, which determines its border with the natural and social sciences, there is a need to derive methodological tools for spatial patterns, and these require the competition between socio-economic and natural resource knowledge to outline the territorial and spatial processes and phenomena. Notwithstanding this dichotomy, it can be assumed that regional development in itself is a science of the spatial relations of developing territorial objects and spaces.
In this direction, it is necessary to assume that "regional development" is a functional process of relating to our environment, and hence requires a clearer definition of the content of the spatial relations between the elements of the considered territorial systems (Hartshorne, 1979). Thus, a system and other geo-systems located in a certain area can be analyzed and monitored in order to conduct certain policies on them. These relations, in a purely regional focus, operate between natural and social phenomena that have a territorial definition and regional significance, on which they have a multifaceted impact. In trying to further define the problematic area of regional development, we come to the conclusion that the more urban a territory is, the more conditions are created in it for the implementation of many managerial and administrative measures related to the management and functioning of regional communities. To a large extent, this can give grounds for assuming that regional development is a scientific field that is a superstructure of social management and the socio-geographical sciences. This gives reasons to look for the practical and applied nature of regional development (Naydenov, 2017). The combination of scientific tools related to the manifestations of the individual territory and the measurement of the state of the economy in the region or territorial community determines the scope of management measures and activities that the economically and politically active population is called upon to implement in order to achieve development on a purely local and regional scale. To a certain extent, this can strengthen the scope of regional development by imposing a series of other measures, including administrative and legal ones, to optimize the territorial division or spatial planning. In practice, the administrative-territorial changes related to the merger, division or optimization of administrative-territorial units are a key mechanism for the effective functioning of government and the state, which helps to build a modern and stable state with a certain type of socio-economic system. Anthony Cheung, one of the leading authors on administrative-territorial reforms, suggests that global administrative reforms can be seen as a "new public management" to regulate spatial development processes (Cheung, 2005). This new approach of new public management is a critique of the traditional model of public administration based on state bureaucracy and is expressed in the general failure of effective government management in a territorial aspect, far exceeding the state in which the private interest or the interest of privileged groups dominates the interest of society. Cheung accepted the thesis of a more effective territorial structuring of administrative structures so that they have high regional competencies, in order to be able to prioritize and help local authorities to manage territorial communities more effectively (Cheung, 2005). This view is also supported by Kiyotaka Yokomichi. According to him, after decentralization, within the relevant laws, municipalities are expected to perform all administrative activities independently, by virtue of the principle of independent decision-making and delegation of responsibility. Yokomichi suggests that a real de-concentration of governance could lead to a greater focus on regional issues. The aim is to emphasize the promotion of strengths in the regional specifics of each municipality or territorial community (Yokomichi, 2005).
Imposition of innovations and new practices in the management of regional development Based on the knowledge of physical geography, which deeply comprehends and defines the features of various natural and geographical phenomena with their impact on the anthropogenic factor, regional differences and specifics are derived as an important feature of the territorial system. Such an approach requires the search for more modern approaches in the study of regional specifics and features in the territories of a country. Therefore, it is especially important to obtain and analyze information about the natural environment and its components. This significantly contributes to the development of new areas such as "Geographic Information Systems" (GIS). These new technologies can be used for specific research, resource management, regional and spatial planning, cartography and in more and more areas of human life. Some early developments in Europe are also important for the development of GIS, such as those of the Swedish scientist Hägerstrand, who in 1955 studied the analytical potential of spatial data by taking into account location and population information related to households (Hägerstrand, 1955). The use of database management systems is especially important in modern GIS concepts, as it allows the integration of spatial and non-spatial data. This, in turn, gives impetus to the development of geourbanism and the regional economy. Urbanization is an increase in the urban population, but is also determined by the development of industry and other urban activities. The process of urbanization began with the very beginning of civilization, as well as the creation of cities, but intensified after the industrial revolution and the use of new technologies in agriculture, which has reduced the need for human labor and expanded the services sector in the economy. Urbanization brings with it a whole range of problems, especially when it is expressed as a sudden large concentration of population in a relatively small area. Therefore, there is a need to apply special measures and procedures to help build and organize settlements. Big cities need specially organized water supply, sewerage, food and other food supplies, electricity and telecommunications network. In addition, special attention should be paid to the quality of life in the city, including the fight against pollution, crime and other activities. This gives grounds for regional development to focus on the problems of the regional economy. Therefore, it is necessary to apply a special theoretical approach to territorial development which imposes three groups of problems. Firstly, economies of scale, specific to a certain territory, secondly, the search for advantages arising from the neighborhood to other industries, and thirdly, the urbanization effect, which is expressed in the extremely strong development of urban settlements, concentrating large masses of the population in them. This is related to the location of business companies in the territory with joint use of financial and administrative services, infrastructure, and proximity to the market. Addressing these problems, regional development becomes even more functional, it already covers the development of the economy, the sectoral structure and the built infrastructure, determines the territorial differences between the administrative-territorial units and economic regions and gives answers to a number of related questions. 
The measurement of development through indicators such as: concentration of people on the territory, economic activities located in the territory, working hour income, availability of the industrial, production and social infrastructure, the living status and ecological situation are an additional focus on regional science (Hägerstrand, 1992). However, the following indicators are the most widespread and officially used for the purposes of European regional policy: GDP per capita and the unemployment rate, per capita income, inflation, the human development index and others. Another popular indicator for measuring differences in different territories is the unemployment rate. Unemployment is one of Europe's most serious problems. The differences are mainly between rural areas, declining areas, peripheral areas and highly urbanized areas (Karakashev et al., 1989). The reasons for the income lag are the peripheral location, insufficient capital invested in production, insufficient infrastructure for enterprises and private households, as well as the low level of general and professional qualification of the population. It should also be mentioned that with rapid economic growth, the differences between the richest and the poorest regions decrease, and with a decline in economic growth, the differences increase again. In this direction, regional development increases its focus and scope of research, beginning to form as a stable fundamental science, solving the problems of spatial development in its socio-economic integrity. Of course, the application for this fundamentality is a path that must be scientifically followed by practical application and definition of various social dimensions and opportunities for attractive socio-economic development. This largely requires a return to the focus of research on another plane, following the normal vision of combining the natural component that affects the economically active population and the actual behavior of the demographic factor on the territorial development of individual territorial communities. Environmental aspects of regional development management Spatially related to the above judgments, the question is whether it is legitimate to interpret the physiographic environment as "natural" regionalism, if we assume that the object of its study is the geosphere, and its focus is the landscape. It should be recognized that a specific definition of natural regionalism needs to be defined. By linking the natural complex with its use and environmental protection, we undoubtedly arrive at common scientific postulates of natural geography and environmental protection. We can go further in this direction by assuming that geoecology is a combination of natural geography and environmental protection. Then "natural regionalism" is called to implement specific management measures and approaches to the rational use of natural resources and environmental management, related to its restoration and prevention of the decline of the territory from active human activity. In this direction, "natural regionalism" may acquire a newer dimension, including sustainable development policy, which may give rise to a new scientific direction "geoecological development" as an upgrade over regional development and geoecology. This gives us reason to refer to P. 
Haggett (1983), who believes that we are dealing with the structure and interaction of two main systems: the ecological, which unites man and the environment, and the spatial, connecting one area with another through a complex volume. We can also mention that S.B. Lavrov (1989) largely defines the essence of geoecology as a science from which new scientific directions, such as sustainable development, can be derived. In practice, the natural sciences, as well as the other groups of regional and geographical sciences, are interdisciplinary in nature, as they come into very close contact with other basic sciences or their branches. Therefore, it can be assumed that each independent science within the regional scientific branch is at once interdisciplinary, intermediate and even fundamental, as long as it has its own coherent theory and methodology. In this case we are talking about specific research methods that can be borrowed from other sciences. The methods used by the regional sciences and geoecology are modified, adapted and improved through the study of the objects to which they are applied. The definition of a block of regional sciences is more than positive, since regional development is distinctive and is an important element of the structure of the management sciences. Undoubtedly, in methodological and organizational terms, regional development in Bulgaria is not adapting successfully enough to world standards. To a large extent, Bulgarian scientific thought avoids asserting the concept of regional development. Regionalism also appears in the form of economic regions, as economic geography or as management systems; in these cases, regional development, or more precisely its regional aspects, fits into the management and geographical sciences. This requires us to pay attention to the fact that regional development is related to the comprehensive study of separate territories or regions of the land surface on the basis of ongoing socio-economic changes and management activities related to the state of human society. It is necessary to emphasize once again that the regional paradigm is the core of the management sciences. Nevertheless, both in the past and now, the question arises as to the place of regional development in the system of administration and management, as to the subject of regional development, and as to whether it replaces the social and natural sciences. In its essence, regional development is a synthesized and localized addition to the management, natural and social sciences; it does not replace them. Just as the basic sciences, including physics, biology and geography, are a bridge between nature and society, so regional development is a kind of integrator between the two main blocks of management and geographical sciences. Regional development partially covers the objects and subjects of study of the natural and social sciences (Stoychev, 2020). The delineation of regional development and its methodology, specific tasks, methods and practical forms is needed primarily for the practice and application of the social sciences. However, this is not about creating mechanical mixtures, but about characteristics that logically combine the most important features of these fields into a coherent whole. 
Regional development synthesizes specific but different materials for countries and regions, reveals their specificity and without exaggeration represents the very essence of geography, without which it is deprived of the meaning of its existence. From what has been said so far, there is no doubt in the formation of a block of theoretical regional disciplines. This is because regional development covers the problems as a whole. However, this does not mean that regional sciences will not be able to have their own theoretical generalizations, formulations and principles. The theory of regional development summarizes disparate materials as a kind of counterbalance to the differentiation of regional development (Stoychev, 2020). In addition, regional development studies the objects of our environment and human society not from different countries, as the individual management sciences, but as a whole. The theoretical basis of regional development creates a new stage of specific regional research. The terms regionalism, regionalization and regional development should not be confused, although they have common features. To a large extent, regionalism precedes regional development, and regionalization is only one process of regional development. The imposition of statistical and cartographic methods, visual analysis and digital modeling can take regional development on a new path of scientific and technological development and strengthen the foundation of regional sciences as a leading scientific field. Conclusion The presented structure of the regional development expresses an argumentative point of view. It does not and cannot have claims to be exhaustive. It is necessary to gather many more opinions from foreign and Bulgarian authors working on the problems of regional development, meaningful from logical and lexical positions. Such opinions would lead to significantly more precise conceptual essence and affirmation of the most correct notions in terms of their content and transcription. At the same time, I hope that this approach and scientific alternative will open the beginning of a series of articles that will ultimately contribute to the rise of regional development as a modern and scientifically based science.
6,956.6
2021-11-29T00:00:00.000
[ "Political Science", "Economics", "Geography" ]
Time Signature Detection: A Survey This paper presents a thorough review of methods used in various research articles published in the field of time signature estimation and detection from 2003 to the present. The purpose of this review is to investigate the effectiveness of these methods and how they perform on different types of input signals (audio and MIDI). The results of the research have been divided into two categories: classical and deep learning techniques, and are summarized in order to make suggestions for future study. More than 110 publications from top journals and conferences written in English were reviewed, and each of the research selected was fully examined to demonstrate the feasibility of the approach used, the dataset, and accuracy obtained. Results of the studies analyzed show that, in general, the process of time signature estimation is a difficult one. However, the success of this research area could be an added advantage in a broader area of music genre classification using deep learning techniques. Suggestions for improved estimates and future research projects are also discussed. Introduction The majority of popular music scores are composed in a particular style known as lead sheet format. It summarizes a song by representing the notes of the main theme, the chord series, and other cues such as style, tempo, and time signature. Symbols are used in standard staff music notation to denote note duration (onset and offset times). Onset notes refers to the beginning of a notation or another sound and all musical notes have an onset, but do not always contain the first transient [1]. Offset is about the duration of the part from the beginning of the piece. It is the sum of the previous duration only when there is no rest and there are no place where two notes play together [1,2]. In addition, the staff contains details about the tempo, the beginning, the end of the bars, and the time signature. The time signature (sometimes referred to as a meter signature or metre signature) is a symbol for western music which specifies the number of beats (pulses) in each measure (bar) [3]. It is defined as a ratio of two integer numbers, where the numerator indicates the number of beats in a bar and the denominator specifies the note relation [4]. There are simple and compound time signatures that are relatively easy to estimate from the lead sheet or audio files. Examples include 2 2 , 3 4 , 4 4 , or 6 8 which means 2 minim beats, 3 crotchet beats, 4 crochets beats, and 6 quaver beats in a bar, respectively. The compound signatures are a multiples of the simple time signatures in terms of the number of beats [5]. Examples include 6 8 , 9 8 , 12 8 . There are also irregular time signatures that are much more difficult to estimate [6]. Examples include 5 8 , 7 8 , and 11 8 . Time signature estimation and detection cannot be possible without understanding the concept of upbeat, downbeat, and anacrusis. Upbeats and downbeats represent the simplest manner of associating downward motions with melodic movements to metrically stable points [7]. "Down" beats are times of stronger metric stability in the field of meter [8]. Anacrusis was defined by Lerdahl and Jackendoff [9,10] in two ways: "from the start of the group to the most powerful beat inside a group" and "from an upbeat to its related downbeat". With anacrusis in mind, estimating the time signature becomes tougher when the first note of the piece is not the strongest note. 
Additionally, the idea of strong beats must be considered in this process because strong beat spectrum peaks will occur during repetition moments in highly organized or repeated music. This shows both pace and relative intensity of certain beats, so that different types of rhythms may be distinguished at the same timing [11][12][13]. The music industry is fast-growing and with songs being produced every day, curators like Apple Music, Spotify, Audio Mack, etc. need a genre classification system to accurately curate playlist for their users. This involves grouping musical data together based on defined similarities such as rhythm-time signature and tempo-or harmonic content. In this domain, many attempts have been made to classify various genres of songs with notable successes [14][15][16][17]. However, with extracted features, such as time signature, the overall accuracy could get even better but this area is comparatively unexplored by reason of the estimation being difficult. Estimating time signature is a challenging task because all the beat times after the downbeat (strong beat) before the next downbeat do not always correspond to the total number of beats in a bar, especially for audio music. The main reason for this is because the tempo of any music track affects the time signature significantly. Beat times here refers to the time in seconds for a beat to sound relative to the entire duration of the track. For example, a track of 80 bpm with beat times as 1.02, 2.03, 3.03, 4.02 could estimate as 4 4 ; 1.02 being the downbeat time, whereas the same track played with 120 bpm could have beat times as 1.02, 1.33, 1.82, 2.13 which absolutely cannot be estimated as a 4 4 if it is assumed that a second beat time corresponds to a beat. Therefore, a couple of factors need to be put into consideration to accurately estimate the time signature, namely upbeat, downbeat, anacrusis, onset note, and tempo. However challenging, the automatic detection of time signature could help to reduce computational time for other temporal processes such as beat tracking, tempo estimation, and deep learning techniques. Moreover, it can be a preprocessing step to other tasks, such as gathering knowledge about music, automatic tagging of songs, improving genre classification, and recommendation systems. Understanding time signature and its function in feature extraction, estimation, and music genre classification would open the door to new possibilities for the music information retrieval domain [18,19]. Feature extraction for audio signal analysis is one of the most important steps for further studies related to time signature detection [20]. YAAFE [21], an audio feature extraction software, was developed in 2010 which has features including, but not limited to, speech/music discrimination, music genre, or mood recognition, and as a result, there has been improvement over the years. Garima Sharma et al. [22] also highlights the trends of various methods that have been used to extract audio signal features. A significant amount of study has been conducted on the automated retrieval of meta data from musical audio signals. Pitch detection [23,24], onset detection [25,26], key signature estimate [27], and tempo extraction [28,29] are some of the meta data obtained by various algorithms. The aim is two-fold: to equip computers with the capabilities of a human music listener in order to interpret a piece of music and derive explanations of specific musical properties. 
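To make the beat-time example above concrete, the following minimal Python sketch (not taken from any cited work; the beat and downbeat times are invented for illustration and would normally come from an upstream beat and downbeat tracker) counts how many tracked beats fall between consecutive downbeats, giving a naive estimate of the numerator of the time signature.

import numpy as np

# Hypothetical beat and downbeat times in seconds; in practice these would come
# from a beat/downbeat tracker rather than being hard-coded.
beat_times = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0])
downbeat_times = np.array([0.5, 2.5, 4.5])

def beats_per_bar(beats, downbeats):
    # Count the beats falling in each bar and return the most common count,
    # a naive guess at the numerator of the time signature.
    counts = [int(np.sum((beats >= start) & (beats < end)))
              for start, end in zip(downbeats[:-1], downbeats[1:])]
    return int(np.bincount(counts).argmax()) if counts else None

print(beats_per_bar(beat_times, downbeat_times))  # prints 4, suggesting a quadruple meter

As the tempo discussion above makes clear, such counting only works once beats and downbeats have been tracked reliably, which is precisely where the harder part of the problem lies.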
This enables a variety of applications, including automated transcription, playlist generation, and Music Information Retrieval (MIR) systems as is always discussed at every International Symposium on Music Information Retrieval (ISMIR) [30]. The algorithms that have been employed so far can be divided into two major approaches: the classical and the deep learning approach. The classical or manual approach involves using methods in the digital signal processing domain as evident in [31] by Meinard et al. in a study that showed how a piece of music can be analyzed by signal processing, by using comb filters [32] proposed by Klapuri just to mention a few. In an attempt to estimate time signature, one must have a sound knowledge about concepts such as frequency, tones, notes, duration, timbre, audio spectrum, beats, tempo, and timing. The deep learning approach, on the other hand, makes use of deep learning models. There are ideas that are common to both methods such as the use of Fourier transforms and analysis and the conversion of audio signals to log spectrograms-a more scientifically usable form. It is important to note that the majority of these approaches were implemented using MATLAB [33][34][35] and C++ for collaborations, testing and method validation up until around 2015 and beyond that Python became the go-to. This interest was sparked by several reasons, such as ease of understanding the language and the availability of high-quality machine study libraries, like scikit-learn [36] and librosa [37], just to name a few. This paper summarizes and reviews numerous research in this field, taking into account similar works, datasets, and a possible road map. To the best of our knowledge, no paper has ever conducted a survey on time signature detection or estimation. As a result, it is important that this survey be conducted in order to identify potential paths for creative ideas. Additionally, in this study, in the course of exploration, a deeper knowledge of frequencies in the time domain is obtained which may be useful in other domain areas like medicine and psychology, which have referred to beats as the pulse of the heart. The study is organized as follows. Section 2 discusses the music input signals and their impact on the methodologies. In Section 3, datasets utilized in this domain are highlighted in depth. In Section 4, state-of-the-art classical approaches are described, while in Section 5, deep learning approaches are examined in depth. Section 6 concludes the paper with a review of the results achieved and the proposed future course for this domain. Musical Input Signals Time signature estimation can be carried out by using two types of input data: music audio samples or Musical Instrument Digital Interface (MIDI) signals. The music audio samples basically refer to compressed sample files like mp3, uncompressed files like wav, or any other audio format usable with a range of roughly 20 to 20,000 Hz, which corresponds to the lower and upper limits of human hearing. For example, the audio signal on a compact disc is limited to a maximum frequency of 20 kHz, sampled at 44.1 kHz and encoded, with 16 bits per sample [38] and nearly perfect audio signals are obtained with 64 kb/s [38]. Sampling in music refers to the use of a part (or sample) of a sound file of another recording. Samples can be layered, equalized, sped up or slowed down, re-pitched, looped, or otherwise manipulated and can include elements such as rhythm, harmony, voice, vibrations, or whole bars of music [31]. 
On the other hand, MIDI is a standard digital interface for communication with a musical instrument and other associated audio devices for performing, editing, and recording music [39]. A MIDI piece's sound quality is determined by the synthesizer (sound card), and the format has other restrictions, such as the inability to store recorded voice; on the other hand, it takes up far less space, making it much easier to store, share, adjust, and manipulate, and it is universally accepted, allowing for greater comparison between music works played on various instruments [40]. This is why some researchers prefer this format. A summary of their differences is shown in Table 1.
Table 1. Differences between the input signals.
Definition: MIDI - a MIDI file is a computer file that provides music information. Digital audio - refers to digital sound reproduction and transmission.
Pros: MIDI - files of small size fit on a disk easily; the files are perfect at all times. Digital audio - the exact sound files are reproduced; it replicates superior quality.
Cons: MIDI - there is variation from the original sound. Digital audio - files take more disk space with more minutes of sound and can get corrupted with a little manipulation.
Format type: MIDI - compressed. Digital audio - compressed.
Information data: MIDI - does not contain any audio information. Digital audio - contains recorded audio information.
As a result, this section has been divided into two subsections in order to properly understand how these input signals have impacted previous studies. Audio Samples as Data An audio signal can be analyzed at three levels in a time scale as discovered by Klapuri et al. [41]: the temporally atomic tatum pulse level, the tactus pulse level that corresponds to a piece's tempo, and the harmonic measure level, as shown in Figure 1. Christian Uhle et al. [42], as the pioneers of this research area in 2003, were very much interested in the estimation and detection of three basic rhythm features by which musical pieces can be partly characterized: tempo, micro-time, and time signature. The estimation of these three features was combined and individually separated by the integer ratios between them. The process involved the decomposition of four-second audio signal samples into frequency bands; a high-pass filter was applied (as the human ear cannot perceive sounds below 20 Hz [43]), half-wave rectified amplitude envelopes were used to track onset notes, and the filtered signal envelopes of each band were removed. The inter-onset intervals (IOIs) are then determined from the note onset times, and the tatum duration is measured using an IOI histogram. Using an auto-correlation system, periodicities in the temporal progression of the amplitude envelopes are observed in the subsequent processing. The auto-correlation function peaks refer to the time lags at which the signal is most similar to itself. The envelopes of two segments are accumulated in advance, allowing for the measurement of a bar duration of up to four seconds. This estimate is inspired by the assumption that self-similarity exists at the tatum, beat, and bar levels [44]. The output of the algorithm [42] was evaluated using 117 samples of percussive music, each eight seconds long. Music from different backgrounds and cultures, such as West African, Brazilian, and Japanese folkloristic music, and solo drum-set performances, were included in the test results. The presented research technique calculated tempo, micro-time, and time signature from percussive music. 
A total of 84.6 percent of the tempo values, 83.8 percent of the micro duration, and 73.5 percent of the time signatures were accurately measured from 117 quotations of eight seconds length. However, this approach does not explicitly gives the estimation in terms of the numerator and denominator of the time signature which is our main focus. MIDI Signals as Data Nowadays, the MIDI signals are not really used anymore for this task because technology has provided better options, however, they were famously used back then because they are easier to work with owing to the precise signal patterns. For instance, the detection of onset notes can be obtained more precisely [11,45] because of the patterns that exist among the notes and a lot of researchers have exploited this advantage. Although it would be outstanding if a DAW like Logic Pro X can automatically determine the time signature by dragging a MIDI file into it, today, this is not common practice as MIDI data can adapt to any tempo and time signature specified. Grohganz et al. in [46] showed that the musical beat and tempo information is often defined in the MIDI files at a preset value that is not associated with the actual music content, so they introduced the method for determining musical beat grids in the provided MIDI file. They also showed, as a major addition, how the global time signature estimate may be utilized to fix local mistakes in the Pulse Grid estimate. Unlike the digital audio signal, when the notes are not perfectly on the grid, they could be quantized first before any process of time estimation is done. The assumption that the MIDI track is repetitive almost throughout the song was also used by Roig et al. in [47], and similar to the ASM, the Rhythm Self Similarity Matrix (RSSM) was employed for this study. In order to construct the RSSM using the tactus as a measuring unit, the rhythmic elements will be divided into the number of tactus corresponding to their duration. As a result, the inter onset interval (IOI) of each note is separated into tactus intervals. Datasets In every classification, estimation, or detection project, the dataset selection is critical. Sometimes, there are a range of potentially viable datasets available, each with their own set of advantages and drawbacks, and the decision to choose one dataset over another may have a huge impact on the project's outcome [48]. The journey of obtaining robust and well-balanced datasets has seen a shift from a very simple set to attempts at providing larger and more diverse datasets as shown in Table 2. The RWC dataset [49] was one of the first set of datasets that was put together solely for academic purposes. Shared libraries that made important contributions to scientific advancements were popular in other fields of scholarly study. It includes six original collections: the Popular Music Database (100 songs), the Royalty-Free Music Database (15 songs), the Classical Music Database (50 pieces), the Jazz Music Database (50 pieces), the Music Genre Database (100 pieces), and the Musical Instrument Sound Database (50 instruments). The data files of this dataset consist of audio signals, corresponding regular MIDI archives, and text files with lyrics all totaling 365 musical pieces performed and recorded. It also takes account of individual sounds at half-tone intervals with a variety of playing techniques, dynamics, instrument makers, and musicians. This collection served as a baseline to which researchers tested and analyzed different structures and methods. 
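As a rough, hedged sketch of the IOI-histogram and autocorrelation ideas running through the approaches above ([42] and the MIDI-based methods), the following Python fragment estimates a tatum from an inter-onset-interval histogram and then searches the autocorrelation of an onset envelope at integer multiples of that tatum. The file name, bin widths, and search range are illustrative assumptions, not values from the cited papers.

import numpy as np
import librosa

# "example.wav" is a placeholder; an 8 s excerpt mirrors the excerpt length used above.
y, sr = librosa.load("example.wav", duration=8.0)
onset_env = librosa.onset.onset_strength(y=y, sr=sr)
onsets = librosa.onset.onset_detect(onset_envelope=onset_env, sr=sr, units="time")

# Tatum estimate: the most frequent inter-onset interval, coarsely binned at 10 ms.
iois = np.diff(onsets)
hist, edges = np.histogram(iois, bins=np.arange(0.05, 1.0, 0.01))
tatum = float(edges[np.argmax(hist)])

# Longer (bar-level) periodicities: autocorrelation of the onset envelope,
# inspected only at integer multiples of the tatum.
hop_time = 512 / sr                       # librosa's default hop length, in seconds
acf = librosa.autocorrelate(onset_env)
candidates = []
for mult in range(2, 17):                 # try 2 to 16 tatums per bar
    lag = int(round(mult * tatum / hop_time))
    if 0 < lag < len(acf):
        candidates.append((mult, acf[lag]))
best_mult = max(candidates, key=lambda c: c[1])[0]
print(f"tatum ≈ {tatum:.3f} s, strongest periodicity at {best_mult} tatums")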
Unfortunately, this dataset is very small and unbalanced. Six years later, Ju-Chiang Wang et al. created another dataset, CAL500 [50], for music auto-tagging as an improvement of the RWC datasets with about 502 songs but the audio files are not provided in the dataset. The tag labels are annotated in the segment level instead of the track level. Unfortunately, 502 songs is inadequate to get better and accurate results for auto-tagging. The evolution of datasets in the music domain or music information retrieval space cannot be discussed without mentioning the GTZAN dataset [51] collected by G. Tzanetakis and P. Cook. It is by far the most popular dataset out there containing 1000 song excerpts of 30 s, sampling rate 22,050 Hz at 16 bit collected from various sources including personal CDs, radio, microphone recordings, and so on. Its songs are distributed evenly into 10 different genres: Blues, Classical, Country, Disco, Hip Hop, Jazz, Metal, Pop, Reggae, and Rock. Since its publication in 2002, the GTZAN has been widely used in music genre classification analysis [58][59][60][61][62]. It was selected mostly because it was well-organized and widely quoted in previous studies. This precedent lends authority while also providing a frame of reference for results. However, there are a few disadvantages to using this dataset. Its relatively small size is the most limiting factor. Mandel and Ellis created USPOP [52], centered only on popular artists with over 8752 audio songs without the raw file provided. Obviously, this is not a good dataset as its skewing can be questioned. Skewed datasets usually have a very high impact on the solutions they are used for as highlighted in these studies [63][64][65]. Chris Hartes, in 2010, created the Beatles datatset [66] which contains 180 songs and was well annotated by the musicologist Alan W. Pollack. Each music recording contains on average 10 sections from 5 unique section-types. It was one of the datasets used to generate the Million Song Dataset. Another notable dataset is SWAT10K [67]. This dataset was obtained from the Echo Nest API in conjunction with Pandora, having 10,870 audio songs that are weakly labeled using a tag vocabulary of 475 acoustic tags and 153 genre tags with the files also not provided. For developers and media firms, the Echo Nest is a music intelligence and data platform located in Somerville, MA bought by Spotify in 2014. The Echo Nest originated as an MIT Media Lab spin-off to investigate the auditory and textual content of recorded music. Its designer's intentions for the APIs are for music recognition, recommendation, playlist construction, audio fingerprinting, and analysis for consumers and developers [68]. Pandora is a subscription-based music streaming service headquartered in Oakland, California. It focuses on suggestions based on the "Music Genome Project", a method of categorizing individual songs based on musical characteristics. Like the SWAT10K, Mag-naTagATune [54] which has 25,863 audio files provided as csv was also created based on the Echo Nest API. Another dataset for popular music is the MusicCLEF [56] with 200,000 audio songs provided for research purpose. The Free Music Archive (FMA) by Defferrard et al. [55] contains over 100,000 tracks, each with its own genre label. There are many variations of the dataset available, ranging from the small version (8000 30-s samples) to the full version (all 106,574 songs in their entirety). 
The size of this dataset makes it suitable for labeling, and the fact that the audio files are available for download ensures that features can be derived directly from the audio. The Million Song Dataset (MSD) [57] is a set of audio features and metadata for a million contemporary songs (as the name implies) that is publicly accessible. Release year, artist, terms of the artist, related artists, danceability, energy, length, beats, tempo, loudness, and time signature are among the metadata and derived features included in the dataset although audio files with proper tag annotations (top-50 tags) are only available for about 240,000 previews of 30 s [69]. A very recent dataset, Augmented Maps (A-MAPS) [70] was created in 2018 with no precise number of MIDI files specified. However, it is the most common dataset used for automatic transcription of music. Adrien Ycart et al. updated the previous version of the original MIDI files, containing onset, offsets, and additional annotations. The annotations include duration of notes in fraction relative to a 1 4 th note (a crotchet), tempo curve, time signature, key signature (annotated as a relative major key), separate left and right-hand staff, and text annotations from the score (tempo indications, coda). However, due to MIDI format constraints, they do not contain all of the details required for staff-notation music transcription. It is difficult to say how this dataset was obtained because the original dataset MAPS is not readily available at the time of writing this paper. Among all these datasets, having seen their advantages and drawbacks, the two that seem very useful to this review in terms of time signature extraction are the FMA and the Million Song Dataset which are both extracted from the Echo Nest API. However, the metadata from the MSD have been pre-processed which makes it difficult to know how it was carried out, although there is a confidence level for the data we are most interested in (time signature). Classical Methods The methods discussed in this section consists of digital signal processing of audio samples tasks such as window framing in Figure 2, filtering and Fourier analysis [71]. Audio tracks are usually divided into perceivable audio chunks known as frames where 1 sample at 44.1 KHz is 0.0227 ms. This time is far shorter than what the human ear can meaningfully resolve-10 ms. Therefore in order to avoid spectral leakage, a windowing function is applied which eliminates samples at both ends of the frame hence the importance of the frame overlap to have a continuous signal again. Some of these processes will be explained in detail and a brief summary can be found in Table 3. [73] presented an extraction method for time signature which was referred to as meter. This method was also based on the assumption that the music meter was constant throughout the audio signals. Their assumption was valid given the type of music they used for the estimation-300 raw audio samples of Greek traditional dance music whose tempo ranges from 40 bpm to 330 bpm. It is important to note that there is a huge relationship between the speed of any music track (in bpm) and the time signature, as pointed out by Lee in [80]. By considering a similar approach as the ASM, a self-similarity matrix was used for this experiment which showed that periodicities corresponding to music meter and beat are revealed at the diagonals of the matrix of the audio spectrogram. 
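A hedged sketch of the spectrogram self-similarity idea just described (not the exact procedure of [73]): build a frame-wise similarity matrix from a normalized magnitude spectrogram and average its diagonals, whose peaks indicate the lags at which the signal repeats. Window and hop sizes, and the file name, are illustrative.

import numpy as np
import librosa

y, sr = librosa.load("example.wav", duration=10.0)     # placeholder file, 10 s segment
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=512))
S = librosa.util.normalize(S, axis=0)                  # normalize each frame

ssm = S.T @ S                                          # frame-by-frame similarity matrix

# Average each off-diagonal; peaks mark the lags (in frames) at which frames repeat,
# i.e., candidate beat- and bar-level periodicities.
lags = np.arange(1, ssm.shape[0])
diag_mean = np.array([np.diag(ssm, k).mean() for k in lags])
best_lag = lags[np.argmax(diag_mean)]
print(f"strongest repetition lag ≈ {best_lag * 512 / sr:.2f} s")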
Consequently, by examining these periodicities, it is possible to estimate both meter and beat simultaneously. In the first step, each raw audio recording was divided into non-overlapping long-term segments with a length of 10 s each. The meter and tempo of the music were extracted segment by segment. A short-term moving window, in particular, produces a series of feature vectors for each long-term fragment. The approximate values for the short-term window duration and the overlap duration between successive windows are 100 ms and 97 ms, implying a 3 ms moving-window step. The overall result accounted for the successful extraction of the rhythmic features, while most mistaken results were produced for meters such as 2/4 confused with 4/4 or 5/4, and 7/8 confused with 3/4 or 4/4. Since the ASM method proved to be effective, Gainza in [76] combined it with a Beat Similarity Matrix to estimate the meter of audio recordings. To begin, a spectrogram (a pictorial representation of the power or "loudness" of a signal over time at different frequencies of a specific waveform [81]) of the audio signal was generated using windowed frames with a length of L = 1024 samples and a hop size of H = 512 samples, which is half the frame length. Then, individual audio similarity matrices were calculated by comparing the spectrogram frames of the piece of music every two beats. Following that, a beat similarity matrix was constructed by combining similarity measures obtained from the individual audio similarity matrices. Finally, by processing the diagonals of the beat similarity matrix, the presence of identical patterns of beats was studied. The spectrogram frames used to compute the matrix diagonals are defined as

X(m, k) = | Σ_{n=0}^{N−1} w(n) x(n + mH) e^{−j2πkn/N} |,

where w(n) is a windowing function, in this case the Hanning window, that selects an L-length block from the input signal x(n), and m, N, H, and k are the frame index, fast Fourier transform (FFT) length, hop size, and bin number, respectively; k ∈ {1 : N/2}. The choice of the window function was based on previous studies [82,83]. The findings obtained demonstrate the robustness of the presented approach, with 361 songs from a database of quadruple meters, a database of triple meters, and another of complex meters yielding a 95% accuracy. Furthermore, Gouyon and Herrera in [72] proposed a method to determine the meter of music audio signals by seeking recurrences in the beat segments. Several approaches were considered with the aim of testing the hypothesis that acoustic evidence for downbeats can be computed from low-level signal characteristics, with an emphasis on their temporal recurrences. One approach is to determine which of the low-level audio features corresponding to specific meters were relevant for meter detection. This approach is limited because it was simplified to two groupings only (duple and triple meters) while not considering the cases of irregular meters. With a frame size of 20 ms and a hop size of 10 ms, features such as energy, spectral flatness, and energy in the upper half of the first Bark band were extracted from each signal frame. Beat segmentation was also carried out as a different approach based on these already extracted features. For the study, a database of 70 sounds (44,100 Hz, 16 bit, mono) was used. Each extract is 20 s long. Bars for beginnings and endings were set at random, and music from Hip-hop, Pop, Opera, Classical, Jazz, Flamenco, Latin, Hard-rock, and other genres was included. 
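For illustration, frame-wise low-level descriptors of the kind mentioned above (for example, energy and spectral flatness with a 20 ms frame and 10 ms hop) could be computed with librosa roughly as follows; this only sketches the type of features used in [72], not its actual implementation, and the file name is a placeholder.

import numpy as np
import librosa

y, sr = librosa.load("example.wav", duration=20.0)     # placeholder 20 s excerpt
frame_length = int(0.020 * sr)                         # 20 ms frames
hop_length = int(0.010 * sr)                           # 10 ms hop

energy = librosa.feature.rms(y=y, frame_length=frame_length, hop_length=hop_length)[0]
flatness = librosa.feature.spectral_flatness(y=y, n_fft=frame_length, hop_length=hop_length)[0]

# Stack into a (2, n_frames) low-level feature matrix, trimming to a common length.
n = min(len(energy), len(flatness))
features = np.vstack([energy[:n], flatness[:n]])
print(features.shape)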
As a more advanced technique to this problem, they also considered the classification methods to assign the value for the meter: from a non-parametric model (Kernel Density estimation) to a parametric one (Discriminant Analysis), including rule induction, neural networks, 1-Nearest Neighbor (1-NN), or Support Vector Machines (SVMs). For this, on a frame-by-frame basis, the following features were computed: energy, zero-crossing rate, spectral centroid, spectral kurtosis, spectral skewness, two measures of spectral flatness (one is the ratio geometric mean/arithmetic mean and the other is the ratio harmonic mean/arithmetic mean), 13 Mel-Frequency Cepstrum Coefficients (MFCCs), and energy in 26 non-overlapping spectral bands. The evaluation showed that, when 27 features were used, error rates for all cases were found to be less than 17.2% (the best technique, Naive Bayes, yielded just 5.8%, whereas a rule induction technique yielded 17.2%). Meter detection was also studied from the aspect of breaking down the metrical structure of a single bar by Andrew and Mark in [84] using some excerpts from Bach which eventually gave a 80.50% F-measure. They started by using the hierarchical tree structure of notes as seen in Figure 3. This gave insight for evaluation on each of the three levels (sub-beat, beat, and bar) of the guessed metrical tree. If it matched exactly a level of the metrical tree, it was counted as a true positive and otherwise, a clash was counted as a false positive. In another study [85], they pushed this model furthermore to accurately detect the meter. The suggested model was based on two musicological theories: a reasonably steady rate of the tatum without great discontinuities and notes that are relatively similar to those tatums. Each state in the model represents a single bar, with a list of tatums from that bar and a metrical hierarchy defining which tatums are beats and sub-beats. The tatum list and the downbeat of the next bar are obtained. The tatums are listed in ascending chronological order. The metrical hierarchy of a state has a certain number of tatums per sub-beat, sub-beats per beat, and beats per bar, as well as an anacrusis duration, which is determined by the number of tatums that fall before the first downbeat of a given piece. The first downbeat position probability was also considered by De Haas et al. [86] with a model-Inner Metric Analysis (IMA). The number of tatum per sub-beat was restricted to 4. Although, in principle, this could be any number. The set of possible sub-beat per beat and beat per bar pairs (i.e., time signatures) are taken all of those found in our training set ( 2 X , 3 X , 4 X , 6 X , 9 X , and 12 X ), where X could be any value ranging from 1 to 12. Gulati et al. then took on this very difficult challenge to estimate the meter of irregular time signature using the case study of Indian classical music in their study with meters of 7/8 [77]. The incoming audio stream is transformed to a mono channel after being downsampled to 16 kHz. The data is divided into 32 ms frames with a 5 ms hop size and a frame rate of 200 Hz. Each frame is subjected to a Hamming window, and a 512-point FFT is calculated. With 12 overlapping triangle filters that are equally spaced on the Melfrequency scale, the frequency bins are reduced to 12 non-linear frequency bands. The time history of the amplitudes of each of these 12 bands is represented by a band envelope with a sampling frequency of 200 Hz (frame rate). 
The band envelope is then transformed to log scale (dB) and low-pass filtered using a half-wave raised cosine filter. The meter vector m is obtained when narrow comb filter banks are set up around integer multiples of the tatum duration retrieved from the differential signal. The number of comb filters implemented per filter bank is equal to twice the integer multiple of the tatum duration plus one, to account for the round-off factor of the tatum duration. For each filter bank, the filter with the maximum output energy (i.e., with a certain delay value) is chosen, and the total energy of this filter over all Mel bands is calculated. A salience value is then calculated for each feasible meter, i.e., double, triple, and septuple (S2, S3, and S7), and a simple rule-based technique is used to derive the final meter value from m; the ultimate meter of the song is determined from S2, S3, and S7. Holzapfel and Stylianou in [75] set out to estimate rhythmic similarities in Turkish traditional music and, along the way, the time signature was estimated with a data set consisting of 288 songs distributed across six classes of different rhythmic schemes (9/8, 10/8, 8/8, 3/4, 4/4, 5/8). Although this was not the aim of that research, they proposed a method for estimating the time signature because the overall study was compared to a state-of-the-art estimation technique like the one Uhle proposed in [42]. The onset periods are read from the MIDI files, and each onset is allocated a weight. After evaluating several strategies for assigning weights, the most popular scheme was adopted: the weight of an onset may be related to the note length or to melody characteristics, or all onsets are assigned the same weight. To evaluate a piece's time signature, all pairwise dissimilarities between songs were computed using either the scale-free auto-correlation function (ACF) or the STM vectors, and a cosine distance; a similar method was used in [87]. The same method was used by Brown in [88]: the auto-correlation at a given lag counts the number of coincidences between events separated by that lag, so if events are clustered from measure to measure, peaks in the auto-correlation function should show the periods at which measures begin [89]. A single melody line was extracted from the music score for analysis. This produced dissimilarity matrices with values close to zero when two parts were discovered to be alike in terms of rhythmic information. The accuracy of an updated k-Nearest Neighbor (kNN) classification was calculated in order to evaluate the consistency of the proposed rhythmic similarity metric [90][91][92][93]. The power of a similarity matrix in this sphere lies with the distance between the notes in comparison; that is, the higher the distance, the lower the similarity, and vice versa. Hence the need to evaluate the impact of the value of K on the nearest-neighbor classification. Each individual song was then used as a query for classification into one of the available groups. The dissimilarity matrix was classified using the modified kNN. The melodic line x[n] was subjected to a short-time auto-correlation calculation defined as

A(m) = (1/N) Σ_{n=0}^{N−1} x[n] x[n + m],

where the average is taken over N samples and m is the auto-correlation lag in samples. 
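The comb-filter salience idea above can be approximated, in a deliberately simplified form, by summing the autocorrelation of an onset envelope at integer multiples of the tatum lag for duple, triple, and septuple groupings. The sketch below is not the filter bank of [77]; the synthetic envelope and all parameters are invented for illustration.

import numpy as np

def meter_salience(acf, tatum_lag, group, n_multiples=4):
    # Sum the autocorrelation at multiples of (group * tatum_lag) as a crude
    # stand-in for the comb-filter output energy of that grouping.
    lags = [m * group * tatum_lag for m in range(1, n_multiples + 1)]
    return float(sum(acf[lag] for lag in lags if lag < len(acf)))

def estimate_grouping(acf, tatum_lag):
    saliences = {g: meter_salience(acf, tatum_lag, g) for g in (2, 3, 7)}
    return max(saliences, key=saliences.get), saliences

# Synthetic onset envelope that repeats every 3 tatums (a "triple" feel).
tatum_lag = 10                                         # tatum duration in envelope samples
env = np.tile(np.repeat(np.array([1.0, 0.2, 0.2]), tatum_lag), 20)
acf = np.correlate(env, env, mode="full")[len(env) - 1:]
print(estimate_grouping(acf, tatum_lag))               # the triple grouping should win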
Coyle and Gainza in [74] proposed a method to detect the time signature of any given musical piece by using an Audio Similarity Matrix (ASM). The ASM compared longer audio segments (bars) built from combinations of shorter segments (fractions of a note). This was based on the assumption that musical pieces have repetitive bars at different parts. A spectrogram with a frame length equal to a fraction of the duration of the song's beat was generated using prior knowledge of the song's tempo, a technique asserted by Kris West in [94]. Following that, the song's first note was obtained. The reference ASM was then produced by taking the Euclidean distance between the frames beginning with the first note, which enables the parallels between minor musical incidents such as short notes to be captured. Then, a multi-resolution ASM technique is used to create other audio similarity matrices representing different bar lengths. After computing all of the ASMs within a certain range, the ASM with the greatest resemblance between its components would conform to the bar duration, and a technique for detecting the song's anacrusis (an anticipatory note or notes occurring before the first bar of a piece) is added. Finally, the time signature is estimated, as well as a more precise tempo measurement. The music meter estimation problem can also be considered as a classification task, as demonstrated by Varewyck et al. in [78]. Having considered the previous methods in this field that worked, they used the Support Vector Machine (SVM) for this purpose. Prior to the periodicity analysis, an external beat tracker was used to perform beat-level analysis; alongside this, spectral envelope and pitch analyses were also carried out. Furthermore, a similarity analysis of the interval between two successive beats, which they called the Inter-Beat-Interval (IBI), already shown by Gouyon and Herrera (2003) [72], was performed. Hereafter, a hypothesis for the meter was generated and the meter was obtained. The similarity of the IBI was calculated using cosine similarity as shown in the equation below,

s(b) = z(b − 1) · z(b) / (‖z(b − 1)‖ ‖z(b)‖),

where b is the beat and z(b − 1) and z(b) are low-dimensional vectors grouped by related features. Eventually, they created an automatic meter classification method with the best combination of features that made an error of around 10% in duple/triple meter classification and around 28% for meters 3, 4, and 6 with a balanced set of 30 song samples. Meter estimation for traditional folk songs is especially challenging, as much research is usually carried out on Western music. However, Estefan et al. in [79] made some attempt to estimate the meter and beat of Colombian dance music known as the bambuco. The bambuco has a superposition of 3/4 and 6/8 meters, but due to the caudal syncopation and the accentuation of the third beat, the usual notion of the downbeat does not hold for this type of music. With the ACMUS-MIR dataset (V1.1), a collection of annotated music from the Andes region in Colombia, they were able to perform beat tracking and meter estimation. For the study, 10 participants were asked to tap to the rhythm of 10 chosen bambuco tracks in Sonic Visualiser on the computer keyboard. There were two sets of annotations: (1) beats were tapped while the audio was playing (without any visual information) and participants were not granted permission to make any adjustments; (2) participants were permitted to change the first set of beat annotations in Sonic Visualiser using both the audio and the audio waveform visuals. 
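A minimal sketch of the inter-beat-interval similarity above: cosine similarity between the feature vectors of consecutive beat intervals. The vectors z(b − 1) and z(b) here are arbitrary placeholders rather than the features used in [78].

import numpy as np

def ibi_similarity(z_prev, z_curr):
    # Cosine similarity between feature vectors of consecutive inter-beat intervals.
    denom = np.linalg.norm(z_prev) * np.linalg.norm(z_curr)
    return float(np.dot(z_prev, z_curr) / denom) if denom > 0 else 0.0

z_prev = np.array([0.8, 0.1, 0.3])   # hypothetical features of interval b - 1
z_curr = np.array([0.7, 0.2, 0.4])   # hypothetical features of interval b
print(ibi_similarity(z_prev, z_curr))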
Three musicologists from Colombia evaluated the beat annotations from these 10 participants in order to establish the underlying meters of each track. Each annotation was mapped directly to a certain meter, either 3/4, 6/8, or a combination, even though the participants had simply been asked to tap naturally to the beats. They also performed beat tracking using two methods, madmom and multiBT, while evaluating the F1 score for each perceived meter. For 3/4, madmom reached 76.05% while multiBT reached 42.79%, and for 6/8, madmom reached 41.13% while multiBT reached 45.15%. In conclusion, five metric alternatives were discovered in the annotations made by the research participants. Deep Learning Techniques Things are somewhat different with deep learning, where more of the knowledge is learned from the data. A deep learning model is basically a neural network with three or more layers. Although a single-layer neural network may still generate approximate predictions, more hidden layers can help optimize and tune for accuracy. Convolutional Neural Networks (CNNs) have been used extensively for resolving several complicated learning problems, such as sentiment analysis, feature extraction, genre classification, and prediction [95][96][97]. For temporal data such as audio signals and word sequences, a hybrid model of CNNs and Recurrent Neural Networks (RNNs) was recently used [98]. Audio data are represented by frames, and the sequential character of audio is entirely overlooked in the traditional RNN approach for temporal classification, hence the need for a well-modeled sequential network: the long short-term memory (LSTM) network, which has recorded successes for a number of sequence labeling and sequence prediction tasks [99,100]. Convolutional-Recurrent Neural Networks (CRNNs) are complex neural networks constructed by combining a CNN and an RNN. As an adapted CNN model, the RNN architecture is placed on top of the CNN structure, with the aim of obtaining local features using the CNN layers and temporal summarization using the RNN layers. The main components of a CNN are the input type, the learning rate, the batches, and the architectural activation functions, and the ideal type of input for music information retrieval is the mel-spectrogram [97]. Mel spectrograms offer broad functionality for latent feature learning and onset and offset detection, since the Mel scale has been shown to be similar to the human auditory system [81,101]. In order to obtain a mel-spectrogram, a pre-processing phase is necessary, consisting of the STFT (short-time Fourier transform) and the log-amplitude spectrogram. The methods discussed in this section and summarized in Table 4 consist of neural networks that extract the time signature as a feature that can be used as input for further calculation or classification problems in the MIR domain, rather than estimating it exactly. Handcrafted features like Mel Frequency Cepstral Coefficients (MFCC), Statistical Spectrum Descriptors (SSD), and Robust Local Binary Patterns (RLBP) [102], used in deep learning, are extracted based on human domain knowledge [101]. However, these features have not been conclusively proven to correlate with meter or time signature detection, and their effectiveness and validity are not very clear. Rajan et al. in [108] proposed a meter classification scheme using musical texture features (MTF) with a deep neural network and a hybrid Gaussian mixture model-deep neural network (GMM-DNN) framework. 
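The STFT-plus-log-amplitude mel preprocessing described above can be sketched with librosa as follows; the file name and parameter values are illustrative and not taken from any particular paper.

import numpy as np
import librosa

y, sr = librosa.load("example.wav", sr=22050)           # placeholder audio file
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)          # log-amplitude mel spectrogram
print(log_mel.shape)                                    # (n_mels, n_frames), ready for a CNN or CRNN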
The proposed system's performance was assessed using a freshly produced poetry corpus in Malayalam, one of India's most widely spoken languages, and compared to the performance of a support vector machine (SVM) classifier. A total of 13 dim MFCCs were extracted using frame-size of 40 ms and frame-shift of 10 ms alongside seven other features; spectral centroid, spectral roll-off, spectral flux, zero crossing, low energy, RMS, and spectrum energy. Rectified linear units (ReLUs) were chosen as the activation function for hidden layers, while the softmax function was used for the output layer. These methods produce an accuracy of 86.66 percent in the hybrid GMM-DNN framework. The overall accuracies for DNN and GMM-DNN were 85.83 percent and 86.66 percent, respectively. Convolutional Neural Netorks In a study conducted by Sander Dieleman et al. in [103] where unsupervised pretraining was performed using the Million Song Dataset, the learnt parameters were transferred to a convolutional network with 24 input features. Timbre properties from the dataset were presented to the network as shown in Figure 4. Two input layers composed of chroma and timbre characteristics were stacked with separate convolution layers and the output of these layers was then maxpooled. The performance of the max-pooling layer was invariant to all displacements of less than one bar (up to 3 beats). The accuracy in terms of time signature was not stated since this was not the main objective of the research. Sebastian Böck et al. [105] showed that tempo estimation can be achieved by learning from a beat tracking process in a multi-task learning algorithm since they are highly interconnected; a method that has been used in other research areas for optimization [109,110]. This approach proved effective in that mutual information of both tasks was brought forward by one improving the other. The multi-task approach extends a beat tracking system built around temporal convolutional networks (TCNs) and feeds the result into a tempo classification layer. Instead of using raw audio as data input, dilated convolutions are applied to a heavily sub-sampled low-dimensional attribute representation. This 16dimensional function vector is generated by adding several convolution and max pooling operations to the input audio signal's log magnitude spectrogram. The log magnitude spectrum is obtained because this is what the human ear can perceive [111,112]. The spectrogram is produced using a window and FFT size of 2048 samples, as well as a hop size of 441 samples. The convolutional layers each have 16 filters, with kernel sizes of 3 × 3 for the first two layers and 1 × 8 for the final layer. The method was tested on a variety of existing beat-and tempo-annotated datasets, and its success was compared to reference systems in both tasks. Findings show that the multi-task formulation produces cutting-edge efficiency in both tempo estimation and beat recording. The most noticeable improvement in output occurs on a dataset where the network was trained on tempo labels but where the beat annotations are mostly ignored by the network. The underlying beat tracking system is inspired by two well-known deep learning methods: the WaveNet model [38] and the latest state-of-the-art in musical audio beat tracking, which employs a bi-directional long short-term memory (BLSTM) recurrent architecture. 
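As a hedged illustration of how convolutional models consume such log-mel input, the following PyTorch sketch defines a small convolutional classifier that maps a log-mel patch to a meter class. The layer sizes and the three-class output are arbitrary choices for illustration and do not reproduce any of the architectures cited above.

import torch
import torch.nn as nn

class MeterCNN(nn.Module):
    def __init__(self, n_classes=3):                    # e.g., duple / triple / other
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):                                # x: (batch, 1, n_mels, n_frames)
        return self.classifier(self.features(x))

model = MeterCNN()
dummy = torch.randn(4, 1, 128, 256)                      # four fake log-mel patches
print(model(dummy).shape)                                # torch.Size([4, 3])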
To train the system, the annotated beat training data were represented as impulses at the same temporal resolution as the input features (i.e., 100 frames per second), and different datasets were used for training and eventual evaluation, unlike other approaches where one single dataset is divided into training and test sets. Tracking meter at a higher metrical level is a task pursued under the title of downbeat detection [113]. Therefore, we can also consider downbeat detection with deep learning features. Durand and Essid in [104] suggested a conditional random field method for estimating the downbeats of an audio signal. In the first instance, four complementary features pertaining to harmony, rhythm, melody, and bass were generated from the signal, and the tatum level was segmented. Adapted convolutional neural networks (CNNs) were then used for feature learning based on each feature's characteristics. Finally, a feature representation concatenated from the networks' final and/or penultimate layers was used to describe the observation feature functions and fed into a Markovian Conditional Random Field (CRF) model that produced the downbeat sequence. The model was evaluated using a Friedman test and Tukey's honestly significant difference (HSD) criterion and was found to have an F-measure improvement of +0.9% using features from the last layer, at a 95% confidence interval. With a deep learning approach, music domain assumptions are relevant when not enough training data are available, as suggested by Pons et al. in a 2017 study [69]. They were able to automatically categorize audio samples using waveforms as input and very small convolutional filters in a convolutional neural network, a common architecture for music genre classification as shown in Figure 5, and thus indirectly estimated various attributes, one of which was the meter. The CNN architecture was divided into input, front-end, back-end, and output for easy implementation. The front-end that takes in the spectrogram is a single-layer CNN with multiple filter shapes divided into two branches: the top branch captures timbral features and the lower branch captures temporal features. The shared back-end is made up of three convolutional layers (each with 512 filters and two residual connections), two pooling layers, and a dense layer. By combining two models, one implementing classical audio feature extraction with minimal assumptions and the other dealing with spectrograms through a design that relies heavily on musical domain knowledge, meter tags were obtained. Figure 5. A typical convolutional neural network architecture used for time signature detection: the audio signal is processed into a spectrogram, which is input to convolutional layers, and the outcome is then input to a classical artificial neural network. This kind of pipeline was also suggested by Humphrey et al. [114], who advocated moving beyond feature design to automatic feature learning. A fraction of the Million Song Dataset, the MagnaTagATune dataset (25 k songs), both of which have been mentioned in the dataset section, and a private dataset of 1.2 M songs were combined to validate the two distinct music auto-tagging design approaches considered. The result of this study brought about an approach to learn timbral and temporal features with 88% accuracy.
Purwins et al. in [106] showed how deep learning techniques can be very useful in audio signal processing in the areas of beat tracking, meter identification, downbeat tracking, key detection, melody extraction, chord estimation, and tempo estimation, by processing speech, music, and environmental sounds. Whereas MFCCs are the dominant features in traditional signal processing, log-mel spectrograms (see Section 1) are the predominant features in deep learning. As confirmed by Purwins, convolutional neural networks have a fixed receptive field, which limits the temporal context taken into account for a prediction while also making it very simple to expand or narrow the context used. While it was not explicitly stated which of the three popular deep learning methods performs best, the data used sometimes determines the method to be used. For this analysis, the Million Song Dataset was chosen; a 29 s log-mel spectrogram was reduced to a 1 × 1 feature map and categorized using 3 × 3 convolutions interposed with max-pooling, which yielded a good result of 0.894 AUC. Convolutional-Recurrent Neural Networks Fuentes et al. in [107] combined a non-machine-learning approach with deep learning to estimate downbeats and, in the process, extract the time signature. The deep learning component was a combination of a convolutional and a recurrent network, which they called a CRNN, proposed in their previous work [115]. Using the Beatles dataset, chosen for its rich annotations of features such as beats and downbeats, they considered a set of labels Y representing the beat position inside a bar, and took bar lengths of 3 and 4 beats, corresponding to 3/4 and 4/4 meters. The output labels y are a function of two variables: the beat position b ∈ B = {1, …, bmax(r)} and the number of beats per bar r ∈ R = {r1, …, rn}, which relates to the time signature of the piece. The model experienced a level of success, but it was incapable of identifying rare musical variations in order to fit the global time signature consistently; for example, it estimated more 4/4 pieces than 3/4. Nonetheless, this model improves downbeat tracking performance, raising the mean F-measure from 0.35 to 0.72. Conclusions and Future Pathways In this paper, we presented a summary of different methods for estimating time signature in music, considering both state-of-the-art classical and deep learning methods with a focus on the dataset used and the accuracy obtained, as shown in the dendrogram in Figure 6. Since no comparable study existed, there was a need for this analysis. The history of datasets has also been explored in terms of their production processes and purposes. The experiments that have been conducted so far have produced promising findings, indicating that time signature may be a significant feature for music genre classification. This survey has shown that, in order to estimate the time signature using digital signal processing analysis, the most promising approach has been to generate similarity matrices of the temporal features of audio or MIDI files, once the music signal has been converted into an appropriate feature sequence. Based on a similarity measure, a self-similarity matrix (SSM) is generated from the feature sequence. The SSM exhibits blocks and paths with a high overall score, and each block or path specifies a pair of segments that are comparable. Using a clustering step, whole groups of mutually comparable segments are generated from the pairwise relations.
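As a concrete illustration of the self-similarity pipeline summarized above, the sketch below builds an SSM from an MFCC feature sequence. The choice of 13 MFCCs, the hop length, and the cosine similarity measure are illustrative assumptions rather than the exact configuration of any of the surveyed methods.

# Minimal sketch of a self-similarity matrix (SSM) built from an MFCC
# feature sequence; the 13 coefficients, hop length and cosine similarity
# are illustrative assumptions.
import numpy as np
import librosa

sr = 22050
y = np.random.randn(10 * sr).astype(np.float32)      # stand-in for a real audio clip

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=512)     # (13, n_frames)
feats = mfcc / (np.linalg.norm(mfcc, axis=0, keepdims=True) + 1e-9)    # unit-norm columns

ssm = feats.T @ feats            # (n_frames, n_frames) cosine self-similarity matrix

# Blocks and paths of high similarity away from the main diagonal indicate
# repeated, mutually comparable segments; a clustering step can then group them.
lag = 200                        # compare frames 200 hops apart (illustrative)
print(ssm.shape, float(np.mean(np.diag(ssm, k=lag))))

Stripes parallel to the main diagonal correspond to repeated segments, and it is the spacing of such structures relative to the beat period that the similarity-matrix methods surveyed above exploit when inferring the time signature.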
More detailed research into similarity matrices built from between 4 and 20 MFCCs could yield better results. It is important to note that the ASM, RSSM, BSSM, and ACF work better on MIDI files than on digital audio files; however, MIDI files are no longer widely used. With audio samples, time signature estimation becomes dependent on the tempo of the track, which these other methods did not take into account. Among deep learning approaches, network architectures such as RNNs have shown some success but cannot retain audio information over long spans; the CNN architecture appears to be the way forward for this kind of task because it gives higher accuracy for a wide range of both regular and irregular time signatures, although it also requires more computational time and power. A combination of the two architectures, such as a CNN and an RNN in which features are extracted in the convolutional layers and then passed to the recurrent layers, has also proven effective for time-series audio signals. This implies that transfer learning, an approach that has not been fully explored in this research area, could also be given more attention. More than 70% of the studies considered in this review assumed that music pieces had repeated bars at various points in the piece, which is not always the case. Estimating musical parts with an irregular signature or beat is challenging. As a result, additional research may be conducted in this field. The aim of this analysis is to chart a course for future study in feature extraction for machine learning algorithms used in music genre classification, time signature estimation and identification, and beat and tempo estimation in the Music Information Retrieval domain. Using a better approach as a pre-processing step to retrieve the time signature as an additional feature in a neural network pipeline could eventually increase the accuracy of the model considerably. Author Contributions: Conceptualization, D.K. and J.A.; writing-original draft preparation, J.A. and D.K.; writing-review and editing, J.A., D.K. and P.K.; supervision, D.K. and P.K.; funding acquisition, D.K. and P.K. All authors have read and agreed to the published version of the manuscript.
12,283.6
2021-09-29T00:00:00.000
[ "Computer Science", "Engineering" ]
Investigation on Characteristics of Microwave Treatment of Organic Matter in Municipal Dewatered Sludge This study aimed to utilize a microwave technology to degrade active organic matters of the municipal dewatered sludge in a high-temperature environment. The effects of extraction agent, nanomaterial assistants, and microwave-absorbing agents and activating agents on the degradation efficiency were investigated. Dimethyl carbonate was used as the extraction agent. Nanostructured titanium oxide (TiO2) and zinc oxide (ZnO) exhibited effective assistance in the process of microwave treatment. We also developed a kind of microwave-absorbing agent, which was the sludge-based biological carbon. The sodium sulfate (Na2SO4), calcium hydroxide (Ca(OH)2), and magnesium chloride (MgCl2) were selected as activating agents to facilitate the organic matter discharging from the sludge. Through optimizing the experimental factors, it was confirmed that 0.1 wt% TiO2, 0.1 wt% ZnO, 2 wt% dimethyl carbonate, 10 wt% sludge-based biological carbon, 7.5 wt% Ca(OH)2, 0.5 wt% MgCl2, and 6 wt% Na2SO4 were the most appropriate addition amounts in the municipal dewatered sludge to make the organic matter decrease from 42.17% to 22.45%, and the moisture content reduce from 82.98% to 0.48% after the microwave treatment. By comparison, the organic matter degradation is almost zero, and the moisture content decreases to 8.69% without any additives. Moreover, the residual inert organic matter and sludge can be further solidified to lightweight construction materials by using liquid sodium silicate as the curing agent. The research provides a significant reference for the effective, fast, and low-cost treatment of the organic matter in the municipal sludge. Introduction With the rapid development of construction in urban sewage treatment facilities, the rate of sewage concentrated disposal increased accordingly, resulting in a sharp increase in dewatered sludge quantity [1,2].There are a large number of organic matters in the sludge, which usually exhibit different contents and compositions due to the variety of the source of sewage, treatment process, the living standard, and dietary habits of urban residents.Common sludge organic matter with high molecular weight and boiling point often contains twigs, sawdust, small cloth, bacteria, insect eggs, and other components [3].However, the high content of organic matter in municipal dewatered sludge is easily corruptible if the disposition of organic matters is inappropriate, which not only causes environmental pollution, but also brings resource waste.Therefore, it is a great challenge to both manage municipal dewatered sludge by degrading the organic matters and explore an effective route to improve the degradation efficiency. 
Usually, traditional sludge treatment strategies include concentration, dehydration and deweighting, dewatering, anaerobic and aerobic nitrification, pyrolysis, and drying [4][5][6][7].As is well known, microwave irradiation is a new type of green energy, which contains both electric field and magnetic field.There are thermal effects and nonthermal effects under microwave irradiation, both of which conduct highly efficient treatments for the organic matter, heavy metal, and water in sludge [8,9].When the nanostructured materials were irradiated by microwave, the dipole of organic matters surrounded by the nanostructured materials would generate turning-direction polarization.When the direction of the electric field with a high frequency is changed, the polarization direction of the dipole will accordingly alter.In the change process, a lot of heat is produced due to friction among the molecules, resulting in the temperature increase of the municipal dewatered sludge.Thus, the thermal effect of microwave treatment can promote the degradation of the organic matters [10][11][12]. At the same time, the nonthermal effect of microwave treatment is more significant.The microwave electric field increases the absorption efficiency and carrier separation of the nanostructured materials, and promotes water removal and formation of free groups.More importantly, the high-frequency electromagnetic waves of microwave radiation can penetrate the extraction medium and reach the vascular bundle and adenocyte system [13].The temperature inside the cell rises quickly ascribes due to the absorption of microwave energy, leading to cell lysis.Consequently, the active components in the cells flow out freely, which can be captured and dissolved by the extraction medium at a low temperature.Therefore, microwave heating causes greater heating of materials compared with conventional heating methods.Furthermore, microwaves can penetrate inside materials, providing heat from inside to outside, which is advantageous to the volatilization of the water and organic matters.As a result, the thermal effect and nonthermal effect of the microwave treatment together lead to a fast organic matters extraction and sludge dewatering [14,15]. However, many organic matters with the characteristic of absorbing heating selectively cannot absorb the microwave energy, which generates poor degradation efficiency [16].In order to improve the extraction of the organic matters from the sludge, we used the different additives into the sludge to enhance the interfacial interaction of the organic matters and sludge [17]. In this work, the microwave was utilized to treat the municipal dewatered sludge.The mixture of green extraction agent, a small amount of nanomaterials, and sludge-based microwave absorbent and activating agents was introduced into the sludge with the irradiation of microwave, resulting in a synergy effect to effectively improve the degradation of the organic matters. Materials and Methods The used sludge was taken from a sewage treatment plant of Wuhan City in Hubei.The sludge parameters are moisture content of 82.98 wt%, natural density of 1.14 kg/m 3 , pH of 7.07, the content of organic matter is 42.17 wt%, and the boiling point is 88 • C. The used microwave oven was self-designed and manufactured by Guangzhou Diwei Microwave Equipment Co., Ltd.(Guangzhou, China), and the output power of microwave ranged from 900 to 6300 W. 
We conducted the single factor experiment to confirm the optimized addition of extraction agent, nanomaterial assistants, microwave-absorbing agents, and activating agents. The typical experimental scheme: Firstly, municipal dehydrated sludge (M0) was introduced into a polystyrene cup with a thermal decomposition temperature of over 300 °C, and then a certain amount of assistant nanomaterials (TiO2 and ZnO nanoparticles), activating agents (Na2SO4, Ca(OH)2 and MgCl2), and absorbing agents (sludge-based biological carbon) were added into the cup with constant stirring. The total weight of the mixture is M1. After the microwave treatment was conducted, the sludge was cooled to room temperature and weighed as M2. The quality reduction rate of the samples (ψ1) is evaluated by the following Equation (1). The calculation method for the organic matter content: dry sludge (M2) after microwave treatment was put into a ceramic crucible and the total weight is M3. The crucible was transferred into a muffle furnace for heating treatment at 600 °C for 6 h, and the resultant weight is M4. The organic matter content is calculated by Equation (2). Furthermore, we designed an orthogonal experiment to analyze the influence of activating agents, determining the most suitable addition amount. Results and Discussion Under microwave radiation, the sludge could be rapidly heated up, promoting the hydrolysis of carbohydrates into polysaccharides and monosaccharides with low molecular weights, and facilitating the hydrolysis of proteins into polypeptides, dipeptides, amino acids, and other substances that are further hydrolyzed into low molecular organic acids, ammonia, and carbon dioxide [12]. Meanwhile, fat would be hydrolyzed into stearic acid, palmitic acid, and so on. Phosphorus and nitrogen in cells are also released due to the hydrolysis of nucleic acid. The mechanism illustration of microwave treatment for the municipal dewatered sludge is shown in Figure 1. The mixtures of farthing nanomaterials, green extraction agent, sludge-based microwave absorbent, curing binder, activator, and other low-cost materials are irradiated by microwave together, resulting in the degradation of the organic matters, and the dehydration and solidification of the residual materials.
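For reference, a plausible form of Equations (1) and (2), inferred from the quantities M1-M4 defined above and from standard mass-loss and loss-on-ignition conventions (and therefore an assumption rather than a quotation of the original equations), is:

\[ \psi_1 = \frac{M_1 - M_2}{M_1} \times 100\% \qquad (1) \]

\[ \text{Organic matter content} = \frac{M_3 - M_4}{M_2} \times 100\% \qquad (2) \]

Here M3 - M4 is the mass lost on ignition at 600 °C, attributed to organic matter, and M2 is the mass of the dried sludge placed in the crucible.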
The above-mentioned hydrolysates could be extracted by suitable extractants [13]. Thus, the choice of extractants plays a significant role in determining the efficiency of the microwave treatment. As reported, dimethyl carbonate represents a new generation of nontoxic solvent, exhibiting the characteristics of safety, convenience, less pollution, and diverse dissolution to organic compounds [14,15]. Thus, in this work, we investigated the effect of dimethyl carbonate as an extraction agent on the reduction rate of the organic matters under microwave irradiation. The microwave treatment was conducted under the power of 6300 W for 2 min (an optimized condition). The generated temperature is approximately 93.4 °C.
The added mass ratios of the dimethyl carbonate are 0 wt%, 1 wt%, 2 wt%, 3 wt%, and 4 wt%, respectively. Figure 2 displays the variation tendencies of the sludge mass loss and the organic matter content as the content of the dimethyl carbonate increased. The plot shows that the quality loss of the sludge mixture initially increased and then decreased, and the mass loss rate of the mixture reached a maximum of 79.90% with a dimethyl carbonate content of 2 wt%. Accordingly, the content of organic matter decreased from 42.17% in the initial stage to a minimum of 38.20%. As calculated, the 2 wt% dimethyl carbonate extracts only 3.97% of the organic matter in the sludge under microwave radiation, and there is still a lot of residual organic matter in the sludge. It is considered that the extraction efficiency of the organic matter is hardly improved by microwave radiation relying on dimethyl carbonate alone at a relatively low temperature of 93.4 °C, and the organic matters with high molecular weight were tightly attached to the sludge, so a higher energy is required to improve the extraction efficiency. As is well known, nanomaterials have a large specific surface area and many defects; thus, they exhibit high surface adsorption and microwave absorption [15][16][17][18][19], which leads to an increase in temperature around the nanomaterials. To investigate the effect of ZnO nanoparticles on the degradation efficiency of the organic matters, we mixed 0.2-1 wt% ZnO pretreated powder and 2 wt% dimethyl carbonate with the municipal dewatered sludge under microwave treatment. Figure 3a presents the relationship of the mass loss and organic matter content with the added content of ZnO. The plot reveals that the mass loss of the mixture initially increased as the ZnO content increased, and then maintains a saturated value. Correspondingly, the organic matter content is greatly reduced when the added ZnO is increased to 0.1%, and then displays a little increase. Thus, considering the nanomaterial cost, the amount of ZnO powder added in the sludge was selected as 0.1 wt%. Through calculating the organic matter content, the result shows that it is decreased from 42.17% to 37.87% after microwave treatment, and the degradation rate of the organic matters in the sludge is 10.23%, suggesting an effective treatment by using a small amount of ZnO. It was also found that the TiO2 nanomaterial is another kind of effective additive to assist in the degradation of organic matter. We conducted the microwave treatment using 0.1-0.5 wt% TiO2 nanoparticles and 2 wt% dimethyl carbonate. Figure 3b depicts the change of mass losses and organic matter content with the added content of TiO2, demonstrating a similar variation tendency to the case of ZnO. Therefore, the added amount of TiO2 powder is set as 0.1 wt%, and the content of organic matter is decreased from 42.17% to 24.68%. The improvement in degradation efficiency of the organic matters by ZnO and TiO2 powders is mainly because the nanomaterials could be suspended on the surface of the organic matters, and friction between the nanomaterial and organic matter under the high-frequency microwave field generates the additional energy required to improve the degradation efficiency of the organic matters.
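The degradation rate quoted for ZnO follows from the relative decrease in organic matter content; assuming the same convention throughout,

\[ \frac{42.17\% - 37.87\%}{42.17\%} \approx 10.2\%, \]

which matches the reported 10.23% for 0.1 wt% ZnO; applying the same formula to the TiO2 case (42.17% to 24.68%) gives roughly 41%.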
Before microwave treatment, ZnO and TiO2 nanomaterials should undergo a surface modification process to reduce nanoparticle aggregation and improve the surface free energy of the nanomaterials. Thus, we put the nanomaterials into the microwave oven to facilitate short-term activation before mixing them with the sludge for the microwave treatment. When the microwave activation time is set below 10 s, the degradation efficiency of the organic matters increased with the extension of activation time. However, if the time is set as 10-20 s, the degradation efficiency would gradually decrease. Thus, 10 s is an optimized pretreatment time, making the organic matter content decrease from 42.17% to 26.91%.
ZnO and TiO2 nanomaterials have been confirmed to effectively assist in degrading organic matter. Furthermore, to further improve the degradation efficiency, microwave-absorbing agents are also applied in the experiment. There have been many investigations that report their effects during the process of microwave treatment [20]. Sludge-activated carbon powder, due to its decontamination performance, was used as the microwave-absorbing agent in this work; it has been proven to exhibit high removal efficiency of some organic materials and heavy metals [21]. Thus, 2 wt% dimethyl carbonate, 0.1 wt% TiO2, and 0.1 wt% ZnO nanoparticles, as well as sludge-activated carbon powder with contents of 5 wt%, 10 wt%, and 15 wt%, were introduced into the sludge for the microwave treatment. A mixture without sludge-activated carbon powder was also tested for comparison. It should be mentioned that the temperature of the mixture increased as the amount of the sludge-activated carbon was enhanced. The highest temperature reached 212 °C when the content of sludge-activated carbon was 15 wt%. The mass reduction rate of the mixture exceeded 10%, and the content of the organic matters was remarkably decreased from 42.19% to 26.25%, as displayed in Figure 4. The effective degradation of the organic matters is ascribed to the elevation of the temperature, which consequently resulted in the promotion of reactive activity. Furthermore, after microwave treatment, the sludge-based biological carbon changed to residual dry sludge consisting of carbonaceous materials, ZnO and TiO2 nanoparticles, and others. Thus, the dewatered sludge can be further used as raw material of the microwave absorber.
As reported, the organic matters could be effectively extracted with the assistance of cationic exchange resin [22]. In this work, we also introduced inorganic salts into the reaction to promote the dissolution of protein, such as sodium sulfate, magnesium chloride, and calcium hydroxide, which were regarded as the microwave activators. They can not only adjust the acid-base balance of the sludge, but also improve the water removal [23]. Thus, we mixed sodium sulfate, magnesium chloride, and calcium hydroxide with the municipal dewatered sludge in the presence of ZnO and TiO2 nanoparticles. Figure 5a-c presents the effects of added MgCl2, Na2SO4, and Ca(OH)2 with different mass ratios on the mass losses of sludge and organic matter content. Figure 5a displays that the quality loss of the mixture is initially increased and then decreased as the content of MgCl2 gradually increased. The largest mass loss is achieved when the added amount of MgCl2 is 0.5 wt%. The content of the organic matters decreased from 42.17% to 26.98% as the MgCl2 content increased from 0 to 1.5 wt%. Figure 5b shows that the mass loss rate of the mixture and the reduction of the organic matters achieved the highest values when the content of Na2SO4 was 6 wt%. Figure 5c shows that the sludge mass loss is firstly increased and then remains at a relatively stable value as the content of Ca(OH)2 increased from 0 to 15 wt%. The highest reduction rate of the sludge reached 72.86%, while the content of the organic matters decreased from 42.17% to 24.68%, with a Ca(OH)2 content of 5 wt%.
To further optimize the added amounts of MgCl2, Na2SO4, and Ca(OH)2 in the process of the microwave treatment, we designed an orthogonal experiment to analyze the influence of each factor, and the orthogonal experiments determined the most suitable contents to be Ca(OH)2 of 7.5 wt%, MgCl2 of 0.5 wt%, and Na2SO4 of 6 wt%. The experimental results are listed in Table 1. The effects of MgCl2, Na2SO4, and Ca(OH)2 on the mass loss of sludge and the degradation of organic matters are crossed, so it is necessary to elucidate the most suitable added content of the microwave activators. In the orthogonal experiment, we mixed into 20 g of sludge 0.02 g of TiO2 nanoparticles, 0.02 g of ZnO nanoparticles, 0.4 g of dimethyl carbonate, and 1 g of sludge-based biological carbon obtained from a previous microwave treatment.
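For readers unfamiliar with the design, a three-factor orthogonal experiment of the kind described here is typically laid out as an L9 array, in which every pair of factor levels appears together exactly once. The sketch below constructs such a layout; the three levels assumed for each activating agent are purely illustrative, since the study reports only the optimum it found (7.5 wt% Ca(OH)2, 0.5 wt% MgCl2, and 6 wt% Na2SO4), not the levels tested.

# Sketch of a standard L9 orthogonal design for three factors at three levels.
# The level values below are illustrative assumptions, not taken from Table 1.
L9 = [  # each row: level index (0-2) for factors A, B, C
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]
levels = {
    "Ca(OH)2 wt%": [5.0, 7.5, 10.0],   # assumed levels
    "MgCl2 wt%":   [0.5, 1.0, 1.5],    # assumed levels
    "Na2SO4 wt%":  [2.0, 4.0, 6.0],    # assumed levels
}
factors = list(levels)
for run, row in enumerate(L9, start=1):
    recipe = {f: levels[f][i] for f, i in zip(factors, row)}
    print(f"run {run}: {recipe}")

Averaging the measured mass loss (or organic matter reduction) over the three runs at each level of a factor then indicates that factor's individual influence, which is how the most suitable contents are usually chosen from such a design.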
Finally, in order to further solidify the residual organic matters, we introduced liquid water glass into the mixture to conduct a curing process using the waste heat. In the experiment, 0.1 wt% ZnO nanoparticles, 0.1 wt% TiO2 nanoparticles, 2 wt% dimethyl carbonate, 10 wt% sludge-based biological carbon, 0.5 wt% MgCl2, 7.5 wt% Ca(OH)2, 6 wt% Na2SO4, and liquid water glass, with added amount ratios of 0 wt%, 1 wt%, 2 wt%, 3 wt%, 4 wt%, and 5 wt%, were introduced into 20 g of sludge. Figure 6 reveals that the mass loss rate of the mixture and the content of the organic matters maintain relatively stable values when the content of liquid water glass was elevated, indicating that the liquid water glass cannot play a significant role in promoting the degradation, but it can solidify the residual organic matters and sodium silicate in the sludge mixture to produce lightweight construction materials using one-step microwave irradiation. The compressive strength is remarkably enhanced to over 2 MPa. When the content of the liquid water glass was 5 wt%, the mass loss rate of the mixture reached 72.01%, and the content of organic matter was reduced from 42.17% to 22.45%. Only ~46.76% of the inert organic matter remained stable in the sludge. In this work, we always took cost into consideration. The total addition of the microwave solvent and nanomaterial assistants is 2.2 wt%. The usage of dry municipal dewatered sludge as the microwave-absorbing agent can further reduce the cost. The activating agents are industrial by-products. By calculation, if one ton of municipal dewatered sludge is treated, the cost of power consumption is 56 RMB. The price of the added microwave solvent, nanomaterial assistants, and liquid sodium silicate is 168 RMB. After microwave treatment, 0.6 ton of lightweight construction materials is produced, which has a value of 300 RMB. Thus, we can obtain a profit of 76 RMB per ton.
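The quoted per-ton profit follows directly from the listed figures:

\[ 300 - (56 + 168) = 76 \ \text{RMB per ton of treated sludge.} \]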
Figure 1. Illustration of sludge mixed with various additives (a) and municipal dewatered sludge after microwave treatment (b).
Figure 2. Mass loss of the sludge and content of the organic matters with respect to the content of the dimethyl carbonate during the process of the microwave treatment.
Figure 3. Relationship of mass losses and organic matter content with the content of ZnO (a) and TiO2 powder (b).
Figure 4. Effect of the sludge-based microwave absorber on mass losses and extraction of organic matter.
Figure 5. Effect of the MgCl2 (a), Na2SO4 (b), and Ca(OH)2 (c) added amounts on the mass losses and extraction of organic matter.
Figure 6. Effects of liquid water glass curing binder on microwave treatment.
Table 1. Results of the orthogonal experiments.
7,138.6
2019-03-20T00:00:00.000
[ "Environmental Science", "Materials Science" ]
ANALYSIS OF EFFECT OF CAPR, DAR, ROA AND SIZE ON TAX AVOIDANCE DOI:10.31933/DIJMS Abstract: This study aimed to identify the effect of the independent variables capital intensity (CAPR), return on assets (ROA), debt to asset ratio (DAR), and company size (SIZE) on tax avoidance (CETR) as the dependent variable. The study was tested using multiple linear regression analysis with the SPSS 25 program, with a causality and comparative approach using cross-sectional data. The results for 2015 showed that capital intensity and the debt to asset ratio do not affect tax avoidance, while return on assets and company size have a significant negative effect on tax avoidance. For 2017, the results showed that capital intensity, the debt to asset ratio, and company size do not affect tax avoidance, while return on assets has a significant negative effect on tax avoidance. Hypothesis testing results indicate that the independent variables simultaneously affected the dependent variable in both 2015 and 2017. INTRODUCTION One visible manifestation of a nation's independence is the progress of its national development. Indonesia is a country whose main source of funding comes from tax revenue. Since the election of Ir. Joko Widodo (Jokowi) as the 7th President of the Republic of Indonesia, national development in infrastructure and property has been one of the featured programs of the Jokowi administration, such as the one-million-homes program to meet the residential needs of Indonesian society. In a related development, the government issued Government Regulation No. 34 of 2016, an amendment to Government Regulation No. 71 of 2008, concerning a new final income tax rate on income from the transfer of title to land and buildings and from binding sale and purchase agreements for land and buildings. This regulation set the income tax on the sale of a house or land lower than under the previous regulation, reducing the rate from 5% to 2.5%, and it came into force 30 days after the date of enactment, which fell on September 8, 2016. The effective tax rate is the percentage tax rate applied to a certain tax base so that the tax burden is captured more effectively. Broadly, the effective tax rate is a measure of the tax burden on companies that represents the value of taxes paid relative to the company's revenues (Handayani et al., 2016). The Fiscal Policy Office noted that since 2016 the property sector has been sluggish, and the realization of tax revenue growth in the construction and property sector fell compared to the prior-year period. According to Asep Nurwanda, Division Head of the Fiscal Policy Agency, the slowing of the property sector over the last three years was due to the drop in commodity prices, so that property consumers who work in that sector were affected. Another phenomenon related to the property and real estate sector originated from the driving licence simulator case involving the defendant DS in 2013, in which investigators found evidence of tax evasion on property transactions taking place in society. The court proceedings revealed the sale of a luxury home by a developer to the defendant, but the prices recorded in the notary deed did not reflect the actual transaction prices. The value difference obviously causes a loss of potential tax revenue (Tambunan, 2015). Tax avoidance efforts can be carried out legally by the taxpayer, without violating tax provisions, by utilizing loopholes in the tax regulations (Gem, 2018).
However, the company owner will usually encourage aggressive tax management actions to reduce the tax burden that arises (Chen et al. in Handayani et al., 2016). If management succeeds in its tax avoidance efforts, this is likely to have a positive impact on the assessment of management performance, because the company's profit has been maintained, so the principal will usually give awards or bonuses to the company's employees. LITERATURE REVIEW Agency Theory Agency theory describes the relationship between the agent and the principal, in which one party has more information and the other has less (Jensen and Meckling in Hand, 2016). This information imbalance causes information asymmetry between the principal and the agent. The more detailed information held by the agent may be used by the agent to commit fraud against stakeholders (Handayani, 2016), for example by manipulating financial statements or tax reports with the aim of improving their own welfare. Capital Intensity Capital intensity reflects how much capital the company needs to generate its revenue, as reflected in the decrease or increase of fixed assets. Capital intensity is defined as the ratio of fixed assets, such as equipment, machinery and property, to the total assets of the company (Noor et al. in Puspita & Febriyanti, 2017). Return on Assets ROA is a measure of management's ability to use the company's assets, especially its fixed assets, to maximize profitability, and of overall managerial efficiency (Slamet in Mulyani et al., 2017). Debt to Asset Ratio DAR is the ratio used to measure the extent to which the company's assets are financed by debt. In terms of taxation, this means that the level of corporate debt will affect the company's tax burden. Size of Company In the context of tax avoidance, the size of the company can be measured by the natural logarithm of total assets, because large companies usually have abundant resources, which may give them influence over tax avoidance. Research Design This is quantitative research with a causality and comparative approach using cross-sectional data. The research data are secondary data from the financial statements of property and real estate sector companies listed on the Indonesia Stock Exchange (IDX) in 2015 and 2017. The study population consisted of 63 property and real estate companies listed on the Indonesia Stock Exchange. The research sample was a total of 29 property and real estate companies, determined by the purposive sampling method. The data analysis method uses quantitative data with ratio scale measurement. The research sample data were analyzed by inferential analysis through multiple linear regression tests using the SPSS 25 program. Independent Variables: capital intensity (CAPR), return on assets (ROA), debt to assets ratio (DAR), and firm size (SIZE). FINDINGS AND DISCUSSION Descriptive Analysis According to Table 1, the minimum value of the CAPR ratio in 2015 and 2017 is 0.00. This means that these companies have not been able to optimize the amount of capital to increase their profit through fixed assets. The maximum values of the CAPR ratio in the 2015 and 2017 samples are 0.40 and 0.45, which means the company is able to use its capital to help optimize revenue through its fixed assets at approximately 40-45%. Based on Table 2, the minimum value of the ROA ratio is 0.00 in both 2015 and 2017. This means that the company was not able to manage its assets to increase its profit.
The maximum values of ROA in the 2015 and 2017 samples are 0.27 and 0.17, which means that there are companies that try to manage their assets in order to optimize revenue. Based on Table 3, the minimum value of the DAR ratio is 0.08 in 2015 and 0.07 in 2017. This means the company does not depend much on debt to finance its assets; in 2015 and 2017 only about 8% and 7% of total assets were financed by debt, so there is little concern about the company being unable to repay its obligations. The maximum values of the DAR ratio in the 2015 and 2017 samples are 0.65 and 0.79, which means that more than half of such a company's assets are financed by debt, so it is feared that the company may not be able to repay its long-term liabilities in the future. Based on Table 4, the minimum value of the SIZE ratio is 25.89 in 2015 and 25.91 in 2017, both owned by Bekasi Asri Pemula Tbk. This ratio is obtained from the natural logarithm (Ln) of a company's total assets, which means the company has lower total assets than the other companies, so Bekasi Asri Pemula Tbk can be categorized as the company with the lowest size in the 2015 and 2017 samples. The maximum SIZE ratio in 2015 and 2017 was 31.35 and 31.67, respectively, which can be considered a large company size as measured by high total assets; in both 2015 and 2017 this company is Lippo Karawaci Tbk. Based on Table 5, the minimum value of the CETR ratio is 0.01 in 2015 and 0.03 in 2017, owned by Greenwood Sejahtera Tbk. Descriptively, this means the company is not actively seeking legal tax avoidance, as indicated by a ratio value close to 0. The maximum value of the CETR ratio in both the 2015 and 2017 samples is 0.75, which means that the company is actively seeking legal tax avoidance in managing its business. The Results of the t-Test From Table 6, the interpretation of the hypothesis testing results for the 2015 sample for each of the variables is described as follows: a. Capital intensity (CAPR) H0: There is no effect of Capital Intensity on legal tax avoidance in real estate and property companies listed on the IDX in the year prior to the enactment of Government Regulation No.34 of 2016. H1: There is an effect of Capital Intensity on legal tax avoidance in real estate and property companies listed on the IDX in the year prior to the enactment of Government Regulation No.34 of 2016. The t-test results for H1 show that the sig value of CAPR is 0.385, greater than the probability value of 0.05 (0.385 > 0.05), so H1 is rejected and H0 is accepted. The CAPR variable has a t-count of 0.885 with a t-table of 2.064, so t-count < t-table, which means that the CAPR variable has no contribution to CETR. It can therefore be concluded that capital intensity has no effect on legal tax avoidance in the year prior to the enactment of Government Regulation No.34 of 2016. b. Return on assets (ROA) H0: There is no effect of ROA on legal tax avoidance in real estate and property companies listed on the IDX in the year prior to the enactment of Government Regulation No.34 of 2016. H2: There is an effect of ROA on legal tax avoidance in real estate and property companies listed on the IDX in the year prior to the enactment of Government Regulation No.34 of 2016.
The t-test results for H2 show that the sig value of ROA is 0.000, smaller than the probability value of 0.05 (0.000 < 0.05), so H2 is accepted and H0 is rejected. The ROA variable has a t-count of 5.125 and a t-table of 2.064. From Table 7, the interpretation of the hypothesis testing results for the 2017 sample for each of the variables is described as follows: a. Capital intensity (CAPR) H0: There is no effect of Capital Intensity on legal tax avoidance in real estate and property companies listed on the IDX in the year after the enactment of Government Regulation No.34 of 2016. H1: There is an effect of Capital Intensity on legal tax avoidance in real estate and property companies listed on the IDX in the year after the enactment of Government Regulation No.34 of 2016. The t-test results for H1 show that the CAPR sig value is 0.076, greater than the probability value of 0.05 (0.076 > 0.05), so H1 is rejected and H0 is accepted. The CAPR variable has a t-count of 1.852 with a t-table of 2.064, so t-count < t-table, which can be interpreted as the CAPR variable having no contribution to CETR. It can therefore be concluded that capital intensity has no effect on legal tax avoidance in the year after the enactment of Government Regulation No.34 of 2016. b. Return on Assets (ROA) H0: There is no effect of ROA on legal tax avoidance in real estate and property companies listed on the IDX in the year after the enactment of Government Regulation No.34 of 2016. H2: There is an effect of ROA on legal tax avoidance in real estate and property companies listed on the IDX in the year after the enactment of Government Regulation No.34 of 2016. The t-test results for H2 show that the ROA sig value is 0.002, smaller than the probability value of 0.05 (0.002 < 0.05), so H2 is accepted and H0 is rejected. The ROA variable has a t-count of 3.502 and a t-table of 2.064, so t-count > t-table, which means the ROA variable has a contribution to CETR. A negative t value illustrates that ROA has an inverse relationship with CETR. So it can be concluded that ROA has a significant negative effect on legal tax avoidance in the year after the enactment of Government Regulation No.34 of 2016. H4: There is an effect of SIZE on legal tax avoidance in real estate and property companies listed on the IDX in the year after the enactment of Government Regulation No.34 of 2016. The t-test results for H4 show that the SIZE sig value is 0.634, greater than the probability value of 0.05 (0.634 > 0.05), so H4 is rejected and H0 is accepted. The SIZE variable has a t-count of 0.482 and a t-table of 2.064, so t-count < t-table, which means that the SIZE variable has no contribution to CETR. A negative t value illustrates that SIZE has an inverse relationship with CETR. So it can be concluded that SIZE has no effect on legal tax avoidance in the year after the enactment of Government Regulation No.34 of 2016.
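The critical values used in the tests above (t-table of 2.064 and F-table of 2.76) are consistent with a 5% significance level and the residual degrees of freedom implied by the sample of 29 companies and four predictors. The degrees-of-freedom calculation below is an inference from those figures rather than a value stated in the study; the sketch simply reproduces the table look-ups with SciPy.

# Reproduces the quoted critical values under the assumption that the residual
# degrees of freedom are df = n - k - 1 = 29 - 4 - 1 = 24 (29 sampled
# companies, 4 independent variables); this df is inferred, not stated.
from scipy import stats

n_companies, n_predictors = 29, 4
df_resid = n_companies - n_predictors - 1                  # 24

t_table = stats.t.ppf(1 - 0.05 / 2, df_resid)              # two-tailed 5% -> ~2.064
f_table = stats.f.ppf(1 - 0.05, n_predictors, df_resid)    # ~2.78 (quoted as 2.76)

print(round(t_table, 3), round(f_table, 3))

A computed t-count larger in absolute value than this t-table (equivalently, sig below 0.05) leads to accepting the alternative hypothesis, which is the decision rule applied to each variable above and to the F-test below.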
So the depreciation cost of fixed assets in the companies in this study sample is not intended as a tax avoidance effort, but merely reflects the companies running their operations. b. Effect of Return on Assets (ROA) on legal tax avoidance The t-test results in the years before and after the enactment of Government Regulation No.34 of 2016 show the similarity that in both situations there was a significant negative effect of the ROA variable on legal tax avoidance. In other words, the higher the ROA, the lower the legal tax avoidance effort. Real estate and property companies are now the corporate sector contributing to the implementation of the million-houses program in the era of the Joko Widodo government; to attract public interest in the program, there are certainly many government policies devoted to supporting the efforts of real estate and property sector companies in marketing their units to the public. The bank interest installment policy and/or low tax rates are expected to keep these companies able to maintain profits. However, profits from the real estate and property sector will certainly attract tight supervision from the government, especially in terms of tax supervision and auditing, which encourages taxpayers to prefer to obey the applicable tax provisions. c. Effect of Debt to Asset Ratio (DAR) on legal tax avoidance The t-test results in the years before and after the enactment of Government Regulation No.34 of 2016 show the similarity that in both conditions there was no influence of the DAR variable on legal tax avoidance, which means that higher or lower company debt does not affect companies' practice of legal tax avoidance. This happens because real estate and property companies on average have large investment property assets, and high company debt is not deliberately created by management merely to support a legal tax avoidance effort, even though the interest expense arising from corporate debt can be used to reduce the corporate tax burden. Management will certainly be more conservative in the financial statements and careful in managing financial ratios and cash flow related to company operations, so the interest expense incurred for a legal tax avoidance effort would not be commensurate with the high risks that companies must face due to having large debts. d. Effect of company size (SIZE) on legal tax avoidance The t-test results in the year prior to the enactment of Government Regulation No.34 of 2016 show that there was a significant negative influence of SIZE on legal tax avoidance, which can be interpreted to mean that the larger the company, the lower the level of effort put into the practice of legal tax avoidance. This is reasonable because the SIZE variable is measured using the natural logarithm of the company's total assets. So if the total assets of a company get higher, the company will certainly be closely monitored by the Directorate General of Taxes as part of the country's efforts to maximize the potential for tax revenue. Therefore, the more a real estate and property company succeeds in becoming a large company, the less it will dare to practice tax avoidance (legal tax avoidance).
Because the company is already large, the 5% tariff is not too heavy a tax burden, and management is more inclined to choose tax compliance rather than bear the potential risk it would accept by practicing tax avoidance. In the year after the enactment of Government Regulation No.34 in 2016, the tax rate on the transfer of land and buildings was changed to 2.5% by the Government of the Republic of Indonesia. The results showed no influence of company size on legal tax avoidance, which means that tax obligations no longer make tax avoidance the main focus of either small or large companies. This is because the new tax rate (2.5%) is not too heavy a corporate tax burden even for small companies. There is a simultaneous effect of CAPR, ROA, DAR and SIZE on legal tax avoidance in real estate and property companies listed on the IDX in the year before the enactment of Government Regulation No.34 in 2016. From Table 8 above, the F-test results show that the significance value is 0.000, smaller than the probability value of 0.05 (0.000 < 0.05), so the alternative hypothesis (H5) is accepted and H0 is rejected. This is reinforced by an F-count of 9.380 against an F-table value of 2.76; since F-count > F-table, the independent variables jointly contribute to the dependent variable. It can therefore be concluded that CAPR, ROA, DAR, and SIZE simultaneously have a significant positive effect on legal tax avoidance in the year before the enactment of Government Regulation No.34 in 2016. There is a simultaneous effect of CAPR, ROA, DAR and SIZE on legal tax avoidance in real estate and property companies listed on the IDX in the year after the enactment of Government Regulation No.34 in 2016. From Table 9 above, the F-test results show that the significance value is 0.030, smaller than the probability value of 0.05 (0.030 < 0.05), so H5 is accepted and H0 is rejected. This is reinforced by an F-count of 3.222 against an F-table value of 2.76; since F-count > F-table, the independent variables jointly contribute to the dependent variable. It can therefore be concluded that CAPR, ROA, DAR, and SIZE simultaneously have a significant positive effect on legal tax avoidance in the year after the enactment of Government Regulation No.34 in 2016. CONCLUSION AND SUGGESTION Based on the partial hypothesis tests with multiple regression analysis for 2015 (before the enactment of Government Regulation No.34 in 2016), capital intensity and the debt to asset ratio do not affect tax avoidance, while return on assets and company size have a significant negative effect on tax avoidance. For 2017, capital intensity, the debt to asset ratio, and company size do not affect tax avoidance, while return on assets has a significant negative effect on tax avoidance. The hypothesis test results indicate that in both 2015 and 2017 the independent variables simultaneously affect the dependent variable.
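The decision rule applied throughout the partial (t) and simultaneous (F) tests above compares each significance value with 0.05 and each computed statistic with its critical table value (t-table = 2.064, F-table = 2.76). The following is only a rough illustrative sketch of that comparison logic using the figures reported for the year after the regulation; the helper function and its name are not part of the original analysis, and the signs of the ROA and SIZE statistics simply follow the inverse relationships noted in the text.

```python
ALPHA = 0.05        # probability threshold used throughout the study
T_TABLE = 2.064     # critical t value reported for the partial tests
F_TABLE = 2.76      # critical F value reported for the simultaneous tests

def has_effect(statistic, critical, sig, alpha=ALPHA):
    """Reject H0 (conclude an effect) when sig < alpha and |statistic| exceeds the critical value."""
    return sig < alpha and abs(statistic) > critical

# Figures reported for the year after Government Regulation No.34 in 2016
tests = [
    # name,                statistic, critical, sig
    ("CAPR (t-test)",         1.852,  T_TABLE, 0.076),
    ("ROA (t-test)",         -3.502,  T_TABLE, 0.002),  # negative: inverse relation with CETR
    ("SIZE (t-test)",        -0.482,  T_TABLE, 0.634),
    ("Simultaneous F-test",   3.222,  F_TABLE, 0.030),
]

for name, stat, crit, sig in tests:
    verdict = "effect (H0 rejected)" if has_effect(stat, crit, sig) else "no effect (H0 accepted)"
    print(f"{name}: statistic={stat}, sig={sig} -> {verdict}")
```

Running this reproduces the verdicts stated above: only ROA and the joint F-test clear both thresholds.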
4,828.6
2020-05-11T00:00:00.000
[ "Economics", "Business" ]
Sex chromosome pre-reduction in male meiosis of Lethocerus patruelis (Stål, 1854) (Heteroptera, Belostomatidae) with some notes on the distribution of the species Abstract The karyotype and meiosis in males of giant water bug Lethocerus patruelis (Heteroptera: Belostomatidae: Lethocerinae) were studied using standard and fluorochrome (CMA3 and DAPI) staining of chromosomes. The species was shown to have 2n = 22A + 2m + XY where 2m are a pair of microchromosomes. NORs are located in X and Y chromosomes. Within Belostomatidae, Lethocerus patruelis is unique in showing sex chromosome pre-reduction in male meiosis, with the sex chromosomes undergoing reductional division at anaphase I and equational division at anaphase II. Cytogenetic data on the family Belostomatidae are summarized and compared. In addition, the structure of the male internal reproductive organs of Lethocerus patruelis is presented, the contemporary distribution of Lethocerus patruelis in Bulgaria and in the northern Aegean Islands is discussed, and the first information about the breeding and nymphal development of this species in Bulgaria is provided. introduction The genus Lethocerus Mayr, 1853 is a member of the family Belostomatidae (electric light bugs, toe biters), the subfamily Lethocerinae (Perez Goodwin 2006). The giant water bug Lethocerus patruelis is the largest European true bug and the largest European water insect. The adult bugs reach 80 mm in length. The information on the karyotypes of the genus Lethocerus has been recently summarized by Bardella et al. (2012). In Lethocerus species, chromosome numbers vary from 2n = 4 to 2n = ca. 30 with intermediate numbers of 2n = 8, 26 and 28. The cytogenetic mechanisms of sex determination are also diversified with XY, neo-XY and multiple X n Y encountered in different species. In several species, a pair of m-chromosomes (= microchromosomes) has been described (Ueshima 1979). As is common in Belostomatidae and in Heteroptera as a whole, all so far studied species of Lethocerus have been shown to have an inverted meiosis for the sex chromosomes in males (the so-called "sex chromosome post-reduction") with the sex chromosomes undergoing equational separation during the first division while reductional segregation during the second division (Ueshima 1979, Papeschi and Bressa 2006, Bardella et al. 2012. In the present work, we studied for the first time the structure of the internal reproductive organs, karyotype and meiosis in males of Lethocerus patruelis (Stål, 1854). In addition, we summarize here data on the contemporary distribution of L. patruelis in Bulgaria and in the Northern Aegean Islands, and provide the first information on the reproduction of this species in Bulgaria. Insects Males of Lethocerus patruelis were collected in 2001-2012 in different regions of southern Bulgaria. Collections were made either in water bodies using plankton net or (predominantly) by light traps. Two adults and three larvae were reared in the laboratory using small fishes (Gambusia affinis, Pseudorasbora parva and Carassius gibelio) as a food. Cytogenetic study was based on three males collected in the area of the border checkpoint Kapitan Andreevo, Bulgaria. Preparations To examine the internal reproductive organs, the abdomen of chloroform-anaesthetized males was opened and the entire reproductive system was dissected. 
For chromosome studies, the gonads were dissected out from the adults, fixed in Carnoy's fixative consisting of 96% ethanol and glacial acetic acid (3:1), and stored at 4°C. Cytological preparations were made by squashing a piece of the testis in a drop of 45% acetic acid on a slide. The coverslip was removed using a dry-ice technique (Conger and Fairchild 1953). Standard staining of chromosomes For this staining, the method described in Grozeva et al. (2010) was used with minor modifications. The preparations were first subjected to hydrolysis in 1 N HCl at 60°C for 8 min and stained in Schiff's reagent for 20 min. After rinsing thoroughly in distilled water, the preparations were additionally stained in 4% Giemsa in Sorensen's buffer, pH 6.8, for 20 min, rinsed with distilled water, air-dried, and mounted in Entellan. Fluorochrome staining of chromosomes To reveal the base composition of C-heterochromatin, staining with GC-specific chromomycin A3 (CMA3) and AT-specific 4′,6-diamidino-2-phenylindole (DAPI) was used, following the method described in Grozeva et al. (2010). C-banding pretreatment was carried out using 0.2 N HCl at room temperature for 30 min, followed by a 7-8 min treatment in saturated Ba(OH)2 at room temperature and then an incubation in 2×SSC at 60°C for 1 h. The preparations were then stained first with CMA3 (0.4 μg/ml) for 25 min and then with DAPI (0.4 μg/ml) for 5 min. After staining, the preparations were rinsed in McIlvaine buffer, pH 7, and mounted in an antifade medium (700 μl of glycerol, 300 μl of 10 mM McIlvaine buffer, pH 7, and 10 mg of N-propyl gallate). Microscopy The chromosome preparations were examined using a light and fluorescence microscope Axio Scope A1 with a ProgRes MFcool digital camera (Jenoptik) at 100× magnification. Testes In L. patruelis males, the internal reproductive organs consisted of a pair of testes united by the vasa deferentia (vd) with a median unpaired tube, the ductus ejaculatorius (de) (Fig. 1). Each vas deferens was dilated to form a large vesicula seminalis (vs). The testes were colorless and spherical in form, and each consisted of a single very long tube (seminal follicle) rolled up into a ball. The follicle decreased in diameter from the apex to the vas deferens and showed synchronized divisions in its different parts, with only sperm in its thinner part. There was no bulbus ejaculatorius and there were no accessory glands. Male karyotype and meiosis All three studied L. patruelis males presented the same chromosome complement. Spermatogonial metaphases showed 26 chromosomes, including four larger and two very small ones, with the rest of the chromosomes forming a gradual size row. There was also a pair of very small m-chromosomes (= microchromosomes) (Fig. 2b), but these were not apparent in many nuclei (Fig. 2a). The chromosomes had no primary constrictions, i.e. centromeres. Two of the larger chromosomes each showed a subtelomeric unstained gap, or secondary constriction, representing the nucleolus organizing region (NOR). These chromosomes are the X and Y sex chromosomes, as revealed by the observation of meiotic stages (see below). During meiotic prophase, the sex chromosomes were united and visible as a large, positively heteropycnotic body brightly fluorescent after CMA3 staining (Fig. 3). Cells at metaphase I (MI) showed 13 bivalents, including a small and negatively heteropycnotic pair of m-chromosomes (n = 13). At this stage, all bivalents were distributed randomly relative to each other.
Distinguishing between autosomal bivalents and the XY pseudobivalent was difficult, since the latter was only slightly heteromorphic in form owing to the size resemblance of the X and Y chromosomes (Fig. 4). However, CMA3 staining proved a reliable method for identifying the sex pseudobivalent as one of the largest pairs, with GC-rich NORs located in the X and Y chromosomes (Fig. 5). At anaphase I (AI), all the chromosomes segregated to opposite poles, resulting in two daughter metaphase II (MII) cells with 11A + m + X and 11A + m + Y, respectively (Fig. 6a). In the studied MII plates, the X and Y chromosomes were distributed randomly among the other chromosomes (Fig. 6b). DAPI staining did not reveal any differentiation along the length of the chromosomes (Fig. 7). Notes on the distribution and reproduction in Bulgaria In 2008 and 2011, we collected adult specimens of L. patruelis in water bodies in the Struma River Valley near Rupite (Bulgaria). Several water bodies in the Struma River Valley close to these localities were checked with a plankton net, and in July 2012 four larvae and five exuviae were found in the Marena artificial pond near General Todorov Village, thus representing the first evidence of breeding of L. patruelis in Bulgaria. Marena can be classified as a semi-natural mesotrophic to eutrophic lake with macrophytic vegetation (Tzonev et al. 2011). The hydrophytic coenoses in Marena form complexes with various hygrophytic communities, e.g. strips or patches of Typha spp., Scirpus lacustris, and tall sedges (Carex spp.). The submerged vegetation is a mixture of Myriophyllum and Potamogeton. Larvae were found close to the shoreline, in regions with submerged vegetation. Figures 2-7. 2 Spermatogonial metaphases: two of the larger chromosomes, X and Y, each show a subtelomeric unstained gap, representing the nucleolus organizing region (NOR) (arrowhead) (routine staining). 3 Meiotic prophase: the sex chromosomes are visible as a large, positively heteropycnotic and brightly fluorescent body (CMA3 staining). 4 Metaphase I (n = 13) (routine staining). 5 Metaphase I: GC-rich NORs located on both X and Y chromosomes (CMA3 staining). 6 After the first meiotic division all the chromosomes segregate to opposite poles (6a), resulting in two daughter MII cells (6b) with 13 elements each, 11A + m + X and 11A + m + Y, respectively (routine staining). 7 Metaphase I: DAPI staining did not reveal any differentiation along the length of the chromosomes. Bar = 10 μm. In the laboratory, we observed that larvae and adults used the stems of Myriophyllum as a resting place and during stalking/ambush attacks on their prey (electronic supplementary material, video S1). Discussion The range of L. patruelis includes the Balkan Peninsula, Anatolia, Israel, Syria, Saudi Arabia, Yemen, the United Arab Emirates, Kuwait, Iraq, Iran, Afghanistan, and the Oriental Region (Pakistan, India, Nepal, Burma), and recently this species was recorded from Italy (Polhemus 1995, Protić 1998, Perez Goodwyn 2006, Olivieri 2009, Fent et al. 2011). In Bulgaria, only a few records of L. patruelis specimens, migrating from the southern parts of the Balkan Peninsula and attracted to light, were published up to the year 2000 (Buresch 1940; Josifov 1960, 1974, 1986, 1999) (Fig. 8). During the last ten years, many new findings of L. patruelis have been made by us in Bulgaria: Kresna Gorge, the eastern Rhodopes, the Maritsa River Valley (from Kapitan Andreevo to Peshtera) and the southern Black Sea Coast (near Burgas).
In some of these regions, this species was very abundant; more than 60 specimens per night were attracted to light (Kapitan Andreevo Checkpoint, August-September 2011). A number of facts (the record of a breeding population in Marena; the existence of similar habitats in other regions with records of L. patruelis at light; the tendency toward milder winters in recent years) led us to suppose that this species may also breed successfully in other regions of southern Bulgaria (Maritsa River Valley, Burgas lakes). If such is the case, it would be further evidence of the recent changes in the European bug fauna caused by climate change and global warming (Rabitsch 2008). We have studied L. patruelis with respect to the male reproductive organs, karyotype and meiosis. The internal reproductive system in this species appears to be quite similar to that of Diplonychus rusticus (Fabricius, 1871) (Belostomatinae), the only belostomatid species studied so far in this respect (Pendergrast 1957, as Sphaerodema rusticum). In L. patruelis, each testis consists of a single follicle rolled up into a ball; each vas deferens is dilated to form a large vesicula seminalis; the bulbus ejaculatorius (usually representing, when present, a dilated anterior end of the ductus ejaculatorius) and the accessory glands (diverticula of the ductus ejaculatorius) are absent. Pendergrast (1957) found a similar condition in D. rusticus; however, he did not provide information about the number of follicles in the testes. We found that L. patruelis has 2n = 26 (22 + 2m + XY). The eight Lethocerus species studied so far with respect to karyotypes (Table 1) represent a large proportion of the 22 species currently known in this genus (Perez Goodwyn 2006). Hence, some preliminary inferences about the cytological features of Lethocerus, and also of the family Belostomatidae as a whole, can be drawn. The genus Lethocerus shows a fairly wide range of chromosome numbers, including both extremes for Belostomatidae, 2n = 4 and 2n = ca. 30, and three intermediate numbers of 2n = 8, 26 and 28 (Table 1). The species studied share the conventional cytological features of Heteroptera, such as holokinetic chromosomes (lacking centromeres, which facilitates karyotype evolution via occasional fusion/fission events; Kuznetsova et al. 2011), an XY sex chromosome system (with derivative neo-XY and multiple X n Y systems presumed to be inherent in three species), and m-chromosomes (detected to date in L. patruelis and suggested in L. uhleri, see Table 1). Within the genus, L. patruelis seems to be similar to L. indicus in chromosome complement. This resemblance concerns not only the chromosome number and the presence of m-chromosomes but also the fact that the sex chromosomes in L. patruelis (present paper) and L. indicus (Bagga 1959, Jande 1959) are indistinguishable from autosomes at meiotic metaphases when routine staining is used. In L. patruelis, this is due to the size resemblance of the X and Y chromosomes, which gives the XY pseudobivalent an almost homomorphic form at MI. It is noteworthy, however, that L. indicus was speculated to have a neo-XY sex chromosome system that originated through the evolutionary translocation of both sex chromosomes to one pair of autosomes in the ancestral karyotype (Jande 1959). Another example of a neo-XY system seems to be Lethocerus sp. from Michigan.
For this species, Chickering and Bacorn (1933) reported 2n = 4 with no distinguishable sex chromosomes. These authors suggested that this karyotype might has originated via a translocation of X and Y chromosomes to one pair of autosomes with a subsequent fusion between two more pairs of autosomes. Ueshima (1979) considered the karyotype of 2n = 24 + 2m + XY as the modal (the commonest) in the genus Lethocerus and the ancestral (i.e., plesiomorphic) one in its evolution. All other karyotypes were suggested to have originated from this ancestral one through autosome fusions and fragmentations, translocations of sex chromosomes to autosomes and loss of m-chromosomes (see Fig. 12 in Ueshima 1979). However, here it should be noted that the most common karyotype needs not to be plesiomorphic in a group (White 1973). In addition, the data available at that time for Lethocerus (see Table 4 in Ueshima 1979) were in fact not indicative of the modality of 2n = 24 + 2m + XY in the genus, and some data presented in Ueshima's scheme were not universally correct (Fig. 9). For example, in the karyotype formulae of some of the species (for instance Lethocerus sp. from New Orleans) Ueshima included m-chromosomes which however have not been mentioned in the original paper (Table 1). On the other hand, the ancestrality of a XY sex determination in Lethocerus is beyond question, since neo-XY and X n Y systems occurring in Belostomatidae (both), including Lethocerus (at least neo-XY), are clearly derivative being originated by X-autosome fusions or X-chromosome fissions, respectively. It cannot be doubted also that low chromosome numbers such as 2n = 8 in L. americanus and 2n = 4 in Lethocerus sp. from Michigan, are the derived characters brought about a series of autosome fusions during the course of evolution in this genus. It seems likely that the ancestral karyotype in Lethocerus includes 26 autosomes and XY mechanism as found in many representatives of this genus and Belostomatidae as a whole (Table 1). It is not possible even to suggest whether this karyotype includes a pair of m-chromosomes as was speculated by Ueshima (1979). It is evident that these minute chromosomes easily escape detection by bug cytogeneticists, and hence many species recorded as lacking m-chromosomes in fact have them in their karyotypes. CMA 3 staining of L. patruelis C-banded chromosomes revealed GC-rich sites corresponding to NORs in the X and Y chromosomes. This is the first case of NOR detection in Lethocerus. On the other hand, ribosomal genes have been already located in Belostoma chromosomes using various techniques such as fluorochrome staining, silver staining and FISH (Table 1). In Belostoma, five species were shown to have NORs also in sex chromosomes while three other species in a pair of autosomes. Noteworthy, the species with the same chromosome complement sometimes differ in rDNA location (for example, in sex chromosomes in B. cummingsi while in autosomes in B. dentatum, both with 2n = 26 + X 1 X 2 Y). In the greatest majority of living organisms, during the first division of meiosis all chromosomes reduce in number (reductional division), whereas during the second division the chromatids separate (equational division), and this pattern is named "pre-reduction" (White 1973). 
However, true bugs characteristically have an inverted sequence of sex chromosome divisions in male meiosis, the so-called sex chromosome "post-reduction", in which the sex chromosomes undergo equational division at anaphase I and reductional division at anaphase II (Ueshima 1979, Kuznetsova et al. 2011), and this is also true for Belostomatidae (Table 1). Interestingly, L. patruelis appeared unique in showing no inverted sequence of sex chromosome divisions in male meiosis. In this species, the X and Y chromosomes form a pseudobivalent at prophase and segregate to opposite poles at anaphase I, so that the first division of meiosis is reductional both for autosomes and for sex chromosomes. As a result of sex chromosome pre-reduction, second spermatocytes carry a single sex chromosome, either X or Y. The second division is then equational for all the chromosomes. Although pre-reduction of sex chromosomes is not usual in Heteroptera, it does occur in some groups (for example, all species of the family Tingidae studied so far have shown pre-reduction; Ueshima 1979, Grozeva and Nokkala 2001). Moreover, closely related species occasionally differ in this pattern (Ueshima 1979, Grozeva et al. 2006, 2007), as is also true of Lethocerus species. Male meiosis in Heteroptera can further be characterized by a radial configuration of one or sometimes both MI and MII plates. In this case, the autosomal bivalents at MI and the autosomes at MII form a ring on the periphery of the spindle, while the sex chromosomes are located in the center of this ring (Ueshima 1979). In L. patruelis, however, both MI and MII plates are non-radial, with a random distribution of all the chromosomes on the spindle.
4,158
2013-07-30T00:00:00.000
[ "Biology" ]
Synthesis, Characterization, and Potential Usefulness in Liver Function Assessment of Novel Bile Acid Derivatives with Near-Infrared Fluorescence (NIRBAD) Conventional serum markers often fail to accurately detect cholestasis accompanying many liver diseases. Although elevation in serum bile acid (BA) levels sensitively reflects impaired hepatobiliary function, other factors altering BA pool size and enterohepatic circulation can affect these levels. To develop fluorescent probes for extracorporeal noninvasive hepatobiliary function assessment by real-time monitoring methods, 1,3-dipolar cycloaddition reactions were used to conjugate near-infrared (NIR) fluorochromes with azide-functionalized BA derivatives (BAD). The resulting compounds (NIRBADs) were chromatographically (FC and PTLC) purified (>95%) and characterized by fluorimetry, 1H NMR, and HRMS using ESI ionization coupled to quadrupole TOF mass analysis. Transport studies using CHO cells stably expressing the BA carrier NTCP were performed by flow cytometry. Extracorporeal fluorescence was detected in anesthetized rats by high-resolution imaging analysis. Three NIRBADs were synthesized by conjugating alkynocyanine 718 with cholic acid (CA) at the COOH group via an ester (NIRBAD-1) or amide (NIRBAD-3) spacer, or at the 3α-position by a triazole link (NIRBAD-2). NIRBADs were efficiently taken up by cells expressing NTCP, which was inhibited by taurocholic acid (TCA). Following i.v. administration of NIRBAD-3 to rats, liver uptake and consequent release of NIR fluorescence could be extracorporeally monitored. This transient organ-specific handling contrasted with the absence of release to the intestine of alkynocyanine 718 and the lack of hepatotropism observed with other probes, such as indocyanine green. NIRBAD-3 administration did not alter serum biomarkers of hepatic and renal toxicity. NIRBADs can serve as probes to evaluate hepatobiliary function by noninvasive extracorporeal methods. ■ INTRODUCTION Bile acids (BAs) are steroids synthesized by the liver from cholesterol. These compounds are characterized by their marked organotropism toward tissues of the so-called enterohepatic circuit. BA vectorial properties are due to the presence in hepatocytes and intestinal epithelial cells (mainly in the ileum) of transmembrane proteins accounting for highly effective BA uptake from sinusoidal blood and the intestinal lumen, respectively. Although Na+-independent transporters, belonging to the OATP family, are involved in BA uptake by hepatocytes, this process mainly occurs via Na+-dependent transporters encoded by a member of family 10 of the solute carrier (SLC) superfamily of genes, namely, the hepatic Na+-taurocholate cotransporting polypeptide (NTCP, SLC10A1). The intestinal apical sodium-dependent BA transporter (ASBT, SLC10A2) plays a similar role in the ileum. 1 Owing to the high affinity and specificity of both transporters for BA as substrates, these compounds have been used as shuttles for molecules with pharmacological properties targeted toward tissues within this circuit, such as chlorambucil, 2 nucleosides, 3 nitrogenous bases, 4 polyamines, 5 or cisplatin, 6,7 thus increasing their bioavailability in the liver and intestine while reducing adverse side effects. 8
Moreover, in the assessment of liver function within clinical settings, diverse methodologies leveraging labeled BA derivatives (BADs) have been explored (for a recent review, see ref 9). These include 18F-labeled BA derivatives as positron emission tomography (PET) tracers to study hepatic transporters. 10 It is noteworthy that currently used approaches for the diagnosis of several hepatobiliary disorders (e.g., focal lesions, tumors, and cholestasis) also include cholescintigraphy using 99mTc-labeled iminodiacetic acid derivatives and magnetic resonance imaging (MRI) using hepatocyte-specific contrast agents, such as gadoxetate. However, all the probes mentioned lack enterohepatic vectoriality and consequently lack tissue specificity. 11 To overcome this limitation, a wide variety of fluorescent BADs have been synthesized. These include several amido fluorescein (amF) derivatives, such as cholyl-amF, cholyl-glycyl-amF (CGamF), chenodeoxycholyl-glycyl-amF, and ursodeoxycholyl-glycyl-amF, which have been used with in vitro and in vivo models to study BA transport, drug−BA interactions, and cytosol-nucleus traffic. 12,13 They have facilitated the exploration of BA organotropism and the usefulness of drug targeting with BADs vectorized toward cells expressing BA transporters. 6,14 Besides, cholyl-L-lysyl-fluorescein (CLF) has been used in vivo, in combination with intravital imaging performed through confocal microscopy in anesthetized animals, for assessing liver function. 15 Dansylated cholic acid (CA) derivatives have proven valuable for determining BA amphipathic characteristics and aggregation behavior, as well as their binding to proteins and their handling by the liver. 16 All the fluorescent BADs mentioned above emit in the visible range of the spectrum, which limits their usefulness. The present study aimed to synthesize novel compounds (NIRBADs) labeled with tags emitting near-infrared fluorescence (NIR, 780−2500 nm), with greater tissue penetration and enhanced liver organotropism due to their conjugation with BADs. The aim of synthesizing these novel compounds was to enhance their usefulness by adding the advantage of enabling their visualization from outside the body without the need for invasive procedures. ■ RESULTS AND DISCUSSION Molecules with NIR fluorescence are helpful tools in clinical practice. For instance, indocyanine green (ICG) has been used to carry out extracorporeal detection of lymph nodes and vessels in cancer and other diseases with an ischemic profile, to study liver function in patients with hepatic tumors before surgical removal, and during laparoscopic cholecystectomy for the identification of the biliary anatomy. 17 However, ICG, like other available NIR probes, has the limitation of lacking tissue-selective characteristics. In contrast, the NIR probes synthesized here are cholephilic compounds, i.e., efficiently taken up by hepatocytes and secreted into bile. Triazoles have proven helpful in generating a broad range of molecules with biological activity. 18,19 These compounds are synthesized by the copper-catalyzed Huisgen 1,3-dipolar cycloaddition, the most popular example of the click reaction, which involves alkynes and azides. 20
Using 1,2,3-triazole moieties as linkers offers advantages due to their stability under typical physiological conditions and their ability to form hydrogen bonds. Additionally, the capacity of 1,2,3-triazoles to mimic the topological and electronic features of amide bonds makes them well suited for designing peptidomimetics with enhanced medicinal properties. 18,21 Previous studies have established the click strategy's efficacy in producing BA derivatives by binding them to active molecules with diverse residues. 18 An example has been the binding of BAs to peptides to obtain novel compounds in supramolecular chemistry. 22 Additionally, significant strides have been made in the field of antimicrobial development through the binding of BAs to beta-lactams, resulting in the synthesis of innovative antibiotic agents. 23 Furthermore, clickable conjugates of BAs and nucleosides have been synthesized and assayed in vitro as anticancer and antituberculosis agents. 19 In the present study, we leveraged this background to synthesize a new family of compounds (NIRBADs) (Figure 1), targeted to the enterohepatic circuit through a BA moiety and capable of emitting NIR fluorescence due to the linked fluorochrome, alkynocyanine 718. This compound was selected for this purpose due to two characteristics: (i) a near-infrared emission wavelength (Ex664/Em718 nm) and (ii) a terminal triple bond, which eliminated the need for structural modifications prior to carrying out the envisioned 1,3-dipolar cycloaddition reaction with BA azide derivatives. For the synthesis of NIRBAD-2 (Figure 3), functionalization of the hydroxyl group at the 3α position of the CA steroid core was performed. First, the carboxylic acid of CA was protected by Fischer esterification using thionyl chloride in methanol, giving the corresponding methyl ester (7). Next, because the basic character of the hydroxide anion makes it a poor leaving group, tosylation of the hydroxyl group at position 3 was carried out to obtain the tosyl derivative of CA (8). Then, an SN2 nucleophilic substitution of the tosylate group by iodide, with inversion of configuration, was performed using KI in DMF as the solvent (9), resulting in the inversion of the C3 configuration from R to S. Next, to preserve the original hydroxyl configuration R, a subsequent SN2 substitution of the iodine was performed using sodium azide in DMF, resulting in the (3R) isomer (10) owing to the new inversion of configuration. The next step was the deprotection of the carboxylic acid by basic hydrolysis with LiOH in methanol (11). Finally, the 1,3-dipolar cycloaddition reaction between the azide (11) derived from CA and alkynocyanine 718 was performed. Once the crude reaction product was obtained, it was purified by preparative thin layer chromatography (PTLC) to obtain the purified NIRBAD-2 compound (12). The synthesis of these compounds provides insight into their distinct properties arising from structural variations. Although NIRBAD-1 and NIRBAD-3 have both undergone functionalization on their side chains, this has been achieved through chemically different linkers, i.e., an ester and an amide, respectively. Consequently, their biodistribution is anticipated to exhibit different profiles due to the change of a hydrogen bond acceptor to a donor. Moreover, it is worth noting that esters and amides undergo metabolism at significantly different rates, further contributing to the potential dissimilarities. 24
In contrast, NIRBAD-2 has been functionalized at the hydroxyl in the C3 position, maintaining the side chain in a carboxylate form. This distinction is also expected to influence its biodistribution. Furthermore, the stereochemistry of the chiral centers in the original BA, CA, has been preserved to retain its native structure. This characteristic ensures that the synthesized BA derivative is still efficiently recognized as a substrate by BA transporters. The analysis of the fluorescent properties of these compounds revealed that the excitation and emission wavelengths differed from those of the initial fluorescent probe (Figure 4A). Differences among the NIRBADs were also found (Figure 4B−D). Moreover, the relative quantum yields of NIRBAD-1 and NIRBAD-3 were similar but lower than that of NIRBAD-2 (Figure S13). This diversity may likely be attributed to the fact that, while the wavelengths stem from the pi-electron delocalization within the indole-alkene conjugate system, slight modifications may arise from the altered chemical environment induced by the presence of the triazoles. The lower signal found for the ester NIRBAD-1 as compared to the amide NIRBAD-3 was not due to bleaching or hydrolysis, as their photostability (Figure S14) and chemical stability (Figure S15) were similarly preserved when kept in solution at 37 °C in the dark for up to 60 min, i.e., longer than the incubation time during the uptake experiments (Figure 5). To assess the hepatocyte-targeting properties of NIRBADs, flow cytometry studies were carried out using CHO cells that either did not express any human BA transporter or stably expressed NTCP. The uptake of NIRBADs in the absence and presence of TCA revealed that NTCP can transport NIRBADs in a TCA-inhibitable manner (Figure 5). The order of efficacy for this transport was NIRBAD-3 ≫ NIRBAD-2 > NIRBAD-1. Based on these results, further in vivo assays were carried out using NIRBAD-3 as a proof of concept. Upon i.v. injection of NIRBAD-3 or alkynocyanine 718, the fluorescence emitted from the upper abdominal region was extracorporeally recorded. In the case of NIRBAD-3, an efficient liver load, followed by migration of the fluorescence toward the intestine, was observed (Figure 6). Hepatic elimination was achieved at approximately 60 min. Determinations carried out in blood samples collected 120 min after NIRBAD-3 administration revealed no change in serum biomarkers of hepatic and renal toxicity (Table 1). In contrast to NIRBAD-3, the fluorescence due to alkynocyanine 718 persisted in the liver with no transfer to the intestinal area. We also administered ICG for comparison purposes; its fluorescence was dispersedly distributed in the abdominal region without showing any hepatic selectivity. Moreover, the fluorescence remained in this location throughout the 60 min experimental period (Figure 6). In conclusion, novel BA derivatives have been synthesized. They are selectively taken up by the liver and efficiently transferred to the intestine by biliary secretion, as expected for a cholephilic substance. This organotropic characteristic, together with their ability to emit NIR fluorescence, permits extracorporeal detection for monitoring their handling by the liver. The usefulness of these probes for noninvasively assessing liver function in health and disease using experimental models deserves further investigation.
General Chemical Procedures. Solvents were purified by standard procedures and distilled before use. Reagents and starting materials obtained from commercial suppliers were used without further purification. Melting points are given in °C. 1H NMR spectra were recorded on a Bruker Avance spectrometer at 400, 200, and 100 MHz, as appropriate. 1H NMR chemical shifts are reported in ppm with tetramethylsilane (TMS) as the internal standard or using the residual solvent 1H resonance as a reference. The coupling constants, J, are reported in Hertz (Hz). Data for 1H NMR are reported as follows: chemical shift (in ppm), number of hydrogen atoms, multiplicity (s = singlet, d = doublet, t = triplet, q = quartet, quint = quintet, m = multiplet, br s = broad singlet). Splitting patterns that could not be clearly distinguished are denoted as multiplets (m). High-resolution mass spectral analyses (HRMS) were performed using ESI ionization and a quadrupole TOF mass analyzer. All compounds were routinely checked by TLC using precoated silica gel 60 F254 on aluminum foil, and the spots were detected under UV light at 254 and 365 nm or revealed by spraying with 10% phosphomolybdic acid in ethanol. Flash chromatography (FC) was performed on 70−200 mesh silica gel using different compositions of the mobile phase according to the polarity of the compound. Cell Culture. The Chinese hamster ovary cell line CHO-K1 (ATCC CCL-61) was purchased from the American Type Culture Collection (LGC Standards, Barcelona, Spain) and maintained in DMEM high-glucose medium (Merck, Madrid, Spain) supplemented with 50 g/L of L-proline (Merck), GlutaMAX solution (Fisher Scientific), 10% heat-inactivated fetal bovine serum (FBS), and 1% penicillin−streptomycin−amphotericin B (Fisher Scientific). Cells were cultured at 37 °C in a 5% CO2 atmosphere with 80% relative humidity. To ensure the absence of mycoplasma contamination in the culture, periodic PCR tests were conducted using the mycoplasma Gel Form Kit (Biotools B&M Laboratories, Madrid, Spain). Monoclonal cells stably expressing the ORF of NTCP were obtained by transduction with recombinant lentiviral vectors (pWPI) added to the target cells at an appropriate multiplicity of infection (MOI = 25) in the presence of hexadimethrine bromide (Polybrene, Merck), as described elsewhere. 31 The clone with the highest capacity to carry out the uptake of CGamF, a fluorescent NTCP substrate, 32 was used in further studies. Flow Cytometry Determinations. Both wild-type (WT) CHO cells (transduced with pWPI empty vectors), used here as a control (Mock), and cells stably expressing NTCP (CHO-NTCP) were cultured according to previously established methods. 33 To assess drug uptake, experiments were carried out using three to five different cultures for each data point. Subconfluent cultures were resuspended in uptake medium (96 mM NaCl, 0.8 mM MgSO4, 5.3 mM KCl, 1.1 mM KH2PO4, 1.8 mM CaCl2, 11 mM D-glucose, and 10 mM HEPES/Tris, pH 7.4). The cells were then incubated in the presence of 10 μM NIRBADs with or without 100 μM TCA at 37 °C for 15 min. Uptake was stopped by rinsing the culture dishes with 0.9 mL of ice-cold buffer, and the intracellular fluorescence was measured with a flow cytometer (FACSCalibur, Becton Dickinson, Madrid).
In Vivo Assays. Male Wistar rats weighing 220−250 g were obtained from the Animal House, University of Salamanca, Spain. They were housed under controlled environmental conditions of temperature (20 °C) and light (12 h/12 h light/dark cycle). They had free access to water and standard rodent chow (Panlab, Madrid, Spain). All experimental methods adhered to ethical guidelines and regulations, with protocols approved by the University of Salamanca Ethical Committee for Laboratory Animals, after confirming that they complied with ethical approval from the Spain Ministry of Health and followed European guidelines for the care and use of laboratory animals, in accordance with the NIH Guide for the Care and Use of Laboratory Animals. All experiments were conducted under pentobarbital anesthesia (50 mg/kg b.wt., i.p., Nembutal N.R.; Abbot, Madrid), which was also used for euthanizing the animals at the end of the experiments. Statistical Analysis. Post hoc analyses, such as paired or unpaired Student t tests, were applied as appropriate to calculate the statistical significance of differences among groups. Differences were considered significant when p < 0.05. Microsoft Excel (version 15.32) and GraphPad Prism 5 were used for these purposes. Figure 5. NTCP-mediated NIRBAD uptake. CHO cells transfected with empty vectors (Mock) or with lentiviral particles to induce stable expression of NTCP were incubated for 15 min at 37 °C with 10 μM NIRBAD-1, NIRBAD-2, or NIRBAD-3 in the absence (white bars) or in the presence of 100 μM taurocholic acid (TCA) (black bars). Uptake was determined by flow cytometry. The detected fluorescence, expressed as arbitrary units of fluorescence (AUF) per cell and 15 min, was normalized in each experiment by considering as 100% the value of AUF in Mock cells incubated in the absence of TCA (Control). Results are mean ± SEM from six different measurements carried out in three separate cultures. *p < 0.05, compared with uptake by Mock cells. †p < 0.05, compared with uptake in the absence of TCA (Control) in each experimental group, by paired t-test. Figure 6. Time course of the extracorporeal fluorescence detected in the abdominal area after intravenous administration of 1 μmol of NIRBAD-3, alkynocyanine 718, or indocyanine green to anesthetized Wistar rats. These compounds were first dissolved in DMSO and then diluted with saline to inject 1 mL (<12% DMSO). At these doses, neither DMSO nor NIRBAD-3 caused acute liver/renal toxicity.
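The normalization and comparison scheme described for Figure 5 (AUF per cell over 15 min, expressed relative to Mock cells without TCA, with paired t-tests for the TCA effect) can be sketched roughly as follows. The array values are placeholders only, not data from the study, and scipy's paired t-test is used simply to illustrate the kind of comparison reported.

```python
import numpy as np
from scipy import stats

# Placeholder AUF-per-cell readings (15 min uptake) from paired cultures; NOT data from the study.
mock_control  = np.array([100.0,  98.0, 103.0])   # Mock cells, no TCA (defines 100%)
ntcp_no_tca   = np.array([420.0, 390.0, 450.0])   # CHO-NTCP cells, no TCA
ntcp_with_tca = np.array([160.0, 150.0, 170.0])   # CHO-NTCP cells, + 100 uM TCA

def normalize(auf, reference):
    """Express AUF as a percentage of the Mock/no-TCA reference, experiment by experiment."""
    return 100.0 * auf / reference

norm_no_tca = normalize(ntcp_no_tca, mock_control)
norm_with_tca = normalize(ntcp_with_tca, mock_control)

# Paired t-test for TCA inhibition within the same cultures (significance threshold 0.05, as in the study)
t_stat, p_value = stats.ttest_rel(norm_no_tca, norm_with_tca)
print(f"NTCP uptake: {norm_no_tca.mean():.0f}% vs + TCA: {norm_with_tca.mean():.0f}% (p = {p_value:.3f})")
```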
3,850
2024-07-03T00:00:00.000
[ "Medicine", "Chemistry" ]
Enhancing technology of producing secure IoT devices on the base of remote attestation The goal of the work is to enhance the technological process for the production of components of integrated secure systems of the Internet of Things for solving problems of operational control and reaction in emergency situations. The most important requirement for such systems is the need to ensure the properties of reliability and security of software and hardware elements of the end devices, taking into account the specificity of such systems. To achieve the goal in the paper the mechanisms for protection of Android applications from the threats of integrity violation of the software and of critical data on the base of remote attestation principles are modeled. Analytical and experimental evaluations of the implemented protection components and the protocol of their interaction taking into account limitations on the computing and communication resources of the target device are performed. Introduction A problem of protecting of systems for operational control and reaction in emergency situations from unauthorized modification threats is becoming increasingly important and is caused by the susceptibility of software platforms of mobile and embedded devices such as Android, Raspberry Pi and others to threats of integrity and authenticity violation of the code and data used. To solve this problem it is necessary to develop mechanisms for embedding protection means within the technological process of producing such systems. In general application of algorithms controlling immutability of the software, which are built directly into the program they protect, can increase the protection level. However the local nature of the protection and limiting its persistence as well as situating the program in an environment being non-trusted and uncontrolled by the software developer or the owner of the digital rights leads to the fact that such protection mechanisms can be neutralized by an intruder if there are sufficient tools and resources. The remote validation mechanism, investigated in the paper, is based on the use of a client-server approach to protection and allows increasing security of the software under resource limitations of the mobile platform as well as limitations of the communication channel bandwidth. The expansion of the technological process for producing secure systems for operational control and reaction in emergency situations includes the stage of forming a cloud based computing structure responsible for verifying remote devices by using a client-server approach and building a secure communication protocol. In the paper modeling and analysis of particular protection algorithms are performed within the framework of an integrated approach to the implementation of software protection components, implementing remote attestation with the use of Android platform. The distinguishing features of the results achieved in the paper include, in particular, experimental data obtained during the modeling of protection components under mobile operating system limitations. The paper is organized as follows. Section 2 provides an overview of existing works in the subject field. Section 3 reveals features of the approach to remote attestation of mobile applications. Section 4 describes results of modeling of specific remote attestation based algorithms that implement. Section 5 presents results of the experimental studies, whereas Section 6 concludes the paper. Related work Brasser et al. 
[1] and C.Preschern et al. [2] consider remote attestation as means of protection against malicious software intrusion attacks on embedded devices [3]. The features and methods that allow implementation of the attestation with minimal additional costs are demonstrated. At that C.Preschern et al. [2] propose adaptation of software methods for remote attestation to solve tasks of protecting critical systems with minimal forced revision of established procedures for safety properties certification. J.Ho et al. [4] show that in sensor networks the remote certification is used for detection of self-propagating network worms by sequentially infecting nodes using traffic detection methods. Srinivasan et al. [5] investigate remote software-based attestation to ensure the integrity of the operating system kernel and user applications. In particular the authors propose a technique that allows determining whether the already certified application was substituted by an intruder or not. M. Santra, et al. [6] propose the use of remote attestation and the three-phase protocol constructed on it, using a SELinux module for providing secure interaction in distributed information systems. The paper also substantiates effectiveness of the proposed approach, using methods of formal analysis and ProVerif verifier. T. Abu Hmed, et al. [7] propose software techniques for remote attestation of wireless sensor networks against tampering attacks into their work. These techniques are not based on the use of the accuracy factor of the measured runtime execution, thereby improving previously proposed integrity monitoring methods in wireless sensor networks [8]. D. Fu and X. Peng [9] analyze security of the mechanisms of one-and multi-hop attestation in wireless sensor networks. Tan et al. [10] propose a multi-level remote attestation protocol to monitor integrity of IoT-systems, taking into account their inherent computational limitations and device's power limitations. In [11], [12] the authors propose a reference architecture and partial models for mechanisms of IoT remote attestation, using cloud solutions to improve the targets of the remote attestation process. K. Ramachandran and H. Lutfiyya [13] prove the importance of remote attestation of software updates, using cloud computing and procedures for verifying its correctness [14]. Y.Zhang et al. [15] extend remote attestation application to ensuring the confidentiality by a modified Extended Hash Algorithm [16]. It allows increasing level of the confidentiality with comparable performance characteristics during the execution. T.Syed et al. [17] propose effective solutions for increasing scalability of mechanisms for remote attestation of device sets by using Big Data technology, including using multiprocessor systems [18], property based authentication mechanisms [19] and characteristics of these properties [20]. Increasing the efficiency of the server part of the authentication mechanism with a large number of instances of attested programs is also achieved by reorganizing and reducing the chain of trust used in the attestation process [21]. H. Li et al. [22] propose models of remote attestation based on the paradigm of attack graphs to tackle tasks of monitoring and attestation of software components [23]. 
Approach to remote attestation of mobile applications Remote attestation of a mobile application involves local and remote software components, located within a non-trusted and a trusted environment respectively, as well as a secure protocol for their network interaction. The interaction between the components is based on the roles of a client (the attested entity) and a server (the attesting one). The protocol is assumed to implement functions protecting itself against possible interception and modification of packets at the transport level. The payload of the protocol includes program identifiers and numeric values that characterize the current state of elements of the program code and of the critical data of the application. The specific algorithms used within this remote-attestation-based protection mechanism assume, first, the introduction of dedicated constructions emplaced into the object code at the stage of forming the syntactic tree of the application and, second, the isolation of basic blocks and particular instructions in the code. The security constructions do not perform any integrity checks of code and data locally; instead, they send snapshots of them to the trusted server. This greatly complicates an intruder's task of modifying the application without the modification subsequently being detected on the server side. A typical scenario for applying remote attestation to the protection of mobile devices involves remote control by a mobile application store or content provider over multiple instances of client applications. In case of a violation, it reports the violation on the specific device and stops its further maintenance until the detected violation is rectified. Protection algorithms The control flow checking algorithm is based on a control flow graph of the protected program, built statically. The graph is then used at run time for remote control of the correctness of the program's execution. This algorithm makes it possible to verify the correctness of the executed sequence of commands, including branching structures, loops, handling of exceptional situations, etc. The control flow checking algorithm includes two stages, a static one and a dynamic one. Statically, one prepares and embeds the attesting module constructions in the program code. The initiating construction establishes a connection to the remote attesting module via HTTP sockets. Program markers, which are operations send(A) that send a specific identifier A to the server side, are placed in the program code on the boundaries of the basic blocks. The client-side function of the attesting module sends the sequence of program marker identifiers as they are passed during execution (dynamically). On the server side of the connection, a regular expression is constructed during the static stage that determines the correct chains of program marker operations. The regular expression is used to build a transition graph whose nodes correspond to program markers and whose arcs denote permissible transitions between them. At the dynamic stage, as program marker identifiers are received from the client side, the graph is traversed and the correctness of the sequence is checked against the structure of the graph. As an example, Fig. 2 schematically shows a fragment of the code of the protected program with built-in functions for sending program markers.
The regular expression in infix form constructed on the server side for the given fragment is A(B(C|D)E)*F. The transition graph with final vertex F corresponding to this regular expression is shown in Fig. 3. Fig. 3. Transition graph for the remote attestation algorithm. The checksum algorithm requires the existence of invariant data structures that are critically important for ensuring the integrity of the protected code and data. The cryptographic hash algorithm MD5 is used as the basis. Sending to the server side of the connection and checking the received token are done by using the send_md5(criticalStructure) and verify(getNextToken()) functions, respectively. Experiments and discussion The evaluation of the solutions constructed in this work is done by defining and analyzing the values of a number of indicators, namely efficiency, reliability and resource consumption [24]. The efficiency indicator is driven by the hardware limitations of the Android platform and the limited bandwidth of the communication channel, which affect the stability and continuity of the application as well as usability for the end user. Efficiency is calculated on a test scenario by using the system function System.currentTimeMillis() to obtain the average time delay resulting from executing the instructions for sending program markers and checksums. The results of the conducted experiments showed that when the ratio of the number of built-in instructions of the attesting module to the number of instructions of the target program did not exceed 20%, the average delay did not exceed the established allowable limit of 200 ms. Calculation of the resource consumption indicators is performed by using the jmap and jstat utilities. These allow estimating the increase in RAM consumption on the client side after adding the attestation functions to the code. Based on a series of measurements made on the test application, it was determined that the increase in memory consumption did not exceed 21% in comparison with the unprotected version of the software application. To evaluate the reliability indicator of the proposed security solution, fuzz testing of the protected application was performed on pre-generated tests, including random and boundary values of the input data. Testing a series of 250 samples of input data revealed no false positive or false negative errors, which confirms the correctness of the proposed approach to remote attestation and the operational capability of its software implementation. The applicability of the proposed approach to protecting the integrity of Android applications is also due to the achievable level of deployment automation of the proposed security solutions, including the choice of location and placement of the attesting instructions in the code. This makes it possible to efficiently select and adapt existing software tools for processing Java code, both at the source code level and directly by using bytecode analysis tools. Experiments on a test bench using ZigBee Series 2 microcontrollers have confirmed the applicability in practice of the extended technological process for the production of systems for operational control and reaction in emergency situations.
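To make the server-side checks described above concrete, the sketch below validates a stream of program marker identifiers against the transition graph implied by the regular expression A(B(C|D)E)*F, and verifies an MD5 checksum of a critical structure against a previously registered reference value. The function and variable names are illustrative only; the paper's actual implementation uses Java code with HTTP sockets on the Android client and is not reproduced here.

```python
import hashlib
import re

# Transition graph for the regular expression A(B(C|D)E)*F:
# nodes are program markers, arcs are the permissible transitions between them.
TRANSITIONS = {
    None: {"A"},            # execution must start at marker A
    "A":  {"B", "F"},
    "B":  {"C", "D"},
    "C":  {"E"},
    "D":  {"E"},
    "E":  {"B", "F"},
    "F":  set(),            # final vertex
}

def control_flow_ok(markers):
    """Traverse the transition graph with the marker identifiers received from the client."""
    state = None
    for m in markers:
        if m not in TRANSITIONS.get(state, set()):
            return False    # impermissible transition -> integrity violation
        state = m
    return state == "F"     # the run must end at the final vertex

def checksum_ok(critical_structure: bytes, reference_md5: str) -> bool:
    """Compare the MD5 checksum computed for a critical structure with the registered reference value."""
    return hashlib.md5(critical_structure).hexdigest() == reference_md5

# Equivalent check using the regular expression itself on the concatenated marker chain.
MARKER_RE = re.compile(r"A(B(C|D)E)*F$")

received = ["A", "B", "D", "E", "B", "C", "E", "F"]   # example chain reported by the client
print(control_flow_ok(received))                       # True: a correct execution path
print(bool(MARKER_RE.match("".join(received))))        # True: same verdict from the regex
print(control_flow_ok(["A", "C", "F"]))                # False: C cannot follow A
```

Either representation (graph traversal or regex matching) can be used on the server; the graph form is convenient when markers arrive one at a time over the connection.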
The experiments performed also confirmed that the expected effect of implementing the results obtained lies, first, in improving the security indicators of the final operational control and reaction system for emergency situations and, second, in expanding the functionality provided by the system through the introduction of a remote attestation integration stage into the technological process used. Conclusion An approach to remote attestation of mobile applications using control flow checking and checksum checking algorithms has been investigated. Within the proposed and tested technological process, a hardware/software implementation of the algorithms was performed using the Android platform as an example, serving as a basis for obtaining experimental characteristics of these algorithms. As a direction for future research, it is planned, first, to develop techniques for analyzing the security of mobile applications and, second, to increase their security, both at the source code level and at the object code level.
3,056.2
2020-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Information management and optimization methods in architectural construction drawings: A case study of the “coconut forest settlement” in Hainan . This paper investigates information management and optimization methods within architectural construction drawings, using the “Coconut Forest Settlement” project in Hainan as a case study. Five key categories of methods are explored: clarification, structuring, standardization, precision, and lightweight. These methods address issues such as the lack of case-based analysis, the need for better information management, and the reduction of information redundancy. By classifying and discussing these strategies, the study highlights the enduring relevance of construction drawings in the digital age. Furthermore, it envisions the transferability of these methods to different contexts due to their emphasis on logical commonalities and the ever-present need for efficient information management. Practical examples from other fields, such as Revit, Rhino, and Grasshopper, are cited to demonstrate the potential applicability of these methods beyond traditional construction drawings. This paper contributes to the enhancement of information management in architectural design, fostering innovation and improved efficiency across various applications. Opportunities and challenges of construction drawing under the development of digital technologies The development of digital technologies has brought about a revolutionary transformation in the field of architecture [1].Within this context, various aspects of the construction industry and traditional means of conveying information have been significantly impacted [2].Construction drawings have long been the primary method for expressing architectural design information, but digital design technologies have the potential to shift the process from traditional hand-drawn lines to model-based image exports, allowing for precise adjustments and annotations, thereby greatly improving the efficiency of construction drawing production [3,4]. However, against the backdrop of nonlinear architectural forms becoming mainstream in design [5][6][7], construction drawings are beginning to exhibit a certain degree of inadequacy in terms of information representation.Specifically, for nonlinear architectural components, their dimensions cannot be accurately conveyed through traditional plans and elevation drawings.Even when employing unfolded surface representations for precise dimensioning, a level of abstraction becomes evident in guiding construction execution [2,8]. Furthermore, a closer examination of the practical applications of construction drawings in construction projects reveals that the information conveyed in these drawings is primarily intended for on-site construction by construction teams.In contrast, manufacturers of prefabricated components utilizing computer numerical control (CNC) machining typically require digital models from the design team to serve as the basis for component dimension information [2].Despite this, construction drawings remain an indispensable technical adjunct in the administrative management procedures of construction projects.They are subject to drawing reviews prior to construction commencement, and, upon project completion, they are used for final acceptance procedures within the project management framework.Additionally, construction drawings continue to serve as the basis for cost estimates in today's construction projects [9]. 
Hence, it can be argued that the development of digital technologies has gradually shifted the role of construction drawings from technical guidance to a type of administrative document emphasizing project management.However, it is precisely due to this evolving role of construction drawings in the digital technology landscape that information representation within these drawings has garnered significant attention.A complete set of construction drawings contains a wealth of information about the building project, with variations in spatial scales, information accuracy, and types of construction objects.Therefore, to ensure clear information representation in construction drawings, efficient management and optimization of the information within them becomes paramount. Research gaps in construction drawing information management and optimization methods Through a review of existing literature, this study identified several research gaps in the field of construction drawing information management and optimization methods: 1. Lack of case-based analysis: Existing scholarly work on construction drawings has predominantly focused on teaching construction drawing techniques to university students [10][11].However, there is a lack of practical analysis by architecture and construction professionals based on real project experiences, both in terms of the techniques employed and the challenges faced.This abstraction in the analysis of construction drawings diminishes its applicability to actual construction drawing practices. 2. Neglect of information management and optimization: Scholarly attention has primarily concentrated on the technical aspects of construction drawing, including considerations for interdisciplinary collaboration in construction drawings [12], the role of construction drawings in construction projects [13], and drawing standards for construction drawings [10][11].However, there is a notable absence of dedicated research on information management and optimization methods within construction drawings. 3. Absence of categorized analysis for specific methods: Many studies related to construction drawings merely provide lists of specific methods, often in a fragmented manner [13][14][15], without categorizing these methods at a logical level.While enumerating specific issues and analyzing them is essential for addressing immediate problems, a more comprehensive impact can be achieved by classifying these methods according to their corresponding technical categories.Such classification would enable future scholars and practitioners in construction and design to expand their thinking based on this framework, offering a more forward-thinking perspective on construction drawing techniques. To address these three key issues, this study utilizes the "Coconut Forest Settlement" (CFS) in Hainan as a case study to provide detailed, category-based discussions of the information management and optimization methods evident in its construction drawings.This research aims to illustrate the technical value of construction drawings in the context of current developments in digital design technology, emphasizing their role as administrative documents in engineering management processes.Additionally, this study offers insights into the potential transformation and application of these summarized construction drawing information management and optimization methods in various technological scenarios, thus enhancing the technical core of construction drawings for future advancements. 
Introduction to the CFS project The Coconut Forest Settlement is located on Dongyu Island in Boao, Qionghai City, Hainan Province, China.The architectural clusters are situated near the island's lakeside (Figure 1).The architectural design draws inspiration from the harmonious blend of traditional tropical coconut forest village architecture and natural vegetation (Figure 2).The individual forms of each building unit are derived from the traditional Hainan straw hat, while the overall spatial layout of the buildings references traditional village settlements, giving the impression that they have naturally grown from the coconut grove, emphasizing the inseparable relationship between humans and nature. The Coconut Forest Settlement utilizes high-performance and environmentally friendly bamboo and wood as the structural framework.The architectural design promotes natural airflow from the bottom to the top, achieving passive cooling effects.The outer surfaces of the roofs of each building unit are covered with 1,518 photovoltaic panels, which collect and convert solar energy to supply garden lighting, thereby reducing energy consumption and carbon emissions.Some of the construction drawings for the Coconut Forest Settlement are presented in Figures 3-6.This study will conduct a categorized analysis of the drawing representation and the specific information management and optimization methods used in the AutoCAD drawing process. Clarity: enhancing visual information communication Construction drawings serve as a set of engineering practice illustrations with the purpose of guiding construction personnel in the actual processing and installation of construction objects according to the specified drawings.To achieve this purpose, it is essential that construction personnel can accurately recognize and interpret the information presented in the drawings.Therefore, the expression of information in construction drawings should be clear and legible to enhance the efficiency of information communication. Layout of construction drawing sheets. To clearly convey the design information contained in the drawings, the process of creating construction drawings involves the layout of drawing sheets to ensure that the information is presented clearly and legibly while maintaining a certain degree of aesthetic appeal.When performing sheet layout, the following points are considered: a. Sheet spacing: This refers to the arrangement of various graphical elements and annotations related to different construction aspects within the same construction drawing.Adequate space is left between these elements to visually distinguish and separate them, ensuring clarity for construction personnel. b. Sheet alignment: This involves emphasizing the logical relationships between drawings on the sheet.For instance, in architectural drawings, elements like the roof, floor plans, various elevations, and cross-sections are expressed.When multiple types of drawings are present on the same sheet, they are aligned according to their respective axis numbers to clearly convey the corresponding relationships between different architectural drawings. c. 
Feature distinction: Construction drawings use specific line styles, colours, and line widths to describe different elements of the construction.For example, in architectural construction drawings, blue lines may represent the elevations, light grey may signify the fill materials, yellow indicates the building sections and red dashed lines represent the building axes.Specific colours and line widths are designated for printing to visually distinguish the various meanings of lines in the final printed paper drawings.It is important to note that while the specific colours may vary, the underlying principle remains the same. Annotation in construction drawings. To provide a clear explanation of construction methods, materials, and dimensions, the process of creating construction drawings involves annotating the drawing content with specific points to offer more detailed explanations.The following considerations are made during the annotation of construction drawings: a. Annotation scale: When annotating different aspects of the same drawing with detailed explanations, an annotation scale matching the final viewport is used.This ensures that the size of annotation text and numbers for different explanatory objects remains the same in the final drawing, even if the drawing scale for various objects on the same sheet varies. b. Annotation placement: Annotations are systematically placed within the drawing according to specific conventions.For example, in architectural construction drawings, elevation and cross-section annotations typically include elevation information combined with vertical dimension details on the right side of the drawing, while construction materials and practices information is annotated on the left side.This alignment conforms to the reading habits of construction personnel and complies with drawing standards, ensuring clear communication of annotation information. c. Annotation alignment: Annotations in construction drawings are aligned within the drawing.For instance, in dimension annotations, equal distances are maintained between annotation lines and material annotations are aligned with the end of the annotation lines, ensuring that dimension lines are aligned on the same straight line, creating a neat appearance, and making it easier for construction personnel to access drawing information. d. Annotation simplification: In a single drawing sheet, annotation information, especially for construction materials and practices, is consolidated and streamlined.Construction drawings typically achieve this in two ways.The first method involves adding annotation points along annotation lines to represent multiple identical parts on the same straight line.The second method employs a single annotation line that branches out to point to multiple non-collinear locations.However, the latter approach may result in intersections between construction method annotations and other types of annotations, affecting aesthetics and the clarity of information communication to some extent. Structuring: distinguishing the hierarchy of drawing information Architectural construction drawings contain information related to different spatial scales and varying levels of information precision.Moreover, the design content that needs to be conveyed through drawings encompasses various categories.To enhance the efficiency of information representation, construction drawings employ a structured information management approach to differentiate and categorize various types of information. 
Indexing relationship between master plans and detailed plans. Construction drawings are divided into master plans (Figure 3), node detailed plans (Figure 4-5), and practice detail plans (Figure 6) based on their content.These three categories are hierarchical, with master plans providing an overview of the overall layout of the building, node detailed plans further elaborating on each individual building within the architectural cluster, and practice detail plans explaining the specific aspects of construction practices at the level of individual building components.To express this relationship, construction drawings use indexing symbols on larger-scale drawings to refer to more detailed drawings, creating a logical hierarchy of information explanation from the general to the specific. Layer management of construction drawing files. In architectural construction drawings, differentiation of information types is achieved through the use of layers, aligning with the discussion of feature distinction in Section 2.2.1 of this study.Construction drawings establish standard layers based on the types of information to be conveyed.These layers typically include architectural elevations, architectural floor plans, architectural section lines, and architectural pavement and fill patterns.In addition to layer differentiation, various colours and line styles are used to further distinguish content.During the drawing process, design content is placed on the corresponding layers for rendering, ensuring that when electronic drawing files are completed, information can be individually displayed during actual construction based on different layers.This enhances the efficiency of information communication. Standardization: reducing redundant expression of repetitive information Standardization focuses on the consistent expression of identical information in architectural construction drawings.To facilitate modifications and management in construction drawings, various methods are employed for information management to reduce the independence of the same information and enhance data connectivity.From a logical perspective, standardization is an optimization method aimed at addressing the issue of "information islands," with the goal of establishing relationships between identical pieces of information. Common details. Common details fall within the category of detail plans discussed in Section 2.3.1.Depending on the uniqueness of the content, detail plans can be divided into special practice detail plans and common practice detail plans, the latter referred to as "common details." Common details in architectural construction drawings represent architectural details that can be reused and appear repeatedly, such as connection nodes used in building roofs, standard-sized photovoltaic panel components, and the use of railings in buildings.Common details express these recurring components through a single drawing, which is then indexed for use in drawings of individual buildings.This reduces information redundancy within the entire set of construction drawings, optimizing information representation.In other words, for the construction team, the specific content of common details informs them that the building requires a certain number of identical building components, and these components share the same dimensions.This effectively and clearly enhances the construction team's understanding of the construction object and reduces the likelihood of errors resulting from misinterpretation. Blocks. 
The use of blocks is also applied to repetitive objects in construction drawings, such as the elevation representation of structural columns and the side view of benches.By creating blocks using the "block" command to represent the content in these drawings and copying and using them at different locations, any modifications to the content in one block can be applied to all identical blocks, improving the efficiency of drawing. Standard sections. Standard sections are typically applied to the repetitive, large-scale construction parts that appear uniformly in drawings, such as building boardwalks and railings.In the representation of standard sections, only one repeating unit is expressed, and the content on both sides is omitted using dashed lines.This conveys the modular pattern of the construction object in terms of dimensions and materials, thus simplifying the explanation of the construction object using concise information. Precision: accurate conversion for controlling complex form dimensions In the era of digital technology, nonlinear architectural design has become a prevailing trend.This nonlinear aesthetic leads to an increasing number of forms that are challenging to express directly through traditional architectural construction drawings, including plans, elevations, and sections.In response to this, construction drawings employ various information optimization methods to transform complex information, enabling precise dimension representation. Accurate dimension representation of complex form elevations through unfolding. When the design content represented in architectural construction drawings exhibits curved lines in the plan or variations in slope in the vertical direction, the form dimensions cannot be expressed through traditional elevation drawings from a true perspective.In such cases, construction drawings use elevation unfolding drawings to represent the true dimension information and the variation trends on the elevation.In the process of representation, the total length of the curved lines in the design content is first measured in plan, and then an unfolding curve is drawn based on its actual length.Objects located on the curve are drawn according to their dimensions, providing accurate data for quantity calculations. Expression of irregular dimensions in panels. For irregular surfaces, during the process of dimension representation in construction drawings, the surface is expressed in a flat, unfolded form.In addition, seam lines are set based on processing requirements.When representing the dimensions of each panel, a uniform grid (e.g., 100mm x 100mm grid) is used for dimension information, providing precise data for CNC manufacturers as a basis for production. Lightweighting: reducing file volume by minimizing unnecessary precision Lightweighting is an optimization method for construction projects in a multi-disciplinary context.Construction drawings, as core documents for conveying construction content in a construction project, are shared among different disciplines and companies.Therefore, compatibility issues, including file version compatibility and minimum computer performance compatibility, need to be considered.As a result, the process of creating construction drawings incorporates methods for reducing file volume. 
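The layer, block, and standard-section methods described above can also be reproduced programmatically when drawings are generated or post-processed in code. The following is a minimal sketch using the open-source Python library ezdxf; the layer names, colours, and block geometry are illustrative assumptions and are not taken from the CFS drawing set.

```python
# Minimal sketch of layer-based structuring and block reuse in a DXF file,
# using the open-source ezdxf library. Layer names, colours, and geometry
# are illustrative only, not taken from the CFS construction drawings.
import ezdxf

doc = ezdxf.new(dxfversion="R2018")

# Standard layers: one per information type, each with its own colour,
# mirroring the feature-distinction and layer-management conventions above.
for name, color in [("A-ELEV", 5),   # elevations
                    ("A-FILL", 8),   # pavement / fill patterns
                    ("A-SECT", 2),   # section lines
                    ("A-AXIS", 1)]:  # building axes
    doc.layers.new(name, dxfattribs={"color": color})

# A reusable block: edit the definition once and every reference updates,
# which is the redundancy-reducing behaviour of "blocks" described above.
col = doc.blocks.new(name="COLUMN-ELEV")
outline = col.add_lwpolyline([(0, 0), (300, 0), (300, 3000), (0, 3000)])
outline.close(True)

msp = doc.modelspace()
for x in (0, 4000, 8000):  # three identical columns placed by reference
    msp.add_blockref("COLUMN-ELEV", insert=(x, 0),
                     dxfattribs={"layer": "A-ELEV"})

doc.saveas("cfs_sketch.dxf")
```

Because the three column instances are block references rather than copies, a change to the single block definition propagates to all of them, which is the same economy of information that the drawing conventions above aim for.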
2.6.1.External references.For architectural construction drawings, the use of external references is often seen in the architectural site plan.During the preparation of construction drawings, the design content is treated as a separate file, and the external reference command is used to import files containing site survey information provided by the surveying profession into the architectural design files as "background" information.This reduces the total amount of information contained within individual files, making it easier to understand and manage the information in each file.It also reduces the file lag issue that can occur during the transfer of construction drawing information due to computer performance problems, thereby enhancing drafting and reading efficiency. Simplification of details in general plans using basic line types. Considering the analysis of the indexing relationship mentioned in Section 2.3.1 between construction drawings of different spatial scales and expression accuracy, it can be observed that in most cases, construction drafting simplifies the content indexed in the general plan.For example, in a general plan, elements like kickboards in a building corridor are represented simply as polylines with a certain line width (Figure 3), whereas in the common detail plan, the actual construction details of these elements are provided (Figure 6).This method ensures that the general plan provides an overview of the location of elements like railings while also offering specific details in the common detail plan.As a result, it avoids redundant expression of detailed practices in the general architectural site plan, achieving file lightweight. Conclusion and perspective In this study, using the construction drawings of the "Coconut Forest Settlement" project in Hainan as a case, we have categorized the methods of information management and optimization within architectural construction drawings into five types: clarification, structuring, standardization, precision, and lightweight.By combining these methods with specific practical scenarios, we have addressed three key issues in the field: the lack of analysis based on real case studies, the insufficient focus on information management and optimization methods, and the absence of method classification.This demonstrates that, in the current context of digital design technology development, construction drawings, as administrative documents in the engineering management process, still hold significant value for in-depth research. Looking ahead to the five types of information management and optimization methods discussed in this study, we believe that these methods have the potential for cross-application in various contexts.The reasons for this include: 1. Emphasis on logical commonalities: Although we have discussed these methods in the context of architectural construction drawings, after classification, they form a logical framework for information management and optimization that goes beyond the specific application.Therefore, these five types of logic are no longer confined to the realm of architectural construction drawings and can be applied to other contexts. 2. 
The demand for information management and optimization methods in other contexts: Through practical examples from other contexts, we find that the five types of information management and optimization logic summarized in this study are also applicable in the context of information technology used in other architectural designs. For instance, in Revit, the division of "families" essentially uses structured layers and standardized objects to enhance information management efficiency and clarity of expression. In the Rhino modelling process, the management of layers and the use of Instance objects are similar to the concept of "blocks" in construction drawings, reducing redundant expression of repetitive information and achieving lightweight files for delivery. In Grasshopper, the use of "Groups" for arranging connected components clearly demonstrates the logic of components, and the "Cluster" command allows for the overall packaging and reuse of components. These different contexts all reflect the five types of information management and optimization methods summarized in this study. In conclusion, future research can be conducted based on the five logical patterns of information management and optimization methods extracted from architectural construction drawings in this study. This will allow the core technical principles of traditional construction drawings to be inherited and applied in new technologies, facilitating a more comprehensive update of architectural design techniques. Figure 4. The rooftop and plan view of the CFS. Figure 5. The elevation and section of the CFS. Figure 6. The detail of the CFS.
4,758.6
2024-05-20T00:00:00.000
[ "Engineering" ]
Unrest in Kazakhstan: Economic background and causes Abstract The article studies the economic background and reasons for the protest actions of the population on the example of major riots in the regions of Kazakhstan in early 2022. The theory of relative deprivation explains the occurrence of unrest by the growth of social tension in groups of the population who are dissatisfied with living conditions. According to the authors, the causes of the unrest were economic factors. There are many studies on political, inter-ethnic, inter-religious, and other factors of protest actions, but not enough research on the economic factors of urban unrest. The study aims to identify the economic causes of the outbreak of violence in the country's regions. The research methodology was based on comparative statistical analysis and building a probit model based on panel data. We have established that the growth of the subsistence minimum, the increase in the proportion of the population with incomes below the subsistence minimum, and, especially, the depth of poverty and the acuity of poverty are reasons for social tension, which, after a small trigger, turned into large-scale urban unrest. Moreover, neither income inequality nor rising unemployment was a significant factor in the protest actions. The results indicate the need for the authorities to monitor the socio-economic indicators of the regions and take measures to prevent their significant deterioration, especially the depth and acuity of poverty. A similar empirical approach can be applied to analyzing the economic causes of unrest in regions of other countries. Introduction Protests and demonstrations have been part of human history for centuries and have occurred in almost every country. People have come together to express their dissatisfaction, advocate for change and demand justice on various issues. Protests can take many forms, from peaceful marches and sit-ins to riots and violent clashes with law enforcement (Roberts, 2009). Only in recent years in different countries have many protests turned into riots. For example, protests in Hong Kong (2019-2020) sparked by a proposal to introduce an extradition law; the Black Lives Matter (2020) protests in the US following the killing of a black man by police officers; the Arab Spring uprisings (2010-2012) caused by political oppression and lack of political freedom; Indian farmers' protests (2020-2021) against new agricultural laws; yellow vest protests (2018-2019) caused by higher fuel taxes; protests in Sudan (2018-2019) caused by corruption and economic hardship (Global Protest Tracker, 2023). Protests often have a specific trigger event or incident that catalyzes people to come together and demand change. However, the root causes of social and political unrest are often much deeper and more complex, rooted in long-standing grievances and systemic problems built up over the years and even decades. For example, the murder of George Floyd by police officers triggered the Black Lives Matter protests, and the real cause was years of racial injustice and police brutality against black people. Higher fuel taxes initially drove the Yellow Vest protests, but the real reasons for the grievances were economic inequality and social injustice.
Studying protests can provide valuable information about people's problems and concerns about their government and society. Deep-seated grievances about inequality, injustice, and lack of access to basic needs and services often spark protests. Researching the root causes of protests and unrest can be a valuable tool for governments and societies to understand and address deeply rooted social problems. However, concrete action and policy changes must accompany it to be effective. In early January 2022, protests broke out across Kazakhstan, triggered by a sharp increase in gas prices. The protests quickly turned into riots, with demonstrators storming government buildings and clashing with police. The unrest in Kazakhstan has highlighted long-standing problems, including corruption, a lack of political freedom, and economic inequality. The article aims to identify the economic factors that caused the protest actions in Kazakhstan in 2022. Many economic reasons can stimulate protest movements around the world, including: (1) Income inequality. As the gap between rich and poor widens, people who feel left out or excluded from the benefits of economic growth may protest for a more equitable distribution of wealth. The January events of this year showed that income inequality in the regions and in Kazakhstan as a whole had reached a peak. According to President K-Zh.K. Tokayev, 162 people, or 0.001% of the population, own 55% of the wealth of Kazakhstan. At the same time, 11,711,334 people (96.6% of the total population) have less than $10,000 (Tengrinews, 2022). (2) Unemployment and job insecurity. High unemployment or precarious employment can lead to economic hardship and uncertainty for individuals and families, prompting them to demand better job opportunities and security. (3) Cost of living. Rising prices for necessities such as food, housing, health care, and education can place a significant financial strain on individuals and families, leading to protests in favor of affordable goods and services. (4) Poverty. When large segments of the population live in poverty and struggle to meet their basic needs, they may feel marginalized, excluded, and frustrated by the political and economic system. In such situations, protests and demonstrations can be a way for people to voice their dissatisfaction and demand change. To achieve the study's goal, we formulated and tested four hypotheses using a binary choice panel data model (a panel probit model). The rest of the paper is organized as follows. The second section presents a literature review examining the background and causes of protests worldwide. The third section describes the methodology and data used for the analysis. The fourth section contains the results of statistical analysis and calculations based on the probit model. The fifth section presents a discussion and interpretation of the results, and the last section presents the study's conclusions. Literature review Social tension, discontent, and mass protests occurred in almost all countries. In this regard, this problem has received comprehensive coverage in the scientific literature. Authors from all over the world have studied the background, causes, and consequences of such phenomena. The topic of protests and riots is relevant as they quickly develop into mass robberies, vandalism, and outbreaks of violence, bearing elements of crime. Moreover, they lead to insecurity, such as worsening poverty, lack of food and fuel, death of people, rising unemployment, etc. (Mongale, 2022).
The expression "urban protests" is also often found in studies.A large and rapidly growing urban population can increase competition for scarce urban resources, which fuels discontent expressed in protests (Castells-Quintana et al., 2022;Gizelis et al., 2021;Goldstone, 2010;Urdal & Hoelscher, 2012). Despite strong theoretical expectations associated with the destabilizing effect of population growth, empirical evidence remains mixed.Nationwide, Fox and Bell (2016) finds a negative and negligible association between urban growth and protests.This could depend on the scale used to measure the variable.However, the findings of numerous subnational studies have not consistently demonstrated a connection between population growth and protest activity (Dorward & Fox, 2022).Bahgat et al. (2018) find no evidence for an association between urban population growth rates and social unrest.These results reflect that rural-urban migration, the most likely cause of social tensions, plays an increasingly limited role in urban population growth (Fox, 2017;Menashe-Oren & Bocquier, 2021).On the other hand, Ostby (2016) discovers a positive correlation between urban unrest and the level of relative deprivation felt by rural-to-urban migration. A recent study found that urban population growth driven by extreme flood displacement was associated with a higher likelihood of urban social unrest (Castells-Quintana et al., 2022).The authors distinguish between push factors (when adverse conditions elsewhere force people to move to urban areas) and pull factors (when migrants are attracted to urban areas by rich opportunities) that stimulate migration from rural areas to cities. Their results show that involuntary resettlement leads to social unrest in cities. Among the causes of protests and riots, the authors also often include the process of globalization.Specifically, the existence of globalization losers, particularly in the lower middle class; increased competition from migrants for jobs and social benefits; the psychological readiness of some middle-class losers to attribute the worsening of their socioeconomic situation to the global "conspiracy of the elites"; erosion of the traditional industrial society structure and, as a result, degradation of the current political system (Sergeyev et al., 2018). The nature of a country's political regime also determines the opportunities, resources, and motives for protest mobilization (Chenoweth & Stephan, 2011;Dalton et al., 2010;Tilly & Tarrow, 2015).Countries closer to democratic and autocratic ideals are less likely to protest than more hybrid countries.More democratic countries with competitive leadership and free speech are dampening the grievances at the heart of the protests.Moreover, in countries with more authoritarian regimes, there are fewer protests, as they limit the opportunities for protest with censorship, restrictions on freedoms, and police/military surveillance, and more often than democracies, suppress protests forcibly. Another essential prerequisite for the population's discontent and protests is the country's food security.However, these concerns vary both within and between countries.Sanchez and Namhata (2019) found a negative relationship between increased cereal production and protests; high volatility in domestic food prices increases the likelihood of protests. 
Protests fuel tensions between the police and the public due to a lack of public confidence and support for the police and increased political dissent.Some authors argue that excessive police violence provokes a political backlash, reducing overall support for the security apparatus and increasing the willingness of some populations to engage in public dissent (Curtice & Behlendorf, 2021;King, 2013). Research indicates that riots are more likely to attract people living in poorer and marginalized residential areas (Kawalerowicz & Biggs, 2015;Lightowlers, 2015).Other authors argue that individuals and groups, with little chance of influencing political programs and decisions through more traditional political action, stage mass protests (Akram, 2014;Wacquant, 2008).A study by Nikitina et al. (2022) postulate that participation in protest actions for modern youth manifests ideological sympathies and the need to belong to a social group.However, protest behavior is associated with the lack of institutionalized channels of influence on decision-making, with an opinion about a high level of corruption and disagreement with ongoing political processes. Observations of researchers and surveys of participants in protest movements (Holdo & Bengtsson, 2020;Muñoz & Anduiza, 2019) have also shown that available local incentives correlated with individual motives lead to participation in riots only when the event that makes riots justified destabilizes fragile local equilibrium.Social movements often face tactical diversification, in which protesters fall into two groups: core supporters who justify and support violent action; and those who reduce their support for the protest after the onset of street violence, pogroms, and riots. When studying protests, their causes and consequences, the authors often used econometric methods, such as regression analysis of panel data, including methods of instrumental variables, the difference-of-differences method, time series analysis, and non-linear panel analysis (Bezzola et al., 2022;Dosso, 2023;Hierro et al., 2017;Hillesund, 2023;Scapini et al., 2021;Walls & During, 2020;Watanabe et al., 2020).However, among the existing studies, we did not find studies that examine the causes of protests using binary choice models. Data and methodology According to the administrative-territorial division of 2021 in Kazakhstan, there were 17 regions, including 14 regions and three large cities of Republican significance.To characterize the socioeconomic condition of these regions, we used the following indicators: Gini10, Gini20 -Gini coefficients for decile and quintile groups, respectively; CF -coefficient of funds, the ratio of 10 percent of the most and 10 percent of the poorest population; Cons -income used for consumption, on average per capita per month, tenge; SL -subsistence level, equal to the cost of the minimum consumer basket, in tenge; ConsSL -the purchasing power of income, the ratio of income used for consumption to the subsistence level, in percent; BelSL -the share of the population with incomes used for consumption below the subsistence level, in percent; Depth -coefficient of the depth of poverty, the average deviation of the income level of people who are below the subsistence minimum from the subsistence level; Acuity -coefficient of the acuity of poverty, the average of the squared deviations of the share of income deficits of the members of the surveyed households from the established criterion; Unemp is the unemployment rate in percent. 
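To make the poverty indicators concrete, the sketch below computes BelSL, Depth, and Acuity from a vector of per-capita consumption incomes. It assumes the indicators follow the common Foster-Greer-Thorbecke convention of averaging the relative income shortfall over the whole population; the income figures and subsistence level are made-up illustrations, not KazStat data.

```python
# Illustrative computation of the poverty indicators defined above
# (share below the subsistence level, depth of poverty, acuity of poverty).
# Incomes and the subsistence level are invented numbers, not KazStat data.
import numpy as np

incomes = np.array([18_000, 25_000, 31_000, 42_000, 55_000, 90_000])  # tenge/month
SL = 37_000  # subsistence level, tenge/month

# Relative shortfall, zero for people at or above the subsistence level.
shortfall = np.clip((SL - incomes) / SL, 0, None)

BelSL  = np.mean(incomes < SL) * 100   # share below the subsistence level, %
Depth  = np.mean(shortfall) * 100      # depth of poverty (average gap), %
Acuity = np.mean(shortfall**2) * 100   # acuity of poverty (squared gap), %

print(f"BelSL={BelSL:.1f}%  Depth={Depth:.1f}%  Acuity={Acuity:.1f}%")
```

The squared term in Acuity gives more weight to the poorest households, which is why it captures inequality among people experiencing poverty rather than just its average extent.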
All annual data on these indicators by region from 2017 to 2021 were obtained from the website of the Bureau of National Statistics of the Agency for Strategic Planning and Reforms of the Republic of Kazakhstan (KazStat, 2023). The selected variables are closely related to each other; therefore, in order to avoid specification errors, they were not considered within the framework of one model. Table 1 contains descriptive statistics for the indicated indicators for five years. Protests and riots in January 2022 took place in many cities of the regions of Kazakhstan: Zhanaozen (Mangystau region), Almaty city, Taldykorgan (Almaty region), Shymkent (Turkestan region), Astana city, as well as in the administrative centers of the Atyrau, Aktobe, Pavlodar, and East Kazakhstan regions. The seizure and arson of government buildings, looting, and killings of both law enforcement officials and civilians accompanied the protest activity. The prerequisites for the unrest could be increased income inequality, a decrease in real incomes of the population due to a decrease in their purchasing power, an increase in the proportion of the population with incomes below the subsistence level, an increase in poverty, and an increase in the unemployment rate. In Kazakhstan, in January 2022, the trigger for protest actions of the population, which then spread to other regions of the country and turned into riots, was a sharp increase in the price of gas for refueling cars in the city of Zhanaozen, Mangystau region. Economic deprivation can create protest moods among the least well-to-do part of society. One reason may be income inequality (Aidt & Leon-Ablan, 2022; Zuniga-Jara, 2022). Hypothesis H1. Income inequality was the main reason for the protest actions and riots of the population in Kazakhstan in January 2022. The primary source of income for most adults is wages. Without a permanent, well-paid job, people experience want and deprivation. With a large migration from rural areas to cities, the opportunities to provide jobs for young migrants are limited. Rising unemployment increases the risk of urban protests (Castells-Quintana et al., 2022; Gizelis et al., 2021; Goldstone, 2010; Urdal & Hoelscher, 2012). Hypothesis H2. Rising unemployment was the main driver of urban protests and unrest in the regions of Kazakhstan in 2022. Rising prices, outpacing the growth of cash income, reduce the real income of households and are reflected in the growth of the subsistence level, aggravating the situation of the poorest segments of the population. Mainly, rising food prices affect protest moods (Sanchez & Namhata, 2019). As a result, the level of poverty is rising. Kazakhstan's poverty line is set at 70 percent of the subsistence level. This concept is conditional and depends on the government's financial ability to support low-income people. Therefore, to isolate the part of the population with low incomes, we further use the indicator of the share of the population with incomes below the subsistence level. Hypothesis H3. The rising subsistence level, a decline in the purchasing power of incomes, and an increase in the share of the population with incomes below the subsistence level were the main factors behind street protests and riots in Kazakhstan's cities in 2022.
However, the decrease in real income affects different segments of the population differently. Households with relatively high real incomes are more likely to cope with the difficulties that have arisen than those with low real incomes. The composition of the part of the population with incomes below the subsistence level is heterogeneous. There is a so-called inequality among people experiencing poverty. In particular, rural migrants in big cities experience deprivation due to difficulties in finding employment and housing (Ostby, 2016). They include many young people who often settle in marginalized residential areas and can quickly become involved in street riots (Kawalerowicz & Biggs, 2015; Lightowlers, 2015). People in extreme poverty are more likely to join protest actions, blaming the authorities for everything, than those who can more easily survive temporary hardships. Hypothesis H4. The increase in the depth and acuity of poverty was the leading cause of street protests and riots in the regions of Kazakhstan in 2022. To test the formulated hypotheses and identify the main economic factors that could contribute to the emergence of protest moods in the country's regions, an empirical study was carried out based on data collected for the country's regions for the period from 2017 to 2021. Methods of statistical and econometric analysis form the basis of this study. A binary choice panel data model, a panel probit model, is suitable for quantifying the influence of economic factors on the likelihood of protest actions leading to urban unrest. We used the Gini10, CF, Unemp, SL, BelSL, Depth, and Acuity data from 2017 to 2021 as independent variables to estimate the model parameters (KazStat, 2023). In this model, for 2022 the dependent variable Unrest is equal to 1 for regions where protests and riots were brewing and, in January 2022, took on an enormous scale: the cities of the Mangystau, Turkestan, Akmola, and Almaty regions, as well as the cities of Almaty, Shymkent, and Astana. For other regions, and in other years for all regions, the Unrest variable is equal to 0. The components of the vector x_it are a constant and the values of the independent variables with a lag of one year; the independent variables were used with a lag to eliminate the simultaneity problem. A latent quantitative variable Unrest*_it is assumed to be related to the independent variables x_it and unobservable characteristics ε_it by the linear additive relation Unrest*_it = x_it'β + ε_it, where β is the vector of coefficients and the unobserved quantity ε_it has a standard normal distribution. The latent variable determines the value that the dependent variable will take: Unrest_it = 1 if Unrest*_it > 0, and Unrest_it = 0 otherwise. In this case, the probability that the dependent variable Unrest_it takes the value 1 is P(Unrest_it = 1 | x_it) = F(x_it'β), where F(•) is the standard normal distribution function (Verbeek, 2004). If the estimated coefficient for an independent variable is significant and positive, then growth of this variable increases the probability that the dependent variable Unrest_it takes the value 1; in other words, it increases the probability of protest actions in the corresponding region. The probit model makes it possible to identify the socio-economic indicators that influenced the emergence of unrest in the country's regions. Calculations were performed in the STATA software. Results The economic prerequisites for unrest in a relatively prosperous country such as Kazakhstan could not arise overnight but accumulated and matured over a long period.
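Before turning to the detailed results, the sketch below illustrates the estimation step described in the methodology above: a pooled probit fitted to the stacked region-year panel with one-year-lagged regressors, written in Python's statsmodels rather than STATA. The file name and column names are hypothetical, and a pooled probit only approximates the panel probit used in the paper, since it ignores region-specific effects.

```python
# Minimal sketch of a pooled probit on a region-year panel with lagged
# regressors. The CSV file and its columns (region, year, Unrest, SL,
# BelSL, Depth, Acuity, Unemp) are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("kz_regions_panel.csv")
df = df.sort_values(["region", "year"])

# Lag each socio-economic indicator by one year within each region,
# mirroring the use of lagged regressors to avoid simultaneity.
for col in ["SL", "BelSL", "Depth", "Acuity", "Unemp"]:
    df[f"{col}_lag1"] = df.groupby("region")[col].shift(1)

df = df.dropna()
X = sm.add_constant(df[["SL_lag1", "BelSL_lag1", "Depth_lag1"]])
result = sm.Probit(df["Unrest"], X).fit()
print(result.summary())
```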
Socio-economic situation in 2021 Unrest in the regions of Kazakhstan occurred at the beginning of 2022. Therefore, let us pay attention to the values of socio-economic indicators in the previous year, 2021. In Table 2, the maximum or minimum values of indicators across all regions of the country are shown in bold type. Note that in this table, the decile and quintile Gini coefficients, Gini10 and Gini20 respectively, show that the highest level of income inequality was in the city of Almaty. The coefficient of funds CF, equal to the ratio of the incomes of the richest 10 percent to those of the poorest 10 percent of the population, also had its highest value in the city of Almaty. The subsistence level SL was the highest in the Mangystau region. However, the purchasing power of income ConsSL, defined as the ratio of income used for consumption to the subsistence level, was the lowest in the Mangystau region. The proportion BelSL of the population with incomes used for consumption below the subsistence level was the largest in the Turkestan region. The extreme forms of poverty, the depth of poverty Depth and the acuity of poverty Acuity, had their highest values among all regions in the Mangystau region. The highest unemployment rate Unemp was in the city of Almaty. Based on these observations, we can conclude that in 2021 the most unfavorable socio-economic situation developed in the city of Almaty and the Mangystau and Turkestan regions. Meanwhile, in some other regions, for which the values of the indicators in Table 2 are close to their maximum or minimum values, the situation was also unfavorable. For example, the values of the Gini index of 0.319 in the East Kazakhstan region and 0.312 in the Pavlodar region are close to its maximum value of 0.321 in the city of Almaty. Unemployment rate and purchasing power of income Sufficiently high employment and low inflation are important stabilizing factors in every country. From Figure 1, one can see that the unemployment rate in all regions has decreased or remained at the same level, except for the Akmola and Kyzylorda regions. However, in the city of Almaty, the Turkestan region, and the city of Shymkent in 2021, it still markedly exceeded the average unemployment rate in the country. The real incomes of the population per capita in the country have been growing over the past five years; however, this growth has differed across regions. In the country as a whole, real income rose to 123.5 percent of the 2016 level. In the Mangystau region in 2021, it decreased over this period to a level of 98.2 percent. The indicator "Purchasing power of income" is estimated as the ratio of income used for consumption to the subsistence level. It reflects the availability of food and other goods, especially for people experiencing poverty. Figure 2 shows that its lowest values were in the Turkestan and Mangystau regions. Moreover, the purchasing power has decreased in all regions. In 2021, compared to 2017, it decreased by 19.1 percent in the country and 30.7 percent in the city of Almaty. In the Mangystau region, where the value of the purchasing power of income was the lowest, it decreased by 13.4 percent.
Income inequality A significant gap in the incomes of citizens can have a significant impact on social tension in society. In most regions, the Gini index for decile groups in 2021 increased compared to 2017. Its highest values were in the city of Almaty and the East Kazakhstan, Pavlodar, and Karaganda regions. However, note that for the Mangystau and Turkestan regions, in which the most extensive protest actions took place, the Gini index has the lowest values among all regions. The situation is approximately the same regarding the coefficient of funds, defined as the ratio of the incomes of the richest 10 percent to those of the poorest 10 percent of the population. Moreover, this indicator has its highest value in Almaty city, the largest city in Kazakhstan. Poverty indicators The indicator "The share of the population with incomes used for consumption below the subsistence level" clearly distinguishes the Turkestan and Mangystau regions (Figure 4). Compared to 2017, it increased in 2021 in all regions. Kazakhstan has set the poverty line at 70 percent of the subsistence level. The factors of protests and riots may not be poverty itself but its extreme forms, measured by the indicators "Depth of poverty" and "Acuity of poverty." In Figure 5, the Mangystau and Turkestan regions show the greatest depth of poverty. The most favorable situation for this indicator is in the Atyrau region and the city of Astana. The "Acuity of poverty" indicator captures inequality among poor people (Figure 6). The Mangystau region stands out on this indicator. The city of Zhanaozen is located in the Mangystau region, and protests by the population began there on 2 January 2022. In the following days, they continued in other regions of Kazakhstan and escalated into violent riots. Thus, the data on the socio-economic indicators of the regions of Kazakhstan from 2017 to 2021, presented in Figures 1-6, show that their extreme values were reached in the Mangystau and Turkestan regions and the city of Almaty, as well as, for individual indicators, in the Akmola, East Kazakhstan, Pavlodar, and Karaganda regions. The most significant unrest occurred in Almaty city and the Mangystau and Turkestan regions. Conclusions drawn from the histograms of the indicators of income inequality, the purchasing power of income, unemployment, the share of the population with incomes used for consumption below the subsistence level, and the depth and acuity of poverty do not contradict hypotheses H1-H4. To answer the question of which of these indicators were the main factors in the unrest in Kazakhstan in January 2022, it is necessary to perform a quantitative analysis of the data. Probit model Table 3 presents the results of calculations for the probit model. Of all possible specifications, the table includes only those containing estimated coefficients that are significant at the 5 percent level. Specifications 1 and 2 show that the Gini income inequality index Gini10 and the coefficient of funds CF had no significant effect on the likelihood of disorder. The same result occurs when evaluating regressions with only one independent variable, Gini10, Gini20, or CF. Hence, income inequality was not the main reason for the protest actions and riots of the population in the cities of Kazakhstan in January 2022, and the H1 hypothesis is rejected. In none of the specifications is the estimated coefficient on the unemployment variable Unemp significant even at the 5 percent level. Therefore, the H2 hypothesis that unemployment was a significant factor in the unrest in the regions of Kazakhstan in 2022 is rejected.
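As a reading aid for such probit results, the short calculation below shows how a positive, significant coefficient translates into a higher predicted probability of unrest by evaluating the probit link at two values of a lagged poverty-depth regressor. The coefficient values are invented for illustration and are not the estimates reported in Table 3.

```python
# Worked illustration (invented numbers): how a positive probit coefficient
# on the lagged depth-of-poverty indicator raises the predicted probability
# of unrest. These are NOT the coefficients reported in Table 3.
from scipy.stats import norm

beta0, beta_depth = -2.0, 0.45          # hypothetical intercept and slope
for depth in (1.5, 4.0):                # low vs. high depth of poverty, %
    p = norm.cdf(beta0 + beta_depth * depth)
    print(f"Depth={depth:.1f}%  ->  P(Unrest=1) = {p:.2f}")
```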
The increase in the subsistence level SL reflects the rise in prices for consumer goods and the decline in real incomes of the population. The estimated coefficients of specifications 3 and 5 show the positive impact of growth of the SL indicator on the probability of protest actions. An increase in the BelSL indicator, the share of the population with incomes below the subsistence level, also contributes to an increase in the likelihood of protests and riots, according to the estimated coefficients of specifications 3 and 6. However, the indicators Cons (income used for consumption) and ConsSL (the purchasing power of income) did not have a significant impact. Consequently, the increase in the cost of living and the proportion of the population with incomes below the subsistence level were the main factors in street protests and riots in Kazakhstan in 2022. Hypothesis H3 is confirmed for the subsistence level SL and the share of the population with incomes below the subsistence level BelSL, but not for the purchasing power of income ConsSL. In specifications 1, 4, and 7, the estimated coefficients show the positive effect of the depth of poverty Depth on the occurrence of unrest. In specifications 2 and 8, the estimated coefficients also suggest that the acuity of poverty Acuity contributed to the unrest. These results support the H4 hypothesis that the depth of poverty Depth and the acuity of poverty Acuity were significant drivers of unrest in the regions of Kazakhstan in 2022. Discussion In studies on this topic, the authors point to various causes of urban unrest: income inequality, ethnic and religious contradictions, the influx of migrants into cities, police violence, the tendency of a specific category of young people to commit illegal acts, disagreement with the political processes in the country, and others. The composition of the protesters is heterogeneous. Some of them initiate protest actions and go to demonstrations with their demands to the authorities. Others join the protests when they have already begun, and it is the latter who are largely responsible for riots, acts of vandalism, robbery, and arson of buildings. Political extremists can use protests to seize power. There are no significant racial and interethnic conflicts in Kazakhstan. There are also no significant mass conflicts between migrants from rural areas and the urban population. However, urban poor people, including migrants from rural areas, experience relative deprivation, as they usually do not have the professions and qualifications required by the city. They usually do not have well-paid jobs and good living conditions. This creates conditions for their marginalization and incentives to easily join protest actions and riots.
For our study, we were interested in the reasons that forced people to take to the streets with protest demands against the authorities. Such actions can escalate into major urban riots. Why did big riots occur in some regions of Kazakhstan while there were no significant protest actions in others? The preconditions for unrest are primarily economic factors. In a relatively prosperous country, wealthy people are unlikely to participate in street protests and urban riots. Many researchers point to income inequality as a possible cause of social unrest, due to which some have great privileges while others experience deprivation (Aidt & Leon-Ablan, 2022; Zuniga-Jara, 2022). However, Jo and Choi (2019) conducted a study across 45 countries and found no relationship between the Gini coefficient and the population's protest participation. Another study by Grusky and Wimer (2010), using a questionnaire method, also found no impact of income inequality on street protests. Our study's empirical analysis of data on the Gini index and the coefficient of funds using a regression probit model did not reveal a significant effect of income inequality on the emergence of protest actions and their transition to major riots (Table 3). Indeed, as can be seen in Figure 3, the Gini index reached its highest levels in the city of Almaty and the East Kazakhstan, Pavlodar, and Karaganda regions. Nevertheless, only in the city of Almaty were there significant protest actions. In contrast, the Gini index had the lowest values in the Mangystau and Turkestan regions and the city of Shymkent, where there were extensive riots. Hence, there is no reason to believe that income inequality caused the social tension. However, the protest movements began precisely in the Mangystau region, the main oil-producing region of Kazakhstan. Sakal (2015), based on an analysis of the distribution of Kazakhstan's oil wealth in the light of global initiatives, as well as official reports and statistics, argued that Kazakhstan's natural resource policy has failed to improve the standard of living of the majority of people in Kazakhstan, especially people experiencing poverty and those who live in oil-producing and rural areas, despite rising oil prices and incomes. Heim and Salimov (2020) have explored the impact of oil revenues on the economy of Kazakhstan with a focus on social development. Sanghera and Satybaldieva (2021) have also explored the region's income and wealth inequality. For most people, having a steady job provides a regular income that prevents them from falling into the poor category. One can assume that unemployment is one of the significant reasons for the protest actions. Gizelis et al. (2021), Castells-Quintana et al. (2022), and others reached this conclusion. However, calculations using the probit model did not reveal a significant impact of unemployment on the likelihood of unrest in the regions of Kazakhstan. Indeed, among the country's regions, unemployment was highest in Almaty city and the Turkestan region, including its administrative center Shymkent (Figure 1). However, unemployment levels were not high in the other protesting regions, Mangystau and Almaty.
Nevertheless, in Table 3, the coefficient at the Unemp variable is insignificant.Consequently, unemployment was not the cause of unrest in Kazakhstan's regions since the unemployment rate in Kazakhstan was lower than in the countries studied in the above articles.The explanation can be like this.Rising unemployment affects the entire population and, more likely, to a lesser extent, the poorest population, for whom employment is already weak.More affluent households are less prone to protest actions since they have reserves with which they can more easily survive a temporary job loss. The fall in real incomes due to rising prices for consumer goods and services leads to an increase in the subsistence level.Probit model calculations show that its growth significantly increased the likelihood of unrest in the country's regions.At the same time, the purchasing power of income, calculated relative to the subsistence level, seemed to be a factor in the unrest (Sanchez & Namhata, 2019).Nevertheless, it is not.The association of purchasing power with unrest is ambiguous, being lowest in Turkestan and Mangystau regions but highest in Almaty city (Figure 1).The growth of the subsistence level without a corresponding increase in real incomes leads to an increasing population share below its level.Moreover, this figure turned out to be a significant factor in the unrest in the regions of Kazakhstan.The deepening of poverty is associated with less food availability.Based on data from food riots in 14 African countries, Berazneva et al. (2013) argue that worsening poverty increases the likelihood of unrest.The sharp increase in food prices significantly impacts people experiencing poverty, thus setting the stage for street protests (Bush, 2010).Although there is another observation, Vasquez (2017) describes the situation in Peru as a paradox, where from 2000-2015, there was a high growth rate, a decrease in poverty rates, and at the same time, there was an increase in social unrest.Probably, along with the increase in the income of people in Peru, there were other reasons for the protest moods of the population. However, the fact that the real incomes are below the subsistence level influences people's dissatisfaction.Moreover, to an even greater extent, their dissatisfaction with their living conditions is influenced by the extent to which their real incomes turned out to be below the subsistence level.These people are in dire need and ready to go to street protests with demands on the authorities.Econometric calculations on the probit model confirmed that the depth of poverty and the acuity of poverty were significant factors in increasing the likelihood of past protests and riots. Conclusion This article examines the economic background that, after a minor cause, led to sizeable urban unrest in the regions of Kazakhstan in January 2022.Violence, pogroms, robberies, murders, seizure of government buildings, and arson accompanied them.They were unexpected in a relatively prosperous country.However, we argue that there were objective economic reasons for the unrest, which gradually accumulated, but the authorities did not pay attention to them promptly and did not respond appropriately. 
As a result of the statistical and econometric analysis, we found that the main prerequisites and causes of the 2022 unrest in some areas of Kazakhstan were the decline in real incomes of the population together with the growth of the subsistence level, which increased the share of the population with incomes below that level, and especially the increase in the depth and acuity of poverty. We argue that economic factors underlie the maturation of protest moods among the poorest segments of the population; a small trigger is then enough for these protest moods to turn into significant riots. The results indicate the need to monitor the socio-economic indicators of the regions and to take measures to prevent their significant deterioration, in particular in the depth and acuity of poverty. It is necessary to develop a system for collecting and analyzing data on the socio-economic situation of the regions, which will make it possible to identify problem areas and trends and to evaluate the effectiveness of the measures taken. The development of early warning mechanisms and monitoring systems will make it possible to quickly identify regions with unfavorable socio-economic situations and take measures to neutralize them. In this respect, social programs and policies to reduce poverty, combat inequality, create jobs, and increase the availability of essential services (education, health care, housing, etc.) can improve the socio-economic situation. At the same time, supporting entrepreneurship and attracting investment to the regions will help stimulate economic growth, create new jobs, and improve the population's living standards. A similar empirical approach is suitable for analyzing the economic causes of unrest in regions of other countries.

Disclosure statement

No potential conflict of interest was reported by the authors.

[Table: unemployment rates by regions of Kazakhstan in 2017 and 2021. Source: KazStat (2023). Note: Bold indicates the maximum or minimum value in each column.]
8,071
2023-09-27T00:00:00.000
[ "Economics" ]
Dark period transcriptomic and metabolic profiling of two diverse Eutrema salsugineum accessions Abstract Eutrema salsugineum is a model species for the study of plant adaptation to abiotic stresses. Two accessions of E. salsugineum, Shandong (SH) and Yukon (YK), exhibit contrasting morphology and biotic and abiotic stress tolerance. Transcriptome profiling and metabolic profiling from tissue samples collected during the dark period were used to investigate the molecular and metabolic bases of these contrasting phenotypes. RNA sequencing identified 17,888 expressed genes, of which 157 were not in the published reference genome, and 65 of which were detected for the first time. Differential expression was detected for only 31 genes. The RNA sequencing data contained 14,808 single nucleotide polymorphisms (SNPs) in transcripts, 3,925 of which are newly identified. Among the differentially expressed genes, there were no obvious candidates for the physiological or morphological differences between SH and YK. Metabolic profiling indicated that YK accumulates free fatty acids and long‐chain fatty acid derivatives as compared to SH, whereas sugars are more abundant in SH. Metabolite levels suggest that carbohydrate and respiratory metabolism, including starch degradation, is more active during the first half of the dark period in SH. These metabolic differences may explain the greater biomass accumulation in YK over SH. The accumulation of 56% of the identified metabolites was lower in F1 hybrids than the mid‐parent averages and the accumulation of 17% of the metabolites in F1 plants transgressed the level in both parents. Concentrations of several metabolites in F1 hybrids agree with previous studies and suggest a role for primary metabolism in heterosis. The improved annotation of the E. salsugineum genome and newly identified high‐quality SNPs will permit accelerated studies using the standing variation in this species to elucidate the mechanisms of its diverse adaptations to the environment. | INTRODUCTION Eutrema salsugineum (formerly Thellungiella halophila) is a model species for the study of plant stress tolerance (Amtmann, 2009;Griffith et al., 2007;Pilarska et al., 2016;Wong et al., 2005). The two most commonly studied accessions, Shandong (SH) and Yukon (YK), are native to the Yellow River region of China (Bressan et al., 2001;Inan et al., 2004) and the Yukon territories of Canada (Wong et al., 2005), respectively. These accessions contrast in cold tolerance (Lee, Babakov, de Boer, Zuther, & Hincha, 2012), water stress tolerance (MacLeod et al., 2014;Xu et al., 2014), and disease resistance (Yeo et al., 2014). In response to water stress, for example, YK accumulates more cuticular wax (Xu et al., 2014), exhibits delayed wilting due to higher leaf water content, and maintains a higher leaf area, as compared to SH (MacLeod et al., 2014). Some differences in adaptive mechanisms are linked to metabolism, such as a more pronounced increase in fructose and proline content after cold acclimation in YK compared to SH (Lee et al., 2012). Plant metabolic profiling permits the simultaneous measurement of multiple intermediates and the products of biochemical pathways. Similar to RNA-seq, metabolic profiling can be used to investigate the metabolic and physiological status of biological systems (Fiehn et al., 2000) and may provide biochemical bases for differences in growth and physiology (Meyer et al., 2007). 
Metabolite profiling has revealed correlations between particular metabolites and growth in Arabidopsis thaliana (Meyer et al., 2007), but it is not clear whether these metabolites are more universally linked to heterosis. Together, transcriptome profiling and metabolite profiling provide complementary experimental evidence to guide the construction of rational hypotheses for the biochemical basis of variation in growth. The goal of this study is to identify the metabolic and transcription bases for the growth differences between the two E. salsugineum accessions. We utilized contrasting genotypes to identify genetic differences, expression divergence, and metabolic compounds associated with observed phenotypic variation. The YK accession has a higher water-use efficiency than SH (J Yin et al., manuscript in preparation). Among several traits that differ in these accessions is a distinct change in transpiration during the dark period. We therefore employed transcriptome and metabolic profiling to provide insight on the observed differences in these accessions and chose the dark period for tissue collection because of this observation. We obtained gene expression and metabolite concentration data from SH and YK accessions during the dark period by RNA-seq and gas chromatography-mass spectrometry (GC/MS), respectively. We utilized RT-PCR to confirm DEGs implicated by our RNA-seq experiment, validating 23 of 25 candidates. By mining the RNA-seq experiment, we validated previously identified SNPs and discovered additional SNPs that differentiate these E. salsugineum accessions. We propose that observed differences in metabolite accumulation could contribute to differences in biomass. (Margulies et al., 2005). SNPs were detected using mpileup from SAMtools v0.1.18 with mapping quality ≥15, and depth ≥3 (Li et al., 2009); 454 sequencing has a high error rate for detecting indels (Margulies et al., 2005), so only SNPs resulting from substitutions were retained. The two accessions are substantially inbred lines and should be homozygous at each base position. Hence, only monomorphic base positions within each accession were considered for detection of differences between the two accessions. Custom Perl scripts were used to remove SNPs (i) that were heterozygous within either accession, (ii) that were not biallelic between accessions, (iii) that were supported by fewer than three sequence reads, (iv) for which the alternative allele accounts for fewer than 10% of aligned reads, and (v) that were heterozygous between the SH accession and the JGI SH reference. If more than four SNPs were detected within a 100-bp region using the VariantFiltration module from GATK v2.4.9 (McKenna et al., 2010), they were not included in the final SNP data set. SNPs that had a mpileup quality score 999 based on SAMtools were deemed "high-quality" SNPs. Sanger sequencing data of the YK accession, available from the National Center for Biotechnology Information (NCBI) (Wong et al., 2005), were also aligned to the reference genome using SSAHA2 (Ning, Cox, & Mullikin, 2001). SNPs were called using SAMtools and filtered for clustered SNPs (four SNPs within 100-bp region) using GATK as indicated. SNPs that were not biallelic or were heterozygous within YK were removed. Genes were identified via a reference annotation-based transcript assembly method using the Cufflinks package (Roberts, Pimentel, Trapnell, & Pachter, 2011;Trapnell et al., 2010). 
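The SNP filtering rules described above were implemented with custom Perl scripts around SAMtools and GATK. Purely as an illustration of the logic (not the authors' code), they can be restated in Python roughly as follows; the record layout, key names, and example values are hypothetical, and the cluster filter is approximated with a ±100 bp neighborhood rather than GATK's exact windowing.

```python
from collections import defaultdict

def passes_site_filters(snp):
    """Apply the per-site rules from the text (illustrative only).

    `snp` is a dict with hypothetical keys: chrom, pos, sh_genotype,
    yk_genotype, ref_allele, alt_depth, total_depth.
    """
    # (i) remove sites heterozygous within either accession
    if len(set(snp["sh_genotype"])) != 1 or len(set(snp["yk_genotype"])) != 1:
        return False
    # (ii) keep only sites biallelic between the two accessions
    if len({snp["sh_genotype"][0], snp["yk_genotype"][0]}) != 2:
        return False
    # (iii) at least three supporting sequence reads
    if snp["alt_depth"] < 3:
        return False
    # (iv) alternative allele carried by at least 10% of aligned reads
    if snp["alt_depth"] / snp["total_depth"] < 0.10:
        return False
    # (v) the SH call must agree with the JGI SH reference allele
    if snp["sh_genotype"][0] != snp["ref_allele"]:
        return False
    return True

def remove_clusters(snps, max_neighbors=4, window=100):
    """Drop SNPs in dense clusters, mimicking the GATK cluster filter
    (approximate: counts calls within +/- `window` bp of each site)."""
    kept = []
    by_chrom = defaultdict(list)
    for s in snps:
        by_chrom[s["chrom"]].append(s)
    for chrom, sites in by_chrom.items():
        positions = sorted(s["pos"] for s in sites)
        for s in sites:
            neighbors = [p for p in positions if abs(p - s["pos"]) <= window]
            if len(neighbors) <= max_neighbors:
                kept.append(s)
    return kept

# Example with a single made-up record:
example = [{"chrom": "scaffold_1", "pos": 1040, "sh_genotype": "AA",
            "yk_genotype": "GG", "ref_allele": "A",
            "alt_depth": 7, "total_depth": 20}]
candidates = [s for s in example if passes_site_filters(s)]
print(len(remove_clusters(candidates)))
```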
Reads from SH and YK were assembled separately and then merged using the cuffmerge command (Roberts et al., 2011;Trapnell et al., 2010). The intersect function within BEDTools v2.17.0 was used to identify genes not annotated in the JGI E. salsugineum genome (newly annotated genes). Same strandedness was not enforced when identifying newly annotated genes because of the nonstrand-specific protocol for 454 library preparation. Newly annotated genes that are unique from or overlap genes annotated by Champigny et al. (2013) but are present in the JGI reference genome were also identified using the same method (Table S1). The bioconductor package "DESeq" v.1.14.0 was used to identify genes likely to be differentially expressed between SH and YK without biological replicates (Anders & Huber, 2010). Gene expression was normalized, and the significance threshold for differential expression was based on a 0.2 false discovery rate (FDR; Benjamini & Hochberg, 1995). Genes were annotated by the best BLAST (Altschul, Gish, Miller, Myers, & Lipman, 1990) Genes were annotated by the best BLAST hit with the following threshold parameters: E ≤ 1 À30 ; sequence identity ≥30%; sequence aligned ≥30% of query sequence. cDNA was synthesized using a High-Capacity cDNA Reverse Transcription Kit (Invitrogen). Primers were designed using Primer Express software (v3.0.1). Primer specificity was then estimated by BLASTN using the E. salsugineum genome with all primer pairs. Primer efficiency was tested for all pairs of primers. cDNA was diluted five times by a fivefold gradient and then used as template for qRT-PCR, and the threshold cycles (C T ) were regressed against cDNA concentration (log). Slope of the regression line was estimated, and the efficiency was calculated as 10 À(1/slope) À1. For genes expressed in both accessions, primer efficiency was between 80 and 110% in both accessions. For genes that were only expressed in one accession based on RNA-seq data, primer efficiency was tested on both accessions, but only the accession with detected expression exhibited efficiency between 80 and 110%. Table S2 contains all primer sequences except gene XLOC_004723, for which acceptable qRT-PCR primers could not be designed. | Quantitative real-time PCR All qRT-PCR reactions were conducted in StepOnePlus TM Real-Time PCR Systems (Applied Biosystems, Invitrogen). Relative gene expression of target genes was quantified by the DC T method (Livak & Schmittgen, 2001). Relative gene expression was calculated as: where E is the primer efficiency for each pair of qRT-PCR primers. C T,X and C T,R is the threshold cycle of the target gene and the reference gene Actin2 (Thhalv10020906 m.g), respectively. | Metabolite profiling and data analysis Two metabolite profiling experiments were conducted. One experiment was performed using the same rosette tissue used for RNA-seq analysis (see above). A second metabolite profiling experiment was performed using tissue from SH, YK, and YK 9 SH F 1 plants with three replicates of five pooled plants per replicate. In both cases, identical extraction, derivatization, and analysis methods were used. Approximately 800 mg of frozen ground tissue was incubated in methanol at 65°in 1.75-ml tubes and centrifuged at 13,300 r/min. The supernatant, containing polar molecules, was decanted into a new tube. Chloroform was added to the pellet and incubated at 37°C for 15 min to solubilize nonpolar metabolites. 
Samples were then dried at room temperature for about 6 hr (polar) and 2 hr (nonpolar) in a centrifuge at 1,725 r/min and 30 lM Hg vacuum. Samples were stored at Singh, Ulanov, Li, Jayaswal, & Wilkinson, 2011). Data were analyzed by peak identification via comparison with spectra from standards, and relative concentrations of metabolites were obtained by comparison with internal standard peak area (Singh et al., 2011). Pairwise comparisons within each experiment were performed by two-tailed t tests between SH and YK (experiments 1 and 2), and SH or YK and YK 9 SH F 1 (experiment 2). In the second experiment, multiple comparisons among all three genotypes were conducted by Tukey's studentized range test (Tukey, 1949). A t test of F 1 against the mid-parental average of SH and YK was done using the variance estimated from F 1 hybrids. Within each experiment, genotype was treated as the only main factor. A nested analysis of variance (ANOVA) was also conducted using data from the two experiments. In the nested analysis, experiment and genotype were the two main factors. The experiment by genotype interaction was not included in the model. All identified metabolites are presented in Table S3. | Pathway analysis Pathway analysis was conducted on the annotated DEGs identified in transcriptome profiling and metabolites that differed in the same direction between accessions in both metabolite experiments. The fold difference between SH and YK was used to indicate up-or downregulation in YK compared to SH. The list of DEGs or metabolites with the fold changes were imported to MapMan software (Thimm et al., 2004). Significant pathways in which genes or metabolites were divergent from a 50/50 up/downregulation were identified using an uncorrected Wilcoxon signed-rank test (Wilcoxon, 1945). | Novel genes and single nucleotide polymorphisms (SNPs) were identified by transcriptome sequencing Whole rosettes of 4-week-old YK and SH plants grown in a 12hr:12-hr light-dark cycle were harvested in the middle of the dark period. Libraries of cDNA isolated from these rosettes were sequenced, and reads were aligned to the E. salsugineum SH reference genome (Yang et al., 2013). More than 1 million cDNA sequence reads, 95% of which aligned to the reference genome, were used for a reference-directed assembly of the transcriptome (Table S4), identifying 17,888 expressed genes (Tables S1 and S5). Of these, 65 genes were novel and not predicted in the reference genome (Yang et al., 2013) nor detected in a previous transcriptome analysis (Champigny et al., 2013). Only 20 of these 65 genes have annotated orthologs in the related species A. thaliana, A. lyrata, and/ or Schrenkiella parvula (Table S1). Presence-absence variation (PAV), defined as zero reads aligned to one of the two parents, was observed for 18.5% of the detected expressed genes with roughly equal numbers of genes detected only in SH or YK (Table S5). The transcript assemblies were processed to detect SNPs between SH and YK; 42% of shared genes contained a total of 14,808 SNPs, of which 4,873 were deemed "high quality" (Table 1; Table S6). Of the low and high stringency SNPs detected in our experiment, 73% (10,883 positions) and 79% (3,861 positions), respectively, were also identified by Champigny et al. (2013). We also compared our SNPs to available Sanger sequencing of cDNA clones from the YK accession (Wong et al., 2005) and identified 468 putative SNPs with reference to the SH reference genome. 
Of these, 441 have corresponding sequence data in the YK transcriptome we assembled and 88% (388 SNPs) had the same sequence variation in our assembly and the YK Sanger sequencing data (Table S6). | Transcriptome profiling and qRT-PCR identify differentially expressed genes (DEGs) between SH and YK To assess gene expression differences between SH and YK, we determined the number of reads aligned to each gene. Less than 0.2% (thirty-one genes) of the expressed genes were identified as candidate DEGs between SH and YK ( Table 2). Sixteen of these thirty-one candidates do not have homologous genes in A. thaliana, A. lyrata, or S. parvula, and twenty have been annotated in the reference genome (Table 2). Of those with homologs in one of these species, none have been associated with previously reported trait differences in SH and YK. Gene ontology (GO) enrichment analysis was not appropriate, given the small number of DEGs. To confirm DEGs based on RNA-seq, we designed qRT-PCR primers that matched unique positions in the reference genome based on BLAST analysis (Altschul et al., 1990). Unique primers could not be designed for five genes (Thhalv10022994 m.g, Thhalv10022932 m.g, Thhalv10014933 m.g, Thhalv10019398 m.g, and Thhalv10029246 m.g) due to paralogs with high sequence similarity, and no acceptable primer pair was identified for XLOC_004723. In total, qRT-PCR data confirmed our RNA-seq data for 23 of 25 genes. For the five DEGs that had no close paralogs and at least four reads in both SH and YK, expression differences based on RNA-seq were confirmed by qRT-PCR in four of five genes (Table 2). Among the 20 genes that had fewer than four reads in either SH or YK, expression differences by qRT-PCR was consistent with RNA-seq data in 19 (Table 2). In addition, when the low accession had fewer than four reads, there was no amplification in 16 of 20 cases (Table 2). | Metabolite profiling reveals higher accumulation of fatty acids and amino acids in YK and enhanced soluble carbohydrate accumulation in SH To identify potential metabolites and metabolic pathways that contribute to phenotypic and physiological differences between SH and YK, metabolite profiling was conducted in two independent experiments. In one experiment, metabolite concentrations in F 1 plants of a YK 9 SH cross were also determined. Concentrations of free fatty acids and long-chain fatty acid derivatives were higher in YK than SH (Table 3; Table S3). The concentration of ferulic acid was also greater in YK, indicating a potential for greater suberin and/or cutin accumulation in the YK accession. Based on maltose and glucose relative concentrations, the estimated starch concentration in SH was 1.7 times that of YK over the two experiments. Furthermore, the products of starch degradation were more abundant in SH than YK in both profiling experiments (Table 3). Maltose and glucose, primary products of starch degradation, were elevated in SH along with fructose, glycerol-3-phosphate, raffinose, and an unresolved disaccharide. This suggested more active 6-carbon metabolite catabolism via glycolysis in SH compared to YK during the night (Table 3; Table S3). SH accumulated higher concentrations of the disaccharides isomaltose and gentiobiose, whereas YK accumulated a very small amount of melibiose in one of the two screens. Tricarboxylic acid (TCA) cycle intermediates, on the other hand, were not consistently different between the accessions. 
Citric acid and fumaric acid were accumulated at higher levels in YK, and alpha-ketoglutaric acid was greater in SH (Table 3; Table S3). Ascorbic acid was higher in SH, but the ascorbic acid degradation product tartaric acid (2,3 dihydroxybutanedioic acid) was higher in YK (Table 3 and Table S3). All amino acids that differed between SH and YK were higher in YK (Table 3; Table S3). These included alanine, glycine, serine, threonine, and valine. Of these, only threonine was differentially accumulated in both profiling experiments. The amino acids alanine, glycine, and serine were only detected in one of the two experi- predicted mid-parent value. This included 65% of the fatty acids and 49% of carbohydrates detected (Table S3). We performed a twoway contingency test to determine if an observed difference in the accumulation of a metabolite was predictive of heterosis for that metabolite (Table S7). We found that metabolites with accumulation differences between the parents were neither more nor less likely to exhibit accumulation differences between the F 1 and the mid-parental values (Table S7). Hybridization can result in transgressive heterosis in which phenotypic values for the hybrids fall outside of the range of parental values. Of the 144 metabolites measured, transgressive heterosis was observed for 28. Of these, four metabolites were not observed in one of the two parents and 24 were detected in both parents and the hybrids (Figure 1). Of the 24 detected in all three genotypes, the transgressive heterosis was more likely to be negative (Figure 1; pvalue ≤ .0001 based on a Binomial exact test) and more frequently affected metabolites that did not differ in concentration between the parents (Table S7; p-value ≤ 0.05 based on v 2 test). Thus, heterosis for the metabolome was manifested by a decrease in metabolite pool sizes in hybrids and was not preferentially associated with metabolites that contributed to the variation between the two parents. This is consistent with our observation of decreased availability of primary metabolites in the faster growing YK (Table 3; Table S3). We propose that a metabolic consequence of enhanced growth is a reduction in pool sizes of primary metabolites and greater resource utilization for anabolic metabolism. | DISCUSSION In this study, we profiled both transcript and metabolite accumulation to identify genetic and biochemical variation during the dark period in two E. salsugineum accessions: SH and YK. We annotated novel genes present in the reference genome and identified DEGs between SH and YK in the middle of the dark period. We found that YK accumulates more fatty acids than SH, while SH accumulates sugars at higher concentrations. Although the transcriptomic and metabolic profiling results do not offer links to each other, they do offer insight into genetic and physiological differences between these accessions. Furthermore, we identified additional SNPs and provide validation of previously described SNPs, varying between these accessions that can be utilized for future research. | Validity of identified SNPs Based on the predicted transcriptome size (Yang et al., 2013), the SNP density from our analysis is 1 SNP per 10 kb of transcribed sequence; 14,887 SNPs identified from our transcriptome sequencing data were also present in Champigny et al. (2013). However, our SNP cluster filter, which removes neighboring SNPs to account for misaligned reads at insertion-deletion polymorphisms, removed 3,604 overlapping SNPs. 
The removal of this set of SNPs, likely enriched for false positives, contributed to the lower SNP number detected in our study. We also acknowledge that this contributed to F I G U R E 1 Heterosis for metabolite concentrations in F 1 hybrids. Metabolites in F 1 hybrids that were higher than the high parent or lower than the low parent at p-value ≤ .05 based on two-tailed t test are shown. The ratio is calculated as F 1 / high parent when F 1 had a higher concentration than the high parent and low parent/F 1 when F 1 had a lower concentration than the low parent our false negative rate. Of the 388 SNPs that matched between our analyses of pyrosequencing data and the Sanger data, 99 were removed by our procedure. We provide a compact file containing the subset of "high-quality" (see Materials and Methods) SNPs in Table S6. These have a SNP density of one SNP per 25 kb of transcript. This set of validated SNPs can be used for QTL mapping and fine-mapping studies (Matsuda et al., 2015;Trick et al., 2012;Wu et al., 2016). | Identification of genes expressed in the dark period in Eutrema salsugineum We provided expression data supporting 63% and 67% of the predicted genes in the two published reference genomes (Wu et al., 2012;Yang et al., 2013). This is likely an underestimate of the expressed genes in this species because we sampled only rosette tissue and only at night (Schaffer et al., 2001). We identified 66% of the genes identified in another transcriptome analysis of E. salsugineum (Champigny et al., 2013). Despite our study relying on lower coverage (49 vs. 89), and quantifying expression of genes only expressed in leaves at night, we annotated 65 genes not predicted in the reference genome or identified in the previous transcriptome characterization (Tables S1 and S5). Novel genes identified in this transcriptome study but not that of Champigny et al. (2013) are likely either only expressed at night or otherwise not expressed under the conditions of the previous study, which included a 21-hr photoperiod and low dark period temperatures. More than 97% of the genes identified in our study are homologous to genes in A. thaliana (Table S1) (Table S1). Therefore, we may have identified genes that are unique to this extremophile species (Wu et al., 2012). More than 80% of the expressed genes identified in this study were detected in both SH and YK (Table S5). This indicates a conserved transcriptome between these two accessions and is consistent with previous E. salsugineum transcriptome comparisons (Champigny et al., 2013;Lee et al., 2013). PAVs, as scored by read count in the transcript profiling experiment, accounted for 19 and 12% of all genes in this experiment and Champigny et al. (2013), respectively. Only 13% of these were consistently detected as PAV over the two datasets. These genes are likely "true" PAV genes (Table S1). Other genes detected in only one of the replicates were likely due to low mRNA abundance. PAV structural variation has been observed in A. thaliana (Bush et al., 2014), maize (Springer et al., 2009;Swanson-Wagner et al., 2010), and soybean (Haun et al., 2011). PAV genes are not typically essential (Bush et al., 2014) and may have minor effects on plant fitness (Swanson-Wagner et al., 2010). However, genes present in only one accession could contribute to the adaptation to specific selective constraints (Bush et al., 2014) and variation in quantitative traits (Swanson-Wagner et al., 2010). 
The observed phenotypic variation in growth rate and metabolism could be due to PAV, although no identified PAVs have been linked to trait differences in Eutrema. Further study utilizing molecular genetics to address the causes and consequences of natural variation in this species is needed to link these data types. | DEGs and constitutive response to abiotic and biotic stresses The low number of candidate DEGs between the two accessions of E. salsugineum used in this study was similar to other studies; 55% of these had no reads in one of the two accessions, which could be due to a lack of sufficient depth to detect very low expression. Champigny et al. (2013) identified 381 DEGs that were present in our transcriptome but not identified as DEGs in our data. RNA-seq experiments are often underpowered, and the lack of overlap with the previous study may be primarily an issue of low read depth and replicate numbers. Consistent with this expectation, 58% of the 381 DEGs identified previously have low expression (<4 reads; Table S1). In addition, 41% of the DEGs from Champigny et al. (2013) exhibit lower expression during the dark period in A. thaliana (Mockler et al., 2007), which may explain some of the lack of overlap in DEGs in the two studies. We identified 17 DEGs that were not identified by Champigny et al. (2013), possibly because they are only differentially expressed during the dark period or under our experimental condi- tions. It appears that the relative abundance of most transcripts is similar in SH and YK, as only 0.2 and 1.9% of all genes were DEGs in our study and Champigny et al. (2013), respectively. Fourteen of the 31 DEGs identified in our study were also identified by Champigny et al. (2013), and the expression patterns were the same in both studies. This suggests that differential expression for these genes between SH and YK is consistent over light and dark periods and the two growth conditions. Without exposure to abiotic or biotic stresses, YK expressed several stress-responsive genes (Table 2), as also noted by Champigny et al. (2013). Two plant defensin genes within the same family were highly expressed in YK (Table 2). In A. thaliana, the plant defensin type 1 family (PDF1) is comprised of seven genes (Shahzad et al., 2013) with highly conserved sequences and identical mature peptides (Thomma, Cammue, & Thevissen, 2002). AtPDF1 genes are induced by pathogens, nonhost pathogens, methyl jasmonate (MeJA), and ethylene De Coninck et al., 2010;Manners et al., 1998;Penninckx et al., 1996;Zimmerli, Stein, Lipka, Schulze-Lefert, & Somerville, 2004). Also, expression of AtPDF1 genes in yeast results in zinc tolerance (Shahzad et al., 2013 (Table S1). The transcript abundance of a gene encoding E. salsugineum peptide methionine sulfoxide reductase 3 (PMSR3) was higher in YK than SH (Table 2). There are five orthologous PMSR genes in A. thaliana, PMSR1 to 5 (Rouhier, Vieira Dos Santos, Tarrago, & Rey, 2006), that are also found in E. salsugineum. The expression of PMSR3 is induced by arsenate (Paulose, Kandasamy, & Dhankher, 2010). No function in tolerance or resistance has been established for this paralog in A. thaliana. However, knockout of either PMSR2 (Bechtold, Murphy, & Mullineaux, 2004) or PMSR4 (Romero, Berlett, Jensen, Pell, & Tien, 2004) results in decreased oxidative stress tolerance and overexpression of either gene increases stress tolerance in A. thaliana. Expression of PMSR4 (but not PMSR1, PMSR2, or PMSR3) was induced in response to UV and AgNO 3 in E. 
salsuginea SH (Mucha, Walther, Muller, Hincha, & Glaswischnig, 2015). It is plausible that the overexpression of PMSR3 by YK could provide greater oxidative stress tolerance in this accession. | Eutrema salsugineum accessions SH and YK differ in carbon metabolism In two experiments, 125 and 144 metabolites were detected. Although the total number and specific metabolites varied somewhat across the two experiments, differences were identified between SH and YK for the 85 metabolites detected in both experiments (Table 3; Table S3). Differences across the experiments may be due to slight differences in the growth chamber environments even with identical settings, as metabolite concentrations are strongly affected by environmental conditions and environment by genotype interactions (Soltis & Kliebenstein, 2015). Also, different soilless media mixes were used in the two experiments: A more bark-based media was used in the transcriptome and first metabolite experiments, whereas the soilless media used in the second experiment did not contain bark. However, there were consistent growth differences between the two accessions, and we focused our interpretation of the data primarily on those metabolites that were consistent across the two experiments. The derivatization method we utilized has been widely used to detect sugars (Gullberg et al., 2004), but is less accurate for identifying and quantifying amino acids (Kaspar, Dettmer, Gronwald, & Oefner, 2009). As a result, despite the fact that E. salsugineum accumulates higher concentrations of some amino acids than Arabidopsis (Eshel et al., 2017), many amino acids were not detected in our analyses. Higher concentrations of fatty acids and fatty acid derivatives were measured in YK, including several previously identified as structural components of membrane lipids, cuticle components, and wall-resident suberin (Table 3; Table S3). Fatty acids contain more energy than carbohydrates when used as storage compounds and can act as an efficient storage form of reduced carbon (Taiz & Zeiger, 2010). High production of fatty acids is typical of rapidly growing tissues (Ohlrogge & Jaworski, 1997;Qin et al., 2007). Verylong-chain fatty acids (VLCFAs; C20:0 to C30:0) play an important role in cell elongation and expansion (Qin et al., 2007). Several VLCFAs, including docosanoic, hexacosanoic, pentacosanoic, tetracosanoic, and tricosanoic acids, were more abundant in YK than SH (Table 3), consistent with our measurements of higher growth rates of YK as compared to SH in our growth conditions (manuscript in preparation). The VLCFA tetracosanoic acid, which plays an important role in root cell growth and expansion (Qin et al., 2007), was accumulated at a higher concentration in YK (Table 3; Table S3). In addition to carbon storage, lipids are important components of membranes and the leaf cuticle (Lynch & Dunn, 2004;Tresch, Heilmann, Christiansen, Looser, & Grossmann, 2012;Z€ auner, Ternes, & Warnecke, 2010). Our results are in agreement with Xu et al. (2014), who measured a greater accumulation of C22 and C24 fatty acids in the epicuticular wax of YK over SH. Overall, SH tissues had higher concentrations of sugars than YK. These measurements of nighttime sugar concentrations were similar to previous results obtained for fructose, glucose, and raffinose in leaves of E. salsugineum harvested during the day (Eshel et al., 2017;Lee et al., 2012). 
The products of starch breakdown, including maltose and glucose, were more abundant in SH (Table 3; Table S3), suggesting a higher rate of starch metabolism in the lower-biomass SH accession. This was consistent with the strong negative correlation between starch content and biomass observed in A. thaliana accessions (Sulpice et al., 2009). A study of the correlation between specific metabolites and biomass accumulation in A. thaliana revealed twenty-three metabolites that were correlated with biomass (Meyer et al., 2007). We detected fourteen of these twenty-three metabolites (Table 3; Table S3). Of these, five differed between SH and YK. The concentrations of ascorbic acid, glycerol-3-phosphate, and raffinose were negatively correlated, and putrescine was positively correlated with biomass in Arabidopsis (Meyer et al., 2007). The levels of these metabolites also corresponded to the differences in biomass between SH and YK, indicating that the relationships between metabolites and biomass found in A. thaliana were consistent in E. salsugineum (Table 3; Table S3). This suggests that, although stress tolerance is vastly different between these two species (Amtmann, 2009;Griffith et al., 2007;Lee et al., 2012), the metabolic markers for biomass accumulation may be similar. | Increased utilization rate as a hypothesis for metabolome heterosis More than 58% of the metabolites in F 1 plants were different from the predicted mid-parent concentration (Table S3), indicating a nonadditive effect of hybridity on the majority of the metabolome. The lower concentration of fructose and glucose in F 1 hybrids suggests high rates of starch and sugar depletion to support rapid growth (Lisec et al., 2011). Transgressive heterosis was more commonly observed for metabolites that were not different between the two parents (Table S7). This suggests that allelic variation affecting differential metabolite accumulation in the parents is not responsible for the observed heterosis in the F 1 metabolome. Although it is surprising that differences between the parents were not predictive of a YIN ET AL. | 11 metabolite association with heterosis, it may be that the metabolomic consequences of heterosis derived from secondary effects of an increased growth rate in F 1 hybrids, rather than a causative relationship between growth rate and specific metabolites or metabolite diversity. Differences in biomass polymers and metabolites involved in anabolic growth exhibited reduced pool sizes in the more rapidly growing YK as compared to SH (Table 3; Table S3), as well as in the very rapidly growing F 1 plants as compared to the parents (Table S3). Consistent with the hypothesis that utilization rate drives the heterotic effects on metabolite pool sizes, the transgressive effect overwhelmingly resulted in lower concentrations of metabolites in the hybrids (Figure 1; Tables S3 and S7). This hypothesis regarding the cause of metabolic heterosis may be a general phenomenon in plants. Indeed, the same associations have been observed in maize, in which largely negative overdominance for metabolites was found in the heterotic B73 9 Mo17 hybrids (Lisec et al., 2011). | CONCLUSIONS Our study contributes to the annotation of the E. salsugineum genome and provides evidence of transcriptional and metabolic differences between the SH and YK accessions. 
Very few differences in gene expression were detected in the middle of the dark period between these two accessions, but YK has constitutively higher expression of several plant systematic defense genes. The high-quality SNPs identified in this study can be used with previously identified SNPs to map traits that differ in these accessions, such as tolerance to various stresses. There is evidence for contrasting carbon metabolism in these two accessions, which correlates with observed growth differences. Furthermore, metabolite profiling of the accessions and F 1 hybrids supports the notion that the concentrations of key metabolites are correlated with growth rate, including the increased growth rate caused by heterosis. Our hypothesis was that combined transcriptome and metabolome profiling of two contrasting E. salsugineum accessions might elucidate the pathway(s) related to the phenotypic differences between these contrasting accessions. The difference in carbon metabolism identified via metabolome profiling provides insights for growth differences between SH and YK. However, none of the 19 DEGs that have been annotated in the reference genome are related to the observed metabolic differences. There are two plausible explanations: (i) the additional 11 DEGs that are currently unannotated in the reference genome could provide additional evidence for the link between metabolome and transcriptome, or (ii) by increasing the number of replicates in transcriptome study, more DEGs will be identified to support further pathway identification. ACKNOWLEDGMENTS Partial support to JY was provided by the U.S. Department of Agriculture, National Institute of Food and Agriculture-Agriculture and COMPETING I NTERESTS The authors declare that they have no competing interests. AVAILABILITY OF SUPPORTING DATA All the raw data supporting the results of this article have been deposited at Edgar, Domrachev, & Lash (2002) and are accessible through GEO Series accession GSE GSE67745 at (http://www.ncbi. nlm.nih.gov/geo/query/acc.cgi?acc=GSE67745). AUTHORS' CONTRIBUTI ONS JY and MJG performed the experiments; JY, BPD, and MVM designed the experiments; JY and BPD analyzed the data; JY, BPD, and MVM wrote the manuscript.
8,008
2017-07-15T00:00:00.000
[ "Biology", "Environmental Science" ]
Statistical discrimination of global post-seismic ionosphere effects under geomagnetic quiet and storm conditions ABSTRACT The retrospective statistical analysis of total electron content (TEC) is carried out using global ionospheric maps (GIM) for 1999–2015. TEC anomalies are analysed for 2670 earthquakes (EQ) from M6.0 to M10.0 classified into 2205 ‘non-storm’ EQs and 465 ‘storm’ EQs during geomagnetic storms. The geomagnetic storms are specified by relevant thresholds of geomagnetic indices AE, aa, ap, ap(τ) and Dst. Using sliding-window statistical analysis, moving daily–hourly TEC median μ for 15 preceding days with estimated variance bounds is obtained for each grid pixel of GIM-TEC maps. The derived ionosphere variability index, Vσ, is expressed in terms of ΔTEC deviation from the median normalized by the standard deviation σ. Vσ index segmentation is introduced specifying TEC anomaly if an instant TEC is outside the bound of μ ± 1σ. Efficiency of EQ impact on the ionosphere (Eσ) is growing with EQ magnitude and depth representing relative density of TEC anomalies within area of 1000 km radius around EQ hypocentre. Positive TEC ‘storm’ anomalies are twice as much as those of non-storm values. This observation supports dominant post-EQ TEC enhancement with Eσ peak decreasing during 12 h for daytime but growing by nighttime during 6 h after EQ followed by gradual recovery afterwards. Introduction The effects of earthquakes in the ionosphere are subject of intense studies during recent decades (Davies & Baker 1965;Koshevaya et al. 1997;Liu et al. 2006a, Liu et al. 2006a, 2006bHarrison et al. 2010;Hayakawa & Hobara 2010;Lin 2010;Arikan et al. 2012;Lin 2012;Astafyeva et al. 2013;Komjathy et al. 2013;Pohunkov et al. 2013;Devi et al. 2014;Perevalova et al. 2014). The diversity of pre-earthquake phenomena, such as local magnetic field variations, electromagnetic emissions at the different frequency ranges, excess radon emanation from the ground, changes in water chemistry, water condensation in the atmosphere leading to haze, fog or clouds, and atmospheric gravity waves rising up to the ionosphere, induces changes in the ionospheric total electron content (TEC) and the F2 layer peak electron density (Pulinets et al. 2003;Chen et al. 2004;Pulinets & Boyarchuk 2004;Rishbeth 2006;Liu et al. 2006aLiu et al. , 2006bDepueva et al. 2007;Varotsos et al., 2008Varotsos et al., , 2011Karatay et al. 2010;Le et al. 2011;Namgaladze et al. 2012;Freund 2013;Devi et al. 2014;Akhoondzadeh 2015;Heki & Enomoto 2015). Changes in magnetic field at the time of the earthquakes have been observed and reported in various publications such as Johnston et al. (1981), Yen et al. (2004) and Varotsos et al. (2009). Modification of the electric field and currents due to electric processes in the lithosphere and the lower atmosphere (Varotsos & 1999, to December, 2015 to the availability of GIM-TEC maps and results of analysis are provided in Section 4. The goal of this study is to obtain new evidence on seismic-ionospheric associations which are summarized in the Conclusions in Section 5. The statistical analysis of TEC data In this study, statistical data analysis is performed using global ionospheric maps (GIM) of the TEC provided by Jet Propulsion Laboratory (JPL). 
TEC is defined as the line integral of plasma density in the Earth's atmosphere and it provides an estimate of the total number of free electrons inside a cylinder with a 1 m² cross-section in the column from the bottom of the ionosphere (65 km) to the GPS orbit of 20,200 km. The TEC is an important observable in the analysis of temporal variability of the ionosphere and the plasmasphere both under quiet and under storm conditions. The GIM-TEC maps have been generated in a continuous operational way by several Data Analysis Centers since 1998, covering a period of more than an entire solar cycle (Hernandez-Pajares et al. 2009). The vertical TEC is modelled by JPL in a solar-geomagnetic reference frame using bi-cubic splines on a spherical grid; a Kalman filter is used to solve simultaneously for instrumental biases and vertical TEC on the grid as stochastic parameters (Manucci et al. 1998). GIM-TEC were initially provided with 2 h time resolution and are linearly interpolated in time to 1 h resolution; the hourly files have been provided by JPL since December 2008. The JPL maps are generated on denser map grids (−90:2:90 in latitude, −180:2:180 in longitude), with time specified for 0.5:1.0:23.5 h UT, so these maps are preprocessed by linear interpolation into standard IONEX format for 0:1:23 h UT. The IONEX global map consists of 5183 grid values binned from 87.5°S to 87.5°N in steps of 2.5° in latitude and from 180°W to 180°E in steps of 5° in longitude. A similar map-grid structure is applied when the source GIM-TEC maps are converted to geomagnetic coordinates, binned from −87.5°N to 87.5°N in steps of 2.5° in geomagnetic latitude and from 0°E to 360°E in steps of 5° in geomagnetic longitude, using the International Geomagnetic Reference Field (IGRF) model. According to Liu et al. (2006a), the recurrence time of M ≥ 5.0 earthquakes is 14.2 days. Therefore, in order to determine the reference background TEC distribution, we compute the sliding median of every successive 15 days of TEC at each grid point of the map. In the present study, we use a TEC sliding median defined by a 15-day moving window, and the median value is assigned to the final day of the window, i.e. to the 15th day of the window. We use this type of 'forward' median approach because it has potential for the development of a forecasting model similar to those in Gulyaeva et al. (2013) and Muchtarov et al. (2013):

μ_{m, d_s − d_i}(l) = median( x_{d_i}(l), …, x_{d_s}(l) )    (1)

In the above equation, x denotes the GIM-TEC value at grid point l; d_i and d_s represent the first day and the final day of the sliding window, respectively. The subscript m indicates the map under investigation. Statistical study of an ionospheric parameter includes determination of the median and the dispersion, i.e. the variability of the parameter around its median. The standard deviation σ represents a measure of the dispersion of the distribution, which can be computed as

σ = sqrt( (1/N_T) · Σ_{d = d_i}^{d_s} [ x_d(l) − μ_{m, d_s − d_i}(l) ]² )    (2)

where N_T denotes the total number of days in the sliding window, which is set to 15 in this case. An interval within one standard deviation around the median accounts for approximately 68% of the dataset, while two and three standard deviations account for 95% and 99.7%, respectively.
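As a concrete illustration of the sliding statistics in Equations (1) and (2), the following minimal Python sketch computes the 15-day 'forward' median and standard deviation for a single grid cell, together with the normalized deviation that is developed into the variability index below. The synthetic TEC series is an assumption for illustration only, not the GIM-TEC data or the authors' code.

```python
import numpy as np

def sliding_median_sigma(tec, window=15):
    """Backward-looking ('forward median') statistics for one grid cell.

    tec : 1-D array of daily TEC values at a fixed UT hour and grid point.
    Returns (mu, sigma): median and dispersion of the `window` days ending
    on each day (NaN until a full window is available)."""
    tec = np.asarray(tec, dtype=float)
    mu = np.full_like(tec, np.nan)
    sigma = np.full_like(tec, np.nan)
    for d in range(window - 1, tec.size):
        chunk = tec[d - window + 1 : d + 1]
        mu[d] = np.median(chunk)
        sigma[d] = np.sqrt(np.mean((chunk - mu[d]) ** 2))
    return mu, sigma

# Example with synthetic daily TEC values (TECU) for one grid cell:
rng = np.random.default_rng(0)
tec = 20 + 3 * rng.standard_normal(60)
mu, sigma = sliding_median_sigma(tec)
delta_sigma = (tec - mu) / sigma   # normalized deviation, cf. Eq. (3) below
print(np.nanmax(np.abs(delta_sigma)))
```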
The measure of TEC variability is further investigated as the TEC deviation (ΔTEC) from the median μ, normalized by the standard deviation σ for the N_T days prior to and including day d_s:

Δσ = ΔTEC / σ = ( TEC(l, d_s) − μ_{m, d_s − d_i}(l) ) / σ    (3)

The algorithm is completed by introducing a segmentation of Δσ, with thresholds shown in Table 1, resulting in the ionosphere variability index Vσ with magnitudes from Vσn = −4 (extreme negative TEC anomaly) in steps of ΔVσ = 1 to Vσp = +4 (extreme positive TEC anomaly). The Vσ index represents the integer magnitude of TEC variability with respect to the quiet reference median in terms of σ grades. Here, the ionosphere quiet state corresponds to |ΔTEC| < 1σ. If the value of instant TEC is outside the pre-defined bounds of μ ± 1σ, a TEC anomaly is detected (Akhoondzadeh 2015). The Vσ grade segmentation −4:1:4 is similar to the ionospheric weather W-index (Gulyaeva et al. 2013), yet it differs from the W-index by its dynamic thresholds expressed through the variable standard deviation σ. An advantage of the Vσ index is that it is more physically justified, showing ΔTEC in terms of the relevant standard deviation. This scenario can easily be implemented with any physical parameter, such as the ionospheric critical frequency, foF2, the peak electron density, NmF2, or the peak height, hmF2, using the relevant reference value (mean or median) and the standard deviation for the selected parameter. The efficiency (Eσ) of the ionosphere response to the impact of earthquakes is represented by the relative density of the extreme negative indices Vσn ≤ −2 (m_Vσn) on the specified fragments of a map, corresponding to a decreased density of electrons (ΔTEC ≤ −1σ) as compared with the quiet reference state, or, similarly, of the extreme positive indices Vσp ≥ 2 (m_Vσp) on the selected fragments, corresponding to an increased density of electrons (ΔTEC ≥ +1σ), relative to the total number of cells m_tot in the fragment(s) around the EQ hypocentre(s) on the map or for a series of EQs on the relevant maps:

Eσ = 100% · m_Vσn / m_tot   or   Eσ = 100% · m_Vσp / m_tot    (4)

In this study, the available GIM-TEC maps from 1999 to 2015 have been processed to produce output global maps of the 15-day sliding median μ, standard deviation σ, magnitudes Δσ and the Vσ index in IONEX format with a spatial resolution of 2.5° × 5° in latitude and longitude, respectively. The histogram of the annual frequency of occurrence of the specified magnitudes of Δσ, in per cent, relative to the total number of about 45 × 10⁶ grid elements per year (5183 grids × 24 h × DOY, with days-of-year, DOY, equal to 365 or 366), is plotted in Figure 1 in increments of 0.5σ. We note the asymmetry of the TEC enhancement (Δσ > 0) and depletion (Δσ < 0) occurrence. The sign of Δσ depicts ΔTEC for an instant TEC being either greater than or less than the quiet reference median. An appreciable number of 'quiet' TEC values with Δσ = 0 is also seen in Figure 1, when TEC is equal to the median value (Equation 3). There are negligible year-to-year (solar cycle) changes in Δσ. We also note a certain percentage of Δσ occurrences exceeding ±1σ, which are denoted as TEC 'anomalies'. The TEC data are extracted from GIM-TEC for the regions surrounding an earthquake hypocentre at geographic latitude φe and geographic longitude λe, within a radius of 1000 km, determined by φ = φe ± 10° and λ = λe ± 7.5°. The analysis of TEC for a rectangular region defined by (φ_i, λ_i) to (φ_s, λ_s) is provided by Gulyaeva et al. (2013, Appendix A) for increments in φ and λ given as Δφ and Δλ, respectively. However, the space around the EQ hypocentre within a radius of 1000 km is not a simple rectangular region.
It is rather represented by fragments of 24 cells comprised of a square of 4 £ 4 latitude/longitude grids and a rectangle of 8 £ 2 latitude/longitude grids surrounding (ue, fe) as illustrated in Figure 2. Global instantaneous Vs map in geomagnetic coordinates frame for 1 January 2012, at 06:00 UT is presented in Figure 2 as an example of global variability index distribution. The time for the map, 06:00 UT, is an integer hour just after the Japan's Izu Islands earthquake on 1 January 2012, at 05:27:54 UT (14:40:27 LT) with Mw D 7.0, and at a depth of 348 km (Lin 2012). The hypocentre of the earthquake was at [31.4 N, 138.2 E] in geographic coordinates, and [22.8 N, 208.1 E] in geomagnetic coordinates which is close to the crest of the equatorial ionization anomaly (EIA). The area selected for the analysis is designated by white points on IONEX grids surrounding the earthquake hypocentre (white star) in Figure 2. This earthquake occurred under quiet geomagnetic conditions (see the next Section for the classification criteria) nevertheless there is appreciable negative Vs anomaly southwards of the hypocentre which is detected earlier by Lin (2012) with the nonlinear principal component analysis (NLPCA) while the principal component analysis (PCA) was unable to detect the anomaly. The evaluation of the global distribution of earthquakes of M6.0C under quiet conditions and the geomagnetic storms in the selected fragments of the globe surrounding the EQ hypocentre is provided in the next section. Spatial distribution of earthquakes under quiet and storm conditions The aim of the present study is to reveal a novel empirical evidence of the earthquake related TEC anomalies under quiet space weather conditions and geomagnetic storms. We use earthquake data from the global Catalogue of the Advanced National Seismic System (ANSS) provided by the Northern California Earthquake Data Center (NCEDC 2014). The composite Catalogue of earthquakes created by ANSS is a world-wide earthquake catalogue which is generated by merging the master earthquake catalogues from contributing ANSS member institutions and then, removing duplicate events, or non-unique solutions for the same event. We use the monthly and annual data for earthquakes of magnitude M6.0 to M10.0 from the NCEDS Catalogue for a period from January 1999, to December 2015, according to the availability of the hourly GIM-TEC maps during the solar cycles 23 and 24. Comparison of earthquakes with the equatorial ring current disturbances has shown that the earthquakes occurred during the Disturbance Storm Time (Dst) storms comprise 13% of the total number of more than 79,000 earthquakes M5.0C for 1964-2013 (Gulyaeva 2014). While the severity of a geomagnetic storm is defined by the Dst index which serves as a standard measure of the energy transfer from the solar wind to the ring current within the magnetosphere (Sugiura 1963), there are also other geomagnetic indices specifying impact on the ionosphere under quiet or disturbed conditions (Deminov et al. 2013). In this study, the EQ series and relevant Vs quantities on a map are referred to the 'storm' conditions (Gonzalez et al. 
1994) if at least one of the following criteria is satisfied assuming the 'non-storm' conditions otherwise: AE max 500 nT; aa max > 45 nT; ap max > 30 nT; apðtÞ > 18 nT; Dst min ¡ 30 nT: The above conditions should be fulfilled both for the nearest UT hour (or 3 h UT interval) following the earthquake and the nearest pre-earthquake hour (or 3 h UT interval) to capture storm or sub-storm impact at the time of EQ event. Here AE max is the auroral electrojet AE value for two near-EQ hours, aa max is the mid-latitude aa index value for a given and preceding 3 h intervals; ap max is the maximum ap value for a given and preceding 3-h intervals. The ap(t) is the mean weighted value of ap index (Wrenn 1987): with the characteristic time T D 11 h or t D exp(¡3/T) % 0.76; ap 0 , ap ¡1 ,… are ap values at a given time of EQ and preceding 3 h intervals. The Dst min is the minimum disturbance storm time value for 2 h near EQ time. All the above indices are expressed in nanoTeslas (nT). The periods of storms and sub-storms are included by Equation (5) which may occur at all latitudes from the pole (AE index) through the mid-latitudes (aa, ap and ap(t) indices) to equator (Dst index). The global effect is confirmed by the correlation found between the variation in two independent processes occurring at widely separated regions in space, namely, the ring current intensity and the behaviour of ionospheric densities at high latitudes (Yadav & Pallamraju 2015). From the total number of 2670 earthquakes of M6.0C during 1999-2015, we have found the majority of events happened under quiet geomagnetic conditions (2205 'non-storm' earthquakes) and 465 'storm' earthquakes (17.4% of the total events list). We note that the per cent of M6.0C storm-time earthquakes for a period of observation during 17 recent years exceeds the storm-time percentage (13%) of M5.0C earthquakes for 50 years of observation (Gulyaeva 2014) due to the extended criteria for the 'storm' classification (Equation 5) than the former specification of the geomagnetic state according only to the ring current Dst storm occurrence. The global spatial distribution of earthquakes is irregular tending to denser earthquake occurrence in the Pacific region (Levin & Sasorova 2012;Gulyaeva 2014). In the present study, we have estimated the spatial percentage distribution of the 'non-storm' earthquakes M6.0C under quiet magnetosphere (Figure 3(a)) and the 'storm' earthquakes ( Figure 3(b)) for 1999-2015. The 'nonstorm' earthquakes distribution (Figure 3(a)) remind that of M5.0C earthquake zones of enhanced seismic activity (Gulyaeva 2014) which are observed along the tectonic plates boundaries at longitudes from 90 to 190 E and magnetic latitudes from 40 S to 40 N, with dominant earthquake occurrence in the sub-equatorial region of the South magnetic hemisphere. The next appreciable zone of enhanced tectonic activity is revealed around the West coast of South America which also corresponds to a tectonic plate boundary. We note that most of the earthquakes are located within the limits of the closed magnetic field lines, which corresponds to L D 4.17 at the magnetic equator for GPS orbit (Lee et al. 2013) so the TEC variability within the low latitude and middle latitude regions represents the area for the co-seismic and post-seismic ionospheric and plasmaspheric effects. 
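The storm/non-storm split can be made concrete with a short classifier. In the sketch below, the thresholds follow the criteria quoted above, reading them as AE_max ≥ 500 nT, aa_max > 45 nT, ap_max > 30 nT, ap(τ) > 18 nT, or Dst_min ≤ −30 nT; the exponentially weighted ap(τ) of Wrenn (1987) is reconstructed as (1 − τ)(ap₀ + τ·ap₋₁ + τ²·ap₋₂ + …) with τ = exp(−3/T) ≈ 0.76 for T = 11 h, and that form, the "≥" reading of the AE criterion, and the example index values are assumptions to be checked against the original references.

```python
import math

def ap_tau(ap_history, T=11.0):
    """Wrenn (1987)-style weighted ap: ap(tau) = (1 - tau) * sum_k tau**k * ap_{-k}.

    ap_history[0] is the 3-h ap value at the EQ time, ap_history[1] the
    preceding 3-h interval, and so on (most recent first)."""
    tau = math.exp(-3.0 / T)   # ~0.76 for T = 11 h
    return (1.0 - tau) * sum(tau ** k * ap for k, ap in enumerate(ap_history))

def is_storm(ae_max, aa_max, ap_max, ap_history, dst_min):
    """True if at least one of the storm criteria is satisfied."""
    return (ae_max >= 500.0
            or aa_max > 45.0
            or ap_max > 30.0
            or ap_tau(ap_history) > 18.0
            or dst_min <= -30.0)

# Example with made-up index values around one earthquake:
print(is_storm(ae_max=320.0, aa_max=30.0, ap_max=22.0,
               ap_history=[27, 22, 18, 15, 12, 9, 7, 6], dst_min=-42.0))
```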
Results Temporal-latitudinal graphs of TEC (upper panels) and Vs (lower panels) during three days at the meridian of 85 E are intended to illustrate difference between 'storm' type and 'non-storm' states of the ionosphere parameters under consideration ( Figure 5). Figure 5 An erosion and dissipation of TEC EIA is observed during the geomagnetic super-storm on 18 March (Figure 5(a)) which is normally represented by a two-humps-like latitudinal shape with two peaks at the crests of EIA at about §15 in magnetic latitude with a minimum at magnetic equator which is observed in Figure 5 Though the Nepal EQ happened on the quiet day, the peak TEC at the South crest of EIA (the magnetic conjugate region for Nepal EQ hypocentre area) has been diminished, presumably, due to the EQ impact through the ionosphere conjugation. In particular, TEC at the South EIA peak is decreased from 102 TECU on 24 April to 62 TECU on 25 April and further decreased to 47 TECU on 26 April, i.e. day-to-day TEC depletion is observed after the EQ. More drastic differences between the 'storm' and 'non-storm' co-seismic ionosphere are observed with Vs maps in Figure 5(c,d). In particular, most of the Vs values on map are indicators of positive and negative TEC anomalies for We proceed to statistical evaluation of the Vs signatures under the 'storm' and 'non-storm' conditions in the region of interest. Table 2 presents efficiency Es, in per cent (Equation 4) of EQ impact on TEC anomalies at the nearest integer UT hour after the EQ in several ranges of EQ magnitudes from M6.0 to M10.0 in step of DM D 0.5 M units except for the greatest EQ magnitudes M 8.0 to M10.0. Overall Es value for each subset is also provided in the last row of Table 2. As can be seen in Table 2, the efficiency of EQ impact on TEC anomalies increases as EQ magnitude, M, gets larger for the both negative Vsn occurrence and positive Vsp occurrence around the EQ hypocentres. The total energy emitted by an earthquake (E, in Joules) (Gutenberg & Richter 1956) is in exponential relation with the magnitude (M) represented by the equation: log E D 1.5M C 4.8 which is applied in the present study for calculation of EQ emitted energy for individual EQ events. The mean energy and the standard deviation are provided for each subset in Tables 2 and 4. The increasing efficiency of the EQ impact on TEC anomalies in terms of M (Table 2) is coherent with the amount of energy allocated during an earthquake (Bath 1956;Levin & Sasorova 2012;Swedan 2015) which gets larger with increasing M as presented in Table 2. This result supports numerous studies on seismic-ionospheric associations because it presents straightforward evidence on dependence of co-seismic TEC variability on amount of EQ energy. The EQ allocated energy is the primary reason for Es dependence on M in our results because all EQs of any magnitude (M6.0C) in either subset are analysed with the same algorithm using the derived Vs index in the vicinity of hypocentre under specified level of geomagnetic activity. Also, it follows from Table 2 that the efficiency of positive TEC anomalies is greater than the negative ones which testifies on the dominant EQ-related plasma density enhancements as compared with its depletion. The stormtime efficiency is larger than the non-storm results which bring the evidence that the ionosphericgeomagnetic storms facilitate TEC enhancements or depletion induced by EQs. 
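For completeness, the segmentation of Δσ into the Vσ index and the efficiency Eσ over the near-hypocentre fragment can be sketched as follows. Because Table 1 is not reproduced here, the mapping of Δσ onto integer grades (truncation toward zero, clipped to ±4) and the flat 24-cell fragment layout are illustrative assumptions; the energy helper simply evaluates the Gutenberg–Richter relation log₁₀E = 1.5M + 4.8 quoted above.

```python
import numpy as np

def v_sigma(delta_sigma):
    """Map the normalized deviation onto an integer index in -4..+4
    (assumed segmentation: truncate toward zero, clip to +/-4)."""
    return np.clip(np.trunc(np.asarray(delta_sigma)), -4, 4).astype(int)

def efficiency(v_cells, sign=+1):
    """Relative density (%) of anomalous cells (Vsigma >= 2 or <= -2)
    within the fragment surrounding the hypocentre, cf. Eq. (4)."""
    v_cells = np.asarray(v_cells)
    hits = np.sum(v_cells >= 2) if sign > 0 else np.sum(v_cells <= -2)
    return 100.0 * hits / v_cells.size

def eq_energy(magnitude):
    """Gutenberg-Richter energy (J): log10 E = 1.5 M + 4.8."""
    return 10.0 ** (1.5 * magnitude + 4.8)

# Example: 24-cell fragment of made-up normalized deviations around one EQ.
rng = np.random.default_rng(1)
frag = rng.normal(0.8, 1.2, size=24)
print(efficiency(v_sigma(frag), sign=+1), eq_energy(7.0))
```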
We specify the Vs results for daytime earthquakes (solar zenith angle χ < 90°) and nighttime conditions (χ > 90°) during the 12 h after the EQ, for both the 'storm' and 'non-storm' classes. The time variation of the efficiency Es (Equation (4)) after the EQ is provided in Figure 6 for daytime, nighttime and the total diurnal variation. The symbol S+ in the plots stands for the positive 'storm' Vsp, Q+ for the quiet 'non-storm' Vsp, S− for the negative 'storm' Vsn, and Q− for the negative quiet Vsn. The points on the 'Total' subplot curves at 0 h are the values listed in the last row of Table 2. In general, all the statistical results for the quiet and storm conditions confirm the existence of seismic-ionospheric associations, since the efficiency of the EQ impact on TEC anomalies is non-zero in all cases. For some individual EQs, the TEC anomalies in the sense defined in the present study could be missed in the predefined area within a 1000 km radius of the hypocentre, but we should keep in mind that the most notable ionospheric variability anomalies are specific to high latitudes, while the EQ regions of occurrence belong to the middle and low latitudes. The most important outcome of the results in Figure 6 is that the efficiency of EQs on positive TEC anomalies under storm conditions is twice as large as under non-storm conditions.

Table 2. Efficiency of the ionosphere response, Es (%), for the TEC enhancement and depletion, Vs, in the different ranges of EQ magnitude (M), at the nearest integer hour (UT) after the EQ. The mean energy (J) and standard deviation (std) for the number of earthquakes m are given for each collection during 1999-2015.

The peak of Es for the storm Vsp occurs in daytime (and in the total diurnal variation) at the nearest (t = 0) hour after the EQ; the value then decreases after the EQ to the level of the other cases, as indicated in Figure 6. Compared with daytime, the results for the nighttime storm Vsp anomalies show an enhanced peak about 6 h after the EQ, with a value twice as large as the other levels, which decreases during the 6 h after the peak. The mean curves of the efficiency of the EQ impact on TEC anomalies (Figure 6) are accompanied by Table 3, depicting the ANOVA (Analysis of Variance) statistical results for the Vp and Vn occurrence under quiet and disturbed geomagnetic conditions, for the post-earthquake hours within the 1000 km radius around the hypocentre. Here F denotes Fisher's criterion, and p is the probability of the result assuming the null hypothesis. Analysis of variance (ANOVA) is a collection of statistical models used to analyse the differences among group means and their associated procedures (such as the 'variation' among and between groups). In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups; ANOVA is applied here for comparing these groups.

Figure 6. Efficiency of the seismic impact on the ionosphere for 12 h after earthquakes with Vs index anomalies, for nighttime, daytime and the total data-set, under quiet conditions and during geomagnetic storms.

Table 4 shows that the selected algorithm of the Vp and Vn estimates is meaningful with respect to these variables.
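For orientation, a one-way ANOVA of the kind reported in Table 3 can be run as in the sketch below; the per-event efficiency samples for the four groups of Figure 6 (S+, Q+, S−, Q−) are invented for illustration and do not reproduce the paper's data.

```python
from scipy import stats

# Hypothetical per-event efficiency values (%) for the four groups in Figure 6.
s_pos = [14.2, 12.8, 15.1, 13.6, 14.9]   # storm, positive anomalies (S+)
q_pos = [7.1, 6.4, 7.8, 6.9, 7.3]        # quiet, positive anomalies (Q+)
s_neg = [6.2, 5.8, 6.9, 6.1, 6.4]        # storm, negative anomalies (S-)
q_neg = [3.9, 4.2, 3.6, 4.1, 3.8]        # quiet, negative anomalies (Q-)

# F is Fisher's statistic; p is the probability of the observed differences
# among group means under the null hypothesis that all means are equal.
F, p = stats.f_oneway(s_pos, q_pos, s_neg, q_neg)
print(f"F = {F:.1f}, p = {p:.2e}")
```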
To determine the dependence of the ionosphere variability on the depth of the EQ hypocentre, the relations of the different EQ magnitudes with their depth are evaluated. The EQ occurrence for the different ranges of hypocentre depth in the Pacific region is provided in detail by Levin and Sasorova (2012). The results of the evaluation of the earthquake energy and its standard deviation for three categories of depth, for daytime and nighttime under geomagnetically quiet and storm conditions for 1999-2015, are given in Table 4. The hypocentre depth D is grouped into three classes: the shallow depth, D1 ≤ 70 km; the intermediate depth, 70 < D2 ≤ 300 km; and the deep depth, 300 < D3 ≤ 800 km. The occurrence of EQs decreases with increasing depth, both for geomagnetically quiet conditions and for storms. While the magnitude M was introduced by Gutenberg and Richter (1956) as a measure of the energy emitted by an EQ, the specification of the energy distribution in terms of the depth categories shows the dependence of the EQ energy on depth, such that the energy of the EQs gets larger as the depth increases.

Conclusion

In this study, the structural changes of the ionosphere are investigated with respect to disturbances in the ionization levels and the geomagnetic field due to storms and earthquakes, using a novel Vs index derived from the variability of GIM-TEC. The seismic-ionospheric associations are analysed during the 12 h after each of 2670 earthquakes of Richter magnitude from M6.0 to M10.0, separated into a 'storm' class of 465 EQs and a 'quiet' or 'non-storm' class of 2205 EQs worldwide, from January 1999 to December 2015. The median, m, of the 15 days prior to the current day at each cell of the GIM-TEC map, on a 2.5° × 5° latitude/longitude grid, is computed for each UT hour (0, 1, …, 23 h) as a reference value. The standard deviation σ from the median represents a measure of the dispersion of the distribution. The deviation of the instantaneous TEC from the median, normalized by the standard deviation, Dσ, is converted into an index, Vs, varying from −4 to +4, corresponding to extreme negative or positive deviations, respectively. The efficiency (Es) of the ionosphere response to the impact of earthquakes is estimated as the relative density of the negative indices Vsn ≤ −2 (ΔTEC ≤ −1σ), or of the positive indices Vsp ≥ +2 (ΔTEC ≥ +1σ), on the specified fragments of a map, with respect to the total number of cells in the fragment(s) of 1000 km radius around the EQ hypocentre(s) on the map or series of maps for a series of EQs. It is found that the efficiency of the EQ impact on the ionosphere grows with the EQ magnitude M at the nearest integer UT hour after the EQ, for both the storm and non-storm classes. The positive TEC anomalies are more effective than the negative ones for both the storm and non-storm subsets, which indicates that the EQ post-effects produce an increase of plasma variability in the ionosphere rather than a decrease. The Vs values grouped with respect to storm-time and quiet-time earthquakes, for nighttime (solar zenith angle χ > 90°) and daytime (χ < 90°) occurrences during the 12 h after the EQ, show that post-seismic positive TEC anomalies occur almost twice as often as the negative anomalies under storm conditions. Twice as many positive TEC anomalies during a geomagnetic storm are observed in the near-hypocentre region at the first integer UT hour after the EQ, with a subsequent decrease during the following 12 h for daytime. The increase of positive TEC anomalies at nighttime is observed during the 6 h after the EQ, followed by a gradual recovery after the peak. Analysis of the EQ energy for the three depth classes (D ≤ 70 km, 70-300 km and 300-800 km) brought evidence of its dependence on the depth of the tectonic events.
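Restated as a computation, the Vs and Es definitions recapped above amount to the sketch below; the array names are ours, and the rounding of the normalized deviation Dσ into an integer index is a simplification (the published grading of Vs may use different thresholds).

```python
import numpy as np

def vs_index(tec_now, tec_prev15):
    """Vs index per grid cell for one UT hour.
    tec_now:    2-D GIM-TEC map (lat x lon) for the current hour.
    tec_prev15: 3-D array (15 days x lat x lon) of maps for the same UT hour."""
    median = np.median(tec_prev15, axis=0)
    sigma = np.std(tec_prev15 - median, axis=0)        # dispersion about the median
    dsigma = (tec_now - median) / np.where(sigma > 0, sigma, np.inf)
    return np.clip(np.rint(dsigma), -4, 4)             # index limited to [-4, +4]

def efficiency_es(vs_map, near_hypocentre, positive=True):
    """Es (%): share of cells within 1000 km of the hypocentre (boolean mask)
    with Vsp >= +2 (positive anomalies) or Vsn <= -2 (negative anomalies)."""
    cells = vs_map[near_hypocentre]
    hits = (cells >= 2) if positive else (cells <= -2)
    return 100.0 * hits.sum() / cells.size
```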
While the magnitude M was introduced by Gutenberg and Richter (1956) as a measure of the energy E emitted by an EQ, M ~ M(E), the specification of the energy distribution in terms of the depth categories shows the energy of the EQs growing with greater depth D; in other words, the EQ magnitude should be represented as a function of two variables, M ~ M(E, D). The present results suggest that there is a challenge for more sophisticated techniques to be developed in order to distinguish earthquake effects on the ionosphere occurring against a background of geomagnetic activity. The results of this study will be used as a basis for observing and grouping the disturbances in the ionosphere and the geomagnetic field, and the Vs index can be developed further as a storm and/or earthquake precursor.
6,503
2017-12-15T00:00:00.000
[ "Geology", "Physics", "Environmental Science" ]
HDAC6 Enhances Endoglin Expression through Deacetylation of Transcription Factor SP1, Potentiating BMP9-Induced Angiogenesis Histone deacetylase 6 (HDAC6) plays a crucial role in the acetylation of non-histone proteins and is notably implicated in angiogenesis, though its underlying mechanisms were previously not fully understood. This study conducted transcriptomic and proteomic analyses on vascular endothelial cells with HDAC6 knockdown, identifying endoglin (ENG) as a key downstream protein regulated by HDAC6. This protein is vital for maintaining vascular integrity and plays a complex role in angiogenesis, particularly in its interaction with bone morphogenetic protein 9 (BMP9). In experiments using human umbilical vein endothelial cells (HUVECs), the pro-angiogenic effects of BMP9 were observed, which diminished following the knockdown of HDAC6 and ENG. Western blot analysis revealed that BMP9 treatment increased SMAD1/5/9 phosphorylation, a process hindered by HDAC6 knockdown, correlating with reduced ENG expression. Mechanistically, our study indicates that HDAC6 modulates ENG transcription by influencing promoter activity, leading to increased acetylation of transcription factor SP1 and consequently altering its transcriptional activity. Additionally, the study delves into the structural role of HDAC6, particularly its CD2 domain, in regulating SP1 acetylation and subsequently ENG expression. In conclusion, the present study underscores the critical function of HDAC6 in modulating SP1 acetylation and ENG expression, thereby significantly affecting BMP9-mediated angiogenesis. This finding highlights the potential of HDAC6 as a therapeutic target in angiogenesis-related processes. Introduction Acetylation stands as one of the most pivotal post-translational modifications (PTMs) applied to both histone and non-histone proteins.Proteins subjected to acetylation often exhibit altered physical and biochemical characteristics, which are essential for cellular adaptation to environmental fluctuations.The balance of protein acetylation within the cell is delicately controlled by two primary enzyme groups: histone acetyltransferases (HATs) and deacetylases (HDACs) [1]. Within the HDAC family, histone deacetylase 6 (HDAC6) emerges as a distinctive member, playing an integral role in modulating a variety of cellular processes, including cell proliferation [2], migration [3,4], stress response [5,6], and endocytosis [7,8] through its deacetylase and non-deacetylase activity.Unique to HDAC6 is its structural composition, featuring two tandem catalytic domains (CD1 and CD2) [9,10].This dual-domain configuration is an exclusive trait amongst HDACs, endowing HDAC6 with an enhanced and diversified substrate specificity.For instance, DEAD-box helicase 3 X-linked (DDX3X) has been identified as a substrate for both CD1 and CD2 domains [11], while α-tubulin, cortactin, and heat shock protein 90 (HSP 90) are known as specific substrates for the CD2 domain [4,9,10,12,13].Beyond its deacetylase domains, HDAC6 incorporates a cytoplasmic anchoring domain, predominantly localizing the protein within the cytoplasm, as well as a zinc finger motif (ZNF) for ubiquitin recognition which is involved in protein degradation and autophagy processes [14][15][16].Notably, HDAC6 is extensively expressed in various systems and cell types, including a crucial involvement in angiogenesis in the cardiovascular system; yet, its molecular mechanisms in this context remain to be fully elucidated [4,17]. 
Angiogenesis is a vital physiological process that involves the formation of new blood vessels from pre-existing ones.This process is crucial in embryonic development [18], wound healing [19], inflammation response [20], and tumor development [21] and is orchestrated by receptors on the surface of endothelial cells such as vascular endothelial growth factor receptor (VEGFR), which detects circulating vascular endothelial growth factor (VEGF) and initiates signaling pathways that foster the proliferation and migration of endothelial cells (ECs) [22][23][24].HDAC6 has been reported to influence EC functions via the VEGF pathway [25].Endoglin (ENG), also known as CD105, is another receptor that has been implicated in angiogenesis [26].It is a membrane glycoprotein primarily recognized for its role as a part of the transforming growth factor beta (TGF-β) receptor superfamily [27,28] and is predominantly expressed in endothelial cells and functions as a receptor of bone morphogenetic protein 9 (BMP9) [28,29].Endoglin plays a pivotal role in angiogenesis and in maintaining the structural integrity of the blood vessels; certain mutations of ENG are associated with hereditary hemorrhagic telangiectasia (HHT) [30,31].However, the regulatory mechanisms of ENG protein in BMP9-induced angiogenesis, and whether ENG is regulated by HDAC6, remain to be explored. The present study aims to illuminate the connection between HDAC6 and ENG and investigate HDAC6's role in BMP9-mediated angiogenesis.Furthermore, through transcriptomic and proteomic analyses in endothelial cells, we seek to unravel the intricate mechanisms by which HDAC6 influences this angiogenic process. Cell Culture Human umbilical vein endothelial cells (HUVECs) were obtained from Sciencell (San Diego, CA, USA) and cultured according to suppliers' instructions, with cells being passaged at a 1:3 ratio.Only HUVECs from passages 4 to 6 were utilized for the experiments.Human embryonic kidney 293T cells (HEK 293T) were obtained from ATCC (Manassas, VA, USA) and cultured in DMEM medium (BloomStem, Hainan, China) supplemented with 10% fetal bovine serum at 37 • C with 5% CO 2 . The HDAC6 coding sequences were cloned from a cDNA library derived from HU-VECs by PCR.Point mutations were introduced by site-directed mutagenesis.Different constructs were then inserted into a modified pCDH-CMV vector by seamless cloning.The ENG promoter was cloned from the HUVEC genome by PCR and inserted into a pGL3-basic vector by seamless cloning. Lentivirus-Mediated Gene Knockdown and Overexpression The lentiviral vectors were co-transferred with other packaging plasmids (pREV, pVSV-G, pTAT, and pGAG) into HEK 293T cells.After 48 h and 72 h, cell medium containing the virus were harvested, and these lentiviruses were purified and quantified as previously described [32].The HUVECs from passage 4 were infected with a serious amount of lentivirus, cultured for 48 h, and then selected with 2 µg/mL puromycin for another 48 h, after which the cells underwent one more passage before the subsequent experiments. CRISPR-Cas9-Based HDAC6 Knockout in 293T Cells Single guide RNA sequence 5 ′ ACATGATCCGCAAGATGCGC 3 ′ , which targets the human HDAC6 gene, was ligated into a lentiCRISPR v2 vector.HEK 293T cells were transfected with the plasmid.After 48 h, these transfected cells were diluted and cultured for single clone screening.The positive HDAC6 knockout clones were passaged for subsequent experiments. 
Wound Healing Assay The HUVECs were counted and seeded into a 6-well plate and cultured until 100% confluence and then synchronized with ECM basal medium containing 0.5% FBS for 12 h.The monolayer cells were scratched with a plastic pipette; detached cells were washed away 3 times by PBS.Images of the wound were taken at 0 h and 24 h after the scratch and subsequently quantified by ImageJ 1.53e software. Tube Formation Assay Growth factor-reduced Matrigel (Sigma-Aldrich, St. Louis, MO, USA) was plated evenly in a 24-well plate and incubated at 37 • C for 60 min before seeding the HUVECs.The HUVECs were pre-synchronized with ECM basal medium containing 0.5% FBS for 12 h.Then, fifty thousand cells were counted and seeded into the solidified Matrigel and incubated for 6 h before photographing.Tube length and branching points were quantified using the ImageJ 1.53e software. The Transwell Assay The transwell insert (BD Bioscience, San Jose, CA, USA) was coated with collagen II (ThermoFisher Scientific, Waltham, MA, USA) before seeding cells.Forty-five thousand pre-synchronized HUVECs were seeded into each 24-well transwell insert, and 1 mL ECM basal medium containing 0.5% FBS was added to the lower chamber of the plate and cultured for 24 h.The remaining inside cells were removed by cotton swipe and washed thoroughly with PBS; the migrated cells were fixed with 4% polyformaldehyde, stained with crystal violet, imaged under the microscope, and counted visually. Immunoprecipitation Cells were lysed with a binding buffer (50 mM HEPES, 150 mM NaCl, 1% Triton X-100, pH 7.5, supplemented with protease inhibitor cocktail) for 30 min on ice.The cell lysate was cleared by centrifugation at 20,000× g for 20 min at 4 • C and quantified by a BCA kit.The cell extracts were then incubated with acetyl-lysine antibody under rotation at 4 • C overnight.The next day, 40 µL of protein A/G agarose beads (ThermoFisher Scientific, Waltham, MA, USA) were added and incubated at room temperature for 1-3 h, and the bound protein was eluted by SDS-PAGE loading buffer heated to 98 • C for 5 min.The HA-tagged proteins were immunoprecipitated with anti-HA affinity agarose beads in a similar way. The Dual-Luciferase Reporter Assay The pGL3 plasmid, which contains ENG promoter and pRL-TK plasmid, was cotransfected to HEK 293T cells at a ratio of 100:1.After 48 h, these transfected cells were lysed and cleared with centrifugation.Subsequent luciferase activities were measured under the manufacturers' instructions in a SpectraMax i3x plate reader (Molecular Devices, Silicon Valley, CA, USA). Immunofluorescence Staining The HUVECs were seeded into a 35 mm glass-bottom culture dish, fixed with 4% paraformaldehyde, and permeabilized with 0.1% Triton X-100.HDAC6 and SP1 were stained with corresponding primary antibodies and fluorescent secondary antibodies (Cell Signaling Technology, Beverly, MA, USA), and cell nuclei were stained with DAPI.Confocal laser scanning microscopy was carried out using an LSM710 confocal microscope (Zeiss, Oberkochen, Germany). 
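As an aside on quantification, the wound healing readout described earlier in this section boils down to comparing the cell-free area at 0 h and 24 h; the sketch below shows that calculation on hypothetical binary masks and is not the authors' ImageJ workflow.

```python
import numpy as np

def wound_closure_percent(mask_0h, mask_24h):
    """Percent closure from binary masks of the cell-free (wound) area."""
    a0, a24 = mask_0h.sum(), mask_24h.sum()
    return 100.0 * (a0 - a24) / a0

# Hypothetical masks: the wound narrows from 40 to 16 pixel columns.
m0 = np.zeros((200, 200), dtype=bool); m0[:, 80:120] = True
m24 = np.zeros((200, 200), dtype=bool); m24[:, 92:108] = True
print(f"{wound_closure_percent(m0, m24):.1f}% closure")   # 60.0% closure
```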
SDS-PAGE and Western Blot Total protein was extracted from the treated HUVECs, and protein concentrations were determined by a BCA protein assay kit (Shenneng Bocai Biotechnology Co., Ltd., Shanghai, China).Equal amounts of protein samples were separated by 10% SDS-PAGE gels and transferred onto 0.45 µm PVDF membranes (Millipore-upstate, Billerica, MA, USA).Membranes were blocked with 5% non-fat milk for 1 h at room temperature and incubated with the primary antibodies overnight at 4 • C.After incubation with the corresponding secondary antibodies, the target protein bands were detected with ImageQuant LAS 4000 (General Electric Co., Fairfield, CT, USA) or Touch imager xli (eBLOT, Shanghai China) and quantified using ImageJ 1.53e software.Detailed antibody information is listed in Table S1. High-Throughput Transcriptomic Analysis The HUVECs were lysed with RNAiso plus reagent (Takara, Kyoto, Japan), and RNA was extracted under the manufacturers' instruction, among which the mRNA was enriched by oligo magnetic beads and then fragmented.The first strand of the cDNA template was generated by random hexamers; the second strand was synthesized by PCR.The double-strand DNA was purified with the AMpure XP system (Sigma-Aldrich, St. Louis, MO, USA).The cDNA library quality was assessed on an Agilent Bioanalyzer 4150 system (Agilent Technologies, Santa Clara, CA, USA) and sequenced on an Illumina Novaseq 600 (Illumina, San Diego, CA, USA).Differential expression analysis was performed using the DESeq2 [33]; DEGs with |log 2 FC| > 1 and Padj < 0.05 were considered to be significantly different expressed genes. Reverse Transcription and Real-Time PCR The RNA was isolated from HUVECs and quantified by OD 260, and then 1 µg RNA was used for reverse transcription by ReverTra Ace™ qPCR RT Master Mix with gDNA remover (Toyobo, Shimahama, Osaka, Japan).Gene amplification was performed by realtime PCR using a SYBR Green Real-Time PCR Kit (Toyobo, Shimahama, Osaka, Japan) on a Light Cycler Pro system (Roche, Basel, Switzerland).The sequence of primers used in the Real-time PCR is listed in Table S2. Tandem Mass Tags (TMT) Labeling Proteomic Analysis The HUVECs were lysed with SDT buffer (4% (w/v) SDS, 100 mM Tris-HCl pH 7.6, 0.1 M DTT); cell lysate was cleared by centrifugation; and protein concentration was measured by BCA kit.The protein was digested with trypsin by a filter-aided proteome preparation (FASP), and the peptide was quantified by OD280.For each sample, 100 µg of peptide underwent TMT labeling following the manufacturer's instructions (Thermo Fisher, Waltham, MA, USA).The labeled peptide was graded and loaded onto an Easy nLC HPLC system (Thermo Fisher, USA) for separation, and fragments were then analyzed by a Q-Exactive mass spectrometer (Thermo Fisher, USA).The raw peak data were further analyzed by Mascot 2.2 and Proteome Discoverer1.4software.Bioinformatical Gene Ontology (GO) term enrichment analysis was conducted by the SRplot platform "http://www.bioinformatics.com.cn/(accessed on 28 February 2024)" and the OmicShare tools "https: //www.omicshare.com/tools(accessed on 28th February 2024)". 
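Given the thresholds stated above for the transcriptomic analysis (|log2FC| > 1, Padj < 0.05), the DEG selection can be sketched as follows; the file name is hypothetical, while the column names follow the usual DESeq2 output.

```python
import pandas as pd

# Hypothetical DESeq2 results table for shHDAC6 vs. control HUVECs.
res = pd.read_csv("deseq2_results_shHDAC6_vs_control.csv", index_col=0)

sig = res[(res["log2FoldChange"].abs() > 1) & (res["padj"] < 0.05)]
up = (sig["log2FoldChange"] > 0).sum()
down = (sig["log2FoldChange"] < 0).sum()
print(f"{len(sig)} DEGs: {up} up-regulated, {down} down-regulated")
```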
Data Visualization and Statistical Analysis The data were presented with mean ± SEM and visualized by Prism Graphpad 8.0.All experiment data were statistically analyzed by SPSS or Prism Graphpad 8.0.Briefly, the normal distribution of data was determined for each dataset by the Shapiro-Wilk W test; differences between the two groups were compared with Student's t-test; and multiple group comparisons were tested by ANOVA with a Tukey's post hoc test for normally distributed data with equal variance.Otherwise, either the Mann-Whitney U test or the Kruskal-Wallis test, followed by Dunn's post hoc test, was used.Two-sided probability values < 0.05 were considered statistically significant. Knocking down HDAC6 Inhibits the Migration of HUVECs and Reduces ENG Expression HDAC6 knockdown was induced in human umbilical vein endothelial cells (HUVECs) by RNA interference, and Western blot analysis showed that its expression was reduced by ~50% with both shRNA sequences; acetylated α-tubulin (a substrate of HDAC6) was significantly increased as a result (Figure 1G,H,J).HDAC6 knockdown decreased the migration ability of the HUVECs in the wound healing assay (Figure 1A,B).Transcriptomic and proteomic analyses were performed to map out genes with different expressions in HUVECs induced by HDAC6 knockdown.The transcriptomic data revealed that 5836 genes were dysregulated, 2773 were up-regulated, and 3063 were down-regulated (Figure 1C and File S1).At the protein level, 580 proteins were dysregulated, 256 were up-regulated, and 324 were down-regulated (Figure 1D and File S2).An integrated bioinformatics analysis of the datasets identified 273 genes that were differentially expressed at both the transcriptional and the protein levels (Figure S1A; File S3).Subsequent Gene Ontology (GO) enrichment analyses revealed that these genes primarily contribute to angiogenesis-related biological processes (BP), such as response to wounding, regulation of cell migration, and regulation of cell motility, with 22 genes sharing these characteristics (Figure S1B,C; File S3).Additionally, the GO enrichment for cellular components (CC) highlighted a significant association of these genes with cell-substrate junctions and focal adhesions, crucial for endothelial cell motility (Figures 1 and S1D,E; File S3).Combining the findings from both the BP and the CC GO enrichments, we identified 10 key proteins (Figure S1F; File S3).A further review of the literature, along with our preliminary investigations performed on these genes, highlighted endoglin (ENG) as a critical factor in endothelial cell function.ENG's down-regulation at both the mRNA and the protein levels in HUVECs, following HDAC6 knockdown, was confirmed through qPCR and Western blot analyses (Figure 1E-I). 
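The two-group branch of the statistical workflow described in the preceding section (Shapiro-Wilk normality check, then Student's t-test or the Mann-Whitney U test) can be sketched as below; the densitometry values are invented for illustration.

```python
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Shapiro-Wilk normality check, then Student's t-test (normal data)
    or the Mann-Whitney U test (otherwise)."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        return "Student's t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

# Hypothetical normalized ENG band intensities, control vs. shHDAC6 HUVECs.
ctrl = [1.00, 0.93, 1.08, 0.97, 1.05]
sh_hdac6 = [0.52, 0.61, 0.47, 0.58, 0.55]
print(compare_two_groups(ctrl, sh_hdac6))
```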
BMP9 Promotes HUVECs Wound Healing and SMAD1/5/9 Phosphorylation in a Dose-Dependent Manner To investigate whether ENG contributes to angiogenesis, we administered HUVECs with BMP9, which is an endogenous ligand of ENG.Wound healing results showed that BMP9 facilitated HUVEC migration at concentrations ranging from 1.25-20 ng/mL (Figure 2A,B).SMADs are the main signal transducers for receptors of the transforming growth factor beta (TGF-β) receptor superfamily, and both SMAD2/3 and SMAD1/5/9 complexes are implicated with angiogenesis; we assessed the phosphorylation of these SMAD complexes to determine the signaling pathways triggered by BMP9.Western blot analysis showed that phosphorylation of SMAD2/3 was unchanged with BMP9 treatment (Figure 2E,F).The phosphorylation level of SMAD1/5/9 gradually increased with the rising concentration of BMP9 treatment, reaching a peak at a BMP9 concentration of 5 ng/mL, which is 8.8 folds higher than that of the vehicle group (Figure 2C,D). HDAC6 Mediates BMP9 Effects in Promoting Migration and Tube Formation of HUVECs Considering that BMP9 can also bind to other receptors from the TGF-β superfamily, we performed various in vitro angiogenesis assays to explore whether the BMP9-induced angiogenesis was mediated by ENG and regulated by HDAC6.Our results showed that, similar to HDAC6 knockdown, ENG knockdown itself inhibited HUVEC wound healing and that the effects of BMP9 treatment (5 ng/mL) in promoting HUVEC wound healing were both blunted by ENG and HDAC6 knockdown (Figure 3A,B).The transwell and tube formation assays also confirmed that BMP9 treatment (5 ng/mL) significantly increased the migration and tube formation ability of HUVECs; HDAC6 and ENG knockdown not only reduced cell migration and impeded tube formation but also attenuated the stimulatory effects of BMP9 treatment (Figure 3C-G).Western blot results showed that HDAC6 knockdown reduced the increase in SMAD1/5/9 phosphorylation induced by BMP9 treatment (5 ng/mL), while knocking down ENG completely blunted this effect (Figure 3H,J).Notably, ENG protein expression in HUVECs significantly increased after BMP9 treatment (Figure 3H,I). 
HDAC6 Regulates ENG Promoter Activity by Interacting with Transcription Factor SP1 Given the fact that HDAC6 knockdown inhibited ENG expression both at mRNA and at protein levels, we investigated whether HDAC6 affected its promoter activity.An HDAC6 knockout 293T cell line was established by the CRISPR-Cas9 gene editing system (Figure 4A), and a subsequent luciferase reporter assay demonstrated a decreased luciferase activity in the ENG promoter region (Figure 4B).Computational analyses, supported by previous studies [34,35], indicated that the transcription factor SP1 could bind to the ENG promoter sequence, thereby modulating ENG expression (Figure 4C).Coimmunoprecipitation (Co-IP) experiments confirmed a direct interaction between HDAC6 and SP1 (Figure 4D).In HUVECs, SP1 predominantly resides in the nucleus, with a smaller presence in the cytoplasm.Conversely, HDAC6 is mainly cytoplasmic and demonstrates partial co-localization with SP1, suggesting that their interaction largely occurs in the cytoplasm.Additionally, control group HUVECs display normal morphology, characterized by evident polarization and directional movement.However, in cells with HDAC6 knockdown, there is a noticeable decline in polarization, leading to a pancake-like cell appearance, and a concurrent decrease in SP1 expression is observed (Figure 5E,F).D,E) Effects of BMP9 treatment (5 ng/mL) on wound healing in HUVECs with or without overexpression of HDAC6 mutants; H216A-CD1 inactive mutant, H611A-CD2 inactive mutant, and WT-wild-type human HDAC6; n = 12; bar =150 µm.(F-H) Effects of BMP9 treatment (5 ng/mL) on protein expression of ENG, phosphorylation of SMAD1/5/9, and acetylation of α-tubulin in HUVECs with or without overexpression of HDAC6 mutants; n = 10-13. HDAC6 CD2 Deacetylates SP1 and Regulates ENG-Mediated Angiogenesis It is reported that post-translational modifications can alter the activity of SP1 [36][37][38]; we evaluated its acetylation level by immunoprecipitation of acetylated lysine.Our results showed that the acetylation of SP1 increased significantly in HUVECs infected with shHDAC6 lentivirus (Figure 5A,B).Since HDAC6 possesses two tandem catalytic domains with deacetylase activity and a ZNF domain that can interact with ubiquitinated proteins (Figure 5C), we explored which catalytic domain was responsible for deacetylating SP1 and whether other kinds of non-enzymatic activity of HDAC6 can also regulate ENG expression.We constructed an overexpression lentivirus of the HDAC6 CD1 inactive mutant H216A, CD2 inactive mutant H611A, and wild-type HDAC6 (Figure 5C).Our results demonstrated that overexpression of the wild-type HDAC6 and H216A mutants in HUVECs enhanced the effects of BMP9 treatment on wound healing, while overexpression of the CD2 inactive mutant H611A blunted the effects of BMP9 treatment on wound healing (Figure 5D,E).Further Western blot analysis revealed that overexpression of the wild-type HDAC6 and H216A mutants increased ENG expression and thereby increased SMAD1/5/9 phosphorylation after BMP9 treatment, while overexpression of the H611A mutant had no impact on ENG expression but did diminish the effects induced by BMP9 treatment (Figure 5F-H), indicating that HDAC6 CD2 enzymatic activity is required for HUVEC response to BMP9 treatment. 
Discussion Lysine acetylation modification is widely present in various proteins in the cytoplasm and nucleus.The physicochemical properties of acetylated proteins often differ from those of the original proteins, facilitating swift functional modulation without necessitating changes in protein expression levels.Thus, lysine acetylation has garnered significant attention in research due to its pivotal role in protein regulation. Within this context, the HDAC family has become a focus of study due to its crucial role in regulating protein acetylation levels.It has emerged as a potential drug target protein in various diseases, including neurodegenerative diseases [39,40], cancer [41,42], and cardiovascular diseases [43,44].Preliminary studies indicate that pan-HDAC inhibitor trichostatin A (TSA) blocked angiogenesis in vitro and in vivo [45].However, the specific underlying mechanisms remain ambiguous.Subsequent investigations have revealed that both nuclear (HDAC1, 2, 3) and cytoplasmic (HDAC6) isoforms of HDAC can mediate angiogenesis [4,[46][47][48], though their respective mechanisms diverge.Due to its unique structural composition, HDAC6 has complex intracellular functions, and since various extracellular and intracellular signaling molecules regulate angiogenesis [49], the mechanism by which HDAC6 mediates angiogenesis has not been fully elucidated. The present study employs an integrated approach of transcriptomics and proteomics to investigate gene expression changes in vascular endothelial cells following HDAC6 knockdown.This approach led to the identification of ENG, a downstream protein regulated by HDAC6.ENG, a glycosylated membrane protein, is specifically expressed in endothelial cells and is part of the TGF-β receptor superfamily [50].It predominantly binds to the ligands BMP9 and BMP10 [51,52].ENG is known to play a crucial role in maintaining vascular integrity [50,53].Homozygous mutations in mice result in embryonic lethality, and certain point mutations in human ENG are linked to hereditary hemorrhagic telangiectasia (HHT) [50].However, the relationship between ENG, its ligand BMP9, and angiogenesis has been ambiguous, with conflicting reports regarding BMP9 ′ s role in an-giogenesis [54][55][56][57][58].In this study, the effects of BMP9 on angiogenesis were assessed using various models, including scratch wound healing, transwell migration, and tube formation assays in HUVECs.The results indicated that BMP9 treatment (5 ng/mL) significantly promotes endothelial cell scratch wound healing, transmembrane migration, and tube formation, thereby affirming BMP9's role in angiogenesis.Additionally, the study revealed that knockdown of HDAC6 and ENG not only reduced the angiogenic capabilities of HUVECs but also blunted the effects of BMP9 treatment.Western blot analysis showed a significant increase in SMAD1/5/9 phosphorylation following BMP9 treatment, which diminished with HDAC6 knockdown, accompanied by a decrease in ENG expression.This pattern mirrors the effects observed with direct ENG interference, suggesting that HDAC6 modulates BMP9-induced angiogenesis through ENG expression regulation.Intriguingly, BMP9 treatment was found to substantially elevate ENG protein levels, hypothesized to be a compensatory cellular response to prolonged ligand-receptor binding leading to receptor desensitization. 
Given that HDAC6 knockdown suppressed ENG expression at both the mRNA and the protein levels, it is proposed that HDAC6 influences ENG transcription through modulation of promoter activity.This was proved by luciferase reporter assays in HDAC6 knockout 293T cells, which showed a significant decrease in ENG promoter activity.Computational analysis of transcription factor binding sites identified transcription factor SP1 as a key regulator of ENG promoter activity.Immunoprecipitation experiments confirmed a direct interaction between HDAC6 and SP1.Previous studies suggested that SP1 activity is directly regulated by its acetylation level, with higher acetylation correlating to lower transcriptional activity [36][37][38].This was observed in HDAC6 knockdown HUVECs, where an increase in SP1 acetylation was noted.Considering HDAC6's structural complexity, the study further explored its impact on ENG expression and BMP9 treatment response by overexpressing mutant HDAC6 proteins with CD1 inactive (H216A) and CD2 inactive (H611A).Results showed that overexpression of both the CD1 mutant H216A and wildtype HDAC6 enhanced ENG expression and augmented BMP9 ′ s effects, whereas the CD2 mutant H611A did not produce similar effects, suggesting a possible compensatory decrease in endogenous HDAC6 levels.This finding confirms HDAC6's role in mediating SP1 acetylation regulation through its CD2 enzymatic activity, thereby influencing SP1 activity and ultimately regulating ENG expression. Despite the progress made, our study faced several limitations that warrant further investigation.The integrated analysis of transcriptomic and proteomic data pinpointed several other proteins that were down-regulated in tandem with HDAC6 knockdown and were closely related to endothelial cell functions, including integrin β1, integrin β3, nitric oxide synthase (NOS), etc.The mechanisms through which HDAC6 regulates these proteins, as well as their specific contributions to the angiogenic process, remain to be fully elucidated.Furthermore, additional research is required to precisely determine how HDAC6 influences SP1 acetylation at specific sites and how this modulation affects SP1's interactions with other transcription factors and DNA.Moreover, considering the variable effects of BMP9 on angiogenesis in different endothelial cell types [58,59], it is important to investigate the potential competitive interaction among other types of TGF-β receptors, such as TGF-β receptor 2 (TGFBR2), serine/threonine-protein kinase receptor R3 (ALK1), and ENG in BMP9 binding.Then, the differential responses of endothelial cells to BMP9 might be the results of varying expression levels of these TGF-β receptor subtypes. Conclusions In conclusion, this study demonstrates that HDAC6, through its CD2 catalytic domain, modulates the acetylation level of transcription factor SP1, regulating ENG expression and playing a significant role in BMP9-induced angiogenesis. 
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cells13060490/s1, Figure S1: Integrated bioinformatics analysis of differentially expressed genes following HDAC6 knockdown in HUVECs; Table S1: Antibodies used in this manuscript; Table S2: Sequences for the reverse and real-time PCR primers; File S1: mRNA expression in HUVECs with or without HDAC6 knockdown; File S2: Protein expression in HUVECs with or without HDAC6 knockdown; File S3: Differentially expressed genes following HDAC6 knockdown and the results of the corresponding bioinformatics analysis.

Figure 3. HDAC6 mediates BMP9 effects in promoting migration and tube formation of HUVECs. (A,B) Effects of BMP9 treatment (5 ng/mL) on wound healing in HUVECs with or without knockdown of HDAC6 and ENG; n = 12; bar = 150 µm. (C,D) Effects of BMP9 treatment (5 ng/mL) on the migration of HUVECs in a transwell model with or without knockdown of HDAC6 and ENG; bar = 150 µm; n = 6-8. (E-G) Effects of BMP9 treatment (5 ng/mL) on the tube formation of HUVECs with or without knockdown of HDAC6 and ENG, cultured on a Matrigel matrix; bar = 150 µm; n = 4-7. (H-J) Effects of BMP9 treatment (5 ng/mL) on ENG expression and phosphorylation of SMAD1/5/9 in HUVECs with or without knockdown of HDAC6 and ENG; n = 4. n.s., not significant.

Figure 4. HDAC6 regulates ENG promoter activity by interacting with transcription factor SP1. (A) Representative Western blot image of HDAC6 expression and acetylation of α-tubulin in wild-type (WT) 293T and HDAC6 knockout 293T cells; n = 4. (B) Effect of HDAC6 knockout on the promoter
5,839.4
2024-03-01T00:00:00.000
[ "Medicine", "Biology" ]
Event-Triggered Asynchronous Filter of Positive Switched Systems with State Saturation

This paper investigates the event-triggered asynchronous filtering of positive switched systems with state saturation using linear programming and multiple Lyapunov functions. First, a filter is constructed for continuous-time positive switched systems. Under the asynchronous switching law, an error system is proposed with respect to the positive switched systems and their filters, where the state saturation term is described in a polytopic form by virtue of the saturation property. A novel event-triggering condition is addressed based on a 1-norm inequality. Under the event-triggering condition, the error system is transformed into an interval system with lower and upper bounds. By using multiple Lyapunov functions and linear programming, the positivity and stability of the error system are achieved by considering the corresponding properties of the lower and upper bounds, respectively. Then, the event-triggered ℓ1-gain filter and a nonfragile filter are also proposed for systems with disturbances. Moreover, the presented filter framework is extended to the discrete-time case. Finally, two examples are given to verify the effectiveness of the proposed filters.

Introduction

There exists a class of positive switched systems whose states and outputs are always nonnegative, and for which a switching rule is designed to specify the switching between subsystems. Positive switched systems have attracted much attention over the past decades [1,2]. They have extensive applications in the fields of biological systems and pharmacokinetics [3,4]. In practice, many systems can be modeled as positive switched systems, such as formation flying [5] and networks employing TCP [6,7]. In [8], the stability of positive switched linear systems with average dwell time (ADT) was studied using multiple linear copositive Lyapunov functions (MLCLFs). A reverse time-dependent linear copositive Lyapunov function was constructed and a novel stability criterion for positive switched systems was proposed in [9]. In [10], a matrix decomposition-based control approach was introduced for positive systems. It should be pointed out that linear Lyapunov functions are powerful for solving the control problems of positive switched systems. In practical applications, saturation is a universal phenomenon owing to various restrictions on elements and unexpected environmental factors. Zhao et al. [11] investigated the finite-time H∞ control of a class of Markovian jump delayed systems with input saturation. The literature [12] focused on the constrained control of positive Markovian jump systems with actuator saturation. The stabilization of switched linear systems subject to actuator saturation was solved in [13]. More information about input/actuator and sensor saturation can be found in [14-16]. The literature mentioned above focuses on input/actuator saturation. However, to the best of the authors' knowledge, there are few results on state saturation. Indeed, the states of most practical systems are subject to constraints due to physical limitations. For example, the limited water storage capacity of pipes will lead to saturation of the state in water systems, the limited bandwidth in network communication systems will restrict the transmission of data packages, and the road bearing capacity in transportation systems has an upper limit. These states can be modeled via saturation.
The state saturation will not only affect the stability of the systems but may also cause other fault phenomena. Therefore, it is significant to explore the filtering problem for positive systems with state saturation. Derong Liu and Michel [17] analyzed the stability of systems with partial state saturation nonlinearities using the Lyapunov function approach, and Kolev et al. [18] addressed state saturation nonlinearities for discrete-time neural networks. Ji et al. [19] were concerned with the stability analysis of discrete-time linear systems with state saturation using a saturation-dependent Lyapunov functional. For positive systems, there are also some significant results on saturation [20-24]. Regrettably, these works are concerned with input/actuator saturation, and few efforts have been devoted to the state saturation issue of positive switched systems. In [25], the filter design of positive systems with state saturation was proposed using a linear copositive Lyapunov function and linear programming. However, the filter design problem of positive systems with state saturation has not been completely solved. There are still many open issues, such as the filtering of hybrid positive systems with state saturation and the event-triggered filtering of positive systems with state saturation. In recent years, the event-triggered strategy has attracted much attention owing to its advantages in relaxing the traditional time-triggered strategy and guaranteeing the safe running of systems [26-29]. The event-triggered strategy has many advantages such as a lower computational burden, fewer sampling instants, and lower energy requirements. This strategy has been applied to multiagent systems, networked control systems, etc. [30,31]. The tool of linear matrix inequalities was employed for the event-triggered control of linear systems subject to actuator saturation [32]. In [33], an event-triggered control framework was introduced for nonlinear systems. An event-triggered filter for networked systems with signal transmission delay was designed by utilizing the Lyapunov-Krasovskii functional and linear matrix inequalities in [34]. In [35], an event-triggered fuzzy filter was applied to nonlinear time-varying systems. More results on event-triggered filters can be found in [36,37] and the references therein. Up to now, the event-triggered filter of positive switched systems with state saturation has remained open. Moreover, synchronous switching is hard to realize since it takes time to detect which subsystem is active. Therefore, asynchronous switching is more important and practical than synchronous switching. In [38], the problem of fault detection filtering for continuous-time switched control systems under asynchronous switching was investigated, and the solution was provided in the form of a mixed H−/H∞ filter approach. In [39], an asynchronous l2-l∞ filter for stochastic Markovian jump systems with randomly occurring sensor nonlinearities was proposed based on linear matrix inequalities. By applying the average dwell time technique and the piecewise Lyapunov-Krasovskii functional technique, sufficient conditions were obtained in [40] for designing an asynchronous finite-time filter of switched networked systems. In [41], a positive L1-gain asynchronous filter of positive Markovian systems was designed. These existing results inspire us to investigate the event-triggered asynchronous filter of positive switched systems with state saturation.
An asynchronous filter for positive switched systems with overlapped detection delay was designed in [42]. In [43], a class of clock-dependent Lyapunov functions was constructed for positive switched systems to obtain a less conservative asynchronous filter design approach. It is necessary to point out several points. First, the event-triggered strategy is still open for positive systems. In the event-triggered control of positive systems, it is not easy to transform the error term into the state term; meanwhile, how to achieve the positivity of the filter error system is also complex. Second, the introduction of state saturation increases the complexity of the filter design. Positive systems with state saturation contain two classes of constraints, saturation and positivity, each of which is difficult to handle. Finally, the filter gain design is open for positive systems; up to now, there is no tractable filter design framework for positive systems. The literature [41-43] proposed some design approaches for the filtering of positive systems, but each of them considered the filter design for only one class of positive systems, and the presented approaches cannot be extended to the filtering issues of other hybrid systems. Therefore, it is necessary to construct a unified filter design for positive systems. These points motivate the present work.

This paper investigates the event-triggered asynchronous filter of positive switched systems. First, a simple linear event-triggering condition is introduced for the systems. By using multiple copositive Lyapunov functions and the event-triggering condition, the error system between the original systems and the corresponding filters is transformed into an interval system. A polytope is used to deal with the state saturation term. By virtue of the matrix decomposition approach, the filter matrices are designed. An asynchronous switching law is also proposed. The contributions of this paper are as follows: (i) a filter is constructed for positive switched systems with state saturation, (ii) an event-triggered asynchronous switching design is given, and (iii) a linear programming-based approach is presented for designing the filter matrices. The rest of the paper is organized as follows: Section 2 gives the problem formulation and preliminaries, Section 3 addresses the main results, Section 4 provides two illustrative examples, and Section 5 concludes the paper.

Notations: R^n (R^n_+) and R^{n×n} denote the sets of n-dimensional real (non-negative real) vectors and n × n real matrices, respectively. The symbols N and N^+ represent the sets of nonnegative and positive integers, respectively. For a vector x ∈ R^n, its 1-norm is defined by ‖x‖_1 = Σ_{i=1}^{n} |x_i|. The L_1 and ℓ_1 norms of a signal x are defined as ‖x‖_{L1} = ∫_0^∞ ‖x(t)‖_1 dt and ‖x‖_{ℓ1} = Σ_{t=0}^{∞} ‖x(t)‖_1, respectively; the L_1 and ℓ_1 spaces consist of the signals ω(t) for which the corresponding norm is finite. A matrix is Metzler if all its off-diagonal elements are nonnegative. The symbol co{·} stands for the convex hull.

Problem Formulation

Consider a class of switched systems, where x(t) ∈ R^n, y(t) ∈ R^m, ω(t) ∈ R^r_+, and z(t) ∈ R^s are the state, the output, the disturbance input belonging to an ℓ_1 space, and the output to be estimated, respectively. The system matrices have appropriate dimensions. The symbol δ denotes the derivative operator in the continuous-time context (δx(t) = (d/dt)x(t), t ≥ 0) and the forward shift operator in the discrete-time context (δx(t) = x(t + 1), t ∈ N).
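To make the notation concrete, the sketch below computes the 1-norm and checks the Metzler property for a hypothetical subsystem matrix; neither the matrix nor the function names come from the paper.

```python
import numpy as np

def one_norm(x):
    """1-norm of a vector: sum of absolute entries."""
    return np.abs(np.asarray(x)).sum()

def is_metzler(A, tol=1e-9):
    """A matrix is Metzler if all off-diagonal entries are nonnegative."""
    A = np.asarray(A)
    off_diag = A - np.diag(np.diag(A))
    return bool(np.all(off_diag >= -tol))

# Hypothetical continuous-time subsystem matrix: Metzler, hence a positive subsystem.
A1 = np.array([[-2.0, 0.4],
               [ 0.3, -1.5]])
print(one_norm([1.0, -2.5, 0.5]), is_metzler(A1))   # 4.0 True
```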
e function σ(t) is the switching signal taking values in a finite set S � 1, 2, . . . , J { } and a switching sequence is given as in the discrete-time case. Assume that the timederivative of disturbance exists and is bounded. In order to estimate the output z(t), an asynchronous filter is designed as follows: is the switching signal of the filter, and Δ σ(t) represents the asynchronous time. e matrices A fσ′(t) , B fσ′(t) , E fσ′(t) , and F fσ′(t) are to be determined. Next, we introduce some definitions and lemmas of positive switched systems to facilitate later development. Definition 1 (see [1,2]). A system is said to be positive if its states and outputs are non-negative for any non-negative initial conditions and any non-negative inputs. Lemma 1 (see [1,2]). A continuous-time system _ x(t) � Ax(t) + Bω(t), y(t) � Cx(t) + Dω(t), and z(t) � Ex(t) + Fω(t) is positive if and only if A is a Metzler matrix, and B≽0, C≽0, D≽0, E≽0, and F≽0. A discrete-time system is positive if and only if A≽0, B≽0, C≽0, D≽0, E≽0, and F≽0. Lemma 2 (see [1,2]). For a positive continuous-time system _ x(t) � Ax(t), the following conditions are equivalent: (i) e system matrix A is Hurwitz (ii) ere exists a vector v≻0 such that A ⊤ v≺0 Lemma 3 (see [1,2]). For a positive discrete-time system x(t + 1) � Ax(t), the following conditions are equivalent: Lemma 4 (see [1,2]). Given vectors ℘ ∈ R n and I ∈ R n , if ‖I‖ ∞ ≤ 1, then where D l is an n × n diagonal matrix with elements either 1 or 0 and in the discrete-time state space, respectively. Let the matrix H with H≽0 be a cone attract domain matrix. A symmetric polyhedron is defined as L(H) � x(t) ∈ R n : |H p x(t)| ≤ 1 in the continuous-time state space and L(H) � x(k) ∈ R n : |H p x(k)| ≤ 1 in the discrete-time state space, respectively, where H p is the p th row of the matrix H and p ∈ 1, 2, . . . , n. Next, we introduce the event-triggering mechanism. Define the event-triggered error function: where y(t) is the output value of the event generator, y(t) � y(t l ′ ), l ∈ N, and t l ′ is the event-triggering time instance. An event-triggering condition is established based on 1-norm: where 0 < β < 1 is called event-triggering constant. Under the event-triggering condition, the asynchronous filter can be written as e filter equation (6) is constructed based on the eventtriggered mechanism, while the filter equation (2) is a timetriggered one. Replacing the output y(t) in the filter equation (2) by the output value of event generator y(t), the filter Mathematical Problems in Engineering equation (6) is obtained. e objective of this paper is to design the filter equation (6). Denote , and e(t) � z f (t) − z(t). en, we have Assume that x(t) ∈ L(H σ (t)). By Lemma 4, it derives that Definition 3 (see [11]). e system equation (8) is said to be L 1 (or ℓ 1 ) gain stable if the following two conditions hold: (i) e system equation (8) with ω(t) � 0 is asymptotically stable (ii) Under the zero initial condition, the relation holds for ω(t) ≠ 0, where c > 0 is the L 1 /ℓ 1 gain value, ϖ > 0, and ρ > 0. Main Results In this section, we first consider the filter design of the continuous-time system with ω(t) ≡ 0. en, the filter of the discrete-time system is proposed. Continuous-Time Case. Given a time interval [t l , t l+1 ), where the asynchronous time interval is [t l , t l + Δ l ) and the synchronous time interval is [t l + Δ l , t l+1 ). e q th original subsystem is active in t ∈ [t l , t l+1 ). 
e p th filter is active in t ∈ [t l , t l + Δ l ), and then the q th filter is active in t ∈ [t l + Δ l , t l+1 ), where t l (l � 0, 1, · · ·) is the switching time instants and Δ l is the time lag between the subsystem and the filter and Δ l < t l+1 − t l . When t ∈ [t l , t l + Δ l ), the error system can be written as where where 4 Mathematical Problems in Engineering hold for l � 1, 2, . . . , 2 n , then under the event-triggered asynchronous filter equation (6) with and the switching law satisfies The filter error system equation (8) is positive and asymptotically stable, where Φ � I − β1 m×m , Ψ � I + β1 m×m , T − (t 0 , t) denotes the total time length of synchronous, and T + (t 0 , t) denotes the total time length of asynchronous of the switched systems. Moreover, all states starting from Proof. First, consider the positivity of the error system equation (8). For x(t 0 )≽0 and y(t 0 )≽0, it gives ‖e y (t 0 )‖ 1 ≤ β‖y(t 0 )‖ 1 by equation (5). en, we deduce that − β1 m×m y(t 0 ) ≺ e y (t 0 ) ≺ β1 m×m y(t 0 ). By equations (11) and (13), we have Mathematical Problems in Engineering For By equations (15a) and (15e), we have Using equation (16) gives It is easy to know that M q , A fp , and A fq are Metzler matrices. Due to E fp ≽0 and E fq ≽0, _ x(t 0 )≽0 by Lemma 1. Using recursive derivation gives _ x(t)≽0, that is to say, the error system equation (8) is positive. erefore, it holds that Next, consider the stability of the system equation (8). Choose piecewise multiple Lyapunov functions: en, Mathematical Problems in Engineering Using equations (15i) and (15j) and λ > 1, it yields By equations (15c), (15d), (15g), and (15h), it holds that us, Together with equations (15j) and (15k), it yields where ℵ � N σ (T, t 0 ). en, we have From equation (17a), we can get en, where ρ 1 and ρ 2 are the minimal and maximal elements of v (i) ∈ v (p,q) , v (q) , p ∈ S, q ∈ S . erefore, the filter error system equation (8) is stable with equations (17a) and (17b). Finally, we provide the invariance of the state. Given any initial conditions satisfying Input/actuator saturation is frequently investigated since limited implementation ability of elements will lead to the saturation phenomenon. In [20][21][22][23], the input saturation of positive systems had also been explored, where copositive Lyapunov functions and linear programming were employed for coping with the control synthesis of positive systems. In practice, many quantities are subject to Mathematical Problems in Engineering constraints such as the population of animals in a species, the volume of water storage, and the number of vehicles accessing to a circle road. ese refer to state saturation, which is a new kind of saturation. eorem 1 proposes a filter design for positive switched systems in terms of linear programming. e presented design in equation (16) is different from the design approaches in [20][21][22][23]. Remark 2. Positive systems have distinct research approaches from general systems. Existing filter design approaches [34][35][36][37] cannot be developed for positive systems. A direct development will bring some conservatism in describing the positivity condition, the computation, and so on. In [10], it was verified that a matrix decomposition approach is more suitable for dealing with the synthesis of positive systems. In eorem 1, the filter gains are designed as equation (16) by following the approach in [10]. 
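Where such conditions reduce to linear programs, a minimal feasibility check in the spirit of Lemma 2 (find v ≻ 0 with A^T v ≺ 0 for a Metzler matrix A) can be sketched as below; the subsystem matrix is hypothetical, strict inequalities are approximated with a small margin eps, and this is not the full condition set (15a)-(15l) of Theorem 1.

```python
import numpy as np
from scipy.optimize import linprog

def copositive_lyapunov_vector(A, eps=1e-3):
    """Search for v > 0 with A^T v < 0 (Lemma 2) via linear programming:
    feasibility of  A^T v <= -eps  with  eps <= v <= 1  (the upper bound only
    fixes the scale, since the condition is scale-invariant)."""
    n = A.shape[0]
    res = linprog(c=np.zeros(n), A_ub=A.T, b_ub=-eps * np.ones(n),
                  bounds=[(eps, 1.0)] * n, method="highs")
    return res.x if res.success else None

A1 = np.array([[-3.0, 0.5],
               [ 0.2, -1.0]])              # Metzler and Hurwitz
print(copositive_lyapunov_vector(A1))      # prints a feasible v; None would mean infeasible
```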
Under equation (16), the corresponding positivity and stability conditions can be solved via linear programming, which is more powerful for positive systems. Remark 3. In [41][42][43], the asynchronous filter was designed for positive Markovian jump systems, positive switched systems, and positive systems, respectively. For different kind of positive systems, the filter design approaches are different. A question is whether a unified filter framework can be constructed. Such a framework is more significant for positive systems. In eorem 1, the filter gains are designed as in equation (16) by following the matrix decompositionbased design in [10]. In [10], it had been pointed out that the matrix decomposition-based design approach can be easily developed for other syntheses of positive systems. e successful application in eorem 1 implies that the filter framework is a unified one and can be applied for the related filter issues of positive systems. Based on this point, eorem 1 can be applied for the design in [41][42][43]. It is clear that condition equation (15a)-(15l) cannot be directly solved in terms of linear programming when the parameters λ, μ 1 , and μ 2 are unknown. How to choose these parameters such that equations (15a)-(15l) is feasible is key to the validity of eorem 1. Considering this point, we give Algorithm 1 to transform equations (15a)-(15l) into a linear programming form. In this algorithm, the range of μ 1 and μ 2 should be selected as large as possible to guarantee the feasibility of equations (15a)-(15l) and the parameter λ should be chosen close to 1 to obtain a lower ADT. Next, we consider the filter design for the case ω(t) ≠ 0. e disturbance satisfies where χ is a given positive constant. Mathematical Problems in Engineering hold for given μ 1 > 0, μ 2 > 0, λ > 1, and any l � 1, 2, . . . , 2 n , then the filter error system equation (8) is positive and L 1 gain is stable under the event-triggered asynchronous filter equation (13) with equation (16), where Φ � I − β1 m×m , Ψ � I + β1 m×m , and the switching law satisfying where T − (t 0 , t) denotes the total time length of synchronous and T + (t 0 , t) denotes the total time length of asynchronous of the switched systems. Moreover, the system states starting from Proof. Using a similar method to eorem 1, we have By equations (40a), (40b), (40h), and (40i), it follows that Mathematical Problems in Engineering By equation (16), it is easy to get and F fq ΦD q − B q ≽0. us, M q , A fp , and A fq are Metzler matrices. Together with B q , E fp , E fq ≽0, and x(t 0 )≽0, it derives that the system equation (8) with ω(t) ≠ 0 is positive. Multiplying both sides of the inequality with It follows from Definition 2 and equation (41b) that Furthermore, Next, we present the invariance of the state. By equation . Using method of the eorem 1, we have that the states will be kept in the set ( ∪ p Λ(v (p) , 1 + cχ)) ⋃( ∪ p,q Λ(v (p,q) , 1 + cχ)) for any initial conditions starting from ∪ i (v (i) , 1 + cχ). In the following, the Zeno behavior is analyzed to avoid the continuous sampling and updating of the state under the Mathematical Problems in Engineering event-triggering mechanism. Under the event-triggering mechanism equation (5) . Without generality, assume that _ w(t) exists and is bounded. en, there exists a positive constant [ such that ‖ _ e y (t) ′ . is means that t > t l ′ . erefore, the Zeno behavior can be avoided. Remark 4. 
Remark 4. The condition in equation (39) is introduced to ensure the boundedness of the state under the saturation restriction. From equation (39), one can find that the L_1 norm of the external disturbance w(t) is bounded. Assume instead that the L_1 norm is unbounded. From the last part of the proof of Theorem 2, it is not hard to find that the boundedness of the state is then invalid. In such a case, the saturation phenomenon cannot be handled. In future work, it is interesting to investigate the filter issue of positive systems with unbounded disturbance. Remark 5. In [16], a filter was designed for continuous-time linear systems with sensor saturation. Four aspects should be pointed out: (i) an event-triggering mechanism is employed in this paper to design the filter of positive switched systems, while a time-triggering mechanism was used in [16]; (ii) state saturation is handled in this paper, and the sensor saturation control approach in [16] cannot be developed for dealing with the state saturation problem; (iii) the filter gain design in this paper is different from the one in [16]; and (iv) positive systems have distinct research approaches from the general systems in [16]. Indeed, linear copositive Lyapunov functions and linear programming are employed in this paper, while Lacerda [16] chose a quadratic Lyapunov function and a different computation method. Remark 6. The time-triggered filter strategy was proposed for positive systems in [41-43]. In most cases, it is hard to achieve continuous sampling owing to the limited ability of elements and the high sampling cost. The event-triggered filter is more practicable than the time-triggered filter. Thus, Theorems 1 and 2 design a class of event-triggered filters for positive switched systems. Owing to the positivity requirement, the event-triggered filter of positive systems is more complex than that of general systems. For general systems, the error term can be easily transformed into the state term. However, such a strategy fails for positive systems. Thus, an interval estimation approach is introduced to transform the original system into an interval system. Finally, the positivity of the original system can be achieved by guaranteeing the positivity of the lower bound of the interval system. Next, we present the invariance of the state. By equation (98), we have V(∞) ≤ 1 + c Σ_{s=k_0}^{∞} ‖ω(s)‖_1 ≤ 1 + cχ. For ω(k) = 0, Λ(v^(i), 1) ⊆ L(H_p). Using a similar method to Theorem 3, we have that the states will be kept in the set (∪_p Λ(v^(p), 1 + cχ)) ∪ (∪_{p,q} Λ(v^(p,q), 1 + cχ)) for any initial conditions starting from ∪_i Λ(v^(i), 1 + cχ). Remark 7. This paper proposes a unified event-triggered asynchronous filter framework for positive switched systems with state saturation. It should be pointed out that there are still many open issues for positive systems. On the one hand, it is significant to present a low-computation-burden approach for dealing with the state saturation of hybrid positive systems. This paper employs the convex polytopic approach for dealing with the state saturation term. Thus, the saturation term is transformed into a polytopic form, which depends on 2^n vertex matrices. When the dimension of the system is high, the corresponding computation is complex. How to present a simpler approach for reducing the computation burden is interesting. On the other hand, some novel triggering conditions can be introduced to improve the triggering mechanism in this paper. Up to now, the static event-triggering mechanism has usually been used for positive systems.
However, it has been verified that a dynamic event-triggered mechanism is superior to the static one. It is interesting to investigate the dynamic event-triggered strategy of positive systems in future work. Remark 8. In [25], the event-triggered filter of positive systems with state saturation was investigated. This paper considers an asynchronous event-triggered filter of positive switched systems. Multiple copositive Lyapunov functions are constructed in this paper, and the ℓ_1-gain stability of positive switched systems is achieved. Compared with positive systems, the ℓ_1-gain stability of positive switched systems is more complex. Then, an asynchronous filter is established in terms of linear programming, which is more practical than the synchronous filter in [25]. In brief, the filter in [25] is a special case of the filter designed in this paper. Additionally, a continuous-time asynchronous filter framework is also established in this paper. Illustrative Examples In [45,46], a state space-based water system was constructed. It should be pointed out that the water flow and the capacity of the pumps are non-negative. Therefore, it is more suitable to model water systems via positive systems theory. Moreover, the volume of water in the water system is limited and subject to a certain value. This is a typical saturation phenomenon. Figure 1 shows the diagram of a water supply network. The pumps are used to describe the switching rule. The water system in Figure 1 is modeled by the system equation (1), where x(t) (or x(k)) denotes the water volume in the tank at time t (or the k-th sample instant), ω(t) (or ω(k)) represents the wasted water from the pumps and valves, y(t) (or y(k)) is the value measured by the sensor, and z(t) (or z(k)) is the output to be estimated. Figure 2 shows the states of the plant and the filter, where V, V_k, q_in, and q_out represent the maximum volume, the volume at the k-th sample instant, inflow quantities, and outflow quantities in the tank, respectively. Two examples are given to illustrate the effectiveness of the proposed design. Example 2. Consider the system equation (8) with the given system matrices. Given constants μ_1 = 0.7, μ_2 = 1.9, λ = 1.1, and c = 0.2, the event-triggering threshold is β = 0.1, and the external disturbance is as specified. The asynchronous switching signal and the error signal e(k) between z(k) and z_f(k) are given in Figures 8 and 9, respectively. The external disturbance input signal is given in Figure 10. Figure 11 shows the event-triggering release interval. Conclusions This paper has proposed an event-triggered asynchronous filter for positive switched systems with state saturation. A polytopic approach is introduced to deal with the state saturation term. An interval estimation approach is presented to transform the original system into an interval system. By using a matrix decomposition technique, the filter gain matrices are designed in terms of linear programming. It is shown that copositive Lyapunov functions and linear programming are more effective for solving the corresponding issues of positive systems. The obtained design is also developed for the discrete-time case. Data Availability The data used to support the findings of this study are included within the article. Conflicts of Interest The authors declare that they have no conflicts of interest.
6,564.6
2022-06-21T00:00:00.000
[ "Engineering", "Mathematics" ]
The Impact of New Urbanization Policy on In Situ Urbanization—Policy Test Based on Difference-in-Differences Model : Compared with traditional urbanization, new urbanization is more closely aligned with China’s basic national conditions and reflects the basic goal of sustainable development. As the main method of new urbanization, in situ urbanization can make up for the shortcomings of traditional urbanization. The establishment of national new urbanization pilot areas is an important element of the new urbanization policy. This paper tests the policy effect of the National New-type Urbanization Plan (2014–2020) on in situ urban development through the establishment of pilot areas. We found the following: (1) In the central region, the establishment of new urbanization pilot areas has not played a significant role in promoting the process of in situ urbanization. By dividing the central cities into Yangtze River and non–Yangtze River Economic Belt areas, we also find that the effect of the new urbanization policy is not obvious, for these cities are not located in the Yangtze River Economic Belt. (2) The central cities located in the Yangtze River Economic Belt have seen significant policy effects due to their advantages in transportation, resources, industry, labor, etc. The establishment of new urbanization pilot areas has a significant promoting effect on the process of in situ urbanization. Introduction Urbanization is a natural and historical process of non-agricultural industries gathering in urban areas along with the development of industrialization. It is an objective trend of human societal development and an important symbol of national modernization [1]. At present, China is in the late stage of industrialization and accelerated development of urbanization. However, as China is still in the social transformation period, its urban public resources, services, and other supporting facilities are not perfect and do not match the speed of urbanization [2]. This reduces the quality of citizens' lives and makes "urban disease" and other problems more serious. According to the National New-type Urbanization Plan (2014-2020), the urbanization rate of permanent residents in China should have reached 60% by 2020, with the urban population exceeding 800 million. About 100 million rural and other migrant people were expected to be settled in cities and towns [3]. In fact, our cities' resources, energy, and environmental carrying capacity cannot accommodate the large population [4]. Since the reform and opening up, the rural economy has developed by leaps and bounds, and the level of agricultural modernization and mechanization has been greatly improved, which provides a powerful economic and spatial condition for rural urbanization [5]. In this case, it becomes the inevitable choice of new urbanization to change the original mode and implement the in situ urbanization mode, which makes farmers work in native lands [6,7]. Compared with traditional urbanization, new urbanization is more aligned with China's basic national conditions and reflects the basic goal of sustainable development [8]. In situ urbanization is the main way that new urbanization can make up for the shortcomings of traditional urbanization [9]. First, it makes up for the ecological defects. New urbanization focuses on creating a livable environment in harmony between humans and nature [10], which is fundamentally different from traditional urban forms. 
The development of in situ urbanization can help overcome the ecological defects of traditional cities and lay a foundation for building a beautiful China. Second, it can improve the relationship between urban and rural areas. In the context of balanced urban and rural development, the traditional urbanization model is not conducive to narrowing the gap between urban and rural areas but will widen the gap, forming an urban-rural dichotomy [11]. On the premise of activating rural resources and improving farmers' quality of life, new urbanization plays a positive role in improving urban-rural relations [9]. Third, it helps in alleviating urban diseases, such as traffic congestion and serious environmental pollution. With the continuous speeding up of urban development, such urban diseases bring great troubles to people living in cities and seriously affect sustainable development [12,13]. It is obviously unscientific to pay one-sided attention to the expansion of urban scale and economic development. The excessive agglomeration of large cities will gradually increase the pressure of cities. In order to alleviate the urban pressure, local governments should optimize urban system from the perspective of balancing urban and rural development [14]. In situ urbanization is conducive to the development of small towns and has a certain effect on relieving the pressure of large cities [15]. Next, this paper evaluates the impact of the National New-type Urbanization Plan (2014-2020) on the in situ urbanization process in China. Specifically, our objectives were to (1) quantify the in situ urbanization rate of 87 prefecture-level cities in the central region; (2) test the policy effect of the plan on the in situ urbanization process by using the difference-in-differences (DID) model; and (3) put forward appropriate policy suggestions to effectively promote in situ urbanization. The structure of the remaining sections is as follows: Section 2 is a literature review; Section 3 presents the model and descriptions of variables; Section 4 provides an analysis of in situ urbanization; Section 5 outlines the empirical test; and Section 6 gives the conclusions, limitations, and implications. Literature Review At present, there are relatively few studies on in situ urbanization. The concept of in situ urbanization in China was first proposed by Zhu [16], who defined it as the transformation of a large number of rural populations to towns without a large-scale spatial transfer. Since then, scholars have conduct supplementary studies on the definition of in situ urbanization from the perspectives of spatial scope and conceptual connotations [17][18][19][20][21]. From the perspective of spatial scope, "in situ" is mainly relative to long-distance urbanization, but there are different opinions on the size of "in situ" scope at present. Some scholars believe that when the rural economy develops to a certain extent, the process of in situ urbanization takes rural areas where farmers live as the core, and changes the original natural way of life in rural areas by improving infrastructure construction, so that farmers will no longer move blindly to other places [22]. Some authors thought that if the county is taken as the smallest regional unit of spatial transfer, large populations congregating in towns at the county scale can be regarded as a generalized process of in situ urbanization [23]. 
In addition, some authors distinguished between in situ and nearby urbanization, believing that the former refers to the close migration of rural residents to towns near their homes, while the latter is centered on prefecture-level cities and countylevel towns [24]. Currently, the research on in situ urbanization mainly focuses on three aspects: the mode, influencing factors, and development path. Regarding the research on the mode, Qian [25] proposed three modes of urbanization from the perspective of spatial scope. The first and second are to implement in situ urbanization of the agricultural population by developing a county economy and promoting the rise of strong towns, and the third is to promote urban-rural integration and in situ urbanization of the whole region with county-level cities as the center. At the same time, as an important model of new urbanization [26], an in-depth exploration of the influencing factors of in situ urbanization can better promote the construction of new urbanization. As the main aspect of in situ urbanization, rural residents' attributes have an important effect. Young and middle-aged farmers, rural residents with a junior or senior high school education, and married rural residents have a strong desire for local urbanization [27,28]. For the agricultural transfer population, in addition to its own characteristics, external factors such as an urban social network, basic public services, urban price level, and improved living conditions also have a significant impact on the willingness for in situ urbanization [29,30]. In addition, the willingness of the migrant agricultural population to urbanize locally also shows regional differences, and is stronger in the central and western regions [31,32]. Although local urbanization has positive effects, such as increasing farmers' incomes and reducing the cost of citizenship, it still faces many difficulties. Therefore, many scholars have put forward development paths and promotion strategies for promoting in situ urbanization further. Zhu (2017) pointed out some problems in China's urbanization and analyzed the relationship between in situ and new urbanization [21]. He believed that the former was the realistic choice for new urbanization at present. Pan et al. (2016). used a DEA model to study the sustainable development of in situ urbanization in Youyang Autonomous County and provided a development path for it [33]. Drawing from ongoing trends and policy potential taken from the New Urbanization Plan (NUP), Xu quantitatively evaluated the level of sustainability of the process in 20 Chinese urban agglomerations and provided some positive suggestions achieving sustainable new urbanization [34]. Although both the government and scholars are aware of the role of in situ urbanization in promoting sustainable urbanization, scholars' attention has not changed from the traditional perspective. They still have evaluated only the implementation and effectiveness of NUP [35][36][37][38] and believe that it presents an opportunity for a new roadmap for the orderly conversion of rural migrants into urban residents and optimizing the patterns of urbanization. Other scholars have differentiated and analyzed the impact of the new policy since the launch of the NUP in 2014. However, what is the driving mechanism of in situ urbanization? Compared with other urbanization patterns, what role does the new urbanization policy play in in situ urbanization? 
What is the reference significance of enacting effective policy for other regions? The key to answer these questions lies in determining the impact of the new policy on in situ urbanization that has not been addressed. Moreover, by combining the above studies related to local urbanization, it can be found that most of the research still explores the mode, influencing factors, and development path of local urbanization qualitatively, with little research using quantitative or empirical tests. In this paper, we selected the central region as the study area, calculated the in situ urbanization rate of 87 prefecturelevel cities based on the connotation of in situ urbanization and then tested the policy effect of the New Urbanization Plan on the process by using the difference-in-differences (DID) model. Our research has two theoretical contributions: First, the in situ urbanization rate is calculated in the form of non-agricultural employment. Second, an empirical model is used to analyze the effect of the in situ urbanization policy. Model National New Urbanization Pilot Areas represent an important implementation of the National New-type Urbanization Plan (2014-2020). In order to test the effect of the plan in central China, this paper compares the difference between the in situ urbanization level before and after the pilot areas are established to explore the impact of the new policy. At the same time, we consider that before and after the pilot area is established, there are many other factors that will affect the level of in situ urbanization. In addition, other policies issued during the same period may also be beneficial to cities that have not established pilot areas. These factors will undoubtedly have an important impact on the process of in situ urbanization and affect the results of policy evaluation. Therefore, this paper draws on the idea of difference-in-differences [39] and designs a quasi-natural experiment, taking prefecture-level cities (autonomous prefectures) where the policy is implemented as the experimental group and cities that where the policy has not been implemented as the control group. Then, by calculating the double difference in the two groups between before and after the pilot area was established, the net impact of the pilot area on the development of in situ urbanization can be effectively tested. Among the 87 prefecture-level cities (autonomous prefectures) in central China studied in this paper, as of 2018, 52 of them were approved as national new urbanization pilot areas. This provides a good quasi-natural experiment for research. The difference-in-differences method is applicable. Specifically, in the sample of this study, 52 prefecture-level cities were approved as national new urbanization pilot areas, and these constitute the treatment group, while the remaining 35 prefecture-level cities that were not approved naturally constitute the control group. At the same time, the State Council approved new urbanization pilot areas by stages: 40 prefecture-level cities including Anhui Province, Wuhan, and Changsha in 2015 and 12 prefecture-level cities including Xiangtan, Pingxiang, and Ganzhou in 2016. Taking into account the time difference in establishing pilot areas in various regions, this paper constructs the following time-varying differencein-differences (DID) model to test the net effect of new urbanization pilot areas on in situ urbanization. 
The model is set as follows:

IU_it = α_0 + α_1 DID_it + α_2 X_it + μ_i + δ_t + ε_it,

where IU_it is the in situ urbanization rate of city i in year t, X_it denotes the control variables, μ_i and δ_t are city and time fixed effects, ε_it is the random error term, i indexes cities, and t represents time; DID is the core explanatory variable, denoted by the interaction of the pilot-city dummy and the post-establishment dummy. Variable Descriptions This paper focuses on the role of new urbanization pilot areas in in situ urbanization and analyzes the regional differences of the pilot areas. In addition, considering that other factors will also affect in situ urbanization, we introduce some control variables. In our research, the explained variable is the in situ urbanization level, which is represented by the in situ urbanization rate. The in situ urbanization rate is calculated according to Equation (1) in Section 3. The core explanatory variable is the policy dummy variable (DID). New urbanization pilot areas are not established at the same time but are scattered at different times. According to the time that pilot areas are established, we set all the years before establishment as 0 (DID = 0) and all the years after establishment as 1 (DID = 1). In 2015 and 2016, 40 and 12 prefecture-level cities in central China, respectively, were approved as new urbanization pilot areas, for a total of 52 cities. These constituted the experimental group, while the remaining 35 unapproved prefecture-level cities constituted the control group. The control variables we selected were as follows: (1) Urban-rural income gap (GAP). This paper used the ratio of per capita disposable income of urban residents and per capita net income of rural residents to represent the urban-rural income gap. In order to eliminate the impact of price factors, we adjusted these two based on the consumer price indices in the prefecture-level cities from 2009 to 2018. The estimated coefficient of this variable is expected to be negative. (2) Educational resources (EDU). The allocation system of public educational resources has further intensified the polarization of population and economic resources in central China [40]. The flow of human resources caused by the allocation of educational resources has further emptied necessary intellectual capital in less developed regions of central China, which is not conducive to in situ urbanization in those regions. We chose the number of full-time college teachers as the proxy variable of educational resources (EDU), and the estimated coefficient of this variable is expected to be negative. (3) Government support (GOV). This paper uses government fiscal expenditure in the agriculture sector to represent support for rural economic development. We hold the view that the more government support in rural areas there is, the more beneficial it will be for rural residents to obtain local employment, which plays a role in promoting in situ urbanization. The estimated coefficient of this variable is expected to be positive. (4) Traffic accessibility (TRA). Traffic accessibility will greatly accelerate the movement of the population between urban and rural areas. Developed and convenient traffic conditions between urban and rural areas will strongly attract the migration of residents in rural areas. From the perspective of in situ urbanization, this is conducive to the flow of rural residents to urban areas but not to the process of in situ urbanization to some extent. We use the urban road area ratio (the ratio of total urban road acreage to total acreage of the city) to represent traffic convenience (TRA). (5) Medical level (MED). A higher medical level in cities strongly attracts rural residents to move, which is not conducive to the in situ transfer of rural labor.
In this paper, the average number of medical practitioners per thousand residents in each prefecture-level city is used to represent the medical level (MED). Considering the possible fluctuations and heteroscedasticity in the samples, we treated all variables logarithmically. All the data of explanatory variables and control variables were collected from the Statistical Yearbook of Chinese Cities and the statistical yearbooks of the six provinces. Descriptive statistics of the above variables are shown in Table 1. Steps The steps of the empirical analysis are shown as a flowchart in Scheme 1. Scheme 1. Steps of empirical analysis. Step 1. Calculate the in situ urbanization level based on the principle of non-agricultural employment. Step 2. Test the common trend hypothesis. If the hypothesis holds, then do the balance test. If not, introduce propensity score matching (PSM) to solve the problem that samples cannot meet the common trend and randomness. Step 3. Use the nearest neighbor matching method to carry out a balance test on the covariant. Step 4. Use DID method to test the policy effect. Step 5. Adopt counterfactual test and single difference test to strengthen the robustness of results. Calculation of In Situ Urbanization Rate in Central China The central region includes six provinces, Hubei, Hunan, Henan, Anhui, Jiangxi, and Shanxi, covering a land area of 1.03 million km 2 [41]. It is the main area with a large population, high density, and low economic development, undertaking the rise of central China [42]. The central regions rely on 10.7% of the country's land to support 26.51% of the population and create about 21.69% of the gross domestic product (GDP) [43]. In 2019, the urbanization rate of the six central provinces was 57.37%, still lower than the national average of 60.60% [44]. Also, there are still some problems in the central regions, such as unbalanced development between urban and rural areas, large income gap between urban and rural residents, and insufficient industrial promotion in small and medium-sized cities [42]. We selected 87 prefecture-level cities in central China as the research areas. Currently, there are few studies on the calculation of the in situ urbanization rate. Most of them still focus on the qualitative analysis of in situ urbanization. This study calculates the in situ urbanization rate based on the connotation of in situ urbanization. First, the group emphasized by in situ urbanization is the rural population. Non-agricultural employment is the core connotation of in situ urbanization. Accordingly, we introduced rural employees into the calculation. The calculation formula for the in situ urbanization rate is as follows: where IU is the in situ urbanization rate, RNEP is the population of rural non-agricultural employment, and REP is the total population of rural employment. Through Equation (1), this paper calculates the in situ urbanization rate of 87 prefecture-level cities (autonomous prefectures) in central China from 2009 to 2018. Due to the large number of cities in the six central provinces, we first compared the rates of capital cities in the six central provinces (as shown in Figure 1). From the horizontal comparison of the in situ urbanization rates in capital cities, the rate in Wuhan was slightly lower than that in Zhengzhou in 2009 (54.76% and 55.24%, respectively). 
Since 2009, Wuhan has been in the leading position among the six capital cities, and its in situ urbanization rate is significantly higher than that of any other capital city in central China. The rates of Zhengzhou, Hefei, and Changsha are relatively close in each period. Figure 1 shows that the three curves interlace each other and maintain stable growth without a big gap. In addition, the in situ urbanization rate of Taiyuan is at a low level among the six capital cities. We can see that the development of in situ urbanization is slow without an obvious improvement in Taiyuan, and the gap with the other capital cities is gradually widening. Also, Nanchang's rate was the last among the six capital cities in the early stage, but it developed rapidly and kept narrowing the gap with the other cities. In particular, during 2009-2011 and 2017-2018, there was a substantial increase in its rate. Spatial Pattern of Local Urbanization in Central China In order to show the spatial changes of in situ urbanization in the central regions intuitively, we used ArcGIS software to visually display the changes in 87 prefecture-level cities (Figure 2). In Figure 2, the in situ urbanization rate is divided into five levels: low (less than 44.86%), medium-low (44.86-51.19%), medium (51.19-58.12%), medium-high (58.12-68.75%), and high (greater than 68.75%). The basis for classification is as follows: We used the natural break method of ArcGIS to divide the in situ urbanization rate from high to low into five levels each year, then we obtained the new classification intervals by weighting the critical values of the five levels in each year. Finally, we adopted the manual method to divide the rate into five grades based on the new classification intervals. It can be seen that the in situ urbanization rate of most cities in central China was at a low or medium-low level in 2009, especially in the surrounding areas with less traffic accessibility. Among these cities, 41 were at the low level and 23 were at the medium-low level. The five cities at the middle-high level (Huangshi, Suizhou, Jingdezhen, Tongling, and Huainan) are distributed in a dotted pattern. Most cities at the middle level of urbanization are adjacent to the middle-high areas, which indicates that regions at the middle-high level played a specific leading role in the surrounding areas during this period. However, cities at the middle-high level have not yet fully developed, so no city in the central region has reached a high level of in situ urbanization. By comparison, the situation of in situ urbanization in the central region improved overall in 2018, and the number of cities at the low level reduced significantly (41 cities in 2009, 26 cities in 2018). The number of cities at the middle-low level remained the same as in 2009 (23 cities), and the number at the middle-high level increased significantly (5 cities in 2009, 21 cities in 2018). At the same time, two cities with a high level of in situ urbanization emerged. The areas at the middle-high level are no longer star-shaped but show a continuous strip-shaped distribution, which forms a circular urbanization structure with Wuhan, Changsha, Nanchang, Hefei, and Zhengzhou as the core cities. Moreover, the outer circle is expanding, showing apparent hierarchical and diffusive effects on the spatial structure of in situ urbanization. Specifically, the diffusion effect of Hubei Province is the most obvious, forming many cities centered on Wuhan at the middle-high level.
Among these cities, Jingmen and Ezhou crossed from the low level in 2009 directly to the middle-high level in 2018. In contrast, there some cities were still at a low level of in situ urbanization in northern Shanxi and southwestern Hunan Province in 2018 (6 and 7 cities, respectively). Generally, problems such as poor public services, inconvenient transportation, and insufficient impetus for economic development exist in these cities. Influence of New Urbanization Pilot Areas on In Situ Urbanization The prerequisite for the difference-in-differences method to be effective is to establish the common trend assumption. If the hypothesis of common trends holds, the impact of new urbanization pilot areas on in situ urbanization will only occur after the pilot area is established; before that, there is no significant difference in the change trend of in situ urbanization level between treatment and control groups. In order to test the common trend hypothesis, we used Stata 15.0 software to draw a diagram. The common trend of in situ urbanization level is shown in Figure 3. It can be seen from Figure 3 that after the policy is implemented (after period t), there is a significant difference in the changes of in situ urbanization level between treatment and control groups. However, before the policy is implemented (before period t), there is a significant difference between the coefficient and 0. This indicates that there was a significant difference in the change of level between treatment and control groups. The common trend hypothesis does not hold. Therefore, we introduced propensity score matching (PSM) [45] to solve the problem that samples cannot meet the common trend and randomness. First, we set the variables that have an impact on the in situ urbanization level as covariant, and we selected the control variables as the matching covariant. In this section, the nearest neighbor matching method is adopted to conduct a balance test on the covariant, and the test results are shown in Table 2. According to the test results, the standardized deviation of the single covariant is less than 10% after matching, which indicates that the single covariate has better balance after matching. The results of the t-test also show that the hypothesis that there would be no systematic difference between treatment and control groups could not be rejected. This further proves that there was no systematic difference between treatment and control groups after matching. Table 3 shows the test of overall balance. From the test results in Table 3, we find that the mean value of the standardized deviation was significantly reduced after matching. For the remaining 783 samples after matching, we performed regression on the matched samples to obtain the results in columns 1 and 2 of Table 4, showing the estimated results without and with adding control variables, respectively. It can be found that whether control variables are added, the estimated coefficient of DID is positive but not significant, which indicates that for the central region, establishing new urbanization pilot areas does not significantly promote the in situ urbanization process. We guessed that this might be caused by the heterogeneity among different regions, so we divided the 87 prefecture-level cities into the Yangtze River and Non-Yangtze River Economic Belt areas. The Yangtze River area includes four provinces (Hunan, Hubei, Jiangxi, and Anhui), and the non-Yangtze River area includes two provinces (Shanxi and Henan). 
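Purely as an illustration of the matching-then-DID workflow described above, a minimal Python sketch follows; the file name, column names, logit specification, and clustering choice are assumptions made for illustration and are not the paper's exact procedure:

import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# df: city-year panel with assumed columns city, year, iu (in situ urbanization
# rate), treat (pilot city), pilot_year, and logged controls lngap, lnedu,
# lngov, lntra, lnmed.
df = pd.read_csv("central_china_panel.csv")
df["did"] = ((df["treat"] == 1) & (df["year"] >= df["pilot_year"])).astype(int)
covars = ["lngap", "lnedu", "lngov", "lntra", "lnmed"]

# 1) Propensity scores and 1-nearest-neighbour matching on the covariates
#    (one early-period row per city; assumes no missing covariate values).
base = df.sort_values("year").groupby("city").first().reset_index()
ps = LogisticRegression(max_iter=1000).fit(base[covars], base["treat"])
base["pscore"] = ps.predict_proba(base[covars])[:, 1]
treated = base[base["treat"] == 1]
controls = base[base["treat"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(controls[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_cities = set(treated["city"]) | set(controls.iloc[idx.ravel()]["city"])
sample = df[df["city"].isin(matched_cities)]

# 2) Time-varying DID with city and year fixed effects on the matched sample.
formula = "iu ~ did + " + " + ".join(covars) + " + C(city) + C(year)"
res = smf.ols(formula, data=sample).fit(
    cov_type="cluster", cov_kwds={"groups": sample["city"]})
print(res.params["did"])

# 3) Placebo-style check: pretend the pilot started two years earlier.
sample = sample.assign(
    did_placebo=((sample["treat"] == 1)
                 & (sample["year"] >= sample["pilot_year"] - 2)).astype(int))

This sketch uses ordinary least squares with dummy-variable fixed effects and city-clustered standard errors as one common way to estimate such a specification; the paper's own estimation details may differ.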
In Table 4, columns 3 and 4 show the estimated results of areas in the Yangtze River Economic Belt, and columns 5 and 6 show the estimated results of areas outside the Yangtze River Economic Belt; and columns 3 and 5 show the estimated results with the addition of control variables. By analyzing the results of different regions, it can be seen that the coefficient of DID in the non-Yangtze River area is similar to the result in the overall central area (coefficient is positive and not significant). This indicates that establishing new urbanization pilot areas outside the Yangtze River Economic Belt is not significant in promoting the in situ urbanization process. However, the DID coefficients of the regressions in columns 3 and 4 in the Yangtze River Economic Belt are positive and significant at the 1% level. This indicates that establishing new urbanization pilot areas in the Yangtze River Economic Belt has a positive stimulating role on the process of in situ urbanization. Note: *, ** and, *** indicate statistical significance at the 10%, 5%, and 1% level, respectively. Columns 1, 2, 4, 5, 7, and 8 show estimated results using difference-in-differences (DID) method. Columns 3, 6, and 9 show estimated results using ordinary least squares (OLS) method for comparison. Then, we analyzed the results of the control variables. As shown in Table 4, we found that the symbols of each control variable in the regressions of columns 2, 4, and 6 are all the same. The coefficients of the rural-urban income gap (GAP) and medical level (MED) are significantly negative. The widening income gap between urban and rural areas is not conducive to the in situ employment of the rural labor force, and the larger income gap will make rural laborers more inclined to seek employment in cities. This is similar to Wang's research [46]. He used the instrumental variable method to conduct an empirical analysis, and believed that the widening of the urban-rural income gap hinders the process of urbanization. Regarding the medical level, most high-quality medical resources and advanced medical technology are concentrated in cities rather than in rural areas. Compared with rural areas, medical conditions have an absolute advantage in cities. This greatly attracts rural laborers to move to cities, which explains why the coefficient of medical level (MED) is significantly negative. Our results are contrary to Cheng's views [47]. Using a random effects model, he found that an improved medical level promotes the process of urbanization. For government support (GOV), the coefficient of this variable is significantly positive at 1% level. Strengthening government support in rural areas strongly improves the living conditions of rural residents and increases their income, which explains why the government's support for rural areas has played a significant role in promoting in situ urbanization. This finding is consistent with our expectations, and with many studies. For example, Ma (2014) believed that the government leads the process of urbanization and explained through a structural equation model that government support directly accelerates the speed of urbanization [48]. For traffic accessibility (TRA), this variable is also positive, but it is not significant in non-Yangtze Economic Belt areas. Compared with existing studies, our results are similar. 
Zhang and Liu [49,50], using double fixed-effects and generalized method of moments (GMM) models, found that less urbanized areas need better infrastructure and transportation conditions, and the convenience of transportation will accelerate the urbanization process. Finally, we found that the variable coefficient of educational resources (EDU) is positive but not significant, which is not consistent with our expectation. However, Li's (2013) study, using a Solow model, found that educational resources is the main influencing factor of the urbanization process [51]. The explanation given in this paper is that this variable is expressed by higher educational resources, and there is a huge difference in the impact of higher education and primary education resources on population migration. Higher education resources have no apparent influence on most people to choose rural or urban employment. Therefore, the impact of educational resources on in situ urbanization is not significant. Robustness Test As shown in Table 4, the estimated results obtained by adding control variables have a certain reliability. In order to further test the robustness of the results in the Yangtze River Economic Belt region with significant policy effects, we referred to existing studies [52,53] that conducted counterfactual tests by changing the policy implementation time. In addition to establishing new urbanization pilot areas, other policies or random factors could also lead to regional economic differences. This difference is not related to the establishment of pilot areas and ultimately leads to the failure of previous conclusions. In order to rule out the influence of these factors, we assumed that each region would set up new urbanization pilot areas two or three years in advance. If the policy variable coefficient  is still significantly positive, then the promoting role of in situ urbanization is likely to come from other random factors or policies, rather than the establishment of pilot areas. If  is not significant at this time, it indicates that improving in situ urbanization is affected by the establishment of new pilot areas. In Table 5, the results in columns 1 and 2 show that the time for establishing a new pilot area is assumed to be two years in advance. The results shown in columns 3 and 4 suggest that the time is assumed to be three years in advance. From the estimated results of columns 1-4, it can be seen that whether the policy is implemented two or three years in advance, the values of coefficient  are not significant. This reveals that for the cities in the Yangtze River Economic Belt, the improved level of in situ urbanization is not caused by other factors but is promoted by the establishment of pilot areas. Note: *, and *** indicate statistical significance at the 10% and 1% level, respectively. Besides constructing a counterfactual test to enhance the robustness of the results, we also adopted the single difference to test the policy effect by controlling only the regional effect but not the time effect. The estimated results are shown in Table 5, columns 5 and 6. After controlling other variables and regional effects, the estimated coefficients of the policy variable (DID) are positive and significant at 1%, which also indicates that establishing new urbanization pilot areas in the Yangtze River Economic Belt has a certain promoting effect on in situ urbanization. 
However, through further analysis, we found that the coefficient of the policy variable estimated by the single difference method is significantly larger than the difference-in-differences method in Table 3, which means that the results estimated by the former overestimate the policy effect. Therefore, we believe that the conclusion obtained by using the difference-in-differences method has certain credibility. Conclusions and Limitations Taking the panel data of 87 prefecture-level cities in central China from 2009 to 2018 as samples, this paper tested the impact of establishing new urbanization pilot areas on in situ urbanization through the time-varying difference-in-differences model, and enhanced the robustness of the estimated results with the counterfactual and single difference tests. The following conclusions were drawn from the above studies: (1) In the central region, the establishment of new urbanization pilot areas has not played a significant role in promoting the process of in situ urbanization. By dividing the central cities into the Yangtze River and non-Yangtze River Economic Belt areas, we also find that the effect of the new urbanization policy is not obvious for cities outside the Economic Belt. (2) The central cities located in the Yangtze River Economic Belt have significant policy effects due to their advantages in transportation, resources, industry, labor, etc. The establishment of new urbanization pilot areas has a significant promoting effect on the process of in situ urbanization. (3) Control variables such as urban-rural income gap, medical level, government support, and traffic accessibility have different influences on local urbanization, among which the first two hinder the process and the second two have a positive effect on the process of in situ urbanization. The main limitation of this research is the time limitation of the new urbanization policy. This policy was implemented from 2014 to 2020. It is impossible to analyze its impact on in situ urbanization after 2020. We do not know whether this policy has a lagging effect on the process after 2020. In addition, another limitation is that there may have been omissions or unknown factors in the selection of control variables. In the next step, we will analyze the level of in situ urbanization in the eastern and western regions and study its development mechanism from an industry-led perspective. Then we will look for in situ urbanization development models that suit different regions of our country, which will effectively improve some existing problems in the current stage. Implications Based on the above analysis and conclusions, we offer some suggestions: (1) Promote rural industrialization and expand employment channels. The development of in situ urbanization should be based on employment and industry, combining residential construction, land use, and industrial development. The goal of residential construction is to solve the problem of wasted land resources and public infrastructure caused by the scattered settlement in traditional villages. Residential construction in the process of in situ urbanization should improve infrastructure, community planning, and housing construction, to help farmers enjoy a life with a good ecological environment and complete infrastructure. The industrialization of agriculture is conducive to the connection between scattered small-scale and socialized large-scale production. 
The key to agricultural industrialization is to be market-oriented, use advanced science and technology, develop characteristic industries, and optimize rural production factors, which can realize mechanization and specialization in agricultural production. In addition, it is necessary to give full play to the supporting role of industrial development for in situ urbanization. Actions such as accelerating industrial agglomeration, combining local resource characteristics, and rationally determining industrial layouts will also provide farmers with more employment opportunities. (2) Promote institutional reform to ensure sustainable in situ urbanization. From the perspective of rural economic development, the urban-rural dual structure has become a constraint to the current development of in situ urbanization. Therefore, it is necessary to reform the existing household registration, land, and social security systems to provide an institutional guarantee for the sustainable and healthy development of in situ urbanization. In the dual household registration system, there are many inequalities between rural and urban residents, so it is necessary to register residents in a unified way that reflects the principles of fairness and justice. Regarding the reform of the land system, local governments should accelerate the confirmation of land ownership and actively explore new forms of land circulation. Greater coverage of the social security system and equal infrastructure construction and services between urban and rural areas will create a healthy and sustainable environment for in situ urbanization. (3) Build a beautiful countryside and create a livable environment. Against the background of "Beautiful China," the difference between in situ and traditional urbanization is that the former can create a better ecological environment and focuses on the balanced development of economic and ecological benefits. The construction of a beautiful countryside is an important way to realize agricultural modernization and in situ urbanization. The construction of a beautiful countryside should be accomplished in three aspects. First is to protect the ecological environment and explore the natural beauty. In the process of community construction, attention should be paid to protecting the original rural ecological environment and organically integrating pastoral scenery and rural charm. Second, we should create a good living environment through reasonable design and layout. Narrowing the urban-rural gap of infrastructure and public services and strengthening the management of new communities are significant initiatives. Third is to pay attention to the construction of cultural towns. In addition to creating a good natural environment, we should pay attention to enriching farmers' spiritual lives and improving their cultural and moral quality. All in all, the traditional urbanization model can no longer meet the needs of current urbanization development. Against the background of rural economic development, the advantages of in situ urbanization, as the main approach and development trend of new urbanization, are gradually highlighted. In view of the current situation, the time is not yet ripe for comprehensively promoting in situ urbanization; we must overcome difficulties and remove multiple obstacles in the process. The construction of in situ urbanization is a systematic project that needs a scientific and reasonable layout, emancipation of peasants' ideology, and reform of unreasonable systems.
8,953
2021-02-09T00:00:00.000
[ "Economics", "Environmental Science" ]
Rectangular partition for n-dimensional images with arbitrarily shaped rectilinear objects Partitioning two- or multidimensional polygons into rectangular and rectilinear components is a fundamental problem in computational geometry. Rectangular and rectilinear decomposition have multiple applications in various fields of the arts as well as the sciences, especially when dissecting information into smaller chunks for efficient analysis, manipulation, identification, storage, and retrieval is essential. This article presents three simple yet elegant solutions for splitting geometric shapes (particularly non-diagonal ones) into non-overlapping, rectangular sub-objects. Experimental results suggest that each proposed method can successfully divide n-dimensional rectilinear shapes, including those with holes, into rectangular components containing no background elements. The proposed methods underwent testing on a dataset of 13 binary images, each with 1 … 4 dimensions, and the most extensive image contained 4096 elements. The test session consisted of 5 runs where starting points for decomposition were randomized where applicable. In the worst case, two of the three methods could complete the task in under 40 ms, while this value for the third method was around 11 s. The success rate for all the algorithms was 100 %. For metrology and building design, rectangular partitioning can be suitable for splitting an object or a part thereof for more accurate measurements. Applications in data compression, dimensionality reduction, and component analysis could involve decomposing a complex shape or object into rectangles or cuboids and saving only their coordinates and dimensions instead of recording the whole geometry. In computer vision, rectangular partition techniques can be used for feature extraction and as part of other algorithms, such as skeletonization or thinning. Höschl and Flusser [6] mention rectangular decomposition as a tool to speed up feature calculation for describing and recognizing 3D shapes. In game design and simulations, rectangular partition or decomposition has potential applications in, for example, level design, artificial intelligence (navigation and pathfinding), or physics modeling. Four-dimensional rectangular partition could be used to study changes in buildings or other environments over time, or it might be utilized in games or simulations featuring interlinked 3D worlds.
Fig. 1. Results of rectangular partition: even a simple L-shaped 2D object (first shape from the left) can be decomposed in at least three unique ways (second, third, and fourth shapes from the left). Dashed lines denote sub-object edges.
Overview In this article, three solutions for rectangular partition are presented. The first is a solution for cases where the entire image or volume must be iterated. The second and third solutions are more generalizable; they offer random access to the image data and the option to process only part(s) of the image. The proposed methods are named Special, General I, and General II. The proposed methods use the highest common neighborhood denominator when determining which parts belong in which connected component: lines for 2D, faces for 3D objects, et cetera. The idea is like the concept of hypercubic meshes in Ref. [14]. This type of neighborhood connection is used to ensure there are no holes in the extracted shapes.
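As a hedged aside, this face-connectivity (edge neighbours in 2D, face neighbours in 3D) corresponds to the structuring element sketched below; SciPy is used only for illustration and is not one of the article's stated dependencies.

from scipy.ndimage import generate_binary_structure

# connectivity=1 keeps only neighbours that share a (hyper)face with the centre:
# 4-connectivity in 2D, 6-connectivity in 3D, 2n-connectivity in nD.
print(generate_binary_structure(2, 1).astype(int))
# [[0 1 0]
#  [1 1 1]
#  [0 1 0]]
print(generate_binary_structure(3, 1).sum())  # 7 = centre plus 6 face neighbours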
The output of each solution can be presented as an image of the same shape and size as the input, with a unique label assigned to each extracted rectangular and rectilinear component. In essence, when using the Special and General I methods, the process of rectangular partition consists of three steps: finding the starting point; expanding the selection until background, visited foreground elements, or array edges are reached; and finding holes in the selection and shrinking it until there are no more holes in the region. Without the last step, the extraction result would resemble a convex hull, covering all holes and possibly even dents of the object. The General II method differs from the other two because it relies on template matching. The proposed techniques have a few limitations. The input must be provided as bitmaps, rather than n-dimensional meshes or vector graphics. The solutions are designed for dimensionally optimized input arrays, meaning all dimensions must have a length greater than one. In addition, while extracted shapes are rectangular, the outcome may not be optimal or fit for a particular purpose. In other words, even though the output only has rectangular objects, the objective of the partitioning process is not to explicitly find or use the smallest, the largest, the most, or the fewest shapes. However, it was still considered beneficial for potential applications that the partitioned shapes have more than one unit of length in all dimensions whenever possible. An optimization algorithm was, therefore, developed to minimize the number of extracted objects. Since the generalizable solutions rely on randomness, the results may vary on each execution. Shapes inside or overlapping bounding boxes of other shapes may impose another limitation due to the possibility of these separate but close objects being mistakenly recognized as parts of each other. This problem can be solved by applying connected-component labeling on the image before attempting to partition it, copying each labeled object to a blank image, and performing the rectangular partition on each new picture, thus processing each object separately. The coordinate system used is generally that of the array (image), meaning almost no transforms to local coordinates of the shapes or regions are done, thus resulting in the extracted shapes being always axis-aligned and never tilted according to the orientations of the source objects. All solutions were implemented in the Python programming language [15] with ease of implementation, high precision and reliability, and dependency on as few software libraries as possible in mind. The only software package the solution implementations depend on is [16], which is used for array operations.
Matplotlib [17] and Pillow [18] were used for visualizing the datasets and test results for this article. Three-dimensional arrays and slices are shown from two sides; this was realized by saving the view, recording its azimuth and elevation angles, adding 180° to the former and multiplying the latter by −1, redrawing the view using the new angle parameters, and saving the second view. This technique was used for Figs. 2, 8 and 9. Algorithms for the proposed methods are presented in Python-like pseudocode in Sections 2.3 and 2.4. For readers unfamiliar with the Python language, a glossary of some terms and symbols used in the algorithms is shown in Table 1 (for example, for an array a of shape (2, 3), list(np.ndindex(a.shape)) yields [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)], and product is the Cartesian product from the itertools Python package, whose "repeat" argument can be used to define the number of repetitions for each element). In arrays and other iterable variables, indexing starts from zero. The mathematical justification for all three decomposition methods lies in array decomposition, block identification, and efficient storage. To elaborate, by decomposing the input array into uniquely labeled contiguous blocks containing only foreground elements, the solutions allow for efficient array manipulation and analysis because the contents are broken down into smaller, more manageable blocks that are constructed around foreground elements and can be identified and represented by their position and dimensions. This provides for easy access and manipulation of image data. Put differently, by performing this decomposition, the methods facilitate efficient storage and retrieval of information, theoretically simplifying and speeding up data analysis.
Fig. 2. Dataset. Each array has its name written above it. The "Hypershape" array is four-dimensional but split into three-dimensional slices along the w-axis for visualization. "Squares" and "Cross 2D" are two-dimensional, and "Lines" is one-dimensional. All the other arrays are three-dimensional. The purple color in 1D and 2D shapes represents the background.
Fig. 3. Test process. The same optimization procedure is used regardless of the selected partition method.
Test procedure The dataset the developed methods were made with and tested on consists of 13 synthetic n-dimensional binary images, where n ∈ {1, 2, 3, 4} (Fig. 2). In each test image, zero-value elements represent the background, and all the other elements belong to the foreground, thus being parts of objects. Although most use cases of rectangular partition use 2D and 3D data, a few examples of 1D and 4D arrays exist in the set to test and demonstrate the capabilities of the developed algorithms. The "Randomly Generated" shape in Fig. 2 was generated randomly, and all the other arrays were created manually. "Squares" and "Cubes" test the capability of the algorithms to handle data with multiple objects. All the arrays with "Holed" in their names are used to see if shape orientation affects partition results. The tests were done on a Dell Precision 7520 laptop. The test procedure (Fig. 3) is as follows: each array in the input data (Fig. 2) is given individually to each partition method. The starting point is picked randomly from all object elements for both General solutions; after a rectangular shape has been successfully extracted, a new point is chosen randomly until there are no more unvisited object elements. The results of both the Special and General solutions are optimized so that extracted objects are merged whenever possible, and their assigned labels are recalculated so there are no skipped values. Each input array is partitioned five times using each method. This is to investigate the effect of randomness in object partitioning and to improve the thoroughness of the testing process. The execution times of testing and its phases are measured, and key performance indicators used for evaluating the methods and their effectiveness are calculated from the results (Section 3). The partitioning is considered successful if the extracted shape only consists of foreground elements of the same label, unique to this sub-object.
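A minimal NumPy sketch of this success criterion follows (an illustrative check, not the article's evaluation code; it assumes the output is a label image with 0 as background, as described above):

import numpy as np

def block_is_valid(original, labels, lab):
    # Elements carrying label `lab` must exactly fill one axis-aligned box,
    # and every element of that box must be foreground in the original image.
    coords = np.argwhere(labels == lab)
    lo = coords.min(axis=0)
    hi = coords.max(axis=0) + 1
    box = tuple(slice(a, b) for a, b in zip(lo, hi))
    solid = bool(np.all(labels[box] == lab))        # no gaps or foreign labels inside
    foreground = bool(np.all(original[box] != 0))   # no background elements swallowed
    return solid and foreground

def partition_is_valid(original, labels):
    return all(block_is_valid(original, labels, lab)
               for lab in np.unique(labels) if lab != 0)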
Special Solution

Arguably, the simplest way to implement rectangular partitioning for n-dimensional images containing only rectilinear shapes that may or may not have holes is to iterate through the image sequentially in one pass. The advantages of this method include ease of implementation, predictability of results, and repeatability of the process. The most obvious disadvantage is the lack of random access; the entire array must be traversed even when analyzing only a part of the image would be desirable.

The procedure for the Special solution for rectangular partition is presented in Algorithm 1. It takes one argument, the image (array) that will be processed, and returns an array containing rectangular sub-objects with unique labels and a list of starting-point coordinates for each sub-object. The output and input arrays are of equal shapes and sizes.

In short, the Special method works as follows: The initial step is discovering the first unvisited object element. After that, its position is used as the starting point for the new sub-object. The algorithm moves along every axis from this location until it hits the array boundary, a visited object element, or the background. The coordinates found this way are used as region edges and potential endpoints of the sub-object. The last step is hole detection, which is realized simply by finding the lowest coordinates of background elements within the region and cropping such elements out of the sub-object if they are found; otherwise, the original region start and end points are used. The process is repeated, starting from the discovery of the next unvisited object element, until there are no more such elements.

The formulae at the basis of Algorithm 1 focus on finding the coordinates of zero (background) and non-zero (foreground) elements, block slicing, and block size calculation. We can view the method presented in the algorithm as an application of set theory, linear algebra, and graph theory: The decomposition procedure constructs element sets from matrices (images) by using specific conditions to find coordinates. Linear algebra concepts, particularly matrix operations including element-wise comparison and slicing, are applied to represent and manipulate n-dimensional arrays (images). Additionally, the method utilizes graph traversal and iterative graph exploration to identify the connected components (extracted foreground subregions) within the image; the foreground elements serve as graph nodes.

General Solution I

This method can access array data randomly, meaning images need not be processed fully, and their shapes or objects can be partitioned in arbitrary order.

The sub-object extraction algorithm that forms the most crucial part of the solution is described as pseudocode in Algorithm 2. It is based on finding an enclosing cuboid (or its n-dimensional equivalent) for the chosen object element and shrinking the enclosing cuboid until it no longer contains background elements. To simplify the process, the algorithm works on a copy of the array where it assigns an otherwise unused label to the target element (or "marks the target pixel with a special color") and uses the "where" function of NumPy [16] to locate it every time the geometry of the enclosing cuboid changes. This minimizes the need for coordinate transforms.
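A small sketch of the marking trick just described; the array values and the crop are illustrative only:

```python
import numpy as np

img = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]])
work = img.copy()
mark = img.max() + 1                 # an otherwise unused label ("special color")
work[1, 2] = mark                    # mark the chosen target element on the copy

cropped = work[0:2, 1:3]             # the enclosing cuboid's geometry changes (a crop)
target = tuple(np.argwhere(cropped == mark)[0])
print(target)                        # (1, 1): the target's position inside the cropped view
```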
Mathematically, Algorithm 2 involves an iterative refinement process and n-cuboid boundary adjustment to extract accurately a sub-shape from an image.The formulae used in the algorithm primarily focus on bounds adjustment such that the method obtains the region of interest with iterative boundary search (partly done in Algorithm 3) and a foreground cross-out operation (Algorithm 4).The shape and content of this extracted area contribute to determining the n-cuboid bounds while also involving setting new slices to refine said boundaries.The background of the algorithm lies in array manipulation, iterative optimization, set theory, and computational geometry.The procedure manipulates image indices and slices to extract and refine regions of interest and n-cuboid boundaries, leveraging concepts including subsets, minimum and maximum values, and Euclidean distances in locating background and foreground elements.The goal is to tightly enclose the extracted n-cuboidal sub-object encompassing the target element while excluding background elements, thus enhancing the accuracy and completeness of the partitioning. The initial step is saving the location of the original target element, finding the enclosing cuboid (using Algorithm 3) in an uncropped array, cropping the image to the resulting coordinates, and storing the new shape to a variable. After completing the above step, the algorithm checks if there are background elements in the cropped region.If this condition is met, the position information of the target element is updated to reflect the new coordinates.The cropped area is copied, and a cleanup operation (Algorithm 4) is performed on the copy to eliminate holes, concavities, angles, and intersections. Next, the element located in the copy in the coordinates corresponding to the target position is checked.If the object in it was erased in the cleanup (i.e., is now zero), an element-wise subtraction is performed between the copy and the original cropped region with the former as the minuend and the latter as the subtrahend.The negative is taken from the result and used as the image array in subsequent steps, starting from finding new bounds (Algorithm 3) and cropping the region accordingly. If the object in the target position in the copy was not erased, the copied array (post-cleanup) is used as the new image array in the remaining steps.New bounds are discovered (Algorithm 3), and the region is cropped to reflect these. 
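A toy numeric illustration of the subtraction-and-negation step above, with hand-made arrays standing in for the cropped region and its cleaned-up copy:

```python
import numpy as np

cropped = np.array([[1, 1, 1],
                    [1, 0, 1],       # a hole in the middle of the region
                    [1, 1, 1]])
cleaned = np.array([[1, 1, 1],
                    [0, 0, 0],       # the cleanup erased the line through the hole
                    [1, 1, 1]])
removed = -(cleaned - cropped)       # copy minus original, negated
print(removed)
# [[0 0 0]
#  [1 0 1]
#  [0 0 0]]   <- the foreground elements the cleanup removed, used as the new image
```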
If the cleanup did not help crop the region (its shape did not change), another procedure is attempted: the cropped area is split into 2^n sub-regions, where n is the number of dimensions in the array. The position of the target element is used as the pivot and is included in every sub-region. For each split array, the minimum and maximum coordinates of background elements are saved to MNS and MXS, respectively, and used as a foundation of two new lists: one with minima of maxima and the other with maxima of minima (Equations (5) and (6)):

minmax_j = min_k MXS[k][j], j = 1, ..., m, (5)

and

maxmin_j = max_k MNS[k][j], j = 1, ..., m, (6)

where m is the number of dimensions in the cropped region and k runs over the sub-regions. These two lists now consist of per-axis minimum and maximum background element coordinates compiled from each sub-region and are combined into a single list of lists where minima are the first and maxima are the second elements. This list, whose length is equal to the number of dimensions in the cropped region, is iterated through, and target element coordinates and a constant value of one are added to the maxima. Three values are checked on each iteration: the minimum, the target element coordinate, and the adjusted maximum. The smallest of these is chosen as the start and the largest as the end for a slice of the current axis. A tuple is formed from the slices created when processing the list and used to clip the cropped region further.

If splitting the cropped region and using a combination of minimum and maximum background coordinates could not reduce the area, the position of the target element is compared against the largest coordinate in the cropped region (for example, the bottom right corner in 2D arrays). If the two are equal, the distances between this point and all the background elements are computed by summing the absolute values of the per-axis coordinate differences and taking the square roots of the results. The background element with the longest distance to the target point is used as a reference (the shortest distance would often result in too small areas): the region is further cropped by using its coordinates as the starting point, while the target element serves as the endpoint, but only if all dimensions of the resulting area are one or higher. This step is meant to crop the region so that edges that only consist of object elements ideally get cut out, and the resulting clipped array can be processed further in the next iteration.

The final recourse (when the region still has the same size as in the second step or the previous process would have caused it to have zero in its dimensions) is to pick the coordinates of the target element and nothing else, that is, a rectangular sub-object one unit long in every dimension.

The variable containing the shape stored in the first step is updated with that of the cropped region.

The steps above, starting from checking the presence of background elements in the region of interest, are repeated until this condition is not met.

Once the sub-object has been extracted, its coordinates are transformed back by subtracting the location of the new target (in the sub-shape) from the old position (in the original array) and constructing a tuple of slice objects with the coordinate differences as the starts and the sum of them and the sub-object dimensions as the ends. This tuple now contains the global coordinates of the extraction result.
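A short sketch of this final coordinate transform, with illustrative positions and dimensions:

```python
import numpy as np

old_target = np.array([7, 4])    # target position in the original array
new_target = np.array([2, 1])    # target position inside the extracted sub-shape
sub_shape = (3, 3)               # dimensions of the extracted sub-object

offset = old_target - new_target
global_slices = tuple(slice(int(o), int(o + d)) for o, d in zip(offset, sub_shape))
print(global_slices)             # (slice(5, 8), slice(3, 6))
```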
The code in Algorithm 2 is executed with randomly chosen foreground elements as the target pixels until the whole array is processed.Each time an enclosing cuboid is obtained, the region of the array corresponding to its coordinates is set to the background.This step reduces the possible foreground (or target) elements on every iteration. The boundary-finding procedure (Algorithm 3) traverses the array in both directions along every dimension until it hits an array bound or a background element.This is like the procedure in the Special solution except bidirectional and does not consider visited elements. The mathematical justification of Algorithm 3 comprises n-cuboid bound identification and iterative search along each dimension.The first is essential for isolating sub-objects within the image, and the second ensures the n-cuboid bounds the foreground elements in all directions.The formulae behind the algorithm include coordinate update and n-cuboid representation.In this context, the former adapts the minimum and maximum coordinates of the n-cuboid by foreground element positions, whereas the latter builds a structured representation of the n-cuboid.Algorithm 3 uses coordinate manipulation (processing array indices to access specific image elements and ensuring the accuracy of the n-cuboid bounds) and iterative search strategy (iterating over each dimension to search for foreground and background elements around the target element, guaranteeing the completeness and efficiency of the bounding box calculation).In summary, the algorithm employs concepts from linear algebra, iterative search strategies, and geometry. The cleanup operation (Algorithm 4) works as follows: the locations of background elements in the array are gathered, and for each such element, n regions of interest are formed, with n being the number of dimensions in the array.Each region spans across the entire array along one axis but is only one dimension long along the rest.The array parts corresponding to the formed areas are set to background elements.An example using the "Holed" image (with the surrounding background already cropped out) from the test dataset is given in Fig. 4. Algorithm 4 has a mathematical background in set theory and matrix manipulation techniques, specifically in-place modification: the function efficiently uses array indices and slices to identify and modify background-associated elements.The manipulation operations ensure these elements do not affect subsequent partition steps, which is crucial for accurate decomposition and isolating relevant regions of interest (foreground sub-objects).A critical step in the process is slicing for cross-out, where the algorithm constructs slices to cover entire axes containing background elements, efficiently identifying and modifying foreground elements associated by axis with each background element.Although creating a copy of the image in Algorithm 2 for Algorithm 4 to work on requires additional memory, in-place manipulation still reduces mathematical complexity and enhances efficiency, especially for large images.In short, the justification for Algorithm 4 lies in its efficiency in modifying elements associated with the background so that even shapes with concave angles can be decomposed accurately. 
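A compact sketch of the cleanup idea just described; the function name is ours, and the article's Algorithm 4 may differ in details:

```python
import numpy as np

def cross_out(img):
    """For every background element, set the full-length line through it along
    each axis to background, crossing out rows/columns/planes that contain holes."""
    out = img.copy()
    for pos in np.argwhere(img == 0):
        for axis in range(img.ndim):
            idx = list(pos)
            idx[axis] = slice(None)      # span the entire array along this axis
            out[tuple(idx)] = 0
    return out

holed = np.array([[1, 1, 1],
                  [1, 0, 1],
                  [1, 1, 1]])
print(cross_out(holed))
# [[1 0 1]
#  [0 0 0]
#  [1 0 1]]
```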
General Solution II

Like General Solution I, this method can access data randomly. However, the most vital difference to the other solution is that this second procedure uses template-matching to find the sub-object boundaries. A template here is a rectangular shape with the same number of dimensions as the array. Each element in the region covered by the template must be part of an object, and the largest template area enclosing no background elements is chosen as the region of the extracted sub-shape.

The templates are formed by calculating all possible permutations of shapes up to the dimension set based on the maximum number of consecutive foreground elements found on each axis. For example, for a 2D array where this value is two on both axes, the largest template size would be 2*2 elements, and the outcome, excluding templates with zero in their shape, would be four permutations: (1, 1), (1, 2), (2, 1), and (2, 2). A pseudocode representation of this calculation is in Algorithm 6. The product function from the itertools software package that is part of the standard Python distribution is used to get all permutations.

The upper bound for the template sizes is calculated as follows: as a pre-processing step, the array is essentially sliced along every axis into pieces with the length of the axis as one dimension and zeros as others. For a 3D array having 3*3*3 elements, this would produce slices of 3*1*1, 1*3*1, and 1*1*3 units, and there would be 9 (3*3) slices of each type. However, the slices are formed as 1D arrays, so the example input would produce nine 3-unit-long slices along each axis, so 27 arrays in total. The slicing process is in pseudocode in Algorithm 9, and the code for calculating dimension and shape permutations for it is in Algorithm 7. Each slice returned by the process has its axis ID attached. Once the slices have been formed, the template boundary calculation can resume. A list l of the maximum number of consecutive foreground elements on every axis is created, and each value is initialized to zero. Next follows the processing of the slices. The axis ID is read, and the positions of the background elements are discovered. If the slice only has foreground elements, its maximum number is set to its length. If there is one background element, the maximum number of the slice is set to one. Otherwise, the maximum of the absolute values of differences between the foreground element positions is used. This would be three for a slice a = [0,1,0,1,1,1,0]. The maximum value of the slice is compared to that saved to the index corresponding to the axis ID in the list l created before processing the slices. If this maximum exceeds the stored value, l is updated to hold the new value instead. After processing all the slices, the function returns a list containing the lengthiest sequence of foreground elements for each axis. A pseudocode representation of this procedure is in Algorithm 8. A simple example of permutations is provided in Fig. 5, where a 4*4-element 2D image with an object of 2*2 elements (gray squares) is surrounded by the background (white squares). The bottom-right part of the object has been chosen for the starting point; this is highlighted in darker gray than the rest of the shape. The dotted lines show the possible locations of the 2*2 template when the starting point is included in the region of interest. The leftmost part of Fig.
5 displays all possibilities at once; the result looks like a 3*3 grid.The other four parts show each template position separately.The second part from the left is the ideal solution because the area covered by the template only contains elements belonging to the object. Once the maximum number of consecutive foreground elements on each axis has been computed, the output has been used to set the upper bounds for template creation, and the templates have been created, the sub-object extraction begins.For each template, possible starting positions are computed so that the target position is included, and secondly, there are no reads out of the array.The first criterion reduces processing time by avoiding unnecessary analysis of regions that may not be part of the currently targeted (sub-) object.The template is fitted for each remaining starting position: a sub-array is produced by using the coordinates of the location and template dimensions, and if all elements in this sub-array belong in the foreground, the sub-array size is compared to that of the best fit so far.If the size is larger, the best fit is updated to reflect this: it contains the new size, the coordinates and dimensions of the sub-array, and the template shape.Once the list of templates has been exhausted, the best fit is returned (Algorithm 5.). The core of the partition process is described in Algorithm 5. Like the principal part of General Solution I, the code in Algorithm 5 is executed with randomly chosen foreground elements as the target pixels until the whole array is processed.Each time a sub-shape has been extracted, the region of the array corresponding to its coordinates is set to the background.This reduces the possible foreground (or target) elements on every iteration. From a mathematical perspective, the rationale behind Algorithm 5 lies in contiguous block identification to detect the most sizable such region (a foreground object or sub-object).The method employs a systematic search to achieve this goal by iterating over possible dimensions and starting indices, using formulae that manage block size computation (based on array analysis, matrix manipulations, and set theory) to check if the block contains only foreground elements.The iteration techniques ensure computational efficiency, especially for large images: using permutations (Algorithms 6 and 7) and the lengthiest per-axis non-zero (foreground) element sequences (Algorithm 8) as references when computing the n-cuboid sizes and positions and using the fast n-dimensional index operator implemented in NumPy.Additionally, Algorithm 5 uses Algorithm 9 and mathematical concepts related to it in some of its steps. The mathematical background of Algorithms 6 and 7 is primarily in combinatorial analysis and related techniques and secondarily in matrix operations (namely, obtaining dimensionality and creating slices for the permutations).Using cartesian products of the inputs, the objective of the algorithms is to systematically generate comprehensive and unique permutations that explore different array shapes, structures, and element subsets along various dimensions.The methods ensure all possible combinations, either of array dimensions within specified limits (Algorithm 6) or slices along dimensions (Algorithm 7), are explored.The algorithm outputs later find application in array (image) subset analysis and manipulation. 
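A simplified sketch of the template machinery described in this section, assuming a binary NumPy image; the helper names are ours, and the article's Algorithms 5, 6 and 8 include further bookkeeping:

```python
from itertools import product
import numpy as np

def longest_run(line):
    """Longest run of consecutive foreground (non-zero) elements in a 1-D slice,
    e.g. [0, 1, 0, 1, 1, 1, 0] -> 3."""
    best = run = 0
    for v in line:
        run = run + 1 if v != 0 else 0
        best = max(best, run)
    return best

def template_shapes(max_runs):
    """All template shapes up to the per-axis maxima,
    e.g. max_runs = [2, 2] -> (1, 1), (1, 2), (2, 1), (2, 2)."""
    return list(product(*[range(1, m + 1) for m in max_runs]))

def best_template_fit(img, target, shapes):
    """Try every start position that keeps a template inside the array and covering
    the target element; keep the largest region containing only foreground."""
    best, best_size = None, 0
    for shape in shapes:
        ranges = [range(max(0, t - d + 1), min(t, b - d) + 1)
                  for t, d, b in zip(target, shape, img.shape)]
        for start in product(*ranges):
            region = img[tuple(slice(s, s + d) for s, d in zip(start, shape))]
            if np.all(region != 0) and region.size > best_size:
                best, best_size = (start, shape), region.size
    return best
```

For the 4*4 example of Fig. 5, a 2*2 template covering the highlighted target and the rest of the 2*2 object would be returned as the best fit.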
Algorithm 8 relies on algorithmic techniques, numerical analysis, set theory, and linear algebra (matrix operations) to identify sequences of consecutive non-zero (foreground) elements along each axis of the array (image) and to compute the maximum length of V. Pitkäkangas the resulting lines through axis-wise systematic analysis and mathematical techniques.These methods make image slice computations used in the sequence identification task efficient and accurate.In addition to matrix operations and set theory, finding runs of foreground elements and computing differences between their indices are utilized.The result provides information about the spatial distribution of the foreground elements within the image. Algorithm 9 is based mathematically on combinatorial analysis, which generates all possible dimension options and slice permutations, and an iterative approach to explore possible choices and rapidly generate comprehensive combinations for slicing along each dimension.Enumerating these slice permutations ensures a thorough slicing combination coverage, making the exploration of array element subsets along different dimensions possible.The objective is to facilitate the array subset analysis and manipulation necessary for polygon partitioning in General Solution II.Algorithm 5. "General" solution II for rectangular partition.Algorithm 6. Auxiliary method "array_permutations". Optimization Partition results can be optimized by merging adjacent rectangular shapes.This, however, can only be done if two conditions are met: Firstly, the shapes must share their centroid coordinates in all dimensions except one, and secondly, their boundaries corresponding to shared coordinates must have equal length.The merging process is repeated until no more shapes that meet the conditions are found.A non-exhaustive list of examples of shapes that can and cannot be merged is shown in Fig. 6: three pairs of 2D shapes and three pairs of 3D shapes. Results can be further cleaned after optimization by reassigning unused labels of extracted shapes.For example, if the optimized array only has labels 0 (background), 1, 3, and 4, these can be reassigned to 0, 1, 2, and 3. The optimization process is presented in pseudocode in Algorithm 10, and the evaluation of the partition, optimization, and label reduction procedures is described similarly in Algorithm 11. Mathematically, the justification of Algorithm 10 is in processes that guarantee precise label comparison and reduction.It involves comparing the centroids of labeled sub-objects within their bounding boxes and the dimensions of these boxes to confirm a distinct separation that allows for merging only when regions are adjacent and proportionate in related dimensions.Additionally, it can minimize label values without compromising the integrity of the optimization.The foundation of Algorithm 10 exists in calculating and comparing centroids, determining bounding box sizes, and optionally reducing labels.Its mathematical framework includes analyzing centroids and bounding box dimensions to evaluate the merging suitability of labeled subregions.The analysis ensures spatial coherence and remaps labels to maintain segmentation accuracy while preserving the spatial relationships between regions.This dual approach of centroid analysis and label remapping ensures both the accuracy of segmentation and the efficiency of label use. 
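A minimal sketch of the label reassignment mentioned above, so that, for example, labels {0, 1, 3, 4} become {0, 1, 2, 3}:

```python
import numpy as np

def compact_labels(arr):
    """Reassign labels so there are no skipped values; 0 (background) stays 0."""
    out = np.zeros_like(arr)
    for new, old in enumerate(np.unique(arr)):   # np.unique returns sorted labels
        out[arr == old] = new
    return out

print(compact_labels(np.array([0, 1, 3, 4, 3])))   # [0 1 2 3 2]
```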
A critical aspect of Algorithm 10 is its dependency on an evaluation function (Algorithm 11) to check that the optimization process has not introduced errors or inaccuracies. The mathematical justification and background of the evaluation (Algorithm 11) consist of region contiguity and validity realized through iterative region analysis for studying each labeled region and ensuring comprehensive coverage of all areas, the combination of array operations and set theory for efficient region extraction and manipulation while also ensuring computational efficiency, and Boolean logic to determine the success status of the decomposition evaluation, indicating clearly whether the outcome is valid. Therefore, the function verifies that the regions have neither gaps nor disjoint components and ensures that each labeled region contains only one unique non-zero label value, indicating that all regions solely consist of foreground elements and that the optimized image contains no mixed nor overlapping sub-objects.

Fig. 6. Conditions for merging adjacent shapes.

Experimental results

All three solutions processed the test dataset successfully, with neither overlapping shapes nor background elements in any of the extracted rectangular shapes.

Six metrics were chosen to evaluate the methods, and three operations were applied to each, resulting in 18 key performance indicators. The operations (minimum, maximum, and arithmetic mean) were performed for each array and method by finding the smallest and largest values and computing the arithmetic mean from the corresponding metrics (every array is processed five times with all methods). For example, the first minimum time for General II is 0.321 s. Out of the five test runs, this is the shortest processing time of the method for the first array (named "Box 9 × 9 × 9"). In addition to execution time, the number of labels in the results, region size, and reduction count are used. The first two of these are calculated before and after optimization. Region size is determined solely by the number of elements in the extracted shape. Reduction count is the set difference between the labels in optimized and unoptimized results.
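A small sketch of this reduction-count indicator, treating it as the number of labels present before optimization but absent afterwards:

```python
import numpy as np

def reduction_count(unoptimized, optimized):
    before = set(np.unique(unoptimized)) - {0}     # non-background labels before merging
    after = set(np.unique(optimized)) - {0}        # non-background labels after merging
    return len(before - after)
```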
In addition to the 18 per-array and per-method metrics above, some indicators for the entire dataset were computed and used in the evaluation (Equations (7)-(9)). These are per-method averages for times, optimizations for shape sizes and labels, and reductions for every method. The first of these is simply the arithmetic mean of execution times. Average per-method optimization for shape sizes o_s is calculated by using

o_s = 100 * s − 100, (7)

where s denotes the arithmetic mean of size ratios for the respective method (Special, General I, or General II). Conversely, average per-

The first General solution performs fastest on experimental data, but its results are the least predictable. The second General solution is a tradeoff between speed, accuracy, and predictability of results. Despite its shortcomings, particularly in execution time, this method is still expected to perform reasonably effectively in the future due to advances in computational power. Some other criteria for General solutions in place of random selection might prove more effective solutions with more predictable results for partitioning shapes into rectangular subcomponents. One potential alternative is basing the choice of the next unvisited "target" element on the distance to the last extracted sub-object and the values of neighbors of possible "target" elements. These criteria could be a basis for solutions that enable more optimal or application-appropriate sub-object extraction. Furthermore, the General methods might be sped up

Fig. 1. Possible results of rectangular partition: even a simple L-shaped 2D object (first shape from the left) can be decomposed in at least three unique ways (second, third, and fourth shapes from the left). Dashed lines denote sub-object edges.

Fig. 4. An example of the non-zero cutout process. From left to right: input, cutout regions computed from background elements (said elements excluded from the picture for clarity), cutout regions superimposed on input, output with cutout regions set to background elements.

Fig. 5. Example of permutations for a template size of 2x2 elements on a 2D image with a target element. The leftmost array has all possible permutations stacked, while each possible permutation is presented individually in the others.

Algorithm 10. Optimization procedure of partitioned shapes.

Algorithm 11. Evaluation of partition and optimization results.

Table 1. Glossary of Python terms and symbols used in pseudocode representation of the proposed algorithms.

Table 2. Test results for each array in the dataset.
7,956.2
2024-08-01T00:00:00.000
[ "Computer Science", "Mathematics" ]
Quantization-Mitigation-Based Trajectory Control for Euler-Lagrange Systems with Unknown Actuator Dynamics In this paper, we investigate a trajectory control problem for Euler-Lagrange systems with unknown quantization on the actuator channel. To address such a challenge, we proposed a quantization-mitigation-based trajectory control method, wherein adaptive control is employed to handle the time-varying input coefficients. We allow the quantized signal to pass through unknown actuator dynamics, which results in the coupled actuator dynamics for Euler-Lagrange systems. It is seen that our method is capable of driving the states of networked Euler-Lagrange systems to the desired ones via Lyapunov’s direct method. In addition, the effectiveness and advantage of our method are validated with a comparison to the existing controller. Introduction Recent decades have witnessed that the research on networked control systems is also one of the most important topics in the current academic and industrial field [1][2][3]. Different pieces of equipment or devices are connected through the network, provides more flexibility and resilience to different working conditions and environments. Please note that as the number of connected devices increases, or as the size of the networked system expands, it becomes a challenge as to how to use the limited computational resources for communicating [4]. From a perspective of the information flow, a problem of the limited computational resources can be regarded as a problem of the bandwidth limitations, wherein the transmitted signal is quantized through the networked systems. Thus, it leads to a challenge that how to control a quantized system. Along this line, some results of handling the quantization phenomena have been reported in the control community. Results on limited data rates were documented in [2]. The work of [3] addressed the communication constraint issue by placing the encoder and decoder in the control diagram. In contrast to the linear system in [5], works such as [6,7] extended the quantized feedback problem to the non-linear systems. Since the quantization was sector-bounded, Fu and Xie [8] changed the quantized feedback design into the robust control design. Considering that the input logarithmic quantizer may result in oscillation, Hayakawa et al. [9] gave a remedy using a hysteretic quantizer. It was reported in [10] that the input quantization problem can be handled using the backstepping-based design. Xing et al. [11] considered an output-feedback design problem for unknown nonlinear systems with the quantization of the system input. Xie et al. [12] proposed a neural-network-based asymptotic control algorithm to study unknown input quantization control problems for nonlinear systems using backstepping control design. Zhou et al. [13] extended [10] and showed that the Lipschitz condition was not necessary for the nonlinear functions with the quantization at the input. Please note that Euler-Lagrange systems have significant advantages in modelling the dynamical processing for industrial applications, such as [14][15][16][17][18][19]. However, it is non-trivial to apply the linear system-based results to control the Euler-Lagrange systems. Difficulties include not only Euler-Lagrange systems themselves are nonlinear but also they might involve different kinds of nonlinearities [20,21]. Please note that actuators have been regarded as an essential unit in the control systems. 
When actuators are not working in perfectly linear phenomena, actuators nonlinearities [22] including deadzone, backlash, friction, and hysteresis would be imposed on the Euler-Lagrange systems. Therefore, how to model the nonlinearized actuator dynamics and how to tackle the problem of unknown actuator dynamics appear to be crucial problems that need further research. The work of [23] considered the motion control problem in linear resonant actuators by using an estimator for estimating the position and a motion controller. As documented in [24], the control problem of actuator nonlinearities was modelled for a class of nonlinear systems. Later, actuator failures, which can be regarded as a specific but difficult form of actuator nonlinearities, were investigated in [25]. Recently, Chen et al. [26] proposed a dynamic gain-based approach for multi-input and multi-output system with unknown input coefficients, which turned out to be related to the problem of actuator nonlinearities. The work of [27] studied the control problem of friction and hysteresis on the geared drives, and proposed a sensorless torsion control for elastic-joint robots. Results are obtained for multi-agent systems with external disturbances and unknown input nonlinearities in [28]. Lin et al. [29] considered the cooperative navigation control of mobile robots in an unknown environment using a neural fuzzy controller. The work of [30] considered unknown input Bouc-Wen hysteresis control problem and handled it using adaptive control. Although advances have been reported, the majority of the existing works are not designed for Euler-Lagrange systems with unknown quantization. This becomes a key question to investigate practical networked systems such as unmanned vehicle systems and teleoperated manipulators [31,32] with the capability of modelling the nonlinearities that happen in the communication networks [33]. In this paper, we aim to study a control problem of Euler-Lagrange systems with unknown actuator dynamics. In our case, the designed control signal is first endowed with the quantization to capture the limited bandwidth in the networked control system. After that, the quantized signal is allowed to pass through unknown actuator dynamics, which results in an unknown coupled dynamics control problem for Euler-Lagrange systems. We solve such a problem by proposing a quantization-mitigation-based adaptive control method. It is seen that our controller, together with its estimation laws, is capable of changing unknown time-varying input coefficients caused by the quantization and actuator dynamics into a problem of unknown constant input coefficients. Then, we solve the unknown constant input using an adaptive dynamic gain-based approach. Using our control scheme, the analysis shows that one estimation law is sufficient to handle unknown quantized actuator dynamics problem, regardless of the number of the actuator channels in the network. It is seen that our quantization-mitigation-based trajectory control works effectively for networked Euler-Lagrange systems both in the theoretic stability analysis and in the simulation case study. The proposed system model and its control result are important for the Euler-Lagrange systems that are operating in a networked workplace. In the networked control systems, the issue of the limited bandwidth has been studied and modelled by the quantization phenomenon [1]. 
Through our control design, we show that the tracking performance of the Euler-Lagrange systems is ensured even under the quantized actuator dynamics. The organization of the remaining parts of this paper is briefly introduced as follows. In Section 2, we first model the networked Euler-Lagrange systems with unknown actuator dynamics. In Section 3, we review the structural property of Euler-Lagrange systems, propose an adaptive control-based method to handle unknown actuator dynamics and quantization for Euler-Lagrange systems, and proved its stability via Lyapunov's direct method. In Section 4, the proposed method is tested through a case study, and its effectiveness is confirmed. In Section 5, a conclusion of this paper is attained. Problem Formulation In this section, we will consider a class of unknown Euler-Lagrange systems that contains unknown quantization on the input and unknown actuator dynamics. The dynamics for such Euler-Lagrange systems are modelled as the following nonlinear equation [34]: where χ ∈ R L×1 is a system state vector, V(χ) ∈ R L×L denotes a positive definite inertial matrix; H(χ,χ) ∈ R L×L representees the Coriolis and centrifugal matrix of the ith robotic arm; W(χ) ∈ R L×1 denotes the gravitational force vector; and τ ∈ R L×1 means the actual actuator signal applied to the Euler-Lagrange systems and plays a role of driving the state variable χ of Euler-Lagrange systems (1) to follow a predetermined trajectory reference χ d . Here, the actuator dynamics are unknown to the designer. It is noted that the term τ in our case denotes a control or command signal received from the wireless networks. The mathematical model of the actual actuator dynamics τ is detailed as where the function G i (·) denotes unknown actuator dynamics driven by the signal (·) and i (u i (t)) implies an unknown nonlinear function that changes the designed controller u i (t) to i (u i (t)) caused by the wireless communications in the networked systems. Now, we model the dynamical procedure of the wireless communication. Although wireless communication provides a flexible way of controlling the system, the communication quality strongly relies on bandwidth. Under the limited bandwidth of wireless networks, we consider the designed actuator signal u i (t) in (2) subjected to the quantization [10], which can be regarded as a model to investigate practical systems such as telecommunicated vehicles or devices. For the simplification and convenience, we show the quantization with the input signal u i (t) and its output quantized signal Figure 1, where it is clear that the quantizer i (u i (t), t) is nonlinear and discontinuous severely twisting the designed signal u i (t). The mathematical model of the quantization is expressed in a form as where the symbol ρ i denotes the density of the quantized phenomena with 0 represents the quantized output, the value of which holds at the previous time, To better capture the influence from the wireless communication, we assume that parameters in (3) are unknown to the controller design, which implies that the quantization is unknown. 
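As a rough illustration only, the following sketch implements a memoryless logarithmic quantizer with a dead-zone u_min and density parameter rho; the quantizer (3) used in this paper additionally holds its previous output over part of the input range (hysteresis), which is omitted here, so this is a simplified stand-in rather than the paper's model:

```python
import numpy as np

def log_quantizer(u, rho=0.5, u_min=0.1):
    """Simplified stand-in for (3): signals below u_min are quantized to zero,
    larger signals are rounded down to the nearest level u_min * (1/rho)**i."""
    if abs(u) < u_min:
        return 0.0
    i = int(np.floor(np.log(abs(u) / u_min) / np.log(1.0 / rho)))
    return float(np.sign(u) * u_min * (1.0 / rho) ** i)

print([log_quantizer(u) for u in (0.05, 0.15, 0.5, -0.9)])
# [0.0, 0.1, 0.4, -0.8]
```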
Now, we give the modeling the actuator dynamics as where v i is a designation for a general input variable, and g v i (t) denotes an unknown input coefficient with its sign being positive and g u i (t) denotes an unknown bounded disturbance with its upper and lower bounds satisfying For the control purpose, it is assumed that g v i (t) has a lower bound that is strictly greater than the zero satisfying g v i ≥ g v i > 0, which ensures that the controller signal is always effective in acting on the considered system. This assumption is standard and it follows from the literature such as [34]. (4) can be employed to capture actuator nonlinearities including the deadzone, hysteresis, and backlash, which are frequently found in the practical systems and are important to the quality of the control systems. Take the deadzone nonlinearities for example. Let the input and output be v and dz(v) with the system dimension one. From the definition of deadzone phenomena, one has Remark 1. The model in where m r , m l , k l , and k r are bounded variables. Then, one changes the last row of deadzone model into the in the last row is bounded. Therefore, the deadzone phenomena can be reexpressed by the form of (4). This clarifies that (4) has the capability of modelling certain types of actuator nonlinearities. After combining the quantization (3) and the actuator dynamics (4), one has the coupled dynamics for the actuator of (1) as Considering that the parameters including g v i (t), g u i (t), δ i and u i,min in (5) are unknown to the designer, we call (5) unknown quantized actuator dynamics. It is noted that Equation (1), together with (5), is capable of modelling several equipment and devices communicating through networks including the teleoperated robot systems, unmanned vehicular systems, and sensor networks. The quantized actuator dynamics for networked Euler-Lagrange systems are shown in Figure 2. We are ready to define the problem to be studied. The Problem of Networked Euler-Lagrange Systems with Unknown Actuator Dynamics is to design a quantization-mitigation-based controller u i for the Euler-Lagrange systems (1) so that the controlled state χ converges to the predetermined state χ d , i.e., χ(t) → χ d (t) andχ(t) →χ d (t) as t → ∞ under unknown quantized actuator dynamics (5). Control Design for Networked Euler-Lagrange Systems with Unknown Actuator Dynamics In this section, we will give our method to solve the Problem of Networked Euler-Lagrange Systems with Unknown Actuator Dynamics by the following three parts. To this end, we first review the structural properties of Euler-Lagrange systems. Then, a dynamic loop gain function-based control method is reviewed. Lastly, we give our main controller design and prove its stability analysis by using Lyapunov's direct method. Structural Properties of Euler-Lagrange Systems In this subsection, we provide the structural properties of Euler-Lagrange systems as follows, which can also be found in the literature such as [34]: where || · || 2 denotes a norm operator, and ξ max (V(χ)) and ξ max (V(χ)) are defined as the maximum and minimum eigenvalues of the matrix V(χ), respectively. The above-mentioned three properties will be used for designing the quantization-mitigationbased controller. Adaptive Method for Adjusting the Control Gain As shown in the Problem Formulation Section, unknown quantized actuator dynamics (5) results in two parts, namely (1) input coefficients multiplying the controller and (2) unstructured disturbances. 
To address such two terms, we consider an adaptive method for adjusting the input coefficients and for tolerating the unstructured disturbances. For the control purpose, we introduce a class of adaptive dynamic gains as [26]

A(φ) = φe^(φ²), (9)

where φ is a real scalar. We summarize the adaptive dynamic gain-based result in the following lemma.

Lemma 1 ([26]). Smooth functions U(t) and φ(t) are defined over the interval [0, t_d) with U(t) nonnegative and φ(t) monotonic. Let φ(0) be bounded. The adaptive control gain A is given in (9). Then φ(t) and U(t) are bounded, where γ(t) is an upper-bounded variable and µ and g_µ are positive constants.

Quantization-Mitigation-Based Trajectory Control Design

This section aims to provide a quantization-mitigation-based trajectory control design for Euler-Lagrange systems with unknown quantized actuator dynamics. It follows from adaptive control in [34] that the coordinate transformations (11)-(13) are introduced, where M_r ∈ R^(L×L) is set to be a positive definite matrix. Now, the quantization-mitigation-based controller for Euler-Lagrange systems with unknown quantized actuator dynamics is given in (14) together with (15), where κ̂ denotes an estimate whose estimation law is specified later, the function ℵ(·) is given in (8) by replacing the variables χ, χ̇, α, α̇ with χ, χ̇, χ_r, χ̇_r, M_e is a positive-definite matrix, and I denotes an identity matrix with an appropriate dimension. The estimation laws for (14) and (15) are designed as in (16) and (17), where η > 0, σ > 0, and σ_1 > 0 are design constants, and κ(t) and φ(t) are initially chosen as κ(0) ≥ 0 and φ(0) ≥ 0. For better clarification, we depict the proposed quantization-mitigation-based trajectory control in Figure 3.

Stability Proof via Lyapunov's Direct Method

In this subsection, we aim to give the main result of this paper and also present the stability analysis for the proposed quantization-mitigation-based controller. The following theorem summarizes our main result.

Proof. Under the coordinate transformations in (11)-(13), one substitutes (14) into (1) to obtain (18), where ℵ and Ω are defined in (8) of Property 3, with ℵ being a known regression matrix and Ω being an unknown constant vector. To analyze the networked Euler-Lagrange systems with unknown quantized actuator dynamics, we follow Lyapunov's direct method and define an auxiliary function as in (19), where the error variable is given in (13), V(χ) is a positive definite matrix satisfying (6) as shown in Property 1, and κ̃ is an estimation error defined as κ̃ = κ̂ − κ, with κ̂ being given in (17) and κ being specified later. Now, the derivative U̇ of (19) along (18) is computed, where κ = ||Ω||_F. It is clear that κ is an unknown positive scalar given that Ω is unknown. Considering that the matrix V̇(χ) − 2H(χ, χ̇) is skew-symmetric by Property 2, one changes (21) into (22), where Young's inequality is employed. Again, it follows from Young's inequality that the term −κ̃κ̂ in (22) satisfies −κ̃κ̂ = −κ̃(κ + κ̃), where both the result in (20) and Young's inequality are used. From (5), it is clear that δ_i is a positive scalar. Therefore, it follows from (5) that the quantized input can be bounded accordingly. Moreover, the term i(u_i(t−)) on the last row of (5) is bounded. We thus define the maximum of i(u_i(t−)), i = 1, 2, . . . , L, as g_max. Using a technique similar to the deadzone analysis in Remark 1, the last two rows of (5) can be remodelled into forms like the first two rows. Substituting (16) and (23) into (22) yields (25). The result in (25) is further changed into (26).

Remark 2. We pause to highlight how to handle unknown quantized actuator dynamics as indicated in (26).
Specifically, the time-varying input coefficients caused by the quantization and nonlinear actuator dynamics are multiplied with the adaptive gain A(φ(t)) in (9), the sign of which is always non-negative. This is ensured by the estimation law given in (16) with C A in (15). To this end, we are capable of handling the problem of time-varying input coefficients into a lower bounded input gain problem as g min shown in (29). It will be seen that the designed estimation law plays a key role in achieving the asymptotic control for networked Euler-Lagrange systems under the quantized actuator dynamics. Now, we continue the proof of the proposed quantization-mitigation-based trajectory control design. Solving (26) with respect to the time over the interval [0, t] yields where Recalling the definition of U(t) in (19), it is reasonable to assume that V(0) is bounded. The boundedness of V(0), together with the boundedness of µ and γ 0 , leads that γ on the right-hand side of (30) is bounded. It is clear that Lemma 1 can be applied to (30) so that both U(t) and φ(t) are ensured bounded. This implies that all the signals in the closed-loop system are bounded after using the proposed quantization-mitigation-based trajectory control (14). The following analysis will show that the asymptotic control is also ensured even in the presence of unknown quantized actuator dynamics. Integrating the estimation law (16) over the time interval [0, t] together with the controller design (15) yields Now, we focus on the boundedness of the terms on the right-hand side of (32). From the above-mentioned analysis, one obtains that the signals φ(t), φ(0), and η are bounded. As an immediate result, the integral term t 0 η T (τ) (τ)dτ on the right-hand side of (32) must exist and be finite. In addition, it is further obtained that the derivative of the signal is bounded. Subsequently, it follows from Barbalat's Lemma that lim t→∞ T (t) (t) = 0, which ensures that lim Recalling the definition of (t) in (13), one obtains thatχ(t) →χ d (t) as t → ∞. Now, recall the definitions ofχ in (11) andχ in (12), one rewrites (t) in (13) as Since (t) converges to zero and M r is a positive definite matrix defined in (12), thenχ in (33) converges to zero so that χ(t) → χ d (t) as t → ∞. This completes the proof. Remark 3. In Theorem 1, it is proved that one estimation law is sufficient to handle the time-varying input coefficients caused by the quantized actuator dynamics, regardless of the number of the control channels in the network. The less estimation law is used in the control system, the more computational resources are saved for real-time performance. Therefore, the proposed quantization-mitigation-based result is important to the networked Euler-Lagrange systems from the perspective of the computation saving, especially for multiple devices sharing the common computational resources. Simulation and Experiment In this scenario, we consider a Euler-Lagrange system as a robotic system, and control the robotic system through networks modelled by the quantization. The proposed quantization-mitigation-based controller in the previous section will be applied to the robotic system to test the system performance in the presence of unknown quantized actuator dynamics. 
We follow the literature of [34] to give the dynamics of the robotic system as where V 11 = Ω 1 + 2Ω 3 cos(χ 2 ), V 12 = Ω 2 + Ω 3 cos(χ 2 ) + Ω 4 sin(χ 2 ), V 21 = V 12 , V 22 = Ω 2 , Z 11 = −Ψχ 2 , Z 12 = −Ψ(χ 1 +χ 2 ), Z 21 = Ψχ 1 , Z 22 = 0, and Ψ = Ω 3 sin(χ 2 ) − Ω 4 cos(χ 2 ). Here, unknown constants Ω i for i = 1, 2, 3, 4 are stacked into a vector as Ω = [Ω 1 , Ω 2 , Ω 3 , Ω 4 ] T . In this simulation, we let the designed controller first pass through the quantization (3). After that, the quantized signal passes through unknown actuator dynamics (4). This procedure leads that the actual input signal strictly follows (5) to set up the unknown quantized actuator dynamics problem for the Euler-Lagrange system. Considering that (34) is a multi-input and multi-output system, one designs the parameters of the quantization and the actuator dynamics in each control channel are the same as δ i = 0.8 and u i,min = 0.1 for i = 1, 2. Please note that the same parameters for each control channel are only for the simplification, and can also be set different using the proposed quantization-mitigation-based method. Initial states of the robotic system are randomly chosen. To implement our method, two estimation laws are built as required in (16) and (17) with the initials of such estimations being zeroes. That is, The results are plotted in Figures 4-9, including the adaptive variables φ, A(φ),κ, the control signal u, and trajectory performance χ andχ. In particular, the estimation law φ is given in Figure 4 and its adaptive dynamic gain A(φ) is given in Figure 5. It is seen that the estimation law for handling unknown quantized actuator dynamics reaches a steady value after the adaptation. The estimation law ofκ for tunning the robotic parameters is presented in Figure 6. The actual actuator signal applied to the robot τ is plotted in Figure 7. It follows from Figures 4-7 that the proposed quantization-mitigation-based trajectory controller is capable of driving the control and estimation laws in the Euler-Lagrange system to be bounded. As for the trajectory performance, we plot χ andχ, respectively, in Figures 8 and 9, where the desired trajectories are also plotted for better clarification. From the results in Figures 8 and 9, we conclude that the proposed quantization-mitigation-based trajectory controller works effectively under unknown quantized actuator dynamics. For the comparison, we test a fuzzy controller in [35], which uses the fuzzy logic systems to handle the system dynamics but does not contain an adaptive mechanism to reject unknown quantized actuator dynamics. Please note that we only change the controller from the proposed one to the fuzzy controller. The system dynamics, as well as the parameters for the quantization, are the same as that in the previous scenario. Under the existing fuzzy controller, the actual controller signal after the quantized actuator dynamics is plotted in Figure 10 and the actual trajectories of χ andχ are depicted in Figures 11 and 12. From Figure 11, there exists a steady error in controlling quantized actuator dynamics if the existing fuzzy controller is used. This confirms that the strong nonlinearities arising from the quantized actuator dynamics severely affect the system performance. Comparing the results in Figures 8 and 9 and in Figures 11 and 12, one concludes that the proposed quantization-mitigation-based method works better than the existing controller for the trajectory control of networked Euler-Lagrange systems. 
To quantitatively describe the difference between the proposed controller and the existing controller, we define the index of the average error as follows where N τ i , N χ i , and Nχ i , for i = 1, 2, denote the total numbers of the signals τ i , χ i , andχ i , respectively. The comparative results of I τ i , I χ i , and Iχ i are given in Table 1. It can be seen that the index of the control input I τ i under the proposed method is larger than that under the existing controller. This means that more control effort is used in the proposed method. However, the tracking performance I χ i and Iχ i under the proposed method is better than that under the existing controller. Conclusions In this paper, we investigated a trajectory control problem for wireless Euler-Lagrange systems with unknown actuator dynamics. We considered a control problem of trajectory control under the input quantization with unknown parameters. Subsequently, we derive the coupled dynamics that combine the quantization and actuator dynamics. To address such coupled dynamics, we proposed a quantization-mitigation-based trajectory control method, wherein adaptive control is employed to handle the time-varying input coefficients caused by the quantization nonlinearities. It was proved that the proposed method is capable of driving the states of networked Euler-Lagrange systems to the desired states, asymptotically. We tested our method and compare it with the conventional controller in the simulation and experiment section, wherein the effectiveness and advantage of our method are confirmed. In the realistic scenario, there are more complex dynamics in the networked control systems such as interference, packet delays, and unreliability. We will extend the result in this paper to consider the complex dynamics in the future work. The future research topic will include the learning-based control for unknown system dynamics such as [36,37]. Author Contributions: Y.L. and Q.Y. conceived of the original idea of the paper. Y.L. and Q.Y. performed the experiments. Y.L. and Q.Y., and P.K. wrote the paper. All authors have read and agreed to the published version of the manuscript.
5,902.4
2020-07-01T00:00:00.000
[ "Engineering", "Mathematics" ]
Social Commerce for Success: Evaluating Its Effectiveness in Empowering the Next Generation of Entrepreneurs This research paper aims to evaluate the effectiveness of social commerce in empowering the next generation of entrepreneurs. With the rapid growth of social media and e-commerce, social commerce has emerged as a promising avenue for young entrepreneurs to establish and grow their businesses. This paper explores the benefits, challenges, and strategies associated with social commerce for young entre-preneurs. It examines the impact of social commerce on their business performance, as well as the role of social media skills and support mechanisms in their success. By analyzing existing literature, conducting surveys, and studying real-life case studies, this research provides valuable insights into the opportunities and implications of social commerce for the next generation of entrepreneurs. INTRODUCTION The rapid growth of social media and e-commerce has transformed the way businesses operate, creating new opportunities and challenges for entrepreneurs. One significant outcome of this digital revolution is the emergence of social commerce, an innovative fusion of social media and e-commerce platforms. Social commerce enables entrepreneurs to leverage the power of social networks and engage directly with customers, presenting a promising avenue for the next generation of entrepreneurs to establish and grow their businesses. This research paper aims to evaluate the effectiveness of social commerce in empowering the next generation of entrepreneurs. Background and Significance: Social commerce refers to the utilization of social media platforms as a means to conduct commercial activities, such as product discovery, promotion, and purchase. It represents a shift from traditional ecommerce models, where businesses primarily interact with customers through dedicated online stores. In contrast, social commerce capitalizes on the interactive and community-driven nature of social media, enabling entrepreneurs to leverage user-generated content, influencer marketing, and social endorsements to drive sales and enhance brand awareness. For young entrepreneurs, social commerce offers unique advantages. It provides them with a costeffective and accessible platform to reach a wide customer base, allowing them to bypass traditional barriers to entry and establish their brands with minimal resources. Moreover, social commerce facilitates direct communication and engagement with customers, fostering a sense of authenticity and trust, which is crucial for young entrepreneurs seeking to build customer loyalty and establish a strong brand identity. However, the effectiveness of social commerce in empowering young entrepreneurs and driving business success warrants further investigation. While anecdotal evidence and success stories abound, a comprehensive evaluation of the benefits, challenges, and strategies associated with social commerce for the next generation of entrepreneurs is necessary. Understanding the impact of social commerce on their business performance, the role of social media skills, and the support mechanisms available to young entrepreneurs will provide valuable insights for aspiring entrepreneurs, policymakers, and industry stakeholders. Research Objectives: The primary objective of this research paper is to evaluate the effectiveness of social commerce in empowering the next generation of entrepreneurs. To achieve this, the following specific objectives will be addressed: 1. 
1. To assess the awareness and adoption of social commerce among young entrepreneurs.
2. To identify the benefits and challenges faced by young entrepreneurs in utilizing social commerce.
3. To analyze the impact of social commerce on the business performance of young entrepreneurs.
4. To explore the role of social media skills and digital literacy in the success of young entrepreneurs in social commerce.
5. To identify the support mechanisms and resources required for young entrepreneurs engaging in social commerce.

By examining these objectives, this research aims to contribute to the understanding of how social commerce can serve as a valuable tool for young entrepreneurs to achieve success, while identifying the factors that contribute to their success and the challenges they may encounter in leveraging this emerging business model.

Definition and Evolution of Social Commerce: Social commerce is an evolving concept that encompasses the integration of social media and e-commerce platforms, enabling businesses to engage with customers, drive sales, and enhance brand awareness. According to Constantinides and Fountain (2008), social commerce leverages user-generated content, social recommendations, and social interactions to facilitate online transactions. It represents a shift from traditional e-commerce models by harnessing the power of social networks and capitalizing on the social influence of users.

The evolution of social commerce can be traced back to the emergence of social media platforms such as Facebook, Instagram, and Twitter. These platforms have transformed from being solely social networking sites into prominent channels for commercial activity. Users now actively participate in product discovery, reviews, and recommendations, blurring the lines between socializing and shopping. The integration of transactional features, such as in-platform purchasing and shoppable posts, has further accelerated the growth of social commerce (Wang et al., 2021).

Benefits of Social Commerce for Young Entrepreneurs: Social commerce offers unique benefits to young entrepreneurs in establishing and growing their businesses. First, it provides a low-cost entry point, allowing entrepreneurs to set up virtual storefronts and reach a wide audience without significant investment in physical infrastructure (Sashi, 2012). This levels the playing field, allowing young entrepreneurs to compete with established brands and gain visibility in the market.

Second, social commerce enables direct engagement and communication with customers, fostering a sense of authenticity and trust. By leveraging social media platforms, young entrepreneurs can build personal connections with their target audience, share the stories behind their brands, and receive real-time feedback (Wang et al., 2019). This direct interaction strengthens customer relationships and loyalty, which are crucial for the success of young businesses.

Third, social commerce provides extensive opportunities for targeted marketing and personalized advertising. The data collected from social media platforms allow entrepreneurs to segment their audience and deliver tailored messages, resulting in higher conversion rates and increased sales (Nambisan et al., 2017). The ability to leverage social influencers and user-generated content further extends the reach and impact of marketing campaigns.
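The paper does not prescribe a specific analytics workflow for this kind of segment-level targeting, but the underlying idea can be made concrete with a minimal sketch. The Python snippet below is purely illustrative: the segment names and all figures are hypothetical and not drawn from the study. It shows one plausible way an entrepreneur might compare engagement and conversion rates across audience segments, the kind of data-driven comparison on which the targeted-marketing argument (and the later "monitor and analyze data" recommendation) rests.

from dataclasses import dataclass

@dataclass
class SegmentStats:
    """Hypothetical per-segment campaign figures, e.g. from a platform's analytics export."""
    name: str          # audience segment label (illustrative)
    impressions: int   # users who saw the tailored message
    interactions: int  # likes, comments, shares, clicks
    purchases: int     # completed transactions attributed to the campaign

def engagement_rate(s: SegmentStats) -> float:
    """Share of impressions that led to any interaction."""
    return s.interactions / s.impressions if s.impressions else 0.0

def conversion_rate(s: SegmentStats) -> float:
    """Share of impressions that led to a purchase."""
    return s.purchases / s.impressions if s.impressions else 0.0

# Made-up numbers: compare two tailored messages across segments.
segments = [
    SegmentStats("students", impressions=12_000, interactions=1_450, purchases=96),
    SegmentStats("young professionals", impressions=9_500, interactions=820, purchases=133),
]

for s in segments:
    print(f"{s.name}: engagement {engagement_rate(s):.1%}, conversion {conversion_rate(s):.1%}")

In this toy example, one segment engages more while the other converts more, which is exactly the kind of contrast that would lead an entrepreneur to tailor messages or budgets per segment.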
Challenges in Social Commerce for Young Entrepreneurs: While social commerce offers significant benefits, young entrepreneurs face several challenges in utilizing this business model effectively. One challenge is the intense competition in the social commerce space. With low entry barriers, numerous entrepreneurs vie for customers' attention, making it difficult to stand out and differentiate their offerings (Zhang et al., 2014).

Another challenge is the selection and management of appropriate social commerce platforms. The rapidly changing landscape of social media requires young entrepreneurs to stay up to date and adapt their strategies accordingly. The choice of platforms should align with the target audience and product offerings to ensure maximum reach and engagement (Zeng et al., 2019). Additionally, managing multiple platforms and producing content consistently can be resource-intensive for young entrepreneurs.

Furthermore, building and maintaining customer trust is crucial in social commerce. As transactions occur within social networks, perceived risks related to privacy, security, and authenticity can deter customers from making purchases (Xu et al., 2014). Young entrepreneurs must establish robust security measures and transparent return policies, and leverage social proof elements to alleviate consumer concerns.

2.4. Strategies for Success in Social Commerce: To succeed in social commerce, young entrepreneurs must adopt effective strategies. Content creation plays a vital role, as engaging and visually appealing content attracts and retains customers. Young entrepreneurs should use storytelling techniques, visually appealing images, videos, and user-generated content to create an emotional connection with their audience (Hajli, 2014).

Influencer marketing is another valuable strategy. Collaborating with social media influencers who align with the brand values and target audience can significantly enhance brand exposure and credibility (Chae et al., 2019). Influencers can amplify product reach, generate authentic user reviews, and drive customer engagement.

Additionally, young entrepreneurs should actively engage with customers through social media platforms. Promptly responding to inquiries, providing personalized recommendations, and acknowledging feedback help foster strong customer relationships and boost brand loyalty (Dwivedi et al., 2020).

Moreover, leveraging analytics and data-driven insights can enable young entrepreneurs to optimize their social commerce strategies. Monitoring metrics such as conversion rates, customer engagement, and social media reach helps identify areas for improvement and refine marketing tactics (Alalwan et al., 2017). Overall, understanding and implementing these strategies can empower young entrepreneurs to use social commerce effectively, maximize their online presence, and drive business growth.

IMPACT OF SOCIAL COMMERCE ON BUSINESS GROWTH

In the digital age, social commerce has emerged as a significant driver of business growth, providing entrepreneurs with new opportunities to expand their reach, engage customers, and drive sales. This section explores the impact of social commerce on business growth, highlighting the various ways in which it contributes to the success of entrepreneurial ventures.

3.1. Increased Reach and Exposure: Social commerce platforms, such as social media networks and online marketplaces, offer entrepreneurs the ability to connect with a vast audience of potential customers.
By leveraging social media channels and targeted advertising tools, entrepreneurs can effectively reach and engage their target market. This expanded reach enables businesses to increase brand visibility, attract new customers, and drive traffic to their online stores or websites.

3.2. Enhanced Customer Engagement: One of the key advantages of social commerce is its ability to facilitate direct and real-time interaction between businesses and customers. Through social media platforms, entrepreneurs can engage with their audience, respond to queries, address concerns, and build relationships. This level of engagement fosters customer loyalty and satisfaction, leading to repeat purchases and positive word-of-mouth recommendations.

3.3. Improved Conversion Rates: Social commerce provides entrepreneurs with powerful tools to optimize conversion rates. By leveraging social media advertising, targeted promotions, and personalized recommendations, businesses can create a seamless and tailored shopping experience for customers. Social proof, in the form of customer reviews, ratings, and user-generated content, further enhances trust and confidence, thereby increasing the likelihood of conversions and driving sales growth.

3.4. Fostered Innovation and Collaboration: Social commerce platforms serve as hubs for entrepreneurial innovation and collaboration. Entrepreneurs can gather insights about consumer preferences, market trends, and competitors through social listening and data analytics. This information empowers businesses to identify new opportunities, refine their product offerings, and stay ahead of the competition. Additionally, social commerce facilitates collaborations with influencers, content creators, and other businesses, enabling entrepreneurs to tap into new markets and amplify their brand reach.

4.1. Synthesis of Findings: The findings from this research indicate that social commerce holds significant potential for empowering the next generation of entrepreneurs. Young entrepreneurs benefit from the low entry barriers, direct customer engagement, and targeted marketing opportunities offered by social commerce platforms. The ability to establish a virtual storefront and reach a wide audience with minimal resources levels the playing field for young entrepreneurs, enabling them to compete with established brands. The direct interaction and personalized communication with customers foster trust, brand loyalty, and customer satisfaction, which are crucial for long-term success. Additionally, the use of social influencers and user-generated content amplifies brand exposure and credibility.

However, it is important to acknowledge the challenges that young entrepreneurs face in utilizing social commerce effectively. The intense competition in the social commerce space requires entrepreneurs to differentiate themselves and constantly adapt their strategies to stand out. Selecting the right platforms and managing multiple channels can be resource-intensive. Building and maintaining customer trust is critical, as privacy, security, and authenticity concerns can hinder purchase decisions. Young entrepreneurs must implement robust security measures, transparent policies, and social proof elements to mitigate these challenges.

4.2. Implications and Recommendations for Young Entrepreneurs: Based on the findings, several implications and recommendations emerge for young entrepreneurs seeking success in social commerce:
a. Develop compelling and visually appealing content: Young entrepreneurs should focus on creating engaging content that resonates with their target audience. Storytelling techniques, visual elements, and user-generated content can create an emotional connection and increase customer engagement.
b. Leverage influencer marketing: Collaborating with social media influencers who align with the brand values and target audience can significantly enhance brand exposure, credibility, and customer engagement. Building partnerships with influencers can amplify product reach and generate authentic user reviews.
c. Foster strong customer relationships: Actively engage with customers through social media platforms by responding promptly to inquiries, providing personalized recommendations, and acknowledging feedback. These interactions help build trust, foster loyalty, and enhance the overall customer experience.
d. Monitor and analyze data: Utilize analytics and data-driven insights to monitor key performance metrics, such as conversion rates, customer engagement, and social media reach. This information can guide decision-making, identify areas for improvement, and optimize social commerce strategies.
e. Stay updated and adapt: Continuously monitor the evolving landscape of social media platforms, as trends and user preferences can change rapidly. Young entrepreneurs should adapt their strategies to leverage emerging features and functionalities, ensuring they remain relevant and maximize their online presence.

4.3. Limitations of the Study: While this research paper provides valuable insights into the effectiveness of social commerce for young entrepreneurs, there are certain limitations to consider. First, the research relies primarily on existing literature, surveys, and case studies, which may not capture the full spectrum of experiences and nuances associated with social commerce for young entrepreneurs. Additionally, the study focuses on a specific demographic of young entrepreneurs and may not fully represent the diverse experiences and contexts of all young entrepreneurs engaging in social commerce.

Furthermore, the rapidly evolving nature of social commerce and the dynamic nature of social media platforms make it challenging to maintain up-to-date information. The findings of this research are based on the literature and data available at the time of the study's completion. As social commerce continues to evolve, further research is needed to keep pace with the changing landscape and provide more comprehensive insights.

4.4. Future Research Directions: This research opens avenues for future investigation into social commerce and its impact on young entrepreneurs. Future research can explore the long-term sustainability and scalability of social commerce ventures, examining the growth trajectories and profitability of young entrepreneurs over time. Additionally, in-depth qualitative studies can provide deeper insights into the strategies, challenges, and success stories of young entrepreneurs in social commerce.

CONCLUSION

Social commerce has proven to be an effective tool for empowering the next generation of entrepreneurs. Through an evaluation of its impact on various aspects of entrepreneurship, this research paper has shed light on the potential benefits and challenges associated with adopting social commerce strategies. The findings highlight that social commerce enables entrepreneurs to reach wider audiences and grow their businesses.
By leveraging social media platforms and online marketplaces, entrepreneurs can increase their brand visibility, attract new customers, and drive sales. Moreover, the direct and real-time interaction facilitated by social commerce enhances customer engagement, fostering loyalty and repeat purchases.

The implementation of effective social commerce strategies has also been shown to improve conversion rates. By utilizing targeted advertising, personalized recommendations, and social proof, entrepreneurs can create a seamless shopping experience that increases the likelihood of conversions and drives sales growth. Additionally, social commerce platforms serve as hubs for innovation and collaboration, enabling entrepreneurs to gather insights, refine their product offerings, and form strategic partnerships.

However, it is important to acknowledge the challenges associated with social commerce. Privacy and data security concerns, information overload, market saturation, and the need to adapt to evolving technologies pose obstacles that entrepreneurs must address in their social commerce endeavors. To maximize the benefits of social commerce, entrepreneurs should develop comprehensive strategies that align with their target audience and consumer behavior. Leveraging social listening tools and analytics, and integrating with e-commerce platforms and emerging technologies, can further enhance the effectiveness of social commerce initiatives.
3,549
2023-06-22T00:00:00.000
[ "Business", "Economics" ]